url: stringlengths 14–1.76k
text: stringlengths 100–1.02M
metadata: stringlengths 1.06k–1.1k
https://arxiv.org/abs/1905.06824
math.PR # Title:Sample Paths Estimates for Stochastic Fast-Slow Systems driven by Fractional Brownian Motion Abstract: We analyze the effect of additive fractional noise with Hurst parameter $H > \frac{1}{2}$ on fast-slow systems. Our strategy is based on sample paths estimates, similar to the approach by Berglund and Gentz in the Brownian motion case. Yet, the setting of fractional Brownian motion does not allow us to use the martingale methods from fast-slow systems with Brownian motion. We thoroughly investigate the case where the deterministic system permits a uniformly hyperbolic stable slow manifold. In this setting, we provide a neighborhood, tailored to the fast-slow structure of the system, that contains the process with high probability. We prove this assertion by providing exponential error estimates on the probability that the system leaves this neighborhood. We also illustrate our results in an example arising in climate modeling, where time-correlated noise processes have become of greater relevance recently. Comments: Preprint 44 pages, 14 figures Subjects: Probability (math.PR); Dynamical Systems (math.DS) MSC classes: 60G22, 60H15 Cite as: arXiv:1905.06824 [math.PR] (or arXiv:1905.06824v1 [math.PR] for this version) ## Submission history From: Alexandra Neamtu [view email] [v1] Thu, 16 May 2019 15:06:37 UTC (694 KB)
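As a rough illustration of the kind of system the abstract describes (and only that), the following Python sketch simulates a toy fast-slow equation with additive fractional noise with Hurst parameter H > 1/2, generating the fBm increments by an exact Cholesky factorisation of the covariance. The model, parameters and step sizes are assumptions made for the illustration; this is not the authors' setup and it does not reproduce their sample-path estimates.

```python
import numpy as np

def fbm_increments(n, dt, H, rng):
    """Increments of fractional Brownian motion on a regular grid,
    via a Cholesky factorisation of the exact covariance (fine for small n)."""
    t = np.arange(1, n + 1) * dt
    cov = 0.5 * (t[:, None]**(2 * H) + t[None, :]**(2 * H)
                 - np.abs(t[:, None] - t[None, :])**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # small jitter for numerical stability
    path = L @ rng.standard_normal(n)                 # B^H at the grid times
    return np.diff(np.concatenate(([0.0], path)))

def simulate(eps=0.05, sigma=0.1, H=0.7, T=2.0, n=400, seed=0):
    """Euler scheme for the toy fast-slow system
        dx = (y - x)/eps dt + sigma dB^H_t,   dy = dt,
    whose deterministic slow manifold x = y is uniformly hyperbolic and stable."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = fbm_increments(n, dt, H, rng)
    x = y = 0.0
    xs, ys = [x], [y]
    for k in range(n):
        x += (y - x) / eps * dt + sigma * dB[k]
        y += dt
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

if __name__ == "__main__":
    xs, ys = simulate()
    # the sample path should stay in a neighbourhood of the slow manifold x = y
    print("max |x - y| along the path:", np.abs(xs - ys).max())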
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4892041087150574, "perplexity": 1466.5239264513516}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998724.57/warc/CC-MAIN-20190618123355-20190618145355-00325.warc.gz"}
http://bellaalmeria.com/en/rutas-y-viajes/tag/casa-valle-del-este/
Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /homepages/18/d568124832/htdocs/www.bellaalmeria.com/wp-content/plugins/revslider/includes/operations.class.php on line 2758 Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /homepages/18/d568124832/htdocs/www.bellaalmeria.com/wp-content/plugins/revslider/includes/operations.class.php on line 2762 Warning: "continue" targeting switch is equivalent to "break". Did you mean to use "continue 2"? in /homepages/18/d568124832/htdocs/www.bellaalmeria.com/wp-content/plugins/revslider/includes/output.class.php on line 3706 Warning: count(): Parameter must be an array or an object that implements Countable in /homepages/18/d568124832/htdocs/www.bellaalmeria.com/wp-content/themes/windsor/includes/lists.php on line 396 Warning: Cannot modify header information - headers already sent by (output started at /homepages/18/d568124832/htdocs/www.bellaalmeria.com/wp-content/plugins/revslider/includes/operations.class.php:2758) in /homepages/18/d568124832/htdocs/www.bellaalmeria.com/wp-content/plugins/wpglobus/includes/class-wpglobus-redirect.php on line 37 casa Valle del Este | Bella Almería
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8401710391044617, "perplexity": 19285.19643421928}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315329.55/warc/CC-MAIN-20190820113425-20190820135425-00418.warc.gz"}
https://homework.cpm.org/category/CCI_CT/textbook/pc3/chapter/13/lesson/13.2.2/problem/13-77
### Problem 13-77 13-77. The Gladiator van travels $20t + 18$ miles per hour for $0\le t\le3$ hours. 1. Draw a graph of the situation. Label the axes carefully. 2. Compute the exact area under the curve. Use the formula for the area of a trapezoid. 3. How far did the Gladiator travel during those three hours? It is the area under the curve. (A worked trapezoid computation is sketched below.)
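One way to carry out parts 2 and 3, following the trapezoid hint (this is just the standard computation, not CPM's official solution):

```latex
\begin{align*}
v(0) &= 20(0) + 18 = 18 \\
v(3) &= 20(3) + 18 = 78 \\
\text{Area under } v(t) &= \frac{v(0)+v(3)}{2}\cdot 3 = \frac{18+78}{2}\cdot 3 = 48\cdot 3 = 144
\end{align*}
```

Since the area under a speed curve (miles per hour times hours) is a distance, the van travels 144 miles, which answers part 3 as well.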
{"extraction_info": {"found_math": true, "script_math_tex": 2, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9016214609146118, "perplexity": 2007.9293264258044}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154304.34/warc/CC-MAIN-20210802043814-20210802073814-00268.warc.gz"}
https://ir.lib.uwo.ca/etd/5137/
#### Degree Doctor of Philosophy, Computer Science #### Supervisor Prof. Lila Kari #### Abstract DNA-based self-assembly is an autonomous process whereby a disordered system of DNA sequences forms an organized structure or pattern as a consequence of Watson-Crick complementarity of DNA sequences, without external direction. Here, we propose self-assembly (SA) hypergraph automata as an automata-theoretic model for patterned self-assembly. We investigate the computational power of SA-hypergraph automata and show that for every recognizable picture language, there exists an SA-hypergraph automaton that accepts this language. Conversely, we prove that for any restricted SA-hypergraph automaton, there exists a Wang Tile System, a model for recognizable picture languages, that accepts the same language. Moreover, we investigate the computational power of some variants of the Signal-passing Tile Assembly Model (STAM), as well as propose the concept of {\it Smart Tiles}, i.e., tiles with glues that can be activated or deactivated by signals, and which possess a limited amount of local computing capability. We demonstrate the potential of smart tiles to perform some robotic tasks such as replicating complex shapes.
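For readers unfamiliar with the Wang Tile System mentioned in the abstract, here is a minimal, hypothetical Python sketch of the tile-matching rule (adjacent tile edges must carry the same colour). It only illustrates the comparison model, not the SA-hypergraph automata or the STAM variants studied in the thesis; the tile encoding and helper names are invented for the example.

```python
# A toy Wang-tile validity check: a tile is a quadruple of edge "colours"
# (north, east, south, west); a grid of tiles is valid when every pair of
# adjacent edges carries the same colour.
from typing import List, Tuple

Tile = Tuple[str, str, str, str]          # (N, E, S, W)

def valid_tiling(grid: List[List[Tile]]) -> bool:
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            n, e, s, w = grid[r][c]
            if c + 1 < cols and e != grid[r][c + 1][3]:   # east edge vs neighbour's west edge
                return False
            if r + 1 < rows and s != grid[r + 1][c][0]:   # south edge vs neighbour's north edge
                return False
    return True

# two tiles that happen to fit side by side
t1: Tile = ("a", "x", "a", "y")
t2: Tile = ("a", "y", "a", "x")
print(valid_tiling([[t1, t2]]))   # True: t1's east edge 'x' matches t2's west edge 'x'
```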
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8865973353385925, "perplexity": 2453.0673530940603}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221215523.56/warc/CC-MAIN-20180820023141-20180820043141-00275.warc.gz"}
http://nccr-swissmap.ch/research/publications/string-topology-and-configuration-spaces-two-points
# String topology and configuration spaces of two points Thursday, 14 November, 2019 ## Published in: arXiv:1911.06202 Given a closed manifold M, we give an algebraic model for the Chas-Sullivan product and the Goresky-Hingston coproduct. In the simply-connected case, this admits a particularly nice description in terms of a Poincaré duality model of the manifold, and involves the configuration space of two points on M. We moreover construct an $IBL_\infty$-structure on (a model of) cyclic chains on the cochain algebra of M, such that the natural comparison map to the $S^1$-equivariant loop space homology intertwines the Lie bialgebra structure on homology. The construction of the coproduct/cobracket depends on the perturbative partition function of a Chern-Simons type topological field theory. Furthermore, we give a construction of these string topology operations on the absolute loop space (not relative to constant loops) in the case that M carries a non-vanishing vector field, and we obtain a similar description. Finally, we show that the cobracket is sensitive to the manifold structure of M beyond its homotopy type. More precisely, the action of ${\rm Diff}(M)$ does not (in general) factor through ${\rm aut}(M)$. ## Author(s): Florian Naef Thomas Willwacher
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9171804785728455, "perplexity": 614.1941336652849}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540518627.72/warc/CC-MAIN-20191209093227-20191209121227-00278.warc.gz"}
https://socratic.org/questions/how-do-you-express-3x-x-2-x-1-in-partial-fractions
# How do you express (3x)/((x + 2)(x - 1)) in partial fractions? Aug 10, 2016 $\frac{3 x}{\left(x + 2\right) \left(x - 1\right)} = \frac{2}{x + 2} + \frac{1}{x - 1}$ #### Explanation: $\frac{3 x}{\left(x + 2\right) \left(x - 1\right)} = \frac{A}{x + 2} + \frac{B}{x - 1}$ Use Heaviside's cover-up method to find: $A = \frac{3 \left(- 2\right)}{\left(- 2\right) - 1} = \frac{- 6}{- 3} = 2$ $B = \frac{3 \left(1\right)}{\left(1\right) + 2} = \frac{3}{3} = 1$ So: $\frac{3 x}{\left(x + 2\right) \left(x - 1\right)} = \frac{2}{x + 2} + \frac{1}{x - 1}$
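The decomposition can also be checked mechanically with SymPy (a quick verification sketch, independent of the answer above):

```python
from sympy import symbols, apart, together, simplify

x = symbols('x')
expr = 3*x / ((x + 2)*(x - 1))

# partial-fraction decomposition; should give 2/(x + 2) + 1/(x - 1)
decomp = apart(expr, x)
print(decomp)

# sanity check: recombining the partial fractions returns the original expression
assert simplify(together(decomp) - expr) == 0
```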
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 5, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.159775048494339, "perplexity": 13131.9676598285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487630518.38/warc/CC-MAIN-20210617162149-20210617192149-00329.warc.gz"}
http://tex.stackexchange.com/questions/129538/how-to-line-break-cites-correctly-hanging-over-problem
How to line-break cites correctly (hanging-over problem)? While using BibTeX, I ran into a frustrating issue with line breaks at citations in my body text. I have chosen to use the following BibTeX setup: ``````\bibliographystyle{plain} \bibliography{library} `````` No voodoo here. Nevertheless, this leads to the following composition: the citation is not line-broken correctly. I assume many other users have already run into this hanging-over problem too. Can someone help me get LaTeX to break the citation correctly, so that it integrates nicely into the justified block of text? EDIT: Rephrasing makes it hard to monitor the whole 300 pages every time I recompile the document after some minor modifications. I know that many users suggest rephrasing the text, but I would like to ask for a more professional solution here. - Have you tried to rephrase the sentence including the cite? Adding or deleting a word can do it ... – Kurt Aug 22 '13 at 10:14 I thought about a more elegant way instead of making my sentence sound less nice ;). Honestly, I think rephrasing makes it hard to monitor the whole 300 pages every time I recompile the document after some minor modifications. – Robiston Aug 22 '13 at 10:17 I think we are talking about the final fine-tuning of your document. I usually do this only once, after I have finished writing, during the final proofread of the document. With the option `draft` you can find the overfull lines easily and rephrase them or insert (depending on the situation) a hyphenation mark. The line-breaking algorithm of LaTeX is very good, but sometimes it needs your help. Even today, good typography needs the help of a person; the program alone cannot manage it the way a trained typographer can ... – Kurt Aug 22 '13 at 11:12 The plain style alone doesn't give such citations. Besides this, what do you want LaTeX to do to handle the problem? – Ulrike Fischer Aug 22 '13 at 11:12 You should load microtype, and you can loosen LaTeX's line-breaking settings; see tex.ac.uk/cgi-bin/texfaq2html?label=overfull. – Ulrike Fischer Aug 22 '13 at 11:49
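A minimal preamble acting on Ulrike Fischer's suggestion (load microtype and loosen the line-breaking parameters). The citation key `someKey2013` and the concrete parameter values are only placeholders for illustration, not a recommended setting:

```latex
\documentclass{article}
\usepackage{microtype}   % character protrusion/expansion; fewer overfull citation lines
\bibliographystyle{plain}

% give TeX's line breaker more slack before a citation hangs into the margin
\emergencystretch=1.5em
\tolerance=2000

\begin{document}
Some text with a citation \cite{someKey2013} that should now break cleanly.
\bibliography{library}
\end{document}
```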
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8558979630470276, "perplexity": 1529.4011591755523}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049278389.62/warc/CC-MAIN-20160524002118-00236-ip-10-185-217-139.ec2.internal.warc.gz"}
http://www.yogakuhack.com/entry/saved_khalid
# 【洋楽歌詞和訳】Saved / Khalid (カリード) Khalid (カリード)の Saved の英語歌詞と日本語和訳をご紹介します。 Khalid (カリード)の洋楽歌詞和訳一覧はこちら Amazon Musicでは、好きな洋楽アーティストが聴き放題。 ### Saved の英語歌詞と和訳 1, 2, 3, 4 The hard part always seems to last forever Sometimes I forget that we aren't together Deep down in my heart, I hope you're doing alright But from time to time I often think of why you aren't mine ときどきどうして君が俺のものじゃないのか分からなくなる But I'll keep your number saved でも君の電話番号は消さないよ Cause I hope one day you'll get the sense to call me だってまた君が電話する気になるかもしれないだろ I'm hoping that you'll say You're missing me the way I'm missing you So I'll keep your number saved だから君の電話番号は消さない Cause I hope one day I'll get the pride to call you また君に電話できる日が来るかもしれないから To tell you that no one else Is gonna hold you down the way that I do Now, I can't say I'll be alright without you And I can't say that I haven't tried to But, all your stuff is gone I erased all the pictures from my phone Of me and you Here's what I'll do これが俺のやり方なんだ Cause I hope one day you'll get the sense to call me だってまた君が電話する気になるかもしれないだろ I'm hoping that you'll say You're missing me the way I'm missing you So I'll keep your number saved だから君の電話番号は消さない Cause I hope one day I'll get the pride to call you また君に電話できる日が来るかもしれないから To tell you that no one else Is gonna hold you down the way that I do I hope you think of all the times we shared I hope you'll finally realize I was the only one who cared It's crazy how this love thing seems unfair この恋は不公平すぎる You won't find a love like mine anywhere But I'll keep your number saved でも君の電話番号は消さないよ Cause I hope one day you'll get the sense to call me だってまた君が電話する気になるかもしれないだろ I'm hoping that you'll say You're missing me the way I'm missing you So I'll keep your number saved だから君の電話番号は消さない Cause I hope one day I'll get the pride to call you また君に電話できる日が来るかもしれないから To tell you that no one else Is gonna hold you down the way that I do But I'll keep your number saved でも君の電話番号は消さないよ Cause I hope one day you'll get the sense to call me だってまた君が電話する気になるかもしれないだろ I'm hoping that you'll say You're missing me the way I'm missing you So I'll keep your number saved だから君の電話番号は消さない Cause I hope one day I'll get the pride to call you また君に電話できる日が来るかもしれないから To tell you that I'm finally over you やっと君を忘れることができたって I'm finally over you
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.829649806022644, "perplexity": 3952.523195083659}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743353.54/warc/CC-MAIN-20181117102757-20181117124757-00241.warc.gz"}
https://www.mathkplus.com/I-Math/Fractions/Multiplying-Fractions.aspx
Math K-Plus # How To Use The Multiplying Fractions Calculator ## Solve Any Fraction Multiplication Problem And Show Step-By-Step Detail Look below the calculator to see a real fraction multiplication problem solved. Please read! Instructions for entering proper fractions, improper fractions, whole numbers, and mixed numbers into the calculator: • Example of entering a fraction: "1/3"; example of entering a whole number: "3". • To enter a whole number and a fraction, put a space between them: "3 1/3". • Enter the two fractions you want to multiply plus your answer, then press the MULTIPLY button and compare your answer with the calculator's. • Or test the example already typed into the calculator: just press the MULTIPLY button, or enter a new problem and press MULTIPLY. Multiplying Fractions Calculator fields: Fraction One (up to 10 digits/characters), Fraction Two (up to 10 digits/characters), and Your Answer (up to 12 digits/characters). ## HERE IS AN EXAMPLE OF FRACTION MULTIPLICATION Problem statement: $3\frac{1}{3} \times 5\frac{1}{2}$. Convert the mixed number 3 1/3 to an improper fraction: $3\frac{1}{3} = \frac{(3 \times 3) + 1}{3} = \frac{10}{3}$. Convert the mixed number 5 1/2 to an improper fraction: $5\frac{1}{2} = \frac{(5 \times 2) + 1}{2} = \frac{11}{2}$. Solve the fraction problem: $\frac{10}{3} \times \frac{11}{2} = \frac{10 \times 11}{3 \times 2} = \frac{110}{6} = \frac{55}{3}$. The fraction is reduced by dividing the numerator and denominator by the greatest common factor, 2. Convert back to a mixed number: $\frac{55}{3} = 18 + \frac{1}{3} = 18\frac{1}{3}$. Problem answer: $18\frac{1}{3}$. The calculator above has a number of valuable features: • Math Practice: The student can solve a fraction multiplication problem and compare their answer to the calculator's. The calculator shows each step of the solution. If the student has made a mistake, they can study the calculator's results to understand where it happened and avoid it in the future. By using the tool for practice, students can sharpen their skills. Use the New Problem button to create random problems to practice solving. • Math Problem Solver: The calculator is also a useful tool for a student to verify that their homework is correct, which is again an opportunity to learn from mistakes. • Math Quizzes: The student can at any time take a five-question test to validate their current skill level; these tests also help prepare the student for classroom tests. It is recommended you see Help. Recommended reading: How To Multiply Fractions, Adding Fractions, Subtracting Fractions, and Dividing Fractions.
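The same worked example can be double-checked with Python's built-in fractions module (a small sketch, unrelated to the Math K-Plus calculator itself):

```python
from fractions import Fraction

# mixed numbers 3 1/3 and 5 1/2 as improper fractions
a = Fraction(3) + Fraction(1, 3)   # 10/3
b = Fraction(5) + Fraction(1, 2)   # 11/2

product = a * b                    # Fraction reduces by the greatest common factor automatically
print(product)                     # 55/3

# back to a mixed number: 18 1/3
whole, remainder = divmod(product.numerator, product.denominator)
print(whole, Fraction(remainder, product.denominator))   # 18 1/3
```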
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 13, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9212977886199951, "perplexity": 1506.8281000255554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056974.30/warc/CC-MAIN-20210920010331-20210920040331-00081.warc.gz"}
https://bioinformatics.stackexchange.com/questions/7013/using-preprocessing-alignment-functions-on-the-server
# Using preprocessing/alignment functions on the server I am new to bash and to the processes behind cluster computing in general, and I need some help understanding some basics. After looking all over the internet and this forum (+ AskUbuntu) I found nothing that addresses this issue. I have a series of raw RNA sequencing files (.bam) and wish to use samtools (or other tools) to begin the data preprocessing steps defined in many workflows. I have installed samtools and many other programs necessary for the workflow; however, once I am on the remote server in bash and try to run a command over the files of interest, it gives me the error: -bash-samtools: command not found When I try to install the package, say samtools, by running sudo apt install samtools, I get an error stating that I do not have permission to do so. What do I do? Must I bring this issue up with the server provider, or is there a way around it? The short answer is to use conda. In the bioconda channel we have most of the tools used in bioinformatics, such as samtools. You do not need administrator permissions to install conda or packages with it, so your lack of sudo ability is not an issue. "I have installed samtools and many other programs necessary for the workflow; however, once I am on the remote server in bash and try to run a command over the files of interest, it gives me the error" Installing packages on your local machine won't affect the remote server unless they share a file system, so this is expected. As an aside, BAM files aren't really "raw"; they're considered processed data (fastq/fast5 files are considered truly "raw" data in the sequencing world). At a very wild guess: -bash-samtools This looks shaky. For a start, you need a space between the command and the application in all Linux/Unix/Mac OS X shells (it's complicated to explain right now [see below], just accept it). Wait, you are using sudo, and that's how you are installing on a local machine. sudo on a remote machine will never work, because you are not the systems admin. sudo on a local machine is not cool (see below). Anyway, I need to do some next-gen sequencing and have 6 MiSeq kits waiting for action, so I installed it from here: http://www.htslib.org/download/ Okay, so here's my tcsh history: 4 21:44 cd sam* # like I was in the Desktop 7 21:44 more README # I needed to know how they wanted the installation 8 21:45 ./configure # okay, they are setting the paths for me - saves me a job 11 21:46 make # like fairytale stuff for a load of C scripts 16 21:49 ls | grep samtools # I wanna know if it's compiled, and I see samtools* ... yep 17 21:49 ln -s ./samtools* ~/bin # I'm gonna pop a link in my bin so after I type 18 21:49 rehash # 'cause I still insist on tcsh rather than bash (don't do it in bash) However, the following is a bit creepy because I get loads of Perl scripts in my /usr/local/bin ... make install Anyway: samtools. It took 4 minutes on an OS X machine, using tcsh, after I had just quaffed 2 pints at my pub (10pm London time). So probability is set against me .... OUTPUT Program: samtools (Tools for alignments in the SAM format) Version: 0.1.18 (r982:295) Usage: samtools <command> [options] Command: view SAM<->BAM conversion sort sort alignment file mpileup multi-way pileup depth compute the depth faidx index/extract FASTA tview text alignment viewer index index alignment idxstats BAM index stats (r595 or later) etc ... BTW, seriously, fun over: avoid "sudo" if you can, for lots of reasons (which I'm not gonna explain). It's a last resort and usually means your paths are incorrectly set. sudo will be refused on a remote machine, without question. You COULD ask the systems admin to do this for you. HOWEVER, I'd try to install it yourself without sudo; your bioinformatics will benefit. Program: samtools (Tools for alignments in the SAM format) Version: 1.9 (using htslib 1.9) Usage: samtools <command> [options] Commands: -- Indexing dict create a sequence dictionary file faidx index/extract FASTA fqidx index/extract FASTQ index index alignment (Output following make install.) • 0.1.18 is a truly ancient version of samtools and is unlikely to be what you compiled. – Devon Ryan Feb 13 '19 at 8:08 • Yes, you are right; it is currently 1.9 (above). – Michael Feb 13 '19 at 15:34
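For completeness, here is roughly what the conda route from the accepted answer looks like, assuming a 64-bit Linux server. The environment name rnaseq is arbitrary, and you should verify the installer URL on the conda website before running anything:

```bash
# Miniconda installs into your home directory, so no sudo is needed
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
source "$HOME/miniconda3/etc/profile.d/conda.sh"

# create an environment (name "rnaseq" is arbitrary) with samtools from the bioconda channel
conda create -n rnaseq -c conda-forge -c bioconda samtools
conda activate rnaseq
samtools --version
```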
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2798597514629364, "perplexity": 5462.981247371451}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402101163.62/warc/CC-MAIN-20200930013009-20200930043009-00441.warc.gz"}
https://arxiv.org/abs/1401.5968
astro-ph.IM # Title: Simulation study of the plasma brake effect Abstract: The plasma brake is a thin negatively biased tether which has been proposed as an efficient concept for deorbiting satellites and debris objects from low Earth orbit. We simulate the interaction of the ionospheric plasma ram flow with the plasma brake tether using a high-performance electrostatic particle-in-cell code to evaluate the thrust. The tether is assumed to be perpendicular to the flow. We perform runs for different tether voltages, magnetic field orientations and plasma ion masses. We show that a simple analytical thrust formula reproduces most of the simulation results well. The interaction between the tether and the plasma flow is laminar (i.e., smooth and not turbulent) when the magnetic field is perpendicular to the tether and the flow. If the magnetic field is parallel to the tether, the behaviour is unstable and thrust is reduced by a modest factor. The case when the magnetic field is aligned with the flow can also be unstable, but does not result in notable thrust reduction. We also fix an error in an earlier reference. According to the simulations, the predicted thrust of the plasma brake is large enough to make the method promising for low Earth orbit (LEO) satellite deorbiting. As a numerical example, we estimate that a 5 km long plasma brake tether weighing 0.055 kg could produce a 0.43 mN braking force, which is enough to reduce the orbital altitude of a 260 kg object by 100 km during one year. Comments: 15 pages, 17 figures, 2 tables, in press in Annales Geophysicae Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Plasma Physics (physics.plasm-ph) DOI: 10.5194/angeo-32-1207-2014 Cite as: arXiv:1401.5968 [astro-ph.IM] (or arXiv:1401.5968v2 [astro-ph.IM] for this version) ## Submission history From: Pekka Janhunen [view email] [v1] Thu, 23 Jan 2014 13:37:14 GMT (11554kb,D) [v2] Fri, 29 Aug 2014 06:47:57 GMT (2400kb,D)
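As an order-of-magnitude sanity check on the numerical example in the abstract (not the authors' model), a constant tangential braking force F on a circular low Earth orbit shrinks the orbital radius at roughly da/dt = 2 F a^(3/2) / (m sqrt(mu)). The short sketch below evaluates this for the quoted numbers, assuming an altitude of about 700 km:

```python
import math

# rough cross-check, assuming a circular orbit and a constant braking force
mu = 3.986e14          # Earth's gravitational parameter, m^3/s^2
a  = 6371e3 + 700e3    # assumed orbital radius (~700 km altitude), m
F  = 0.43e-3           # braking force quoted in the abstract, N
m  = 260.0             # object mass, kg

# energy drain at rate F*v (v = circular orbital speed) shrinks the semi-major axis
da_dt = 2 * F * a**1.5 / (m * math.sqrt(mu))
per_year = da_dt * 365.25 * 24 * 3600
print(f"decay rate ~ {per_year/1e3:.0f} km per year")   # on the order of 100 km/yr
```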
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8296805620193481, "perplexity": 1840.8525098737705}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891815934.81/warc/CC-MAIN-20180224191934-20180224211934-00670.warc.gz"}
https://codegolf.stackexchange.com/questions/133365/find-the-closest-fibonacci-number?noredirect=1
# Find the closest Fibonacci Number We are all familiar with the famous Fibonacci sequence, which starts with 0 and 1, where each element is the sum of the previous two. Here are the first few terms (OEIS A000045): 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584 Given a positive integer, return the closest number in the Fibonacci sequence, under these rules: • The closest Fibonacci number is defined as the Fibonacci number with the smallest absolute difference from the given integer. For example, 34 is the closest Fibonacci number to 30, because |34 - 30| = 4, which is smaller than the second closest one, 21, for which |21 - 30| = 9. • If the given integer belongs to the Fibonacci sequence, the closest Fibonacci number is exactly itself. For example, the closest Fibonacci number to 13 is exactly 13. • In case of a tie, you may choose to output either one of the Fibonacci numbers that are closest to the input, or output them both. For instance, if the input is 17, all of the following are valid: 13, 21, or the pair 13, 21. In case you return them both, please mention the format. Default Loopholes apply. You can take input and provide output through any standard method. Your program / function must only handle values up to 10^8. # Test Cases Input -> Output 1 -> 1 3 -> 3 4 -> 3 or 5 or 3, 5 6 -> 5 7 -> 8 11 -> 13 17 -> 13 or 21 or 13, 21 63 -> 55 101 -> 89 377 -> 377 467 -> 377 500 -> 610 1399 -> 1597 # Scoring This is code-golf, so the shortest code in bytes in every language wins! • – Mr. Xcoder Jul 19 '17 at 15:01 • FWIW, here is some Python code on SO for doing this efficiently for large inputs, along with a script that can be used for timing various algorithms. – PM 2Ring Jul 20 '17 at 7:56 • Is 0 considered as a positive integer? – Alix Eisenhardt Jul 21 '17 at 12:53 • @AlixEisenhardt No. Positive integer n implies n ≥ 1. – Mr. Xcoder Jul 21 '17 at 12:56 # Python 2, 43 bytes f=lambda n,a=0,b=1:a*(2*n<a+b)or f(n,b,a+b) Try it online! Iterates through pairs of consecutive Fibonacci numbers (a,b) until it reaches one where the input n is less than their midpoint (a+b)/2, then returns a. Written as a program (47 bytes): n=input() a=b=1 while 2*n>a+b:a,b=b,a+b print a f=lambda n,a=0,b=1:b/2/n*(b-a)or f(n,b,a+b) # Neim, 5 bytes f𝐖𝕖S𝕔 Explanation: f Push infinite Fibonacci list 𝐖 93 𝕖 Select the first ^ elements This is the maximum amount of elements we can get before the values overflow, which means the largest value we support is 7,540,113,804,746,346,429 S𝕔 Closest value to the input in the list In the newest version of Neim, this can be golfed to 3 bytes: fS𝕔 As infinite lists have been reworked to only go up to their maximum value. Try it online! • How is this 5 bytes when there are 2 characters there? And what is the difference between the first and second solution? – caird coinheringaahing Jul 19 '17 at 23:32 • Are you counting bytes or characters? It appears the first is 15 bytes, and the second 7 bytes. – Nateowami Jul 20 '17 at 3:33 • This probably has some kind of custom codepage in which each character is one byte, meaning the first one is 5 bytes and the second is 3 bytes. The difference between the two is that the first one selects the first 93 elements manually, while the second snippet, in a newer version, automatically selects the highest possible value that the language's int size can handle – Roman Gräf Jul 20 '17 at 6:13 • @cairdcoinheringaahing I've often had issues with people not being able to see my programs.
Screenshot – Okx Jul 20 '17 at 9:34 • @Okx Oh OK, interesting, I would not have guessed. – Nateowami Jul 20 '17 at 14:38 # Python, 55 52 bytes f=lambda x,a=1,b=1:[a,b][b-x<x-a]*(b>x)or f(x,b,a+b) Try it online! # R, 70 67 64 62 60 bytes -2 bytes thanks to djhurio! -2 more bytes thanks to djhurio (boy can he golf!) F=1:0;while(F<1e8)F=c(F[1]+F[2],F);F[order((F-scan())^2)][1] Since we only have to handle values up to 10^8, this works. Try it online! Reads n from stdin. The while loop generates the Fibonacci numbers in F (in decreasing order); in the event of a tie, the larger is returned. This will trigger a number of warnings, because while(F<1e8) only evaluates the condition for the first element of F and issues a warning when it does so. Originally I used F[which.min(abs(F-n))], the naive approach, but @djhurio suggested (F-n)^2, since the ordering will be equivalent, and order instead of which.min. order returns a permutation of indices that puts its input into increasing order, though, so we need [1] at the end to get only the first value. ### faster version: F=1:0;n=scan();while(n>F)F=c(sum(F),F[1]);F[order((F-n)^2)][1] only stores the last two Fibonacci numbers • Nice one. -2 bytes F=1:0;n=scan();while(n>F)F=c(F[1]+F[2],F);F[order((F-n)^2)][1] – djhurio Jul 19 '17 at 16:37 • And the fast version with the same number of bytes F=1:0;n=scan();while(n>F)F=c(sum(F),F[1]);F[order((F-n)^2)][1] – djhurio Jul 19 '17 at 16:47 • @djhurio nice! thank you very much. – Giuseppe Jul 19 '17 at 17:08 • I like this. -2 bytes again F=1:0;while(F<1e8)F=c(F[1]+F[2],F);F[order((F-scan())^2)][1] – djhurio Jul 19 '17 at 18:21 • Using a builtin to generate the fibnums is shorter: numbers::fibonacci(x<-scan(),T) – JAD Jul 20 '17 at 11:42 ## JavaScript (ES6), 41 bytes f=(n,x=0,y=1)=>y<n?f(n,y,x+y):y-n>n-x?x:y <input type=number min=0 value=0 oninput=o.textContent=f(this.value)><pre id=o>0 Rounds up by preference. • Almost identical to the version I was working on. At least you didn't use the same variable names or I would have been freaked out. – Grax Jul 19 '17 at 16:37 • @Grax Huh, now you mention it, Business Cat beat me to it... – Neil Jul 19 '17 at 16:49 • (Well, almost... I made my version work with 0, because why not?) – Neil Jul 19 '17 at 16:50 • f=(n,x=0,y=1)=>x*(2*n<x+y)||f(n,y,x+y) Since you don't have to work with 0, you can golf a bit more. – Alix Eisenhardt Jul 21 '17 at 13:00 # Jelly, 9 7 bytes -2 bytes thanks to @EriktheOutgolfer ‘RÆḞạÐṂ Try it online! Golfing tips welcome :). Takes an int for input and returns an int-list. ' input -> 4 ‘ ' increment -> 5 R ' range -> [1,2,3,4,5] ÆḞ ' fibonacci (vectorizes) -> [1,1,2,3,5,8] ÐṂ ' filter and keep the minimum by: ạ ' absolute difference -> [3,3,2,1,1,4] ' after filter -> [3,5] • You can remove µḢ. – Erik the Outgolfer Jul 19 '17 at 17:12 • @EriktheOutgolfer as in: "There is a way to do it if you think about it", or as in "If you literally just backspace them it still works"? – nmjcman101 Jul 19 '17 at 17:14 • As in "it's allowed by the rules". :P – Erik the Outgolfer Jul 19 '17 at 17:17 • Ah. Thank you! (Filler text) – nmjcman101 Jul 19 '17 at 17:18 # Mathematica, 30 bytes Array[Fibonacci,2#]~Nearest~#& Try it online! # x86-64 Machine Code, 24 bytes 31 C0 8D 50 01 92 01 C2 39 FA 7E F9 89 D1 29 FA 29 C7 39 D7 0F 4F C1 C3 The above bytes of code define a function in 64-bit x86 machine code that finds the closest Fibonacci number to the specified input value, n.
The function follows the System V AMD64 calling convention (standard on Gnu/Unix systems), such that the sole parameter (n) is passed in the EDI register, and the result is returned in the EAX register. Ungolfed assembly mnemonics: ; unsigned int ClosestFibonacci(unsigned int n); xor eax, eax ; initialize EAX to 0 lea edx, [rax+1] ; initialize EDX to 1 CalcFib: xchg eax, edx ; swap EAX and EDX add edx, eax ; EDX += EAX cmp edx, edi jle CalcFib ; keep looping until we find a Fibonacci number > n mov ecx, edx ; temporary copy of EDX, because we 'bout to clobber it sub edx, edi sub edi, eax cmp edi, edx cmovg eax, ecx ; EAX = (n-EAX > EDX-n) ? EDX : EAX ret Try it online! The code basically divides up into three parts: • The first part is very simple: it just initializes our working registers. EAX is set to 0, and EDX is set to 1. • The next part is a loop that iteratively calculates the Fibonacci numbers on either side of the input value, n. This code is based on my previous implementation of Fibonacci with subtraction, but…um…isn't with subtraction. :-) In particular, it uses the same trick of calculating the Fibonacci number using two variables—here, these are the EAX and EDX registers. This approach is extremely convenient here, because it gives us adjacent Fibonacci numbers. The candidate potentially less than n is held in EAX, while the candidate potentially greater than n is held in EDX. I'm quite proud of how tight I was able to make the code inside of this loop (and even more tickled that I re-discovered it independently, and only later realized how similar it was to the subtraction answer linked above). • Once we have the candidate Fibonacci values available in EAX and EDX, it is a conceptually simple matter of figuring out which one is closer (in terms of absolute value) to n. Actually taking an absolute value would cost way too many bytes, so we just do a series of subtractions. The comment out to the right of the penultimate conditional-move instruction aptly explains the logic here. This either moves EDX into EAX, or leaves EAX alone, so that when the function RETurns, the closest Fibonacci number is returned in EAX. In the case of a tie, the smaller of the two candidate values is returned, since we've used CMOVG instead of CMOVGE to do the selection. It is a trivial change, if you'd prefer the other behavior. Returning both values is a non-starter, though; only one integer result, please! . 2 } + " . | ' = = ' . @ . & } 1 . ! _ | . _ } $_ } { Broken down: start: ? { 2 ' * //set up 2*target number " ' 1 //initialize curr to 1 main loop: } = + //next + curr + last " - //test = next - (2*target) branch: <= 0 -> continue; > 0 -> return continue: { } = & //last = curr } = & //curr = next return: { } ! @ //print last Like some other posters, I realized that when the midpoint of last and curr is greater than the target, the smaller of the two is the closest or tied for closest. The midpoint is at (last + curr)/2. We can shorten that because next is already last + curr, and if we instead multiply our target integer by 2, we only need to check that (next - 2*target) > 0, then return last. # Brachylog, 22 bytes ;I≜-.∧{0;1⟨t≡+⟩ⁱhℕ↙.!} Try it online! Really all I've done here is paste together Fatalize's classic Return the closest prime number solution and my own Am I a Fibonacci Number? solution. 
Fortunately, the latter already operates on the output variable; unfortunately, it also includes a necessary cut which has to be isolated for +2 bytes, so the only choice point it discards is ⁱ, leaving ≜ intact. # Japt-g, 8 bytes ò!gM ñaU Try it # Java 7, 244 234 Bytes String c(int c){for(int i=1;;i++){int r=f(i);int s=f(i-1);if(r>c && s<c){if(c-s == r-c)return ""+r+","+s;else if(s-c > r-c)return ""+r;return ""+s;}}} int f(int i){if(i<1)return 0;else if(i==1)return 1;else return f(i-2)+f(i-1);} • Why don't you use Java 8 and turn this into a lambda? You can also remove static if you want to stick with Java 7. – Okx Jul 19 '17 at 15:54 • You have two errors in your code (r>c&&s<c should be r>=c&&s<=c, s-c should be c-s), You could remove not required whitespace, use int f(int i){return i<2?i:f(--i)+f(--i);}, use a single return statement with ternary operator in c and remove the special handling for c-s==r-c as returning either value is allowed. – Nevay Jul 19 '17 at 20:58 • @Nevay I don't see the error, I've tested it without fails – 0x45 Jul 20 '17 at 7:10 # Pyke, 6 bytes }~F>R^ Try it online! } - input*2 ~F - infinite list of the fibonacci numbers > - ^[:input] R^ - closest_to(^, input) # Common Lisp, 69 bytes (lambda(n)(do((x 0 y)(y 1(+ x y)))((< n y)(if(<(- n x)(- y n))x y)))) Try it online! # Perl 6, 38 bytes {(0,1,*+*...*>$_).sort((*-$_).abs)[0]} Test it { # bare block lambda with implicit parameter 「$_」 ( # generate Fibonacci sequence 0, 1, # seed the sequence * + * # WhateverCode lambda that generates the rest of the values ... # keep generating until * > $_ # it generates one larger than the original input # (that larger value is included in the sequence) ).sort( # sort it by ( * -$_ ).abs # the absolute difference to the original input )[0] # get the first value from the sorted list } For a potential speed-up add .tail(2) before .sort(…). In the case of a tie, it will always return the smaller of the two values, because sort is a stable sort. (two values which would sort the same keep their order) # Pyth, 19 bytes JU2VQ=+Js>2J)hoaNQJ Try it here ### Explanation JU2VQ=+Js>2J)hoaNQJ JU2 Set J = [0, 1]. VQ=+Js>2J) Add the next <input> Fibonacci numbers. oaNQJ Sort them by distance to <input>. h Take the first. (%)a b x|abs(b-x)>abs(a-x)=a|1>0=b%(a+b)\$x
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2757602334022522, "perplexity": 2216.0916012226344}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574265.76/warc/CC-MAIN-20190921043014-20190921065014-00200.warc.gz"}
http://mathhelpforum.com/calculus/65365-inverse-x-sin-x.html
We can show the inverse exists by working with the fact that $(x + \sin x)' > 0$ for $x \not= \pi n$ and $= 0$ for $x = \pi n$. Therefore, the function is increasing. But to find the inverse you need to be able to solve the equation $y + \sin (y) = x$. I do not think there is a "nice" way to solve this equation.
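Since there is no closed form, one can invert numerically. Here is a small Python sketch using bisection, which works because $y \mapsto y + \sin y$ is non-decreasing:

```python
import math

def inverse_of_x_plus_sin(x, tol=1e-12):
    """Numerically solve y + sin(y) = x by bisection.
    Since |sin| <= 1, the root lies in [x - 1, x + 1]."""
    lo, hi = x - 1.0, x + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mid + math.sin(mid) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 2.0
y = inverse_of_x_plus_sin(x)
print(y, y + math.sin(y))   # the second number should reproduce x = 2.0
```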
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9898546934127808, "perplexity": 26.649376731700073}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806310.85/warc/CC-MAIN-20171121021058-20171121041058-00466.warc.gz"}
https://jmlr.org/papers/v6/goldsmith05a.html
## New Horn Revision Algorithms Judy Goldsmith, Robert H. Sloan; 6(64):1919−1938, 2005. ### Abstract A revision algorithm is a learning algorithm that identifies the target concept, starting from an initial concept. Such an algorithm is considered efficient if its complexity (in terms of the measured resource) is polynomial in the syntactic distance between the initial and the target concept, but only polylogarithmic in the number of variables in the universe. We give efficient revision algorithms in the model of learning with equivalence and membership queries. The algorithms work in a general revision model where both deletion and addition revision operators are allowed. In this model one of the main open problems is the efficient revision of Horn formulas. Two revision algorithms are presented for special cases of this problem: for depth-1 acyclic Horn formulas, and for definite Horn formulas with unique heads. [abs][pdf][bib]
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9475777745246887, "perplexity": 1186.9030560679698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949689.58/warc/CC-MAIN-20230331210803-20230401000803-00409.warc.gz"}
https://worldwidescience.org/topicpages/a/ansiedad+parte+ii.html
#### Sample records for ansiedad parte ii 1. Inquérito epidemiológico em população idosa (parte II): saúde bucal, ansiedade, depressão, estresse e uso de medicamentos = Epidemiological survey in elderly people (II): oral health, anxiety, depression, stress, and drug utilization Directory of Open Access Journals (Sweden) Silva, Rafael Menezes 2016-01-01 2. Inquérito epidemiológico em população idosa (parte II): saúde bucal, ansiedade, depressão, estresse e uso de medicamentos = Epidemiological survey in elderly people (II): oral health, anxiety, depression, stress, and drug utilization OpenAIRE Silva, Rafael Menezes; Oliveira, Dhelfeson Willya Douglas de; Biscaro, Paulo Cesar Brambilla; Orti, Natália Pinheiro; Sá-Pinto, Ana Clara; Ramos-Jorge, Maria Letícia 2016-01-01 Objectives: To investigate the correlations between oral health, anxiety, depression, stress, cognitive alterations and medication use in elderly people. Methods: Data were collected at the participants' homes, through questionnaires on sociodemographic data and medication use; oral examinations to determine the index of decayed, missing and filled teeth; and instruments to assess the presence of psychosocial disorders. The following instruments were applied: the Mini Mental State Examina... 3. Workshop 96. Part II International Nuclear Information System (INIS) 1995-12-01 Part II of the seminar proceedings contains contributions in various areas of science and technology, among them materials science in mechanical engineering; materials science in electrical, chemical and civil engineering; and electronics, measuring and communication engineering. In those areas, 6 contributions have been selected for INIS. (P.A.) OpenAIRE Rincón Hoyos, Hernán Gilberto; Fundación Valle de Lili 1997-01-01 Anxiety / Manifestations of fear and anxiety / Bodily manifestations of anxiety / Psychological and cognitive manifestations of anxiety / Causes of anxiety / Classification of the American Psychiatric Association (DSM-IV) / Panic disorder / Obsessive-compulsive disorder (OCD) / Post-traumatic stress disorder and acute stress / Generalized anxiety disorder (GAD) / Questions, answers and recommendations. 5. Stiffnites. Part II Directory of Open Access Journals (Sweden) Maria Teresa Pareschi 2011-06-01 The dynamics of a stiffnite are here inferred. A stiffnite is a sheet-shaped, gravity-driven submarine sediment flow, with a fabric made up of marine ooze. To infer stiffnite dynamics, order-of-magnitude estimations are used. Field deposits and experiments on materials taken from the literature are also used. Stiffnites can be tens or hundreds of kilometers wide, and a few centimeters/meters thick. They move on the sea slopes over hundreds of kilometers, reaching submarine velocities as high as 100 m/s. Hard grain friction favors grain fragmentation and the formation of triboelectrically electrified particles and triboplasma (i.e., ions + electrons). Marine lipids favor isolation of electrical charges. At first, two basic assumptions are introduced, and checked a posteriori: (a) in a flowing stiffnite, magnetic dipole moments develop, with the magnetization proportional to the shear rate. I have named those dipoles Ambigua. (b) Ambigua are 'vertically frozen' along stiffnite streamlines. From (a) and (b), it follows that: (i) Ambigua create a magnetic field (at peak, >1 T). (ii) Lorentz forces sort stiffnite particles into two superimposed sheets.
The lower sheet, L+, has a sandy granulometry and a net positive electrical charge density. The upper sheet, L–, has a silty muddy granulometry and a net negative electrical charge density; the grains of sheet L– become finer upwards. (iii) Faraday forces push ferromagnetic grains towards the base of a stiffnite, so that a peak of magnetic susceptibility characterizes a stiffnite deposit. (iv) Stiffnites harden considerably during their motion, due to magnetic confinement. Stiffnite deposits and inferred stiffnite characteristics are compatible with a stable flow behavior against bending, pinch, or other macro instabilities. In the present report, a consistent hypothesis about the nature of Ambigua is provided. 6. Part II. Population International Nuclear Information System (INIS) 2004-01-01 Directory of Open Access Journals (Sweden) Alberto Acosta 2009-10-01 Full Text Available Having an anxious personality, or being anxious in a particular situation, makes us attend differently to what is happening. Recent research is uncovering the specific relationships of trait anxiety and state anxiety with different attentional processes. Therapeutic intervention to relieve anxiety disorders, which are so frequent in our times, will benefit from this knowledge. 8. Exploring Water Pollution. Part II Science.gov (United States) Rillo, Thomas J. 1975-01-01 This is part two of a three-part article related to the science activity of exploring environmental problems. Part one dealt with background information for the classroom teacher. Presented here is a suggested lesson plan on water pollution. Objectives, important concepts and instructional procedures are suggested. (EB) 9. Roots/Routes: Part II Science.gov (United States) Swanson, Dalene M. 2009-01-01 This narrative acts as an articulation of a journey of many routes. Following Part I of the same research journey of rootedness/routedness, it debates the nature of transformation and transcendence beyond personal and political paradoxes informed by neoliberalism and related repressive globalizing discourses. Through a more personal, descriptive,… 10. Understanding Radiation Thermometry. Part II Science.gov (United States) Risch, Timothy K. 2015-01-01 This document is a two-part course on the theory and practice of radiation thermometry. Radiation thermometry is the technique for determining the temperature of a surface or a volume by measuring the electromagnetic radiation it emits. This course covers the theory and practice of radiative thermometry and emphasizes the modern application of the field using commercially available electronic detectors and optical components. The course covers the historical development of the field and the fundamental physics of radiative surfaces, along with modern measurement methods and equipment. OpenAIRE Salaberría, Karmele; Fernández-Montalvo, Javier; Echeburúa, Enrique 1995-01-01 This paper analyses anxiety in its different psychological senses. There is no other emotion, nor indeed any other concept, that serves to name at the same time a normal adaptive reaction, a personality trait, and a group of behavioural disorders (the anxiety disorders). 12.
Ansiedade no período pré-operatório de cirurgias de mama: estudo comparativo entre pacientes com suspeita de câncer e a serem submetidas a procedimentos cirúrgicos estéticos Ansiedad en el período preoperatorio de cirugías de mama: estudio comparativo entre pacientes con sospecha de cáncer a ser sometidas a procedimientos quirúrgicos estéticos Preoperative anxiety in surgeries of the breast: a comparative study between patients with suspected breast cancer and those undergoing cosmetic surgery Directory of Open Access Journals (Sweden) Maria Luiza Melo Alves 2007-04-01 13. Evitación experiencial, afrontamiento y ansiedad en estudiantes de una universidad pública de Lima Metropolitana Directory of Open Access Journals (Sweden) Pablo D. Valencia 2017-04-01 OpenAIRE Martínez-Monteagudo, Mari Carmen; García-Fernández, José Manuel; Inglés, Cándido J. 2013-01-01 Different studies have analyzed school anxiety as a unitary construct, without attending to the different situations and response systems that make up this construct. The present study considers school anxiety as a multidimensional construct, and its objective was to analyze the relationships and the predictive capacity of the situations and response systems of school anxiety with respect to trait anxiety, state anxiety and depression. The School Anxiety Inventory (IAES), ... 15. CISG Part II in Nordic Context DEFF Research Database (Denmark) Lookofsky, Joseph 2015-01-01 In 2015, as the Nordic countries celebrate the 100th anniversary of the Nordic Contract Act (NCA), there is also good reason to celebrate the fact that - due to recent developments - the original field of NCA application has been narrowed in one important respect. In particular, the contract formation rules in NCA Chapter I – which for nearly 100 years applied by default to all contracts – no longer apply to contracts for the international sale of goods. As regards this latter significant contract category, Chapter I of the NCA has (except for inter-Nordic sales) been pre-empted, i.e. replaced, by Part II of the 1980 United Nations Convention on Contracts for the International Sale of Goods (CISG). 16. Ansiedad, angustia y estrés: tres conceptos a diferenciar OpenAIRE Juan Carlos Sierra; Virgilio Ortega; Ihab Zubeidat 2003-01-01 The aim of this work is to review the concepts of anxiety, anguish and stress in order to delimit the overlap between them (especially between anxiety and anguish on the one hand, and anxiety and stress on the other); it also seeks to identify aspects that make it possible to differentiate these concepts. To achieve this aim, we offer a general introduction to the conceptual confusion that has arisen around these term... OpenAIRE Fernández Geijo, Julia 2014-01-01 The present work carries out a study to evaluate the anxiety responses of 33 first-year upper-secondary (bachillerato) students prior to sitting an exam. Their trait-anxiety scores and the frequency of irrational thoughts were also evaluated, observing how these influence their performance 18. Reproduce and die! Why aging? Part II NARCIS (Netherlands) Schuiling, GA Whilst part I of this diptych on aging discussed the question of why aging exists at all, this part deals with the question of which mechanisms underlie aging and, ultimately, dying. It appears that aging is not just an active process as such - although all kinds of internal (e.g., oxygen-free 19.
CHILD WELFARE IN CANADA : PART II OpenAIRE 松本, 眞一; Shinichi, Matsumoto; 桃山学院大学社会学部 2006-01-01 This part study aims to research on the whole aspect of child protection in Canada. And so, this paper consists of five chapters as follows: (1)Canadian history of child protection, (2)definition of child abuse, (3)current situation of child protection in Canada, (4)outline of child protection and treatment, (5)triangular comparison of child protection and prevention in Canada, Australia and England. The first efforts at identifying and combating child abuse occurred in the latter part of the... 20. The Many Meanings of History, Part II Science.gov (United States) Szasz, Ferenc M. 1974-01-01 This article contains a collection of quotations about history collected by Professor Szasz. The first part of the collection appeared in the August 1974 issue of "The History Teacher." Readers are invited to send in other definitions they have found. (Author/RM) 1. A Fundamental Breakdown. Part II: Manipulative Skills Science.gov (United States) Townsend, J. Scott; Mohr, Derek J. 2005-01-01 In the May, 2005, issue of "TEPE," the "Research to Practice" section initiated a two-part series focused on assessing fundamental locomotor and manipulative skills. The series was generated in response to research by Pappa, Evanggelinou, & Karabourniotis (2005), recommending that curricular programming in physical education at the elementary… 2. Plasma Astrophysics, Part II Reconnection and Flares CERN Document Server Somov, Boris V 2013-01-01 This two-part book is devoted to classic fundamentals and current practices and perspectives of modern plasma astrophysics. This second part discusses the physics of magnetic reconnection and flares of electromagnetic origin in space plasmas in the solar system, single and double stars, relativistic objects, accretion disks and their coronae. More than 25% of the text is updated from the first edition, including the additions of new figures, equations and entire sections on topics such as topological triggers for solar flares and the magnetospheric physics problem. This book is aimed at professional researchers in astrophysics, but it will also be useful to graduate students in space sciences, geophysics, applied physics and mathematics, especially those seeking a unified view of plasma physics and fluid mechanics. 3. LHC related projects and studies - Part (II) International Nuclear Information System (INIS) Rossi, L.; De Maria, R. 2012-01-01 The session was devoted to address some aspects of the HL-LHC (High Luminosity LHC) project and explore ideas on new machines for the long term future. The session had two parts. The former focused on some of the key issues of the HL-LHC projects: beam current limits, evolution of the collimation system, research plans for the interaction region magnets and crab cavities. The latter explored the ideas for the long term future projects (LHeC and HE-LHC) and how the current research-development program for magnets and RF structures could fit in the envisaged scenarios 4. Short history of PACS (Part II: Europe) International Nuclear Information System (INIS) Lemke, Heinz U. 2011-01-01 Although the concept of picture archiving and communications systems (PACS) was developed in Europe during the latter part of the 1970s, no working system was completed at that time. The first PACS implementations took place in the United States in the early 1980s, e.g. at Pennsylvania University, UCLA, and Kansas City University. 
Some more or less successful PACS developments also took place in Europe in the 1980s, particularly in the Netherlands, Belgium, Austria, the United Kingdom, France, Italy, Scandinavia, and Germany. Most systems could be characterized by their focus on a single department, such as radiology or nuclear medicine. European hospital-wide PACS with high visibility evolved in the early 1990s in London (Hammersmith Hospital) and Vienna (SMZO). These were followed during the latter part of the 1990s by approximately 10-20 PACS installations in each of the major industrialized countries of Europe. Wide-area PACS covering several health care institutions in a region are now in the process of being implemented in a number of European countries. Because of limitations of space some countries, for example, Denmark, Finland, Spain, Greece, as well as Eastern European countries, etc. could not be appropriately represented in this paper. 5. Ansiedad al tratamiento odontológico: Características y diferencias de género OpenAIRE Caycedo, Martha; Colorado, Patricia; Rodríguez, Helena; Gama, Rocío; Cortés, Omar Fernando; Caycedo, Claudia; Barahona, Germán; Palencia, Rafael 2008-01-01 Este trabajo hace parte de un estudio mayor sobre la convergencia entre el reporte del odontólogo acerca de la ansiedad de sus pacientes y las respuestas de los pacientes a dos escalas de ansiedad ante el tratamiento odontológico, llevado a cabo con una muestra de 132 odontólogos y sus correspondientes 913 pacientes en Bogotá, Colombia. Se presentan los datos correspondientes a las respuestas de los pacientes a dos instrumentos de autorreporte acerca de la ansiedad ante los tratamientos odont... 6. Society. Part II: moderate to severe psoriasis Directory of Open Access Journals (Sweden) Jacek Szepietowski 2014-11-01 Full Text Available Psoriasis is a chronic inflammatory skin disease affecting about 1–3% of the general population. Recent years have seen great development in the treatment of this dermatosis, especially regarding moderate to severe psoriasis. More numerous and more widely available systemic therapies raise new challenges for all physicians treating patients with psoriasis. New questions arise about patients’ follow-up and long-term safety of such therapies. To meet the expectations of Polish dermatologists, we have prepared a second part of guidelines on the treatment of psoriasis, particularly concentrated on the therapy of severe forms of this disease. We hope that our suggestions will be valuable for physicians in their daily clinical practice. However, we would like to underline that every guideline is characterized by some vagueness, and the final decision about diagnosis and therapy should always be made individually for every patient based on the patient’s current clinical status and the most up-to-date scientific literature data. 7. TEXTILE STRUCTURES FOR AERONAUTICS (PART II Directory of Open Access Journals (Sweden) SOLER Miquel 2014-05-01 Full Text Available Three-dimensional (3D textile structures with better delamination resistance and damage impact tolerance to be applied in composites for structural components is one of the main goals of the aeronautical industry. Textile Research Centre in Canet de Mar has been working since 2008 in this field. Our staff has been designing, developing and producing different textile structures using different production methods and machinery to improve three-dimensional textile structures as fiber reinforcement for composites. 
This paper describes different tests done in our textile labs from unidirectional structures to woven, knitted or braided 3 D textile structures. Advantages and disadvantages of each textile structure are summarized. The second part of this paper deals with our know-how in the manufacturing and assessing of three-dimensional textile structures during this last five years in the field of textile structures for composites but also in the development of structures for other applications. In the field of composites for aeronautic sector we have developed textile structures using the main methods of textile production, that is to say, weaving, warp knitting, weft knitting and braiding. Comparing the advantages and disadvantages it could be said that braided fabrics, with a structure in the three space axes are the most suitable for fittings and frames. 8. [Conceptual Development in Cognitive Science. Part II]. Science.gov (United States) Fierro, Marco 2012-03-01 Cognitive science has become the most influential paradigm on mental health in the late 20(th) and the early 21(st) centuries. In few years, the concepts, problem approaches and solutions proper to this science have significantly changed. Introduction and discussion of the fundamental concepts of cognitive science divided in four stages: Start, Classic Cognitivism, Connectionism, and Embodying / Enacting. The 2(nd) Part of the paper discusses the above mentioned fourth stage and explores the clinical setting, especially in terms of cognitive psychotherapy. The embodying/enacting stage highlights the role of the body including a set of determined evolutionary movements which provide a way of thinking and exploring the world. The performance of cognitive tasks is considered as a process that uses environmental resources that enhances mental skills and deploys them beyond the domestic sphere of the brain. On the other hand, body and mind are embedded in the world, thus giving rise to cognition when interacting, a process known as enacting. There is a close connection between perception and action, hence the interest in real-time interactions with the world rather than abstract reasoning. Regarding clinics, specifically the cognitive therapy, there is little conceptual discussion maybe due to good results from practice that may led us to consider that theoretical foundations are firm and not problem-raising. Copyright © 2012 Asociación Colombiana de Psiquiatría. Publicado por Elsevier España. All rights reserved. 9. Development of aero-engines, part II Energy Technology Data Exchange (ETDEWEB) Dehn, K. 1943-01-01 In the second part of his paper, Mr. Dehn enters into the area of problems in engine construction and design. The main question was, how can more power be coaxed from an engine without increasing its stroke displacement and weight. To begin with, a change in combustion-chamber configuration was made. A hemispheric chamber appeared to be the best design. Into this new design was incorporated a system to swirl the fuel-air mixture for better combustion and more power. Also needed was relocation of the spark plug. Since there was better combustion, valves had to be improved. These were hollowed, filled with sodium and finished with chromium or stellite for more hardness. However, the temperature created by these improvements proved to be a serious problem in engine durability. Heat corrosion destroyed pistons and rings, causing the engine not just to seize-up, but to come apart. This resulted in the loss of aircraft. 
Finally, a system of cooling the pistons from beneath by an oil spray increased both performance and piston life. Dehn summarizes his paper by comparing two cylinder units from Curtis-Wright radial engines, one old, the other new. He claims that these represented advances in engine technology in all areas he discussed. First, the size had not been increased, but the newer one developed more power. This had been accomplished, he reports, by fuel development reducing pre-detonation. The super-charging, and most important, the re-design of the cooling fins on the cylinder head exterior, provided greater thermal capacity for the engine. In this way, power had been increased more than 100 percent. The author concludes his paper by stating that this type of development is possible only through advances in, and use of, materials technology. That would involve intensive co-operation between the scientists and engineers working in fuels development and aviation engineering. 10. Critical appraisal: dental amalgam update--part II: biological effects. Science.gov (United States) Wahl, Michael J; Swift, Edward J 2013-12-01 Dental amalgam restorations have been controversial for over 150 years. In Part I of this Critical Appraisal, the clinical efficacy of dental amalgam was updated. Here in Part II, the biological effects of dental amalgam are addressed. © 2013 The Authors.Journal of Esthetic and Restorative Dentistry © 2013 Wiley Periodicals, Inc. 11. Kick, Glide, Pole! Cross-Country Skiing Fun (Part II) Science.gov (United States) Duoos, Bridget A. 2012-01-01 Part I of Kick, Glide, Pole! Cross-Country Skiing Fun, which was published in last issue, discussed how to select cross-country ski equipment, dress for the activity and the biomechanics of the diagonal stride. Part II focuses on teaching the diagonal stride technique and begins with a progression of indoor activities. Incorporating this fun,… 12. Apudomas Pancreáticos (Parte II Directory of Open Access Journals (Sweden) Alfredo L. Jácome 1990-12-01 Full Text Available Aspectos Clínicos I. Diagnóstico de los Apudomas En este apartado describiremos algunos de los aspectos más relevantes en cuanto al diagnóstico clínico, paraclínico y anatomopatológico de los apudomas conocidos en el páncreas, basándonos en algunas casuísticas nacionales y extranjeras. A. Insulinoma En 1927 se informó el primer caso en la Clínica Mayo. Esta misma institución posee una de las series más grandes sobre el tema con 200 casos reportados; sin embargo, su incidencia global es del orden de menos de I caso por 100.000 habitantes. En una serie de 1.067 pacientes realizada en Italia, el tumor se presentó aproximadamente en un 60% en mujeres y un 40% en hombres, siendo el rango de edad entre los 30 y 60 años (21. l. Manifestaciones clínicas Las manifestaciones son debidas a la hipoglicemia secundaria al hiperinsulinismo circulante. Los síntomas neurosiquiátricos incluyen: pérdida de conciencia, confusión, vértigo, alteraciones visuales, astenia, coma profundo y epilepsia; se han reportado casos con daño del sistema nervioso central irreversible y parálisis temporal; además también puede haber somnolencia, amnesia, ataxia, cefalea, parestesias, signo de Babinski, agitación e irritabilidad. Estos se explican, en gran parte, debido a que elcerebro depende casi exclusivamente de la oxidación de la glucosa para proveer sus necesidades energéticas. 
En estos pacientes también son frecuentes síntomas de tipo adrenérgico tales como sudoración, temblor, empalidecimiento y síntomas cardiovasculares como palpitaciones, taquicardia, dolor precordial e hipertensión arterial. También se presentan síntomas gastrointestinales como sensación de hambre, vómito y, ocasionalmente, dolor epigástrico. A diferencia de los gastrinomas, estos tumores rara vez se asocian con la adenomatosis múltiple endocrina (MEA: probablemente est 13. Caring communications: how technology enhances interpersonal relations, Part II. Science.gov (United States) Simpson, Roy L 2008-01-01 Part I of this 2-part series about technology's role in interpersonal communications examined how humans interact; proposed a caring theory of communication, collaboration, and conflict resolution; and delineated ways that technology--in general--supports this carative model of interpersonal relations. Part II will examine the barriers to adoption of carative technologies, describe the core capabilities required to overcome them, and discuss specific technologies that can support carative interpersonal relationships. 14. Prevalencia de ansiedad en estudiantes universitarios Directory of Open Access Journals (Sweden) Jaiberth A. Cardona-Arias 2015-01-01 15. Recent Economic Perspectives on Political Economy, Part II* Science.gov (United States) Dewan, Torun; Shepsle, Kenneth A. 2013-01-01 In recent years some of the best theoretical work on the political economy of political institutions and processes has begun surfacing outside the political science mainstream in high quality economics journals. This two-part paper surveys these contributions from a recent five-year period. In Part I, the focus is on elections, voting and information aggregation, followed by treatments of parties, candidates, and coalitions. In Part II, papers on economic performance and redistribution, constitutional design, and incentives, institutions, and the quality of political elites are discussed. Part II concludes with a discussion of the methodological bases common to economics and political science, the way economists have used political science research, and some new themes and arbitrage opportunities. PMID:23606754 16. Nursing Care of Patients Undergoing Chemotherapy Desensitization: Part II. Science.gov (United States) Jakel, Patricia; Carsten, Cynthia; Carino, Arvie; Braskett, Melinda 2016-04-01 Chemotherapy desensitization protocols are safe, but labor-intensive, processes that allow patients with cancer to receive medications even if they initially experienced severe hypersensitivity reactions. Part I of this column discussed the pathophysiology of hypersensitivity reactions and described the development of desensitization protocols in oncology settings. Part II incorporates the experiences of an academic medical center and provides a practical guide for the nursing care of patients undergoing chemotherapy desensitization.
17. Algumas considerações sobre o paciente cirúrgico e a ansiedade OpenAIRE Peniche, Aparecida de Cássia Giani; Chaves, Eliane Corrêa 2000-01-01 This article is part of the results of the research project entitled "A influência da ansiedade na resposta do paciente no período pós-operatório imediato" (the influence of anxiety on the patient's response in the immediate postoperative period). Its aims are to present the theoretical aspects of anxiety and to share the difficulties involved in evaluating the patient's emotional state in the preoperative period, as well as the nurse's insufficient theoretical grounding for intervening in this situation. 18. OpenAIRE García López, Luis Joaquín; Rivero Burón, Raúl; Ramos Linares, Victoriano; Martínez González, Agustín Ernesto; Piqueras Rodríguez, José Antonio; Oblitas Guadalupe, Luis Armando 2008-01-01 This article attempts to present a synthesis of the data on the influence of emotional factors, specifically anxiety and depression, on the health-illness process. These factors have been associated with chronic diseases as variables that influence their onset, development and maintenance. Essentially, two general explanatory pathways have been hypothesized. The first refers to the influence of anxiety and depression on behaviour, in such a way... 19. Methods of humidity determination Part II: Determination of material humidity OpenAIRE Rübner, Katrin; Balköse, Devrim; Robens, E. 2008-01-01 Part II covers the most common methods of measuring the humidity of solid material. The state of water near solid surfaces, gravimetric measurement of material humidity, measurement of water sorption isotherms, chemical methods for the determination of water content, measurement of material humidity via the gas phase, standardisation, and cosmonautical observations are reviewed. 20. Inteligencia emocional y ansiedad en estudiantes universitarios Directory of Open Access Journals (Sweden) Ubaldo Rodríguez de Ávila 2011-07-01 1. HERBICIDAS INIBIDORES DO FOTOSSISTEMA II – PARTE I / PHOTOSYSTEM II INHIBITOR HERBICIDES - PART I Directory of Open Access Journals (Sweden) ILCA P. DE F. E SILVA 2013-11-01 Full Text Available Chemical control has been the most widely used weed-control method in large planted areas, mainly because it is fast and efficient. Photosystem II (PSII) inhibitor herbicides are fundamental to integrated weed management and to soil conservation practices. Application takes place in pre-emergence or early post-emergence of the weeds. Absorption is through the roots, with the Casparian strips acting as a barrier, and translocation occurs through the xylem. The absorption and translocation processes also depend on the characteristics of the product itself, such as its lipophilic and hydrophilic properties, which can be measured by the octanol-water partition coefficient (Kow). Photosynthesis is inhibited when herbicides of this group bind to the QB binding site on the D1 protein of photosystem II, located in the thylakoid membrane of the chloroplasts, blocking electron transport from QA to QB and interrupting CO2 fixation and the production of ATP and NADPH2. 2.
Avaliação da ansiedade e depressão no período pré-operatório em pacientes submetidos a procedimentos cardíacos invasivos Evaluación de la ansiedad y depresión en el período preoperatorio en pacientes sometidos a procedimientos cardíacos Evaluation of preoperative anxiety and depression in patients undergoing invasive cardiac procedures Directory of Open Access Journals (Sweden) Antonio Fernando Carneiro 2009-08-01 3. Forced Marriage-Culture or Crime? Part II OpenAIRE TAPP, David; JENKINSON, Susan 2013-01-01 This is Part II of the series.\\ud \\ud ‘Marriage shall be entered into only with the free and full consent of the intending spouses .’ \\ud \\ud It is important to begin by acknowledging the above statement, which is part of Article 16(2) of the Universal Declaration of Human Rights and also to distinguish between an arranged marriage and a forced marriage. \\ud \\ud An arranged marriage is ‘a marriage planned and agreed by the families or guardians of the couple concerned ’, while a forced marria... 4. Treatment of cellulite: Part II. Advances and controversies. Science.gov (United States) Khan, Misbah H; Victor, Frank; Rao, Babar; Sadick, Neil S 2010-03-01 5. Motivation and Anxiety in Tennis Players Motivación y ansiedad en jugadores de tenis Directory of Open Access Journals (Sweden) E. Cervelló 2010-09-01 Full Text Available This work studied the "goals achievement theory;" how dispositional goal orientation and the perception of the motivational climate are related to the different components of competitive anxiety (cognitive anxiety, somatic anxiety and self-confidence in high-level tennis players. To the accomplish this objective structural equation modelling was employed (SEM. The results show that perception of the motivational climate that tennis players perceive is related to dispositional goal orientation. Results also show positive relationships between perception of ego involving motivational climate and somatic and cognitive components of anxiety. On the other hand, results show that ego orientation is a negative predictor of cognitive anxiety. Finally, results show that task orientation is a positive and significant predictor of self-confidence. KEY WORDS: Goal orientation, motivational climate, competitive state anxiety, tennis Este estudio analiza desde la perspectiva social-cognitiva de las metas de logro como la orientación disposicional de los sujetos y la percepción del clima motivacional en los entrenamientos se relacionan con los diferentes componentes de la ansiedad estado precompetitiva (ansiedad cognitiva, ansiedad somática y autoconfianza en tenistas de alto nivel. Para ello se utilizó un análisis de ecuaciones estructurales (SEM. Los resultados muestran que el clima motivacional que los tenistas perciben en los entrenamientos se relaciona con la orientación disposicional que estos presentan, así como existe una relación directa de influencia entre el clima motivacional orientado al ego y los componentes cognitivo y somático de la ansiedad. Por otra parte, los resultados muestran como la orientación disposicional al ego se muestra como predictor significativo y negativo de la ansiedad cognitiva y la orientación disposicional a la tarea predice la autoconfianza de forma positiva. A 6. Relación entre ansiedad escénica, perfeccionismo y calificaciones en estudiantes del Título Superior de Música Directory of Open Access Journals (Sweden) Francisco Javier Zarza-Alzugaray 2016-02-01 7. Healing and relaxation in flows of helium II. Part II. 
First, second, and fourth sound International Nuclear Information System (INIS) Hills, R.N.; Roberts, P.H. 1978-01-01 In Part I of this series, a theory of helium II incorporating the effects of quantum healing and relaxation was developed. In this paper, the propagation of first, second, and fourth sound is discussed. Particular attention is paid to sound propagation in the vicinity of the lambda point where the effects of relaxation and quantum healing become important 8. Ansiedad pre operatoria en pacientes quirúrgicos en el área de cirugía del Hospital Isidro Ayora Directory of Open Access Journals (Sweden) Diana Carolina Gaona Rentería 2018-03-01 9. Ansiedad ante la muerte del sujeto anciano OpenAIRE Moya Faz, Francisco José 2007-01-01 El tema a tratar en esta Tesis Doctoral es la Ansiedad ante la muerte en el sujeto anciano. Objetivos. Los objetivos planteados en esta investigación han sido: Investigar mediante un análisis bibliométrico el estado actual de la cuestión del tema de la ansiedad ante la muerte para que estudiando así su producción científica se pueda conocer su relevancia así como las categorías temáticas más significativas en relación a éste. Estudiar la preocupación de la muerte respecto a variables como... Directory of Open Access Journals (Sweden) Mariola Lupiáñez Castillo 2016-01-01 11. Relações entre controle psicológico e comportamental materno e ansiedade infantil Directory of Open Access Journals (Sweden) Janaina Nascimento Teixeira 2016-01-01 12. Melanoma in situ: Part II. Histopathology, treatment, and clinical management. Science.gov (United States) Higgins, H William; Lee, Kachiu C; Galan, Anjela; Leffell, David J 2015-08-01 Melanoma in situ (MIS) poses special challenges with regard to histopathology, treatment, and clinical management. The negligible mortality and normal life expectancy associated with patients with MIS should guide treatment for this tumor. Similarly, the approach to treatment should take into account the potential for MIS to transform into invasive melanoma, which has a significant impact on morbidity and mortality. Part II of this continuing medical education article reviews the histologic features, treatment, and management of MIS. Copyright © 2015 American Academy of Dermatology, Inc. Published by Elsevier Inc. All rights reserved. 13. History and challenges of barium titanate: Part II Directory of Open Access Journals (Sweden) Vijatović M.M. 2008-01-01 Full Text Available Barium titanate is the first ferroelectric ceramics and a good candidate for a variety of applications due to its excellent dielectric, ferroelectric and piezoelectric properties. Barium titanate is a member of a large family of compounds with the general formula ABO3 which is called perovskite. Barium titanate can be prepared using different methods. The synthesis method depends on the desired characteristics for the end application and the method used has a significant influence on the structure and properties of barium titanate materials. In this review paper, in Part II the properties of obtained materials and their application are presented. 14. Signs of revision in Don Quixote, Part II Directory of Open Access Journals (Sweden) Gonzalo Pontón 2016-11-01 Full Text Available This article provides new evidences in favour of the hypothesis that Cervantes, after finishing Don Quixote, Part II, partially revised the original, introducing some significant changes and additions, mainly in the last chapters. 
The analysis of some narrative inconsistencies, that cannot be interpreted as mere mistakes but as significant textual traces, reveals a process of re-elaboration –a process that affects at least four sections of the novel. Most of the evidence gathered here suggests that this revision is closely linked to Avellaneda’s continuation, in the sense that Cervantes tried to challenge the apocriphal Quixote making last-time interventions in his own text. 15. Algumas considerações sobre o paciente cirúrgico e a ansiedade Algunas consideraciones sobre el paciente quirurgico y la ansiedad Surgical patient and anxiety: some consideration OpenAIRE Aparecida de Cássia Giani Peniche; Eliane Corrêa Chaves 2000-01-01 Este artigo é parte resultante da pesquisa intitulada "A influência da ansiedade na resposta do paciente no período pós-operatório imediato" e tem como objetivos oferecer os aspectos teóricos da ansiedade e compartilhar as dificuldades existentes em avaliar o estado emocional do paciente no período pré-operatório, assim como insuficiência de embasamento teórico da enfermeira para intervir nesta situação.Este articulo es parte resultante de la investigación titulada "La influencia de la ansied... 16. Structure Learning and Statistical Estimation in Distribution Networks - Part II Energy Technology Data Exchange (ETDEWEB) Deka, Deepjyoti [Univ. of Texas, Austin, TX (United States); Backhaus, Scott N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Chertkov, Michael [Los Alamos National Lab. (LANL), Los Alamos, NM (United States) 2015-02-13 Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled(LC) approximation of AC power flows equations, polynomial time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. Then the structure learning algorithm is extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms are demonstrated through simulations on several distribution test cases. 17. CE and nanomaterials - Part II: Nanomaterials in CE. Science.gov (United States) 2017-10-01 The scope of this two-part review is to summarize publications dealing with CE and nanomaterials together. This topic can be viewed from two broad perspectives, and this article is trying to highlight these two approaches: (i) CE of nanomaterials, and (ii) nanomaterials in CE. The second part aims at summarization of publications dealing with application of nanomaterials for enhancement of CE performance either in terms of increasing the separation resolution or for improvement of the detection. To increase the resolution, nanomaterials are employed as either surface modification of the capillary wall forming open tubular column or as additives to the separation electrolyte resulting in a pseudostationary phase. 
Moreover, nanomaterials have proven to be very beneficial for increasing also the sensitivity of detection employed in CE or even they enable the detection (e.g., fluorescent tags of nonfluorescent molecules). © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 18. The "Pseudocommando" mass murderer: part II, the language of revenge. Science.gov (United States) Knoll, James L 2010-01-01 In Part I of this article, research on pseudocommandos was reviewed, and the important role that revenge fantasies play in motivating such persons to commit mass murder-suicide was discussed. Before carrying out their mass shootings, pseudocommandos may communicate some final message to the public or news media. These communications are rich sources of data about their motives and psychopathology. In Part II of this article, forensic psycholinguistic analysis is applied to clarify the primary motivations, detect the presence of mental illness, and discern important individual differences in the final communications of two recent pseudocommandos: Seung-Hui Cho (Virginia Tech) and Jiverly Wong (Binghamton, NY). Although both men committed offenses that qualify them as pseudocommandos, their final communications reveal striking differences in their psychopathology. 19. Blade System Design Study. Part II, final project report (GEC). Energy Technology Data Exchange (ETDEWEB) Griffin, Dayton A. (DNV Global Energy Concepts Inc., Seattle, WA) 2009-05-01 As part of the U.S. Department of Energy's Low Wind Speed Turbine program, Global Energy Concepts LLC (GEC)1 has studied alternative composite materials for wind turbine blades in the multi-megawatt size range. This work in one of the Blade System Design Studies (BSDS) funded through Sandia National Laboratories. The BSDS program was conducted in two phases. In the Part I BSDS, GEC assessed candidate innovations in composite materials, manufacturing processes, and structural configurations. GEC also made recommendations for testing composite coupons, details, assemblies, and blade substructures to be carried out in the Part II study (BSDS-II). The BSDS-II contract period began in May 2003, and testing was initiated in June 2004. The current report summarizes the results from the BSDS-II test program. Composite materials evaluated include carbon fiber in both pre-impregnated and vacuum-assisted resin transfer molding (VARTM) forms. Initial thin-coupon static testing included a wide range of parameters, including variation in manufacturer, fiber tow size, fabric architecture, and resin type. A smaller set of these materials and process types was also evaluated in thin-coupon fatigue testing, and in ply-drop and ply-transition panels. The majority of materials used epoxy resin, with vinyl ester (VE) resin also used for selected cases. Late in the project, testing of unidirectional fiberglass was added to provide an updated baseline against which to evaluate the carbon material performance. Numerous unidirectional carbon fabrics were considered for evaluation with VARTM infusion. All but one fabric style considered suffered either from poor infusibility or waviness of fibers combined with poor compaction. The exception was a triaxial carbon-fiberglass fabric produced by SAERTEX. This fabric became the primary choice for infused articles throughout the test program. The generally positive results obtained in this program for the SAERTEX material have led to its 20. Intelligent control of HVAC systems. 
Part II: perceptron performance analysis Directory of Open Access Journals (Sweden) Ioan URSU 2013-09-01 Full Text Available This is the second part of a paper on intelligent type control of Heating, Ventilating, and Air-Conditioning (HVAC systems. The whole study proposes a unified approach in the design of intelligent control for such systems, to ensure high energy efficiency and air quality improving. In the first part of the study it is considered as benchmark system a single thermal space HVAC system, for which it is assigned a mathematical model of the controlled system and a mathematical model(algorithm of intelligent control synthesis. The conception of the intelligent control is of switching type, between a simple neural network, a perceptron, which aims to decrease (optimize a cost index,and a fuzzy logic component, having supervisory antisaturating role for neuro-control. Based on numerical simulations, this Part II focuses on the analysis of system operation in the presence only ofthe neural control component. Working of the entire neuro-fuzzy system will be reported in a third part of the study. 1. Factores psicosociales relacionados con la ansiedad competitiva de los deportistas en etapas de formación Directory of Open Access Journals (Sweden) M. Rocío Bohórquez Gómez-Millán 2017-01-01 2. Nursing as concrete philosophy, Part II: Engaging with reality. Science.gov (United States) Theodoridis, Kyriakos 2018-04-01 This is the second paper of an essay in two parts. The first paper (Part I) is a critical discussion of Mark Risjord's conception of nursing knowledge where I argued against the conception of nursing knowledge as a kind of nursing science. The aim of the present paper (Part II) is to explicate and substantiate the thesis of nursing as a kind of concrete philosophy. My strategy is to elaborate upon certain themes from Wittgenstein's Tractatus in order to canvass a general scheme of philosophy based on a distinction between reality and the world. This distinction will be employed in the appropriation of certain significant features of nursing and nursing knowledge. By elaborating on the contrast between the abstract and the concrete, I will suggest that nursing may be seen as a kind of concrete philosophy, being primarily concerned with reality (and secondarily with the world). This thesis, I will argue, implies that philosophy is the kind of theory that is essential to nursing (which is not so much a theory than a certain kind of activity). © 2017 John Wiley & Sons Ltd. 3. Fast transforms for acoustic imaging--part II: applications. Science.gov (United States) Ribeiro, Flávio P; Nascimento, Vítor H 2011-08-01 In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE Transactions on Image Processing], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state of the art array imaging algorithms. We also propose using the KAT to accelerate general purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. 
We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT. 4. Hysterosalpingographic Appearances of Female Genital Tract Tuberculosis: Part II: Uterus Directory of Open Access Journals (Sweden) 2014-03-01 Full Text Available Female genital tuberculosis remains as a major cause of tubal obstruction leading to infertility, especially in developing countries. The global prevalence of genital tuberculosis has increased during the past two decades due to increasing acquired immunodeficiency syndrome. Genital tuberculosis (TB is commonly asymptomatic and it is diagnosed during infertility investigations. Despite of recent advances in imaging tools such as computed tomography (CT scan, magnetic resonance imaging (MRI and ultrasongraphy, hysterosalpingography has been considered as the standard screening test for evaluation of tubal infertility and as a valuable tool for diagnosis of female genital tuberculosis. Tuberculosis gives rise to various appearances on hysterosalpingography (HSG from non-specific changes to specific findings. The present pictorial review illustrates and describes specific and non-specific radiographic features of female genital tuberculosis in two parts. Part I presents specific findings of tuberculosis related to tubes such as "beaded tube", "golf club tube", "pipestem tube", "cobble stone tube" and the "leopard skin tube". Part II will describe adverse effects of tuberculosis on structure of endometrium and radiological specific findings, such as "T-shaped" tuberculosis uterus, "pseudo-unicornuate "uterus, "collar-stud abscess" and "dwarfed" uterus with lymphatic intravasation and occluded tubes which have not been encountered in the majority of non-tuberculosis cases. OpenAIRE Fernández Valdés, José 2016-01-01 Sabemos que la ausencia de diagnóstico en la etapa infantil y adolescente de los trastornos de ansiedad favorece que los síntomas se vuelvan crónicos y muchos autores refieren la frecuencia de comorbilidad de síntomas en estas etapas a diferencia de lo que sucede en los adultos lo que dificulta la delimitación psicopatológica de los trastornos de ansiedad. Con un índice de prevalencia de dichos trastornos de ansiedad en la infancia y adolescencia que según autores oscila entre el 15% y 20% y ... 6. Puntualizaciones sobre la angustia y la ansiedad OpenAIRE Gómez, Amparo 2015-01-01 Tanto la angustia como la ansiedad son manifestaciones genéricas del sujeto, podemos decir que no hay ser humano que no haya padecido estos fenómenos. En los últimos tiempos, a través de los desarrollos provenientes de los llamados psicoanalistas posfreudianos así como en las clasificaciones de la Asociación Americana de Psiquiatría, ambos términos han caído en una indiferenciación. Incluso, en el caso específico de las reediciones del DSM (Diagnostic and Statistical Manual of Mental Disorder... 7. Relación entre los trastornos por ansiedad y alteraciones del oído interno OpenAIRE Chica Urzola, Heydy Luz 2010-01-01 La ansiedad es un proceso normal adaptativo a circunstancias que generan estrés o representan un desafío para quien la padece y puede tornarse en desadaptativa bajo algunas circunstancias. 
Como parte del espectro sintomático de la ansiedad, algunas de sus representaciones somáticas se relacionan con equilibrio y oído interno. Con frecuencia se encuentra relación entre la sintomatología neuropsiquiátrica y otorrinolaringológica. Las estadísticas internacionales así lo señalan. En Colombia no h... 8. Factores psicosociales relacionados con la ansiedad competitiva de los deportistas en etapas de formación OpenAIRE M. Rocío Bohórquez Gómez-Millán; Irene Checa Esquiva 2017-01-01 El objetivo de este trabajo es indagar la relación existente entre la ansiedad competitiva de futbolistas y tenistas en etapas de formación y la práctica de deporte por parte de los progenitores. Participaron en este estudio 246 niños deportistas de tenis y fútbol con una media de edad de 10,43 años (DT= 2,42), que han sido evaluados con un cuestionario de datos sociodemográ- ficos y el cuestionario de Ansiedad Competitiva SAS-2 (Ramis, Torregrosa, Viladrich, & Cruz, 2010). Utilizando pruebas... 9. Ansiedade e agressividade infantil sob o enfoque da psicologia transpessoal : uma interpretação kirliangrafica OpenAIRE Viviane França Dias 1999-01-01 Resumo: A ansiedade e a agressividade infantil tem sido objeto de muitos estudos, uma vez que tanto pais como educadores vêem-se muitas vezes envolvidos com tais problemas e verificam sua impotência na resolução dos mesmos. Ansiedade e agressividade parecem fazer parte do cotidiano das crianças, em maior ou menor grau. Neste trabalho, procurou-se determinar o papel da escola na minimização do problema, através de contribui... 10. Impacto do tipo de informação pré-anestésica sobre a ansiedade dos pais e das crianças Directory of Open Access Journals (Sweden) Débora de Oliveira Cumino 2013-12-01 11. Ansiedad al tratamiento odontológico: Características y diferencias de género Directory of Open Access Journals (Sweden) Caycedo, Martha 2008-09-01 Directory of Open Access Journals (Sweden) García López, Luis Joaquín 2008-09-01 13. 10 CFR Appendix II to Part 1050 - DOE Form 3735.3-Foreign Travel Statement Science.gov (United States) 2010-01-01 ... is official agency business. Spouses and dependents may accept such travel and expenses only when... 10 Energy 4 2010-01-01 2010-01-01 false DOE Form 3735.3-Foreign Travel Statement II Appendix II to.... II Appendix II to Part 1050—DOE Form 3735.3—Foreign Travel Statement EC01OC91.041 Statement... 14. 10 CFR Appendix II to Part 504 - Fuel Price Computation Science.gov (United States) 2010-01-01 ... 504—Fuel Price Computation (a) Introduction. This appendix provides the equations and parameters... responsible for computing the annual fuel price and inflation indices by using Equation II-1 and Equation II-2, respectively. The petitioner may compute the fuel price index specified in Equation II-1 or use his own price... 15. Part II--Management of pediatric post-traumatic headaches. Science.gov (United States) Pinchefsky, Elana; Dubrovsky, Alexander Sasha; Friedman, Debbie; Shevell, Michael 2015-03-01 16. All About Dowels - A Review Part II Considerations After Cementation Directory of Open Access Journals (Sweden) Zishan Dangra 2017-10-01 Full Text Available The present review summarizes the published literature examining cementation of the dowel and factors related to it. The peer reviewed English language literature was reviewed from the period 1990 to 2015. Articles were searched in Pubmed/ Medline for the relevant terms. Additional manual searches of some dental journals were also carried out. 
The original key terms resulted in 228 articles. After applying the inclusion criteria, 64 articles remained to be included in Part II of this review. The article search indicates that most published literature on dowels takes the form of in vitro analysis. The literature on prefabricated dowel systems far exceeds that on custom cast dowels and the newer fibre dowels. Clinical evidence is not sufficient and cannot be used to inform practice confidently. However, within the limitations of this review it is suggested that adhesive fixation is preferred in the case of short dowels. Dowel width should be as small as possible. A ferrule of 2 mm has to be provided. Composites have proven to be a good core material provided that adequate tooth structure remains for bonding. A dowel should be inserted if an endodontically treated tooth is to be used as an abutment for removable partial dentures. 17. 46 CFR Appendix II to Part 150 - Explanation of Figure 1 Science.gov (United States) 2010-10-01 ... COMPATIBILITY OF CARGOES, Appendix II to Part 150—Explanation of Figure 1: definition of a... , aromatic hydrocarbons or paraffins. Others will form hazardous combinations with many groups: for example... 18. Precompetición y ansiedad en fisicoculturistas OpenAIRE Félix Arbinaga Ibarzábal; José Carlos Caracuel Tubío 2005-01-01 This work set out to provide a first approximation to the anxiety states that bodybuilders show in the moments before competition. A total of 52 male bodybuilders took part, all with more than two years of experience and an average of 81.5 months of weight training. The instrument used to assess pre-competition anxiety was the "Competitive State Anxiety Inventory-2", which yielded mean cognitive anxiety scores of ... 19. Ansiedad y miedos dentales en escolares hondureños OpenAIRE Ivette Carolina Rivera Zelaya; Antonio Fernández Parra 2005-01-01 Anxiety about dental care and treatment can significantly affect children's oral health as well as the quality of the dental treatment received. Despite its importance, very few studies of children's dental anxiety and fear have been carried out in Latin America, and specifically in Honduras. This study assessed the dental anxiety of a random sample of 170 schoolchildren (6-11 years old) from the metropolitan area of Tegucigalpa. The assessment was carri... 20. Relação entre estressores, estresse e ansiedade Directory of Open Access Journals (Sweden) Margis Regina 2003-01-01 The authors present a brief review of the literature on the relationship between anxiety, stressor events and stress. The different stressful situations, the definition of a stressful life event, and the cognitive, behavioural and physiological aspects of the response to stress are described. The neuroanatomy and the main neurotransmitters involved in the physiological anxiety response to stress are described, and genetic studies pointing to stressful life events as a risk factor for anxiety are presented. The causal relationship between stressful life events and the onset of anxiety is addressed on the basis of studies carried out with adults and adolescents. 1. Fisicoculturismo: diferencias de sexo en el estado de ánimo y la ansiedad precompetitiva Directory of Open Access Journals (Sweden) Félix Arbinaga Ibarzábal 2013-01-01
2. Psychiatric emergencies (part II): psychiatric disorders coexisting with organic diseases. Science.gov (United States) Testa, A; Giannuzzi, R; Sollazzo, F; Petrongolo, L; Bernardini, L; Dain, S 2013-02-01 In this Part II, psychiatric disorders coexisting with organic diseases are discussed. The "comorbidity phenomenon" denotes the non-univocal interrelation between medical illnesses and psychiatric disorders, each negatively influencing the other's morbidity and mortality. The most severe psychiatric disorders, such as schizophrenia, bipolar disorder and depression, show an increased prevalence of cardiovascular disease, related to poverty, use of psychotropic medication, and a higher rate of preventable risk factors such as smoking, addiction, poor diet and lack of exercise. Moreover, psychiatric and organic disorders can develop together under different conditions of toxic substance and prescription drug use or abuse, especially in the emergency setting population. The different combinations and mutual interactions of psychiatric disorders and substance use disorders are covered by the term "dual diagnosis". The hypotheses that attempt to explain the relationship between psychiatric disorders and substance abuse are examined: (1) common risk factors; (2) psychiatric disorders precipitated by substance use; (3) psychiatric disorders precipitating substance use (the self-medication hypothesis); and (4) synergistic interaction. Diagnostic and therapeutic difficulties concerning the problem of dual diagnosis, and its legal implications, are also discussed. Substance-induced psychiatric and organic symptoms can occur in both the intoxication and the withdrawal state. Since ancient history, humans have selected indigenous psychotropic plants for recreational, medicinal, doping or spiritual purposes. After the isolation of their active principles, or their chemical synthesis, the higher blood concentrations reached predispose to substance use, abuse and dependence. Substances of abuse have specific molecular targets and very different acute mechanisms of action, mainly involving the dopaminergic and serotoninergic systems, but finally converging on the brain's reward pathways, increasing dopamine in the nucleus accumbens. The most common… 3. 40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations Science.gov (United States) 2010-07-01 ... 40 Protection of Environment 29, 2010-07-01, Appendix II to Part 600—Sample Fuel Economy Calculations: (a) This sample fuel economy calculation is applicable to... 4. 40 CFR Appendix II to Part 1042 - Steady-State Duty Cycles Science.gov (United States) 2010-07-01 ... 40 Protection of Environment 32, 2010-07-01, Appendix II to Part 1042—Steady-State Duty Cycles: (a) The following duty cycles apply as... [table excerpt; columns: mode, time in mode (seconds), engine speed, power (percent); first row: 1a, steady-state, 229 s, maximum test speed, 100% ...] 5. Coping With the Problems of a Technological Age, Part II. Science.gov (United States) New York State Education Dept., Albany. Bureau of Secondary Curriculum Development. This is another report in a series of programs dealing with the problems of a technological age. It is assumed that teachers will use both parts of this report. Part I deals with the problems of technology and how it affects our lives. It also discusses the energy crisis created, in part, by technology and deals specifically with coal and…
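The 40 CFR Part 600 record above concerns sample fuel economy calculations. As a hedged sketch only (the 55/45 weighting, the function name and the example numbers are assumptions of mine, and the regulation itself prescribes the exact procedure and rounding rules), the core arithmetic behind a combined city/highway figure is a harmonic average, because fuel consumption in gallons per mile, not fuel economy in miles per gallon, is what adds linearly:

```python
def combined_fuel_economy(city_mpg: float, highway_mpg: float,
                          city_weight: float = 0.55) -> float:
    """Harmonically weighted combined fuel economy (illustrative only).

    Fuel consumption (gallons per mile) averages linearly, so fuel economy
    (miles per gallon) must be combined harmonically:
        combined = 1 / (w_city / city_mpg + w_hwy / highway_mpg)
    The 55/45 city/highway split is an assumption for this sketch; the
    regulation defines the exact procedure and rounding.
    """
    highway_weight = 1.0 - city_weight
    return 1.0 / (city_weight / city_mpg + highway_weight / highway_mpg)

if __name__ == "__main__":
    print(round(combined_fuel_economy(23.0, 31.0), 1))  # -> 26.0
```

With the example inputs, 23 mpg city and 31 mpg highway combine to roughly 26 mpg, noticeably below the arithmetic mean of 27 mpg.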
6. 10 CFR Appendix II to Part 960 - NRC and EPA Requirements for Preclosure Repository Performance Science.gov (United States) 2010-01-01 ... 10 Energy 4, 2010-01-01, Appendix II to Part 960—NRC and EPA Requirements for Preclosure Repository Performance, under SCREENING OF POTENTIAL SITES FOR A NUCLEAR WASTE REPOSITORY: Under proposed 40 CFR part 191, subpart A... 7. Estimating Welfare Effects Consistent with Forward-Looking Behavior. Part I: Lessons from a Simulation Exercise. Part II: Empirical Results. Science.gov (United States) Keane, Michael P.; Wolpin, Kenneth I. 2002-01-01 Part I uses simulations of a model of welfare participation and women's fertility decisions, showing that increases in per-child payments have a substantial impact on fertility. Part II uses estimations of the decision rules of forward-looking women regarding welfare participation, fertility, marriage, work, and schooling. (SK) 8. Thinking in Nursing Education. Part I: A Student's Experience in Learning To Think. Part II: A Teacher's Experience. Science.gov (United States) Ironside, Pamela Magnussen 1999-01-01 Part I describes a nursing student's experience learning to think in clinical practice, illustrating the need for a variety of approaches to critical thinking. Part II shows how nursing teachers and students are challenging conventional approaches and creating more responsive pedagogies. (SK) 9. Inteligencia Artificial y Neurología: II Parte Directory of Open Access Journals (Sweden) Mario Camacho Pinto 1986-12-01 Some comments on the first part have led me to broaden the basis of this work by presenting the common aspects of, and the dissimilar concepts surrounding, the hypothetical relationship between artificial intelligence (AI) and human intelligence (HI). The subject is so complex that merely attempting to summarize it is in itself appealing, as well as necessary. A micro-history of AI. Keeping to the plain facts, AI was born at the Dartmouth Conference of 1956, when John McCarthy, professor of computer science at Stanford University, coined the term AI. Speculating a little, however, we can say that a certain restlessness has existed since antiquity, long before computers and even electronics, whenever human beings irresistibly showed the urge to create intelligence outside the human brain. Some examples are found in Greek mythology: Hephaestus, god of fire and metals, fashioned semi-human creations in his forge. Pygmalion, disenchanted with women, sculpted his own nymph in marble and pleaded until Aphrodite gave her life. In medieval Europe, Pope Sylvester II (nicknamed "the sorcerer" for his wisdom, 909 A.D.) was credited with building talking heads. In the sixteenth century Paracelsus claimed to have created a homunculus, and the Czech rabbi Judah ben Loew sculpted a man out of clay, the Golem Josef, and made him a spy in Prague. In 1854 the British mathematician George Boole proposed a system for describing logic - the laws of thought - in mathematical terms: "Boolean algebra", a mathematical logic that represents logical processes with two digits, 0 and 1. In 1937 Alan Turing showed that a binary machine could be programmed to perform any algorithmic task. This Turing machine could carry out only two actions: drawing and erasing. In the same year, Claude…
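The Inteligencia Artificial y Neurología record above mentions Boole's two-digit logic and Turing's 1937 result that a binary machine restricted to a couple of primitive actions can be programmed for algorithmic tasks. The sketch below is my own minimal illustration of that idea, not material from the article: a two-symbol, single-head machine, expressed as a finite transition table acting on a tape, that increments a binary number. The rule set and all names are assumptions made for the example.

```python
# Transition table: (state, symbol) -> (new_state, symbol_to_write, head_move)
RULES = {
    ("carry", "1"): ("carry", "0", -1),  # 1 + carry = 0, propagate the carry left
    ("carry", "0"): ("halt",  "1",  0),  # 0 + carry = 1, done
    ("carry", "_"): ("halt",  "1",  0),  # ran off the left end: write a new leading 1
}

def binary_increment(bits: str) -> str:
    """Increment a binary string with a two-symbol, single-head machine."""
    tape = dict(enumerate(bits))          # position -> symbol
    head = len(bits) - 1                  # start at the least significant bit
    state = "carry"
    while state != "halt":
        symbol = tape.get(head, "_")      # "_" is the blank symbol
        state, write, move = RULES[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

if __name__ == "__main__":
    print(binary_increment("1011"))  # -> 1100
    print(binary_increment("111"))   # -> 1000
```

The point of the toy is simply that the entire computation is driven by lookups in a finite rule table over the symbols 0, 1 and blank, which is all that Boole's two digits and Turing's read/write/move primitives provide.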
10. Flow resistance of textile materials. Part II: Multifilament Fabrics NARCIS (Netherlands) Gooijer, H.; Warmoeskerken, Marinus; Groot Wassink, J. 2003-01-01 Part I of this series presented a new model for predicting the flow resistance of monofilament fabrics. In this part, the model is applied to the flow resistance of multifilament fabrics. Experiments show that flow resistance in multifilament fabrics can be modeled in general, but it appears that… 11. Synthesis of Dissipative Systems Using Quadratic Differential Forms: Part II NARCIS (Netherlands) Trentelman, H.L.; Willems, J.C. 2002-01-01 In this second part of the paper, we discuss several important special cases of the problem solved in Part I. These are: disturbance attenuation and passivation, the full information case, the filtering problem, and the case in which the to-be-controlled plant is given in input–state–output… 12. The Search for Another Earth–Part II Permanent link: https://www.ias.ac.in/article/fulltext/reso/021/10/0899-0910. Keywords: exoplanets, earth, super-earth, diamond planet, neptune, habitability, extra-terrestrial life. Abstract: In the first part, we discussed the various methods for the detection of planets outside the solar system, known as exoplanets. In this part ... 13. Cardiac nuclear medicine, part II: diagnosis of coronary artery disease International Nuclear Information System (INIS) Polak, J.F.; Holman, B.L. 1981-01-01 Diagnosing coronary artery disease is difficult and requires careful consideration of the roles and limitations of the tests used. Standard ECG tests are not reliable indicators of the presence of disease in asymptomatic patients. Thallium stress testing to assess ischemia and exercise ventriculography to assess the functional status of the heart are limited in sensitivity and specificity. This is the second of a three-part series on cardiac nuclear medicine. Part I (Med. Instrum., May-June, 1981) focused on the commonly used examinations in cardiac physiology and pathophysiology. Part III will focus on myocardial infarction and other cardiac diseases. 14. Internal Auditing in Federal, State, and Local Governments (Part II). Science.gov (United States) Knight, Susan; Wilson, Guy 1981-01-01 This second part of an annotated bibliography of reports, books, and journal articles concerned with internal auditing in government contexts reviews the available literature for an understanding of the types of internal audit, methods and practices, and other facets. (FM) 15. Guidelines for acute ischemic stroke treatment: part II: stroke treatment Directory of Open Access Journals (Sweden) Sheila Cristina Ouriques Martins 2012-11-01 Full Text Available The second part of these Guidelines covers the topics of antiplatelet, anticoagulant, and statin therapy in acute ischemic stroke, reperfusion therapy, and classification of Stroke Centers. Information on the classes and levels of evidence used in this guideline is provided in Part I. A translated version of the Guidelines is available from the Brazilian Stroke Society website (www.sbdcv.com.br). 16. Evaluación de la motivación académica y la ansiedad escolar y posibles relaciones entre ellas / Avaliação da motivação acadêmica e ansiedade escolar e possível relação entre elas / Assessment of academic motivation and school anxiety and possible relationship between them Directory of Open Access Journals (Sweden) Débora Cecilio Fernandes 2012-12-01
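The Search for Another Earth record above refers back to the exoplanet detection methods surveyed in its first part. As a self-contained, hedged illustration only (it is not drawn from that article, and the radii and names are my own choices), the quantity at the heart of the transit method is the fractional dip in stellar brightness, roughly the squared ratio of planetary to stellar radius:

```python
# Approximate transit depth for a planet crossing its host star:
# depth ~ (R_planet / R_star)^2, ignoring limb darkening and grazing geometry.
# The radii below are nominal values used purely for illustration.

R_SUN_KM = 695_700.0      # nominal solar radius
R_EARTH_KM = 6_371.0      # mean Earth radius
R_JUPITER_KM = 69_911.0   # mean Jupiter radius

def transit_depth(planet_radius_km: float, star_radius_km: float = R_SUN_KM) -> float:
    """Fractional flux drop during transit (dimensionless)."""
    return (planet_radius_km / star_radius_km) ** 2

if __name__ == "__main__":
    print(f"Earth-like:   {transit_depth(R_EARTH_KM):.6%}")    # ~0.008%
    print(f"Jupiter-like: {transit_depth(R_JUPITER_KM):.4%}")  # ~1%
```

The contrast between the roughly 0.008% dip of an Earth analogue in front of a Sun-like star and the roughly 1% dip of a Jupiter-size planet is one way to see why small planets around small stars are the easier transit targets.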
Programming Models for Three-Dimensional Hydrodynamics on the CM-5 (Part II) International Nuclear Information System (INIS) Amala, P.A.K.; Rodrigue, G.H. 1994-01-01 This is a two-part presentation of a timing study on the Thinking Machines Corp. CM-5 computer. Part II is given in this study and represents domain-decomposition and message-passing models. Part I described computational problems using a SIMD model and connection machine FORTRAN (CMF) 18. Perfeccionismo y "alarma adaptativa" a la ansiedad en deportes de combate OpenAIRE González Hernández, Juan 2017-01-01 Describing the differential relationships between personality and sporting response has made it possible to explain the psychological behavior of people when they practice sport. On the other hand, fewer studies have been carried out specifically on combat sports. The present study analyzes how indicators of perfectionism relate to vulnerability and sensitivity to anxiety symptoms. The sample consists of... 19. Pediatric Physical Therapy: Part II. Approaches to Movement Dysfunction. Science.gov (United States) Heriza, Carolyn B.; Sweeney, Jane K. 1995-01-01 This article, the second of a three-part series, outlines neuromuscular, musculoskeletal, and cardiopulmonary physical therapy approaches to movement dysfunction in children. The multiple roles of the pediatric physical therapist in teaching, consulting, managing, referring, and conducting clinical research are discussed. (Author/DB) 20. On Railroad Tank Car Puncture Performance: Part II - Estimating Metrics Science.gov (United States) 2016-04-12 This paper is the second in a two-part series on the puncture performance of railroad tank cars carrying hazardous materials in the event of an accident. Various metrics are often mentioned in the open literature to characterize the structural perfor... 1. Topics in Finance: Part II--Financial Analysis Science.gov (United States) Laux, Judy 2010-01-01 The second article in a series designed to supplement the introductory financial management course, this essay addresses financial statement analysis, including its impact on stock valuation, disclosure, and managerial behavior. [For "Topics in Finance Part I--Introduction and Stockholder Wealth Maximization," see EJ1060345. 2. Solar Energy Education. Reader, Part II. Sun story. [Includes glossary Energy Technology Data Exchange (ETDEWEB) 1981-05-01 Magazine articles which focus on the subject of solar energy are presented. The booklet prepared is the second of a four part series of the Solar Energy Reader. Excerpts from the magazines include the history of solar energy, mythology and tales, and selected poetry on the sun. A glossary of energy related terms is included. (BCS) 3. Design of multiphysics actuators using topology optimization - Part II DEFF Research Database (Denmark) Sigmund, Ole 2001-01-01 . Several of the examples from Part I are repeated, allowing for the introduction of a second material in the design domain. The second material can differ in mechanical properties such as Young's modulus or electrical and thermal conductivity. In some cases there are significant gains in introducing... 4. Identifying Causes (Not Symptoms) of Writing Problems, Part II. Science.gov (United States) Strange, Dorothy Flanders; Kebbel, Gary W.
1979-01-01 Points out that writing errors of journalism students can result from faulty thought patterns involving thinking in sentence fragments, personifying objects, using bureaucratic abstractions, and condensing complex ideas; examines ways of dealing with bureaucratic coding and compressed sentences. (Conclusion of a two-part article.) (GT) 5. IGCSE and IB MYP: How Compatible Are They? Part II. Science.gov (United States) Guy, Judith 2001-01-01 Presents the second part of a response to the finding that there is sufficient overlap between the International General Certificate of Secondary Education (IGCSE) and the International Baccalaureate Middle Years Program (IBMYP) to allow the two programs to coexist. Argues that the two programs are incompatible and the two together place undue… 6. [Low grade renal trauma (Part II): diagnostic validity of ultrasonography]. Science.gov (United States) Grill, R; Báca, V; Otcenásek, M; Zátura, F 2010-04-01 7. Lagrangian intersection Floer theory anomaly and obstruction, part II CERN Document Server Fukaya, Kenji; Ohta, Hiroshi; Ono, Kaoru 2009-01-01 This is a two-volume series research monograph on the general Lagrangian Floer theory and on the accompanying homological algebra of filtered A_\infty-algebras. This book provides the most important step towards a rigorous foundation of the Fukaya category in general context. In Volume I, general deformation theory of the Floer cohomology is developed in both algebraic and geometric contexts. An essentially self-contained homotopy theory of filtered A_\infty algebras and A_\infty bimodules and applications of their obstruction-deformation theory to the Lagrangian Floer theory are presented. Volume II contains detailed studies of two of the main points of the foundation of the theory: transversality and orientation. The study of transversality is based on the virtual fundamental chain techniques (the theory of Kuranishi structures and their multisections) and chain level intersection theories. A detailed analysis comparing the orientations of the moduli spaces and their fiber products is carried out. A self-co... 8. CÂNCER DE MAMA: ESTIMATIVA DA PREVALÊNCIA DE ANSIEDADE E DEPRESSÃO EM PACIENTES EM TRATAMENTO AMBULATORIAL OpenAIRE Ferreira, Andreia Silva; Bicalho, Bruna Pereira; Oda, Julie Massayo Maeda; Duarte, Sebastião Junior Henrique; Machado, Richardson Miranda 2016-01-01 Anxiety and depression are psycho-emotional illnesses that affect a large proportion of women with breast cancer. Little is known about the means of early identification, and comprehensive care for the victims of this disease remains a challenge for the multiprofessional health team. The objective of this study was to identify the prevalence of anxiety and depression in women undergoing outpatient treatment for breast cancer. A descriptive cross-sectional study carried out with 138 women under... 9. Coexistência de ansiedade e depressão na gravidez em casais cujas mulheres são primíparas OpenAIRE Bolela, Miguel 2012-01-01 Anxiety and depression are psychological states that affect maternal health and embryonic development. Most researchers observe that many women exhibit very high levels of anxious symptomatology during pregnancy (Conde & Figueiredo, 2003). The objective of this work is to present to the health and government authorities of the province of Benguela the results of a study carried out on the prevalence of gestational anxiety and depression, in order to... 10.
Operation of industrial electrical substations. Part II: practical applications Energy Technology Data Exchange (ETDEWEB) Sanchez Jimenez, Juan J; Zerquera Izquierdo, Mariano D; Beltran Leon, Jose S; Garcia Martinez, Juan M; Alvarez Urena, Maria V; Meza Diaz, Guillermo [Universidad de Guadalajara (Mexico)] 2013-03-15 The principal objective of this paper is the practical application, in a Cuban industrial plant, of the methodology explained in Part 1. The calculation of the economical operation of the plant's principal transformers is presented in a very simple form, as well as the determination of the loss equations when the transformers operate under a given load diagram. The load state at which the transformers should be switched to parallel operation is also calculated. 11. Comparison of microstickies measurement methods. Part II, Results and discussion Science.gov (United States) Mahendra R. Doshi; Angeles Blanco; Carlos Negro; Concepcion Monte; Gilles M. Dorris; Carlos C. Castro; Axel Hamann; R. Daniel Haynes; Carl Houtman; Karen Scallon; Hans-Joachim Putz; Hans Johansson; R. A. Venditti; K. Copeland; H.-M. Chang 2003-01-01 In part I of the article we discussed sample preparation procedure and described various methods used for the measurement of microstickies. Some of the important features of different methods are highlighted in Table 1. Temperatures used in the measurement methods vary from room temperature in some cases, 45 °C to 65 °C in other cases. Sample size ranges from as low as... 12. The equivalence myth of quantum mechanics-part II Science.gov (United States) Muller, F. A. The author endeavours to show two things: first, that Schrödinger's (and Eckart's) demonstration in March (September) 1926 of the equivalence of matrix mechanics, as created by Heisenberg, Born, Jordan and Dirac in 1925, and wave mechanics, as created by Schrödinger in 1926, is not foolproof; and second, that it could not have been foolproof, because at the time matrix mechanics and wave mechanics were neither mathematically nor empirically equivalent. That they were is the Equivalence Myth. In order to make the theories equivalent and to prove this, one has to leave the historical scene of 1926 and wait until 1932, when von Neumann finished his magisterial edifice. During the period 1926-1932 the original families of mathematical structures of matrix mechanics and of wave mechanics were stretched, parts were chopped off and novel structures were added. To Procrustean places we go, where we can demonstrate the mathematical, empirical and ontological equivalence of 'the final versions of' matrix mechanics and wave mechanics. The present paper claims to be a comprehensive analysis of one of the pivotal papers in the history of quantum mechanics: Schrödinger's equivalence paper.
Since the analysis is performed from the perspective of Suppes structural view ('semantic view') of physical theories, the present paper can be regarded not only as a morsel of the internal history of quantum mechanics, but also as a morsel of applied philosophy of science. The paper is self-contained and presupposes only basic knowledge of quantum mechanics. For reasons of length, the paper is published in two parts; Part I appeared in the previous issue of this journal. Section 1 contains, besides an introduction, also the papers five claims and a preview of the arguments supporting these claims; so Part I, Section 1 may serve as a summary of the paper for those readers who are not interested in the detailed arguments. 13. Fictional Discourse. Replies to Organon F Papers (Part II) OpenAIRE Koťátko, P. (Petr) 2016-01-01 The author replies to the second part of the papers collected in the Supplementary Volume of Organon F 2015. He discusses the status of the literary text and the text-work relation, defends the account of fictional characters as complete beings situated (within the interpretation of narrative fiction) in the actual world, argues that the Kripkean causal theory of proper names is properly applicable also to texts of narrative fiction, defends an ontologically modest account of fictional charac... 14. Neutron detection with imaging plates Part II. Detector characteristics CERN Document Server Thoms, M 1999-01-01 On the basis of the physical processes described in Neutron detection with imaging plates - part I: image storage and readout [Nucl. Instr. and Meth. A 424 (1999) 26-33] detector characteristics, such as quantum efficiency, detective quantum efficiency, sensitivity to neutron- and gamma-radiation, readout time and dynamic range are predicted. It is estimated that quantum efficiencies and detective quantum efficiencies close to 100% can be reached making these kind of detectors interesting for a wide range of applications. 15. Slag Behavior in Gasifiers. Part II: Constitutive Modeling of Slag Directory of Open Access Journals (Sweden) 2013-02-01 Full Text Available The viscosity of slag and the thermal conductivity of ash deposits are among two of the most important constitutive parameters that need to be studied. The accurate formulation or representations of the (transport properties of coal present a special challenge of modeling efforts in computational fluid dynamics applications. Studies have indicated that slag viscosity must be within a certain range of temperatures for tapping and the membrane wall to be accessible, for example, between 1,300 °C and 1,500 °C, the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model, where the stress tensor not only has a yield stress part, but it also has a viscous part with a shear rate dependency of the viscosity, along with temperature and concentration dependency, while allowing for the possibility of the normal stress effects. In Part I, we reviewed, identify and discuss the key coal ash properties and the operating conditions impacting slag behavior. 16. 
Underwater Electromagnetic Sensor Networks, Part II: Localization and Network Simulations Directory of Open Access Journals (Sweden) Javier Zazo 2016-12-01 Full Text Available In the first part of the paper, we modeled and characterized the underwater radio channel in shallow waters. In the second part, we analyze the application requirements for an underwater wireless sensor network (U-WSN) operating in the same environment and perform detailed simulations. We consider two localization applications, namely self-localization and navigation aid, and propose algorithms that work well under the specific constraints associated with U-WSN, namely low connectivity, low data rates and high packet loss probability. We propose an algorithm where the sensor nodes collaboratively estimate their unknown positions in the network using a low number of anchor nodes and distance measurements from the underwater channel. Once the network has been self-located, we consider a node estimating its position for underwater navigation communicating with neighboring nodes. We also propose a communication system and simulate the whole electromagnetic U-WSN in the Castalia simulator to evaluate the network performance, including propagation impairments (e.g., noise, interference), radio parameters (e.g., modulation scheme, bandwidth, transmit power), hardware limitations (e.g., clock drift, transmission buffer) and complete MAC and routing protocols. We also explain the changes that have to be made to Castalia in order to perform the simulations. In addition, we propose a parametric model of the communication channel that matches well with the results from the first part of this paper. Finally, we provide simulation results for some illustrative scenarios. (A toy numerical sketch of the anchor-based localization step is given below, after the adjacent entries.) 17. Slag Behavior in Gasifiers. Part II: Constitutive Modeling of Slag Energy Technology Data Exchange (ETDEWEB) Massoudi, Mehrdad [National Energy Technology Laboratory; Wang, Ping 2013-02-07 The viscosity of slag and the thermal conductivity of ash deposits are among two of the most important constitutive parameters that need to be studied. The accurate formulation or representations of the (transport) properties of coal present a special challenge of modeling efforts in computational fluid dynamics applications. Studies have indicated that slag viscosity must be within a certain range of temperatures for tapping and the membrane wall to be accessible, for example, between 1,300 °C and 1,500 °C, the viscosity is approximately 25 Pa·s. As the operating temperature decreases, the slag cools and solid crystals begin to form. Since slag behaves as a non-linear fluid, we discuss the constitutive modeling of slag and the important parameters that must be studied. We propose a new constitutive model, where the stress tensor not only has a yield stress part, but it also has a viscous part with a shear rate dependency of the viscosity, along with temperature and concentration dependency, while allowing for the possibility of the normal stress effects. In Part I, we reviewed, identified and discussed the key coal ash properties and the operating conditions impacting slag behavior.
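The self-localization step described in the underwater sensor network entry above (nodes estimating unknown positions from a few anchor nodes and noisy distance measurements) can be made concrete with a toy calculation. The sketch below is not the authors' algorithm; it is a minimal linearized least-squares multilateration example in Python, with invented anchor coordinates and noise levels, included only to illustrate the geometry of anchor-based position estimation.

import numpy as np

# Hypothetical anchor positions (metres) and true node position; not taken from the paper.
anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
true_pos = np.array([12.0, 7.0])

rng = np.random.default_rng(0)
# Noisy range measurements, e.g. as would come from channel-based distance estimates.
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0.0, 0.5, len(anchors))

# Linearize |x - a_i|^2 = r_i^2 by subtracting the equation of the first anchor:
#   2 (a_i - a_0) . x = (r_0^2 - r_i^2) + (|a_i|^2 - |a_0|^2)
A = 2.0 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2) + (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))

# Solve the overdetermined linear system in the least-squares sense.
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print("true position:", true_pos)
print("estimated position:", estimate)

In a real U-WSN this step would sit inside the collaborative, message-passing setting the authors describe, with many nodes and lossy links; the sketch only shows the core least-squares fit from a handful of anchors.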
18. Hermeneutics as an approach to science: Part II Science.gov (United States) Eger, Martin 1993-12-01 The first issue treated is that of language: Is the language of science part of the equipment of the scientist, the subject, or part of the object itself — nature already linguistically encased? This issue, arising from the so-called argument of 'the double hermeneutic', relates the general question of the role of the subject in natural science to the role of interpretation. Examples of major interpretative developments in physics are discussed. The inquiry suggests that the role of interpretation and hermeneutics is tied to the educative or 'study-mode' of science; and that this mode can, apparently, be found at all levels and stages of science. The nature of this interpretive mode, and its relation to the creative mode, is then analyzed on the model of Gadamer's description of the interpretation of art. 19. 12 CFR Appendix II to Part 27 - Information for Government Monitoring Purposes Science.gov (United States) 2010-01-01 ... FAIR HOUSING HOME LOAN DATA SYSTEM Pt. 27, App. II Appendix II to Part 27—Information for Government... Indian or Alaskan Native □ Asian or Pacific Islander □ Black, not of Hispanic origin □ Hispanic □ White... this information (initial)____. Race/National Origin □ American Indian or Alaskan Native □ Asian or... 20. 40 CFR Appendix II to Part 1039 - Steady-State Duty Cycles Science.gov (United States) 2010-07-01 ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Steady-State Duty Cycles II Appendix... Appendix II to Part 1039—Steady-State Duty Cycles (a) The following duty cycles apply for constant-speed...(seconds) Engine speed Torque(percent) 1, 2 1a Steady-state 53 Engine governed 100. 1b Transition 20 Engine... 1. 40 CFR Appendix II to Part 266 - Tier I Feed Rate Screening Limits for Total Chlorine Science.gov (United States) 2010-07-01 ... (CONTINUED) SOLID WASTES (CONTINUED) STANDARDS FOR THE MANAGEMENT OF SPECIFIC HAZARDOUS WASTES AND SPECIFIC TYPES OF HAZARDOUS WASTE MANAGEMENT FACILITIES Pt. 266, App. II Appendix II to Part 266—Tier I Feed Rate Screening Limits for Total Chlorine Terrain-adjusted effective stack height (m) Noncomplex Terrain Urban (g... 2. Fourier Transform Infrared Spectroscopy: Part II. Advantages of FT-IR. Science.gov (United States) Perkins, W. D. 1987-01-01 This is Part II in a series on Fourier transform infrared spectroscopy (FT-IR). Described are various advantages of FT-IR spectroscopy including energy advantages, wavenumber accuracy, constant resolution, polarization effects, and stepping at grating changes. (RH) 3. Uso da aromaterapia no controlo de stresse e ansiedade OpenAIRE Dias, Paula; Sousa, Maria João; Pereira, Olívia R. 2014-01-01 Alternative and/or complementary therapies are emerging as important therapies in the prevention and treatment of various health problems, among them stress and anxiety, which are increasingly frequent problems in today's society. The present study on the use of natural products in aromatherapy aimed to evaluate the effectiveness of aromatherapy in reducing stress and anxiety levels, through the Effleurage massage technique, in the population of users who sought aromatherap... 4. Sesgos de Memoria en los Trastornos de Ansiedad OpenAIRE Rubén Sanz Blasco; Juan José Miguel-Tobal; M.ª Isabel Casado Morales 2011-01-01 At present there are a large number of theoretical models that defend the importance of cognitive appraisal in the onset and maintenance of the anxiety response.
Research on the cognitive processes underlying the anxiety response has shown quite consistently how anxious subjects, in comparison with normal subjects, show a tendency to attend selectively to, and to interpret in a catastrophizing way, information congruent with... 5. OpenAIRE Carrasco Ortiz, Miguel Ángel; Rodríguez Testal, Juan Francisco; Rodríguez Santos, María Dolores; Sánchez Arribas, Carmen 1999-01-01 The work presented below has sought to study fears and anxiety in a group of adolescent subjects who were victims of maltreatment, compared with an equivalent control group. The results revealed no differences in the level and content of fears between the maltreatment and control groups. Nevertheless, significant differences appeared in the anxiety measures, in favor of the older maltreated subjects, such as the nu... 6. Unknown facets of Well-Known Scientists Series - Part II Directory of Open Access Journals (Sweden) V S Dixit 2016-01-01 7. The museum maze in oral pathology demystified: part II. Science.gov (United States) Patil, Shankargouda; Rao, Roopa S; Ganavi, Bs 2013-09-01 Museum technology is perpetually changing due to current requirements and added inventions for our comfort and furbished display of specimens. Hence numerous methods of specimen preservation have been put on trial by diverse people in the medical field, as are the inventions. But only a few have caught people's interest and are popularized today. This part provides unique insights into specialized custom-made techniques and the evolution of recent advances like plastination and the virtual museum that have become popular as visual delights. Plastination gives handy, perennial, life-like acrylic specimens, whereas the virtual museum takes the museum field into the electronic era, making use of computers and virtual environments. 8. Designing carbon markets, Part II: Carbon markets in space International Nuclear Information System (INIS) Fankhauser, Samuel; Hepburn, Cameron 2010-01-01 This paper analyses the design of carbon markets in space (i.e., geographically). It is part of a twin set of papers that, starting from first principles, ask what an optimal global carbon market would look like by around 2030. Our focus is on firm-level cap-and-trade systems, although much of what we say would also apply to government-level trading and carbon offset schemes. We examine the 'first principles' of spatial design to maximise flexibility and to minimise costs, including key design issues in linking national and regional carbon markets together to create a global carbon market. 9. Nanoparticles and the blood coagulation system. Part II: safety concerns Science.gov (United States) Ilinskaya, Anna N; Dobrovolskaia, Marina A 2014-01-01 Nanoparticle interactions with the blood coagulation system can be beneficial or adverse depending on the intended use of a nanomaterial. Nanoparticles can be engineered to be procoagulant or to carry coagulation-initiating factors to treat certain disorders. Likewise, they can be designed to be anticoagulant or to carry anticoagulant drugs to intervene in other pathological conditions in which coagulation is a concern. An overview of the coagulation system was given and a discussion of a desirable interface between this system and engineered nanomaterials was assessed in part I, which was published in the May 2013 issue of Nanomedicine.
Unwanted pro- and anti-coagulant properties of nanoparticles represent significant concerns in the field of nanomedicine, and often hamper the development and transition into the clinic of many promising engineered nanocarriers. This part will focus on the undesirable effects of engineered nanomaterials on the blood coagulation system. We will discuss the relationship between the physicochemical properties of nanoparticles (e.g., size, charge and hydrophobicity) that determine their negative effects on the blood coagulation system in order to understand how manipulation of these properties can help to overcome unwanted side effects. PMID:23730696 10. Histologic features of alopecias: part II: scarring alopecias. Science.gov (United States) Bernárdez, C; Molina-Ruiz, A M; Requena, L 2015-05-01 The diagnosis of disorders of the hair and scalp can generally be made on clinical grounds, but clinical signs are not always diagnostic and in some cases more invasive techniques, such as a biopsy, may be necessary. This 2-part article is a detailed review of the histologic features of the main types of alopecia based on the traditional classification of these disorders into 2 major groups: scarring and nonscarring alopecias. Scarring alopecias are disorders in which the hair follicle is replaced by fibrous scar tissue, a process that leads to permanent hair loss. In nonscarring alopecias, the follicles are preserved and hair growth can resume when the cause of the problem is eliminated. In the second part of this review, we describe the histologic features of the main forms of scarring alopecia. Since a close clinical-pathological correlation is essential for making a correct histopathologic diagnosis of alopecia, we also include a brief description of the clinical features of the principal forms of this disorder. Copyright © 2014 Elsevier España, S.L.U. and AEDV. All rights reserved. 11. Complex dynamics in diatomic molecules. Part II: Quantum trajectories International Nuclear Information System (INIS) Yang, C.-D.; Weng, H.-J. 2008-01-01 The second part of this paper deals with quantum trajectories in diatomic molecules, which has not been considered before in the literature. Morse potential serves as a more accurate function than a simple harmonic oscillator for illustrating a realistic picture about the vibration of diatomic molecules. However, if we determine molecular dynamics by integrating the classical force equations derived from a Morse potential, we will find that the resulting trajectories do not consist with the probabilistic prediction of quantum mechanics. On the other hand, the quantum trajectory determined by Bohmian mechanics [Bohm D. A suggested interpretation of the quantum theory in terms of hidden variable. Phys. Rev. 1952;85:166-179] leads to the conclusion that a diatomic molecule is motionless in all its vibrational eigen-states, which also contradicts probabilistic prediction of quantum mechanics. In this paper, we point out that the quantum trajectory of a diatomic molecule completely consistent with quantum mechanics does exist and can be solved from the quantum Hamilton equations of motion derived in Part I, which is based on a complex-space formulation of fractal spacetime [El Naschie MS. A review of E-Infinity theory and the mass spectrum of high energy particle physics. Chaos, Solitons and Fractals 2004;19:209-36; El Naschie MS. E-Infinity theory - some recent results and new interpretations. Chaos, Solitons and Fractals 2006;29:845-853; El Naschie MS. 
The concepts of E-infinity. An elementary introduction to the cantorian-fractal theory of quantum physics. Chaos, Solitons and Fractals 2004;22:495-511; El Naschie MS. SU(5) grand unification in a transfinite form. Chaos, Solitons and Fractals 2007;32:370-374; Nottale L. Fractal space-time and microphysics: towards a theory of scale relativity. Singapore: World Scientific; 1993; Ord G. Fractal space time and the statistical mechanics of random works. Chaos, Soiltons and Fractals 1996;7:821-843] approach to quantum 12. On the problem of ethnophyletism: a historical study. Part II Directory of Open Access Journals (Sweden) 2016-12-01 Full Text Available The Holy and Great Council on Crete, 2016 has risen an important issue of Ethnophyletism. Russian, Georgian, Bulgarian, and Antiochian Orthodox Churches delegations were not present at the Great Council and were criticized for Ethnophyletism at the plenary session. The heresy of ethnophyletism was announced by the Council of Constantinople in 1872. Now we can see that it became essential nowadays. The article tells about the origin of this heresy and whether the Ethnophyletism may be decided to be the heresy. The second part of the paper deals with the events on the eve of the Pan-Orthodox Synod in 1872 (since the manifestation of the famous Abdülaziz Sultan's Firman 1870. 13. One-loop effective actions and higher spins. Part II Science.gov (United States) Bonora, L.; Cvitan, M.; Prester, P. Dominis; Giaccari, S.; Štemberga, T. 2018-01-01 In this paper we continue and improve the analysis of the effective actions obtained by integrating out a scalar and a fermion field coupled to external symmetric sources, started in the previous paper. The first subject we study is the geometrization of the results obtained there, that is we express them in terms of covariant Jacobi tensors. The second subject concerns the treatment of tadpoles and seagull terms in order to implement off-shell covariance in the initial model. The last and by far largest part of the paper is a repository of results concerning all two point correlators (including mixed ones) of symmetric currents of any spin up to 5 and in any dimensions between 3 and 6. In the massless case we also provide formulas for any spin in any dimension. 14. Planar LTCC transformers for high voltage flyback converters: Part II. Energy Technology Data Exchange (ETDEWEB) Schofield, Daryl (NASCENTechnology, Inc., Watertown, SD); Schare, Joshua M., Ph.D.; Slama, George (NASCENTechnology, Inc., Watertown, SD); Abel, David (NASCENTechnology, Inc., Watertown, SD) 2009-02-01 This paper is a continuation of the work presented in SAND2007-2591 'Planar LTCC Transformers for High Voltage Flyback Converters'. The designs in that SAND report were all based on a ferrite tape/dielectric paste system originally developed by NASCENTechnoloy, Inc, who collaborated in the design and manufacturing of the planar LTCC flyback converters. The output/volume requirements were targeted to DoD application for hard target/mini fuzing at around 1500 V for reasonable primary peak currents. High voltages could be obtained but with considerable higher current. Work had begun on higher voltage systems and is where this report begins. Limits in material properties and processing capabilities show that the state-of-the-art has limited our practical output voltage from such a small part volume. In other words, the technology is currently limited within the allowable funding and interest. 15. 
PERICARDITIS: ETIOLOGY, CLASSIFICATION, CLINIC, DIAGNOSTICS, TREATMENT. PART II Directory of Open Access Journals (Sweden) A.B. Sugak 2009-01-01 Full Text Available Pericarditis maybe caused by different agents: viruses, bacteria, tuberculosis, and it may be autoimmune. All these types of diseases have similar clinical signs, but differ by prevalence, prognosis and medical tactics. Due to achievements of radial methods of visualization, molecular biology, and immunology, we have an opportunity to provide early specific diagnostics and etiological treatment of inflammatory diseases of pericardium. The second part of lecture presents main principles of differential diagnostics of specific types of pericarditis, gives characteristics of several often accruing types of disease, and describes treatment and tactics of management of patients with pericarditis.Key words: children, pericarditis.(Voprosy sovremennoi pediatrii — Current Pediatrics. 2009;8(3:76-81 16. [Scientific reductionism and social control of mind. Part II]. Science.gov (United States) Viniegra Velázquez, Leonardo In the second part of this essay, the progressive subordination of scientific endeavor and knowledge of business and profit is pointed out. For instance, the way facts are prioritized over concepts and ideas in scientific knowledge can translate into technological innovation, central to enterprise competitiveness and key to social mechanisms of control (military, cybernetic, ideological). Overcoming the scientific reductionism approach indicates recognizing the need to define progress in another way, one that infuses scientific knowledge with real liberating and inquisitive power. Power is essential in the search for a more collaborative, inclusive and pluralistic society where respect for human dignity and care for the ecosystem that we live in are prioritized. Copyright © 2014 Hospital Infantil de México Federico Gómez. Publicado por Masson Doyma México S.A. All rights reserved. 17. Sequencing of contents and learning objects - part II Directory of Open Access Journals (Sweden) Miguel Zapata Ros 2006-01-01 18. Pulmonary Surfactants for Acute and Chronic Lung Diseases (Part II Directory of Open Access Journals (Sweden) O. A. Rozenberg 2014-01-01 Full Text Available Part 2 of the review considers the problem of surfactant therapy for acute respiratory distress syndrome (ARDS in adults and young and old children. It gives information on the results of surfactant therapy and prevention of ARDS in patients with severe concurrent trauma, inhalation injuries, complications due to complex expanded chest surgery, or severe pneumonias, including bilateral pneumonia in the presence of A/H1N1 influenza. There are data on the use of a surfactant in obstetric care and prevention of primary graft dysfunction during lung transplantation. The results of longterm use of surfactant therapy in Russia, suggesting that death rates from ARDS may be substantially reduced (to 20% are discussed. Examples of surfactant therapy for other noncritical lung diseases, such as permanent athelectasis, chronic obstructive pulmonary diseases, and asthma, as well tuberculosis, are also considered. 19. Lasers in modern caries management--part II: CAMBRA. Science.gov (United States) Young, Douglas A 2005-01-01 Part two of this series discussed the key strategies that each practice should focus on for caries management. History has proven that oral hygiene and "drilling and filling" alone will not eliminate dental caries. 
Chemical treatments to prevent and reverse early lesions and conservative, tooth-preserving restorative procedures when surgical intervention is necessary should be the new standard of care. Caries management by risk assessment (CAMBRA), where risk factors are "re-balanced" to that of health, is a sound strategy that is one step closer to "curative" dentistry and improving the quality of life of dental patients. The final article in this series will discuss the role that glass-ionomer materials and hard tissue lasers play in the minimally invasive restorative procedures for dental caries. 20. The Mechanism of Graviton Exchange between Bodies, Part II DEFF Research Database (Denmark) 2016-01-01 Further to Special Relativity, modern physics includes two great theories which describe universe in a new different way. One of them is Quantum Mechanics which describes elementary particles, atoms and molecules and the other one is General Relativity which has been replaced the Newtonian...... Gravitational Law by space-time curvature. Quantum gravity is a part of quantum mechanics which is expected to combine these two theories, and it describes gravity force according to the principles of quantum mechanics which has not got the desired result, yet. In CPH theory, after reconsidering and analyzing...... the behavior of photon in the gravitational field, a new definition of graviton based on carrying the gravity force is given. By using this definition, graviton exchange mechanism between bodies/objects is described. As the purpose of quantum gravity is describing the force of gravity by using the principles... 1. Operational Control Procedures for the Activated Sludge Process, Part I - Observations, Part II - Control Tests. Science.gov (United States) West, Alfred W. This is the first in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. Part I of this document deals with physical observations which should be performed during each routine control test. Part II… 2. Practice improvement, part II: trends in employment versus private practice. Science.gov (United States) Coleman, Mary Thoesen; Roett, Michelle A 2013-11-01 A growing percentage of physicians are selecting employment over solo practice, and fewer family physicians have hospital admission privileges. Results from surveys of recent medical school graduates indicate a high value placed on free time. Factors to consider when choosing a practice opportunity include desire for independence, decision-making authority, work-life balance, administrative responsibilities, financial risk, and access to resources. Compensation models are evolving from the simple fee-for-service model to include metrics that reward panel size, patient access, coordination of care, chronic disease management, achievement of patient-centered medical home status, and supervision of midlevel clinicians. When a practice is sold, tangible personal property and assets in excess of liabilities, patient accounts receivable, office building, and goodwill (ie, expected earnings) determine its value. The sale of a practice includes a broad legal review, addressing billing and coding deficiencies, noncompliant contractual arrangements, and potential litigations as well as ensuring that all employment agreements, leases, service agreements, and contracts are current, have been executed appropriately, and meet regulatory requirements. 
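The practice-valuation sentence in the entry above lists the usual components of a sale price: tangible assets net of liabilities, patient accounts receivable, the office building, and goodwill. As a purely hypothetical worked example (the figures below are invented for illustration and are not from the article), the same arithmetic can be written out in a few lines of Python.

# Hypothetical figures, for illustration only (not from the article).
tangible_assets = 250_000      # equipment, furnishings, supplies
liabilities = 90_000           # debts netted against the tangible assets
accounts_receivable = 60_000   # collectible patient accounts
building = 400_000             # office building, if owned and included in the sale
goodwill = 150_000             # expected future earnings attributable to the practice

practice_value = (tangible_assets - liabilities) + accounts_receivable + building + goodwill
print(f"Estimated practice value: ${practice_value:,}")  # prints $770,000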
Written permission from the American Academy of Family Physicians is required for reproduction of this material in whole or in part in any form or medium. 3. The Role of Regulatory Agencies and Intellectual Property: Part II Science.gov (United States) Noonan, Kevin E. 2015-01-01 Patent law and antitrust law have traditionally been areas of the law involving at least some inherent tension. Champions of antitrust argue that the patent “monopoly” must be strictly limited as an exception to the general legal principle that competition should be unfettered. Patent lawyers argue that patents are the result of an exercise of congressional authority, enshrined in the Constitution, reflecting the policy decision by the Founders that granting a limited exclusionary right was justified by the public benefits derived from full disclosure of the patented invention. In the modern era these competing values have played out in the context of so-called ANDA litigation, involving disputes between branded pharmaceutical companies and generic competitors. Settlement of such litigation has been identified by the Federal Trade Commission (FTC), and private parties encouraged by the FTC’s position, as an antitrust violation, in large part because such settlements are viewed as frustrating the congressional purpose in promoting early generic competition. After almost a decade of fighting these battles in the federal courts, the Supreme Court addressed the issue directly. The result is that such settlements are not per se illegal but are also not protected by the presumption of patent validity for activities within the “scope of the patent.” Rather, the court decided that these agreements should be assessed for antitrust liability under the “rule of reason” used in other antitrust contexts. PMID:25775920 4. Recent applications of nuclear medicine in diagnostics: II part Directory of Open Access Journals (Sweden) Giorgio Treglia 2013-04-01 Full Text Available Introduction: Positron-emission tomography (PET and single photon emission computed tomography (SPECT are effective diagnostic imaging tools in several clinical settings. The aim of this article (the second of a 2-part series is to examine some of the more recent applications of nuclear medicine imaging techniques, particularly in the fields of neurology, cardiology, and infection/inflammation. Discussion: A review of the literature reveals that in the field of neurology nuclear medicine techniques are most widely used to investigate cognitive deficits and dementia (particularly those associated with Alzheimer disease, epilepsy, and movement disorders. In cardiology, SPECT and PET also play important roles in the work-up of patients with coronary artery disease, providing accurate information on the state of the myocardium (perfusion, metabolism, and innervation. White blood cell scintigraphy and FDG-PET are widely used to investigate many infectious/inflammatory processes. In each of these areas, the review discusses the use of recently developed radiopharmaceuticals, the growth of tomographic nuclear medicine techniques, and the ways in which these advances are improving molecular imaging of biologic processes at the cellular level. 5. Imaging of juvenile idiopathic arthritis. Part II: Ultrasonography and MRI Directory of Open Access Journals (Sweden) Iwona Sudoł-Szopińska 2016-09-01 Full Text Available Juvenile idiopathic arthritis is the most common autoimmune systemic disease of the connective tissue affecting individuals in the developmental age. 
Radiography, which was described in the first part of this publication, is the standard modality in the assessment of this condition. Ultrasound and magnetic resonance imaging enable early detection of the disease which affects soft tissues, as well as bones. Ultrasound assessment involves: joint cavities, tendon sheaths and bursae for the presence of synovitis, intraand extraarticular fat tissue to visualize signs of inflammation, hyaline cartilage, cartilaginous epiphysis and subchondral bone to detect cysts and erosions, and ligaments, tendons and their entheses for signs of enthesopathies and tendinopathies. Magnetic resonance imaging is indicated in children with juvenile idiopathic arthritis for assessment of inflammation in peripheral joints, tendon sheaths and bursae, bone marrow involvement and identification of inflammatory lesions in whole-body MRI, particularly when the clinical picture is unclear. Also, MRI of the spine and spinal cord is used in order to diagnose synovial joint inflammation, bone marrow edema and spondylodiscitis as well as to assess their activity, location, and complications (spinal canal stenosis, subluxation, e.g. in the atlantoaxial region. This article discusses typical pathological changes seen on ultrasound and magnetic resonance imaging. The role of these two methods for disease monitoring, its identification in the pre-clinical stage and establishing its remission are also highlighted. 6. Modeling multibody systems with uncertainties. Part II: Numerical applications Energy Technology Data Exchange (ETDEWEB) Sandu, Corina, E-mail: [email protected]; Sandu, Adrian; Ahmadian, Mehdi [Virginia Polytechnic Institute and State University, Mechanical Engineering Department (United States) 2006-04-15 This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects .In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte-Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties. 7. Modeling multibody systems with uncertainties. 
Part II: Numerical applications International Nuclear Information System (INIS) 2006-01-01 This study applies generalized polynomial chaos theory to model complex nonlinear multibody dynamic systems operating in the presence of parametric and external uncertainty. Theoretical and computational aspects of this methodology are discussed in the companion paper 'Modeling Multibody Dynamic Systems With Uncertainties. Part I: Theoretical and Computational Aspects .In this paper we illustrate the methodology on selected test cases. The combined effects of parametric and forcing uncertainties are studied for a quarter car model. The uncertainty distributions in the system response in both time and frequency domains are validated against Monte-Carlo simulations. Results indicate that polynomial chaos is more efficient than Monte Carlo and more accurate than statistical linearization. The results of the direct collocation approach are similar to the ones obtained with the Galerkin approach. A stochastic terrain model is constructed using a truncated Karhunen-Loeve expansion. The application of polynomial chaos to differential-algebraic systems is illustrated using the constrained pendulum problem. Limitations of the polynomial chaos approach are studied on two different test problems, one with multiple attractor points, and the second with a chaotic evolution and a nonlinear attractor set. The overall conclusion is that, despite its limitations, generalized polynomial chaos is a powerful approach for the simulation of multibody dynamic systems with uncertainties 8. A platform for quality management in research institutes (part II Directory of Open Access Journals (Sweden) Klembalska Agnieszka 2016-09-01 Full Text Available In the recent years there has been a particularly strong pressure on changing old structures and management models in research institutes. Contemporary research institutes are scientific units which are commercial in character – almost 80% of funds come from companies and contractual research activity and services. They are the basic sector of science aiming at cooperation with the economy, applied and innovative research. In order to maintain the current and start new cooperation it is necessary to pay particular attention to maintaining, improving and exposing high level of quality of conducted activity. Taking into consideration the necessity of carrying out ever more complex research projects, conducting activity requiring fast reaction to change, risk analysis, which is assessed every year by the Ministry of Science and Higher Education – it seems that it is necessary to apply tools supporting the assessment of quality. In the proposed three-aspect perspective the following scopes of activity are emphasized: implemented quality management systems, area of scientific information and the sphere of cooperation with the client. This article constitutes the continuation of the subjects discussed in the first part – an extension of issues associated with the scope of responsibilities of particular Sections of the proposed Quality Management Platform in research institutes. 9. Cyclopentane combustion. Part II. Ignition delay measurements and mechanism validation KAUST Repository Rachidi, Mariam El 2017-06-12 This study reports cyclopentane ignition delay measurements over a wide range of conditions. 
The measurements were obtained using two shock tubes and a rapid compression machine, and were used to test a detailed low- and high-temperature mechanism of cyclopentane oxidation that was presented in part I of this study (Al Rashidi et al., 2017). The ignition delay times of cyclopentane/air mixtures were measured over the temperature range of 650–1350K at pressures of 20 and 40atm and equivalence ratios of 0.5, 1.0 and 2.0. The ignition delay times simulated using the detailed chemical kinetic model of cyclopentane oxidation show very good agreement with the experimental measurements, as well as with the cyclopentane ignition and flame speed data available in the literature. The agreement is significantly improved compared to previous models developed and investigated at higher temperatures. Reaction path and sensitivity analyses were performed to provide insights into the ignition-controlling chemistry at low, intermediate and high temperatures. The results obtained in this study confirm that cycloalkanes are less reactive than their non-cyclic counterparts. Moreover, cyclopentane, a high octane number and high octane sensitivity fuel, exhibits minimal low-temperature chemistry and is considerably less reactive than cyclohexane. This study presents the first experimental low-temperature ignition delay data of cyclopentane, a potential fuel-blending component of particular interest due to its desirable antiknock characteristics. 10. Polymers Based on Renewable Raw Materials – Part II Directory of Open Access Journals (Sweden) Jovanović, S. 2013-09-01 Full Text Available A short review of biopolymers based on starch (starch derivatives, thermoplastic starch, lignin and hemicelluloses, chitin (chitosan and products obtained by degradation of starch and other polysaccharides and sugars (poly(lactic acid, poly(hydroxyalkanoates, as well as some of their basic properties and application area, are given in this part. The problem of environmental and economic feasibility of biopolymers based on renewable raw materials and their competitiveness with polymers based on fossil raw materials is discussed. Also pointed out are the problems that appear due to the increasing use of agricultural land for the production of raw materials for the chemical industry and energy, instead for the production of food for humans and animals. The optimistic assessments of experts considering the development perspectives of biopolymers based on renewable raw materials in the next ten years have also been pointed out.At the end of the paper, the success of a team of researchers gathered around the experts from the company Bayer is indicated. They were the first in the world to develop a catalyst by which they managed to effectively activate CO - and incorporate it into polyols, used for the synthesis of polyurethanes in semi-industrial scale. By applying this process, for the first time a pollutant will be used as a basic raw material for the synthesis of organic compounds, which will have significant consequences on the development of the chemical industry, and therefore the production of polymers. 11. Oral health in Brazil - Part II: Dental Specialty Centers (CEOs Directory of Open Access Journals (Sweden) Vinícius Pedrazzi 2008-08-01 Full Text Available The concepts of health promotion, self-care and community participation emerged during the 1970s and, since then, their application has grown rapidly in the developed world, showing evidence of effectiveness. 
In spite of this, a major part of the population in the developing countries still has no access to specialized dental care such as endodontic treatment, dental care for patients with special needs, minor oral surgery, periodontal treatment and oral diagnosis. This review focuses on a program of the Brazilian Federal Government named CEOs (Dental Specialty Centers, which is an attempt to solve the dental care deficit of a population that is suffering from oral diseases and whose oral health care needs have not been addressed by the regular programs offered by the SUS (Unified National Health System. Literature published from 2000 to the present day, using electronic searches by Medline, Scielo, Google and hand-searching was considered. The descriptors used were Brazil, Oral health, Health policy, Health programs, and Dental Specialty Centers. There are currently 640 CEOs in Brazil, distributed in 545 municipal districts, carrying out dental procedures with major complexity. Based on this data, it was possible to conclude that public actions on oral health must involve both preventive and curative procedures aiming to minimize the oral health distortions still prevailing in developing countries like Brazil. 12. Medicine at the crossroads. Part II. Summary of completed project Energy Technology Data Exchange (ETDEWEB) NONE 1998-05-01 Medicine at the crossroads (a.k.a. The Future of Medicine) is an 8-part series of one-hour documentaries which examines the scientific and social forces that have shaped the practice of medicine around the world. The series was developed and produced over a five-year period and in eleven countries. Among the major issues examined in the series are the education of medical practitioners and the communication of medical issues. The series also considers the dilemmas of modern medicine, including the treatment of the elderly and the dying, the myth of the quick fix in the face of chronic and incurable diseases such as HIV, and the far-reaching implications of genetic treatments. Finally, the series examines the global progress made in medical research and application, as well as the questions remaining to be answered. These include not only scientific treatment, but accessibility and other critical topics affecting the overall success of medical advances. Medicine at the crossroads is a co-production of Thirteen/WNET and BBC-TV in association with Television Espafiola SA (RTVE) and the Australian Broadcasting Corporation. Stefan Moore of Thirteen/WNET and Martin Freeth of BBC-TV are series producers. George Page is executive in charge of medicine at the crossroads. A list of scholarly advisors and a program synopses is attached. 13. Institutional Approach in Economics and to Economics. Part II Directory of Open Access Journals (Sweden) 2015-12-01 Full Text Available The paper attempts to justify an alternative to conventional methodology of economics, as well as make a corresponding revision of the history of this discipline. It explains why the discipline in its orthodox version, and for the most unorthodox directions, is cognitively sterile and socio-economico-politically very harmful. The paper can be seen as a manifesto which calls for a radical change in the discipline. 
As a manifesto, it probably could be called "Economics: from repentance and resurrection", that is, I suggest to economists to repent of the harm that this discipline has brought and on the basis of this reflection to revive it in the different methodological framework that is able to make economics socially useful. The second part of the paper, which is published in this issue, consists of two sections. In the first one (section number 4 economics is regarded as an institution and discusses its origin and evolution, as well as its current operation. On the basis of the analysis made in this section we can conclude that this institution has a very high degree of stability and because of that it cannot be reformed without an active external intervention. The fifth section of the paper is devoted to the link of the social role of economics with its methodology. It argues that the social function of economic science is the study of reality as it is, and underlines that economic education must be oriented in this direction. Science.gov (United States) Maloney, C 1985-01-01 15. Overactive bladder – 18 years – Part II Science.gov (United States) Truzzi, Jose Carlos; Gomes, Cristiano Mendes; Bezerra, Carlos A.; Plata, Ivan Mauricio; Campos, Jose; Garrido, Gustavo Luis; Almeida, Fernando G.; Averbeck, Marcio Augusto; Fornari, Alexandre; Salazar, Anibal; Dell’Oro, Arturo; Cintra, Caio; Sacomani, Carlos Alberto Ricetto; Tapia, Juan Pablo; Brambila, Eduardo; Longo, Emilio Miguel; Rocha, Flavio Trigo; Coutinho, Francisco; Favre, Gabriel; Garcia, José Antonio; Castaño, Juan; Reyes, Miguel; Leyton, Rodrigo Eugenio; Ferreira, Ruiter Silva; Duran, Sergio; López, Vanda; Reges, Ricardo 2016-01-01 16. Overactive bladder – 18 years – Part II Directory of Open Access Journals (Sweden) Jose Carlos Truzzi 2016-04-01 17. Neuromorphic meets neuromechanics, part II: the role of fusimotor drive. Science.gov (United States) Jalaleddini, Kian; Minos Niu, Chuanxin; Chakravarthi Raja, Suraj; Joon Sohn, Won; Loeb, Gerald E; Sanger, Terence D; Valero-Cuevas, Francisco J 2017-04-01 We studied the fundamentals of muscle afferentation by building a Neuro-mechano-morphic system actuating a cadaveric finger. This system is a faithful implementation of the stretch reflex circuitry. It allowed the systematic exploration of the effects of different fusimotor drives to the muscle spindle on the closed-loop stretch reflex response. As in Part I of this work, sensory neurons conveyed proprioceptive information from muscle spindles (with static and dynamic fusimotor drive) to populations of α-motor neurons (with recruitment and rate coding properties). The motor commands were transformed into tendon forces by a Hill-type muscle model (with activation-contraction dynamics) via brushless DC motors. Two independent afferented muscles emulated the forces of flexor digitorum profundus and the extensor indicis proprius muscles, forming an antagonist pair at the metacarpophalangeal joint of a cadaveric index finger. We measured the physical response to repetitions of bi-directional ramp-and-hold rotational perturbations for 81 combinations of static and dynamic fusimotor drives, across four ramp velocities, and three levels of constant cortical drive to the α-motor neuron pool. We found that this system produced responses compatible with the physiological literature. Fusimotor and cortical drives had nonlinear effects on the reflex forces. 
In particular, only cortical drive affected the sensitivity of reflex forces to static fusimotor drive. In contrast, both static fusimotor and cortical drives reduced the sensitivity to dynamic fusimotor drive. Interestingly, realistic signal-dependent motor noise emerged naturally in our system without having been explicitly modeled. We demonstrate that these fundamental features of spinal afferentation sufficed to produce muscle function. As such, our Neuro-mechano-morphic system is a viable platform to study the spinal mechanisms for healthy muscle function-and its pathologies such as dystonia and spasticity. In 18. Neuromorphic meets neuromechanics, part II: the role of fusimotor drive Science.gov (United States) Jalaleddini, Kian; Minos Niu, Chuanxin; Chakravarthi Raja, Suraj; Sohn, Won Joon; Loeb, Gerald E.; Sanger, Terence D.; Valero-Cuevas, Francisco J. 2017-04-01 Objective. We studied the fundamentals of muscle afferentation by building a Neuro-mechano-morphic system actuating a cadaveric finger. This system is a faithful implementation of the stretch reflex circuitry. It allowed the systematic exploration of the effects of different fusimotor drives to the muscle spindle on the closed-loop stretch reflex response. Approach. As in Part I of this work, sensory neurons conveyed proprioceptive information from muscle spindles (with static and dynamic fusimotor drive) to populations of α-motor neurons (with recruitment and rate coding properties). The motor commands were transformed into tendon forces by a Hill-type muscle model (with activation-contraction dynamics) via brushless DC motors. Two independent afferented muscles emulated the forces of flexor digitorum profundus and the extensor indicis proprius muscles, forming an antagonist pair at the metacarpophalangeal joint of a cadaveric index finger. We measured the physical response to repetitions of bi-directional ramp-and-hold rotational perturbations for 81 combinations of static and dynamic fusimotor drives, across four ramp velocities, and three levels of constant cortical drive to the α-motor neuron pool. Main results. We found that this system produced responses compatible with the physiological literature. Fusimotor and cortical drives had nonlinear effects on the reflex forces. In particular, only cortical drive affected the sensitivity of reflex forces to static fusimotor drive. In contrast, both static fusimotor and cortical drives reduced the sensitivity to dynamic fusimotor drive. Interestingly, realistic signal-dependent motor noise emerged naturally in our system without having been explicitly modeled. Significance. We demonstrate that these fundamental features of spinal afferentation sufficed to produce muscle function. As such, our Neuro-mechano-morphic system is a viable platform to study the spinal mechanisms for healthy muscle function—and its 19. Denial of Chronic Illness and Disability: Part II. Research Findings, Measurement Considerations, and Clinical Aspects Science.gov (United States) Livneh, Hanoch 2009-01-01 The concept of denial has been an integral part of the psychological and disability studies bodies of literature for over 100 years. Yet, denial is a highly elusive concept and has been associated with mixed, indeed conflicting theoretical perspectives, clinical strategies, and empirical findings. In part II the author reviews empirical findings,… 20. Three Mile Island: a report to the commissioners and to the public. 
Volume II, Part 3 International Nuclear Information System (INIS) 1979-01-01 This is the third and final part of the second volume of a study of the Three Mile Island accident. Part 3 of Volume II contains descriptions and assessments of responses to the accident by the utility and by the NRC and other government agencies 1. Studies in Enrollment Trends and Patterns. Part II--Summer Quarter: 1940-1964. Science.gov (United States) Schmid, Calvin F.; Watson, F. Jean This is the second part of a report on major facets of institutional change at the University of Washington. Part II is a detailed analysis of Summer Quarter students and covers: class differentials in enrollment trends; trends in undergraduate students by major field and college; trends in graduate and professional students by major field and… 2. Literacy and Deaf Students in Taiwan: Issues, Practices and Directions for Future Research--Part II Science.gov (United States) Liu, Hsiu Tan; Andrews, Jean F.; Liu, Chun Jung 2014-01-01 In Part I, we underscore the issues surrounding young deaf and hard of hearing (DHH) learners of literacy in Taiwan who use sign to support their learning of Chinese literacy. We also described the linguistic features of Chinese writing and the visual codes used by DHH children. In Part II, we describe the reading and writing practices used with… 3. Precompetición y ansiedad en fisicoculturistas Directory of Open Access Journals (Sweden) Félix Arbinaga Ibarzábal 2005-01-01 4. PGE Production in Southern Africa, Part II: Environmental Aspects Directory of Open Access Journals (Sweden) Benedikt Buchspies 2017-11-01 Full Text Available Platinum group elements (PGEs, 6E PGE = Pt + Pd + Rh + Ru + Ir + Au) are used in numerous applications that seek to reduce environmental impacts of mobility and energy generation. Consequently, the future demand for PGEs is predicted to increase. Previous studies indicate that environmental impacts of PGE production change over time, emphasizing the need for up-to-date data and assessments. In this context, an analysis of environmental aspects of PGE production is needed to support the environmental assessment of technologies using PGEs, to reveal environmental hotspots within the production chain and to identify optimization potential. Therefore, this paper assesses greenhouse gas (GHG) emissions, cumulative fossil energy demand (CEDfossil), sulfur dioxide (SO2) emissions and water use of primary PGE production in Southern Africa, where most of today's supply originates from. The analysis shows that in 2015, emissions amounted to 45 t CO2-eq. and 502 kg SO2 per kg 6E PGE in the case of GHG and SO2 emissions, respectively. GHG emissions are dominated by emissions from electricity provision, contributing more than 90% to the overall GHG emissions. The CEDfossil amounted to 0.60 TJ per kg 6E PGE. A detailed analysis of the CEDfossil reveals that electricity provision based on coal power consumes the most fossil energy carriers among all energy forms. Results show that the emissions are directly related to the electricity demand. Thus, the reduction in the electricity demand presents the major lever to reduce the consumption of fossil energy resources and the emission of GHGs and SO2. In 2015, the water withdrawal amounted to 0.272 million L per kg 6E PGE. Additionally, 0.402 million L of recycled water were used per kg 6E PGE. All assessed indicators except ore grades and production volumes reveal increasing trends in the period from 2010 to 2015.
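The per-kilogram intensities quoted in the PGE entry above (45 t CO2-eq., 502 kg SO2, 0.60 TJ of fossil energy and 0.272 million L of water withdrawn per kg 6E PGE in 2015) can be turned into absolute footprints by simple scaling. The sketch below does this for a purely hypothetical 100 t of annual 6E PGE output; the production figure is an assumption for illustration, not a number from the paper.

```python
# Per-kg intensities for primary 6E PGE production in 2015, as reported in the abstract above.
INTENSITY_PER_KG = {
    "ghg_t_co2eq": 45.0,           # t CO2-eq. per kg 6E PGE
    "so2_kg": 502.0,               # kg SO2 per kg 6E PGE
    "fossil_energy_tj": 0.60,      # TJ (CEDfossil) per kg 6E PGE
    "water_withdrawal_ml": 0.272,  # million litres withdrawn per kg 6E PGE
}

def footprint(production_kg: float) -> dict:
    """Scale the per-kg intensities to an assumed production volume."""
    return {key: value * production_kg for key, value in INTENSITY_PER_KG.items()}

# Hypothetical example: 100 t (100,000 kg) of 6E PGE produced in a year.
totals = footprint(100_000)
print(f"GHG: {totals['ghg_t_co2eq']:.2e} t CO2-eq.")          # 4.50e6 t CO2-eq.
print(f"SO2: {totals['so2_kg']:.2e} kg")                      # 5.02e7 kg SO2
print(f"Fossil energy: {totals['fossil_energy_tj']:.2e} TJ")  # 6.00e4 TJ
print(f"Water withdrawn: {totals['water_withdrawal_ml']:.2e} million L")
```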
It can be concluded that difficult market conditions (see part I of this paper series and increasing 5. PROBABILITY BASED CORROSION CONTROL FOR WASTE TANKS - PART II Energy Technology Data Exchange (ETDEWEB) Hoffman, E.; Edwards, T. 2010-12-09 As part of an ongoing study to evaluate the discontinuity in the corrosion controls at the SRS tank farm, a study was conducted this year to assess the minimum concentrations required below 1 molar nitrate (see Figure 1 of the report). Current controls on the tank farm solution chemistry are in place to prevent the initiation and propagation of pitting and stress corrosion cracking in the primary steel waste tanks. The controls are based upon a series of experiments performed with simulated solutions on materials used for construction of the tanks, namely ASTM A537 carbon steel (A537). During FY09, an experimental program was undertaken to investigate the risk associated with reducing the minimum molar nitrite concentration required to confidently inhibit pitting in dilute solutions (i.e., less than 1 molar nitrate). The experimental results and conclusions herein provide a statistical basis to quantify the probability of pitting for the tank wall exposed to various solutions with dilute concentrations of nitrate and nitrite. Understanding the probability for pitting will allow the facility to make tank-specific risk-based decisions for chemistry control. Based on previous electrochemical testing, a statistical test matrix was developed to refine and solidify the application of the statistical mixture/amount model to corrosion of A537 steel. A mixture/amount model was identified based on statistical analysis of recent and historically collected electrochemical data. This model provides a more complex relationship between the nitrate and nitrite concentrations and the probability of pitting than is represented by the model underlying the current chemistry control program, and its use may provide a technical basis for the utilization of less nitrite to inhibit pitting at concentrations below 1 molar nitrate. FY09 results fit within the mixture/amount model, and further refine the nitrate regime in which the model is applicable. The combination of visual observations and cyclic 6. PIO I-II tendencies. Part 2. Improving the pilot modeling Directory of Open Access Journals (Sweden) Ioan URSU 2011-03-01 Full Text Available The study is conceived in two parts and aims to get some contributions to the problem of PIO aircraft susceptibility analysis. Part I, previously published in this journal, highlighted the main steps of deriving a complex model of the human pilot. The current Part II of the paper considers a proper procedure for the synthesis of the human pilot mathematical model in order to analyze PIO II type susceptibility of a VTOL-type aircraft, related to the presence of a position- and rate-limited actuator. The mathematical tools are those of semi-global stability theory developed in recent works. 7. Factores de riesgo de preeclampsia: enfoque inmunoendocrino. Parte II Risk factors for preeclampsia: immunoendocrine approach. Part II Directory of Open Access Journals (Sweden) Jeddú Cruz Hernández 2008-03-01 Full Text Available
This second part of the review deals with the new risk factors for preeclampsia, also known as emerging risk factors, which include endocrine and immunological phenomena and factors related to endothelial dysfunction, such as increased oxidative stress, reduced antioxidant vitamins and others. These recently described risk factors will have to be taken into account in the near future if the onset of preeclampsia is to be predicted efficiently, so that action can be taken early in the course of the disease, its adverse obstetric consequences minimized as much as possible and, in some cases, the appearance of the disease itself prevented. 8. Los trastornos de ansiedad durante la transición a la menopausia Directory of Open Access Journals (Sweden) A. Carvajal-Lohr 2016-01-01 9. International Working Group on Fast Reactors Thirteenth Annual Meeting. Summary Report. Part II International Nuclear Information System (INIS) 1980-10-01 The Thirteenth Annual Meeting of the IAEA International Working Group on Fast Reactors was held at the IAEA Headquarters, Vienna, Austria, from 9 to 11 April 1980. The Summary Report (Part I) contains the Minutes of the Meeting. The Summary Report (Part II) contains the papers which review the national programme in the field of LMFBRs and other presentations at the Meeting. The Summary Report (Part III) contains the discussions on the review of the national programmes 10. Improving Energy Efficiency in Pharmaceutical Manufacturing Operations -- Part II: HVAC, Boilers and Cogeneration OpenAIRE Galitsky, Christina; Worrell, Ernst; Masanet, Eric; Chang, Sheng-chieh 2006-01-01 Whereas Part I of this article ("Improving Energy Efficiency in Pharmaceutical Manufacturing Operations - Part I: Motors, Drives and Compressed Air Systems", Pharmaceutical Manufacturing, Feb. 2006) focused on motors, drives and compressed air systems, Part II will review, briefly, potential improvements in heating, ventilation and air conditioning (HVAC) systems, overall building management and boilers. Research in this article was first published last September, in an extensive report devel... 11. Part II DEFF Research Database (Denmark) Guo, Song; Vollesen, Anne Luise Haulund; Hansen, Young Bae Lee 2017-01-01 the start of the infusion. A control group of six healthy volunteers received intravenous saline. Results PACAP38 infusion caused significant changes in plasma concentrations of VIP (p = 0.026), prolactin (p = 0.011), S100B (p thyroid-stimulating hormone (TSH; p = 0.015), but not CGRP (p... 12. Ansiedad y diagnóstico del síndrome premenstrual (SPM) Directory of Open Access Journals (Sweden) CARMEN BORRÁS SANSALONI 2001-01-01 13. Validating the standard for the National Board Dental Examination Part II. Science.gov (United States) Tsai, Tsung-Hsun; Neumann, Laura M; Littlefield, John H 2012-05-01 As part of the overall exam validation process, the Joint Commission on National Dental Examinations periodically reviews and validates the pass/fail standard for the National Board Dental Examination (NBDE), Parts I and II.
The most recent standard-setting activities for NBDE Part II used the Objective Standard Setting method. This report describes the process used to set the pass/fail standard for the 2009 exam. The failure rate on the NBDE Part II increased from 5.3 percent in 2008 to 13.7 percent in 2009 and then decreased to 10 percent in 2010. This article describes the Objective Standard Setting method and presents the estimated probabilities of classification errors based on the beta binomial mathematical model. The results show that the probability of correct classifications of candidate performance is very high (0.97) and that probabilities of false negative and false positive errors are very small (0.03 and <0.001, respectively). The low probability of classification errors supports the conclusion that the pass/fail score on the NBDE Part II is a valid guide for making decisions about candidates for dental licensure. 14. A virtual roundtable on Iser's legacy Part II: conversation with Mark Freeman Directory of Open Access Journals (Sweden) Mark Freeman 2017-06-01 Full Text Available In this article you will find the second part of a roundtable on Wolfgang Iser's legacy with Gerald Prince, Mark Freeman, Marco Caracciolo and Federico Bertoni. In Part II we discuss with Prof. Mark Freeman the role of narrative hermeneutics in understanding the human realm and the tenets of self-interpretation, as well as the necessity of literary anthropology and literary theory. 15. Tratamientos eficaces para el Trastorno de Ansiedad Social Directory of Open Access Journals (Sweden) Carolina Baeza Velasco 2007-01-01 Full Text Available Efficient Treatments for Social Anxiety Disorder. Abstract: Social Anxiety Disorder (SAD), also known as Social Phobia, is recognized today as a chronic and disabling psychiatric condition. The high prevalence and clinical significance of the illness emphasize the need for early recognition and effective treatment. The aim of this work is to present the main existing treatments, paying attention to the research and meta-analytic studies that attempt to differentiate the various types of intervention in relation to their efficacy. 16. OpenAIRE Bojórquez de la Torre, Javier Daniel 2015-01-01 Objectives: To determine the association between the level of clinical anxiety and academic performance in first-year students of the Faculty of Human Medicine of the Universidad de San Martín de Porres (2012 and 2013). Material and methods: An observational retrospective cohort study was carried out. The population consisted of 687 students of the Faculty of Human Medicine of the Universidad de San Martín de Porres, to whom the Zung Self-Rating Anxiety Scale was applied durin... 17. Creencias irracionales y ansiedad en estudiantes de medicina de una universidad nacional OpenAIRE García Arce, Sara del Carmen 2014-01-01 It reviews anxiety and irrational beliefs as the background of anxious manifestations, insofar as they present themselves either as evidence of a personality trait or as a state associated with situations and reactions of an emotional kind. The objectives were to identify the irrational beliefs associated with state anxiety in a group of Human Medicine students in the city of Tarapoto, and to identify the irrational beliefs associated with trait anxiety. After working with... 18.
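The NBDE Part II entry above (item 13) reports classification-error probabilities estimated from a beta binomial model of candidate scores around the pass/fail cut. The sketch below illustrates that general kind of calculation only; the test length, cut score and beta parameters are invented for the example and are not the values used by the Joint Commission, so the printed probabilities are not expected to reproduce the published 0.97 / 0.03 / <0.001 figures.

```python
import numpy as np
from scipy import stats

# Illustrative (invented) parameters: a 200-item exam, pass cut at 65% correct,
# and a Beta(a, b) distribution of true proficiency centred above the cut.
n_items = 200
cut_score = 130                  # raw passing score (65% of 200), assumed
a, b = 22.0, 9.0                 # assumed beta parameters for true proficiency

theta = np.linspace(1e-6, 1 - 1e-6, 2000)   # grid over true proficiency
w = stats.beta.pdf(theta, a, b)             # density of true proficiency
w /= np.trapz(w, theta)                     # normalise on the numerical grid

# P(observed raw score falls below the cut | true proficiency theta)
p_fail_given_theta = stats.binom.cdf(cut_score - 1, n_items, theta)

true_master = theta >= cut_score / n_items
false_negative = np.trapz(w[true_master] * p_fail_given_theta[true_master], theta[true_master])
false_positive = np.trapz(w[~true_master] * (1 - p_fail_given_theta[~true_master]), theta[~true_master])
correct = 1.0 - false_negative - false_positive

print(f"P(correct classification) ~ {correct:.3f}")
print(f"P(false negative) ~ {false_negative:.4f}, P(false positive) ~ {false_positive:.4f}")
```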
7 squadron in World War II (Part 2: 1943-1945) | Robinson | Scientia ... African Journals Online (AJOL) 19. Title II, Part A: Don't Scrap It, Don't Dilute It, Fix It Science.gov (United States) Coggshall, Jane G. 2015-01-01 The Issue: Washington is taking a close look at Title II, Part A (Title IIA) of the Elementary and Secondary Education Act (ESEA) as Congress debates reauthorization. The program sends roughly 2.5 billion a year to all states and nearly all districts to "(1) increase student academic achievement through strategies such as improving teacher… 20. Student Performance on the NBME Part II Subtest and Subject Examination in Obstetrics-Gynecology. Science.gov (United States) Metheny, William P.; Holzman, Gerald B. 1988-01-01 Comparison of the scores of 342 third-year medical students on the National Board of Medical Examiners subject examination and the Part II subtest on obstetrics-gynecology found significantly better performance on the former, suggesting a need to interpret the scores differently. (Author/MSE) 1. Instructional Climates in Preschool Children Who Are At-Risk. Part II: Perceived Physical Competence Science.gov (United States) Robinson, Leah E.; Rudisill, Mary E.; Goodway, Jacqueline D. 2009-01-01 In Part II of this study, we examined the effect of two 9-week instructional climates (low-autonomy [LA] and mastery motivational climate [MMC]) on perceived physical competence (PPC) in preschoolers (N = 117). Participants were randomly assigned to an LA, MMC, or comparison group. PPC was assessed by a pretest, posttest, and retention test with… 2. Sesgos de Memoria en los Trastornos de Ansiedad Directory of Open Access Journals (Sweden) Rubén Sanz Blasco 2011-01-01 Full Text Available En la actualidad existen un gran número de modelos teóricos que defienden la importancia de la valoración cognitiva en el inicio y mantenimiento de la respuesta de ansiedad. La investigación acerca de los procesos cognitivos que subyacen a la respuesta de ansiedad ha puesto de manifiesto de manera bastante sólida cómo los sujetos ansiosos en comparación con sujetos normales muestran una tendencia a atender de manera selectiva y a interpretar de un modo catastrofista información congruente con su estado emocional. Sin embargo, existiría un tercer sesgo para el cual los datos de las distintas investigaciones han arrojado resultados difusos en los distintos trastornos y tareas experimentales. Nos referimos al sesgo de memoria, que puede definirse como la tendencia a recordar preferentemente estimulación negativa presentada previamente en comparación con estimulación neutra. Se presenta un trabajo de revisión teórica sistemática que tiene como objetivo fundamental determinar la existencia del sesgo de memoria a lo largo de los diferentes trastornos de ansiedad y de las diferentes tareas experimentales utilizadas en la evaluación de dicho sesgo. 3. High energy physics studies. Progress report. Part I. Experimental program. Part II. Theoretical program International Nuclear Information System (INIS) Romanowski, T.A.; Tanaka, K.; Wada, W.W. 1978-01-01 Experimental Program: assembly of an experiment as Fermilab E-531 to measure decay lifetimes, with tagged emulsion of charmed particles produced by high energy neutrinos was finished, and data taking now is in progress. An experiment to measure prompt neutrino production at Fermilab, E-613, was approved and detailed design of it is continuing. 
Search for parity violation in scattering of polarized protons, an experiment E-446-ZGS at ANL, was performed with a sensitivity of 10⁻⁶ for detection of that process and yielded null results. Another run with improved sensitivity of 10⁻⁷ is in preparation. Data analysis of the neutrino experiment E-310 at Fermilab will continue. Trimuon events, a new discovery, were identified in those data. Analysis of data on meson production from experiments performed at the ZGS-ANL, E-397, E-420 and E-428, with charged and neutral spectrometers will continue. A new relatively broad resonance (T approx. 70 MeV) with quantum numbers IJ^P = 00^-1 was discovered in the data from E-397. Analysis of beta decay of polarized Σ⁻ hyperons is in progress. Participation in the design of the experimental areas for the Isabelle colliding proton beam accelerator will continue. Theoretical Program: topics of current interest in particle theory which will be investigated in the coming year are: the instanton-anti-instanton QCD gauge fields, discrete symmetries which may determine quark masses in the SU(2) x U(1) model, calculation of charmed meson production in e⁺e⁻ collisions and formation of gluon jets, Higgs boson production in pp collisions, calculation of the Higgs boson mass in terms of the vector boson mass, study of Lagrangians with gauge and Higgs scalar fields, investigation of Faddeev-Popov determinants as related to quantum chromodynamics, a study of quantum flavor dynamics and anomalies in the axial vector Ward identity and a study of supersymmetry as a part of a realistic model of leptonic interactions 4. Ansiedad hacia las matemáticas, agrado y utilidad en futuros maestros OpenAIRE Nortes, Rosa; Nortes, Andrés 2014-01-01 To find out whether future primary school teachers have anxiety toward mathematics, two samples of students currently taking the degree at the Universidad de Murcia were chosen in two consecutive academic years, and two anxiety scales were applied to them, together with one on liking and one on usefulness. The results obtained indicate that the level of anxiety remains stable, that the students like Mathematics little although they find it useful, that anxiety before an exam is high and th... 5. Subseabed disposal program annual report, January-December 1979. Volume II. Appendices (principal investigator progress reports). Part 2 of 2 International Nuclear Information System (INIS) Talbert, D.M. 1981-04-01 Volume II of the sixth annual report describing the progress and evaluating the status of the Subseabed Disposal Program contains the appendices referred to in Volume II, Summary and Status. Because of the length of Volume II, it has been split into two parts for publication purposes. Part 1 contains Appendices A-O; Part 2 contains Appendices P-FF. Separate abstracts have been prepared for each appendix for inclusion in the Energy Data Base 6. GRUPO TERAPÊUTICO COM MULHERES COM TRANSTORNOS DE ANSIEDADE: AVALIAÇÃO PELA ESCALA DE ANSIEDADE DE HAMILTON Directory of Open Access Journals (Sweden) ÂNGELA MARIA ALVES E SOUZA 2008-01-01 Full Text Available La necesidad de evaluar la asistencia a un grupo de mujeres nos llevó a la aplicación de una escala. Fueron seleccionadas dieciocho usuarias con diagnóstico de trastornos neuróticos, relacionados al estrés y somatoformes. Se aplicó la Escala de Evaluación de Ansiedad de Hamilton (HAM-A) con el objetivo de verificar el nivel de ansiedad antes y después de empezar las sesiones grupales.
Realizamos 16 sesiones semanales, con desarrollo de técnicas de relajación y arte terapia, y como referencial la Terapia de Gestalt de corta duración. Después de las secciones de grupo, el nivel de ansiedad de las mujeres acompañadas a través de abordaje grupal, tuvo reducción significativa en lo que se refiere a los síntomas que habían aparecido como características determinantes para su sufrimiento psíquico. 7. Ansiedad estado y ansiedad rasgo en bailarines según el tipo de danza que practican y su condición como bailarín OpenAIRE Valle, María Liliana; Universidad de Lima (Perú) 2014-01-01 Se determinó comparativamente la ansiedad estado y la ansiedad rasgo en bailarines según el tipo de danza escogido (clásica y contemporánea) y su condición como bailarín (profesional y amateur). Se trabajó con 58 bailarines de Lima Metropolitana, de los cuales el 24,1 % fueron bailarines clásicos amateur; el 27,6 %, bailarines clásicos profesionales; el 20,7 %, bailarines contemporáneos amateur, y 27,6 %, bailarines contemporáneos profesionales. Se utilizó el Inventario de Ansiedad Rasgo-Esta... 8. Impacto del resultado post-partido en la ansiedad cognitiva, ansiedad somática y la autoconfianza en jóvenes jugadoras de fútbol OpenAIRE Arroyo del Bosque, Rubén; Irazusta Adarraga, Susana; González Rodríguez, Oscar; Bastarrica Varela, Olatz 2014-01-01 22 p. - Publicado en CD-ROM El presente estudio analiza el impacto que tiene el resultado post-partido en la ansiedad cognitiva, ansiedad somática y la autoconfianza en jóvenes jugadoras de fútbol. Una muestra de 45 jóvenes futbolistas mujeres completaron la traducción realizada por Capdevila (1997) al castellano de la versión en inglés del Inventario de Ansiedad Competitiva en el Deporte (CSAI-2) (Martens et al., 1990), cuestionario que evalúa los componentes cognitivos y somáticos de la ... 9. Ansiedade e desempenho: um estudo com uma equipe infantil de voleibol feminino Anxiety and performance: a study of an infant female volleyball team Directory of Open Access Journals (Sweden) Christi Noriko Sonoo 2010-09-01 Full Text Available O objetivo deste estudo foi analisar a 'ansiedade traço' e 'ansiedade estado' e sua relação com o desempenho pré-competitivo e competitivo no voleibol. O caso estudado foi uma equipe de voleibol feminina infantil. Utilizou-se como instrumentos de medida os protocolos CSAI-II e o SCAT. A coleta de dados ocorreu nos locais de treinamento e durante os Jogos Colegiais 2005. Para a análise dos dados utilizou-se estatística descritiva e o teste "t" de Student. Os resultados indicaram: 'ansiedade traço' pré-competitiva e competitiva sem diferenças significativas, porém na 'ansiedade estado' observou-se diferença significativa (P=0,05 para o componente cognitivo. Verificou-se um equilíbrio entre os componentes da 'ansiedade estado' na fase preparatória pelas vitórias em todos os jogos, o que se mostrou diferente na fase competitiva, onde a equipe sofreu duas derrotas. Conclui-se que a ansiedade pode afetar o desempenho das atletas na situação competitiva nesta modalidade e categoria estudada.The objective of this study was to analyze trait anxiety and state anxiety and their relation with pre-competitive and competitive performance in volleyball. The case studied was a infantile female volleyball team. The measure instruments used were the CSAI-II and SCAT protocols. Data collection occurred in training stations and during High School games 2005. Data analysis was performed through descriptive statistics and Student's "t" test. 
The results indicated: pre-competitive and competitive trait anxiety with no significant differences, however in relation to state anxiety a significant difference was observed (P=0,05 in the cognitive component. A balance was verified between state anxiety components in preparation phase because of the winnings in all games, fact that was different in the competitive phase, where the team lost two times. It is concluded that anxiety can affect athletes' performance in competitive situation in the modality and 10. Bloqueio do nervo supraescapular: procedimento importante na prática clínica. Parte II Suprascapular nerve block: important procedure in clinical practice. Part II Directory of Open Access Journals (Sweden) Marcos Rassi Fernandes 2012-08-01 Full Text Available O bloqueio do nervo supraescapular é um método de tratamento reprodutível, confiável e extremamente efetivo no controle da dor no ombro. Esse método tem sido amplamente utilizado por profissionais na prática clínica, como reumatologistas, ortopedistas, neurologistas e especialistas em dor, na terapêutica de enfermidades crônicas, como lesão irreparável do manguito rotador, artrite reumatoide, sequelas de AVC e capsulite adesiva, o que justifica a presente revisão (Parte II. O objetivo deste estudo foi descrever as técnicas do procedimento e suas complicações descritas na literatura, já que a primeira parte reportou as indicações clínicas, drogas e volumes utilizados em aplicação única ou múltipla. Apresentamse, detalhadamente, os acessos para a realização do procedimento tanto direto como indireto, anterior e posterior, lateral e medial, e superior e inferior. Diversas são as opções para se realizar o bloqueio do nervo supraescapular. Apesar de raras, as complicações podem ocorrer. Quando bem indicado, este método deve ser considerado.The suprascapular nerve block is a reproducible, reliable, and extremely effective treatment method in shoulder pain control. This method has been widely used by professionals in clinical practice such as rheumatologists, orthopedists, neurologists, and pain specialists in the treatment of chronic diseases such as irreparable rotator cuff injury, rheumatoid arthritis, stroke sequelae, and adhesive capsulitis, which justifies the present review (Part II. The objective of this study was to describe the techniques and complications of the procedure described in the literature, as the first part reported the clinical indications, drugs, and volumes used in single or multiple procedures. We present in details the accesses used in the procedure: direct and indirect, anterior and posterior, lateral and medial, upper and lower. There are several options to perform suprascapular nerve block 11. The basic science of dermal fillers: past and present Part II: adverse effects. Science.gov (United States) Gilbert, Erin; Hui, Andrea; Meehan, Shane; Waldorf, Heidi A 2012-09-01 The ideal dermal filler should offer long-lasting aesthetic improvement with a minimal side-effect profile. It should be biocompatible and stable within the injection site, with the risk of only transient undesirable effects from injection alone. However, all dermal fillers can induce serious and potentially long-lasting adverse effects. In Part II of this paper, we review the most common adverse effects related to dermal filler use. 12. Societal Planning: Identifying a New Role for the Transport Planner-Part II: Planning Guidelines DEFF Research Database (Denmark) Khisty, C. 
Jotin; Leleur, Steen 1997-01-01 The paper seeks to formulate planning guidelines based on Habermas's theory of communicative action. Specifically, this has led to the formulation of a set of four planning validity claims connected to four types of planning guidelines concerning adequacy, dependency, suitability and adaptability ... vis-à-vis the planning validity claims. Among other things the contingency of this process is outlined. It is concluded (part I & II) that transport planners can conveniently utilize the guidelines in their professional practice, tailored to their particular settings.... 13. Quantitative impact of aerosols on numerical weather prediction. Part II: Impacts to IR radiance assimilation Science.gov (United States) Marquis, J. W.; Campbell, J. R.; Oyola, M. I.; Ruston, B. C.; Zhang, J. 2017-12-01 This is part II of a two-part series examining the impacts of aerosol particles on weather forecasts. In this study, the aerosol indirect effects on weather forecasts are explored by examining the temperature and moisture analysis associated with assimilating dust-contaminated hyperspectral infrared radiances. The dust-induced temperature and moisture biases are quantified for different aerosol vertical distribution and loading scenarios. The overall impacts of dust contamination on temperature and moisture forecasts are quantified over the west coast of Africa, with the assistance of aerosol retrievals from AERONET, MPL, and CALIOP. Finally, methods for improving hyperspectral infrared data assimilation in dust-contaminated regions are proposed. 14. Advances in explosives analysis--part II: photon and neutron methods. Science.gov (United States) Brown, Kathryn E; Greenfield, Margo T; McGrane, Shawn D; Moore, David S 2016-01-01 The number and capability of explosives detection and analysis methods have increased dramatically since publication of the Analytical and Bioanalytical Chemistry special issue devoted to Explosives Analysis [Moore DS, Goodpaster JV, Anal Bioanal Chem 395:245-246, 2009]. Here we review and critically evaluate the latest (the past five years) important advances in explosives detection, with details of the improvements over previous methods, and suggest possible avenues towards further advances in, e.g., stand-off distance, detection limit, selectivity, and penetration through camouflage or packaging. The review consists of two parts. Part I discussed methods based on animals, chemicals (including colorimetry, molecularly imprinted polymers, electrochemistry, and immunochemistry), ions (both ion-mobility spectrometry and mass spectrometry), and mechanical devices. This part, Part II, will review methods based on photons, from very energetic photons including X-rays and gamma rays down to the terahertz range, and neutrons. 15. Estudo transversal de ansiedade pré-operatória em crianças: utilização da escala de Yale modificada Estudio transversal de ansiedad preoperatoria en niños: utilización de la escala de Yale modificada A transversal study on preoperative anxiety in children: use of the modified Yale scale Directory of Open Access Journals (Sweden) Álvaro Antônio Guaratini 2006-12-01 mention anxiety at the time of the outpatient preanesthetic evaluation (APA). This cross-sectional study sought to evaluate the level and prevalence of anxiety at the time of the APA and of the clinical consultation, using the EAPY-m scale, in preschool children.
METHOD: One hundred children, physical status ASA I and II, were selected: G PED = 50 children to undergo clinical evaluation; G APA = 50 children to undergo the APA for surgical scheduling. The study took place in the waiting rooms of the pediatric and APA outpatient clinics while the children waited for their respective appointments. Two observers applied the EAPY-m scale independently. The variables analyzed were sociodemographic data and the mean and percentage of patients with anxiety (EAPY-m > 30). Statistical analysis was performed, considering p significant. BACKGROUND AND OBJECTIVES: Scales can be useful to recognize anxiety states and to indicate ways to prevent complications due to elevated levels of anxiety. The modified Yale Preoperative Anxiety Scale (YPAS-m) was developed to evaluate anxiety in preschool children at the time of the anesthetic induction. It is an observational scale, being applied and completed in a short period of time. Studies on anxiety in children in the preoperative period do not mention anxiety at the preanesthetic evaluation. This transversal study tried to evaluate the level and prevalence of anxiety at the preanesthetic evaluation and in the clinical evaluation using the YPAS-m in preschool children. METHODS: One hundred children, physical status ASA I and II were evaluated; G PED = 50 children undergoing clinical evaluation; G PEC = 50 children undergoing preanesthetic evaluation for surgery. The study was conducted at the pediatric clinic and preanesthetic evaluation waiting-room while the children waited for their appointment. Two observers applied the YPAS-m independently. Parameters analyzed included the demographic data; and median 16. El enfoque cognitivo-comportamental para la ansiedad por la salud ("Hipocondría") OpenAIRE Salkovskis, Paul M.; Rimes, Katharine A. 1997 This article describes the general outlines of the cognitive approach applied to health anxiety (hypochondriasis). In this disorder, anxiety is operationalized through four key cognitive factors: the perceived probability of having an illness 17. Alchemical poetry in medieval and early modern Europe: a preliminary survey and synthesis. Part II - Synthesis. Science.gov (United States) Kahn, Didier 2011-03-01 This article provides a preliminary description of medieval and early modern alchemical poetry composed in Latin and in the principal vernacular languages of western Europe. It aims to distinguish the various genres in which this poetry flourished, and to identify the most representative aspects of each cultural epoch by considering the medieval and early modern periods in turn. Such a distinction (always somewhat artificial) between two broad historical periods may be justified by the appearance of new cultural phenomena that profoundly modified the character of early modern alchemical poetry: the ever-increasing importance of the prisca theologia, the alchemical interpretation of ancient mythology, and the rise of neo-Latin humanist poetry. Although early modern alchemy was marked by the appearance of new doctrines (notably the alchemical spiritus mundi and Paracelsianism), alchemical poetry was only superficially modified by criteria of a scientific nature, which therefore appear to be of lesser importance. This study falls into two parts.
Part I provides a descriptive survey of extant poetry, and in Part II the results of the survey are analysed in order to highlight such distinctive features as the function of alchemical poetry, the influence of the book market on its evolution, its doctrinal content, and the question of whether any theory of alchemical poetry ever emerged. Part II is accompanied by an index of the authors and works cited in both parts. 18. AROMATERAPIA E ANSIEDADE: REVISÃO INTEGRATIVA DA LITERATURA OpenAIRE Domingo, Thiago da Silva; Braga, Eliana Mara 2013-01-01 O presente estudo teve como objetivo investigar na literatura científica nacional e internacional como se utiliza a aromaterapia como ferramentaterapêutica para a redução da ansiedade. Trata-se de uma pesquisabibliográfica que adotou como método a revisão integrativa da literatura. Foram selecionados estudos publicados entre 2008 e 2012, nas bases de dados Scielo, Medline e Cinahl. A análise se constituiu de uma amostra de 20 artigos científicos, sendo 13 com delineamento experimentale quase ... 19. Ansiedad, stress y trastornos psicofisiológicos OpenAIRE Casado Morales, María Isabel 1994-01-01 La presente tesis doctoral se centra en el estudio de los trastornos psicofisiologicos y su relación con una serie de variables psicológicas asociadas a los mismos. En primer lugar, por su relación con dichos trastornos se recogen de forma global el desarrollo histórico y los modelos explicativos de los conceptos de ansiedad y stress. Centrándonos ya de forma directa sobre los trastornos psicofisiologicos, se reseñan las distintas formas de abordaje, los factores determinantes y los modelos e... 20. Bienestar espiritual y ansiedad en pacientes diabéticos Directory of Open Access Journals (Sweden) Ma. del Refugio Zavala 2006-01-01 Full Text Available El presente estudio investigó la correlación entre los niveles de bienestar espiritual y los niveles de ansiedad-estado, en una muestra de 190 pacientes obtenida por medio de muestreo no probabilístico; los criterios de inclusión fueron: adultos de 35 a 85 años de edad, diagnosticados con diabetes mellitus tipo 2, se excluyeron los pacientes con demencia. El marco teórico que guió el estudio es el Modelo de Adaptación de Callista Roy. El diseño fue descriptivo, transversal, correlacional. La muestra poblacional se distribuyó normalmente con una potencia de 80. Se utilizó el cuestionario de espiritualidad de Reed y la escala de ansiedad-estado de Spielberger; los instrumentos presentaron una consistencia interna favorable con un Alpha de Cronbach de .894 y .847 respectivamente. La información se recabó de cuatro instituciones de salud, una del primer nivel y tres del segundo nivel de atención. El análisis de los datos se llevó a cabo a través del paquete estadístico SPSS versión 13. La edad media de los participantes fue de 57,36 años, con una desviación estándar de 11,4; el 55% fueron del sexo femenino, el 83,2% profesa la religión católica, el 51,5% presenta algún tipo de complicación asociada a la diabetes. El coeficiente de correlación entre el bienestar espiritual y la ansiedad-estado fue significativo. Estos resultados apoyan la hipótesis del estudio que refiere a mayor espiritualidad menor nivel de ansiedad-estado, situación que invita a profundizar en el estudio de estos fenómenos tanto en el área educativa como asistencial en la disciplina de enfermería. 1. 
Realidade virtual aplicada ao tratamento da ansiedade social OpenAIRE Pinheiro, Tânia Cristina Martins 2012-01-01 Trabalho de projecto de mestrado em Engenharia Informática (Engenharia de Software), apresentado à Universidade de Lisboa, através da Faculdade de Ciências, 2012 A ansiedade social é uma patologia debilitante que prejudica e diminui a qualidade de vida. O tratamento existe e ´e composto por várias terapias realizadas em simultâneo, sendo uma delas a terapia de exposição¸. Com a evolução tecnológica, surgiu a possibilidade de aplicar a Realidade Virtual a terapia de exposição¸ (Terapia de ... 2. Ansiedade no período pré-operatório de cirurgias de mama: estudo comparativo entre pacientes com suspeita de câncer e a serem submetidas a procedimentos cirúrgicos estéticos OpenAIRE Alves, Maria Luiza Melo; Pimentel, Adriana Jucá; Guaratini, Álvaro Antônio; Marcolino, José Álvaro Marques; Gozzani, Judymara Lauzi; Mathias, Ligia Andrade da Silva Telles 2007-01-01 JUSTIFICATIVA E OBJETIVOS: A avaliação da ansiedade não faz parte da rotina da avaliação pré-anestésica (APA), o que faz com que situações especiais em que o estado emocional dos pacientes possa estar alterado, passem despercebidas pelo anestesiologista. Este estudo visou comparar, no momento da APA ambulatorial, fatores de risco, intensidade e prevalência de ansiedade em pacientes com suspeita de câncer de mama e a serem submetidas a procedimentos cirúrgicos estéticos de mama. MÉTODO: Após a... 3. Nivel de Ansiedad preoperatoria en los pacientes programados para cirugía Directory of Open Access Journals (Sweden) Vilma Margot Vivas 2009-12-01 Full Text Available El presente estudio tiene como objetivo principal, establecer la relación entre el grado de información del procedimiento quirúrgico con el nivel de ansiedad preoperatoria en los pacientes programados para cirugía de la Fundación Mario Gaitan Yanguas en el periodo comprendido de octubre-noviembre del 2008. Se realizó un estudio cuantitativo, descriptivo, correlacional y de corte transversal. Para la recolección de la información se utilizó Escala De Valoración De La Ansiedad De Spielberger Idare-Estado. La población está conformada por los pacientes que acuden al servicio de cirugía. Las variables utilizadas son información acerca de la cirugía, nivel de ansiedad y causas de la ansiedad. Se concluye que el 59% de los pacientes tenían conocimiento acerca del procedimiento, 50.9 % presentaron un nivel de ansiedad moderada; los procedimientos con mayor nivel de ansiedad fue colecistectomía 67% y herniorrafia inguinal 50%. En cuanto a la relación entre el procedimiento y el nivel de ansiedad, la cesárea + pomeroy y la conización presentaron un nivel de ansiedad alto; y la relación entre el grado de información del procedimiento y el nivel de ansiedad, se encontró que, a pesar de que la mayoría de los pacientes manifiestan conocer el procedimiento, predomina el nivel de ansiedad moderada y baja. 4. Subseabed disposal program annual report, January-December 1980. Volume II. Appendices (principal investigator progress reports). Part 1 Energy Technology Data Exchange (ETDEWEB) Hinga, K.R. (ed.) 1981-07-01 Volume II of the sixth annual report describing the progress and evaluating the status of the Subseabed Disposal Program contains the appendices referred to in Volume I, Summary and Status. Because of the length of Volume II, it has been split into two parts for publication purposes. Part 1 contains Appendices A-Q; Part 2 contains Appendices R-MM. 
Separate abstracts have been prepared for each appendix for inclusion in the Energy Data Base. 5. Subseabed disposal program annual report, January-December 1980. Volume II. Appendices (principal investigator progress reports). Part 1 International Nuclear Information System (INIS) Hinga, K.R. 1981-07-01 Volume II of the sixth annual report describing the progress and evaluating the status of the Subseabed Disposal Program contains the appendices referred to in Volume I, Summary and Status. Because of the length of Volume II, it has been split into two parts for publication purposes. Part 1 contains Appendices A-Q; Part 2 contains Appendices R-MM. Separate abstracts have been prepared for each appendix for inclusion in the Energy Data Base 6. Subseabed disposal program annual report, January-December 1979. Volume II. Appendices (principal investigator progress reports). Part 1 of 2 International Nuclear Information System (INIS) Talbert, D.M. 1981-04-01 Volume II of the sixth annual report describing the progress and evaluating the status of the Subseabed Disposal Program contains the appendices referred to in Volume I, Summary and Status. Because of the length of Volume II, it has been split into two parts for publication purposes. Part 1 contains Appendices A-O; Part 2 contains Appendices P-FF. Separate abstracts have been prepared of each Appendix for inclusion in the Energy Data Base 7. A legacy of struggle: the OSHA ergonomics standard and beyond, Part II. Science.gov (United States) Delp, Linda; Mojtahedi, Zahra; Sheikh, Hina; Lemus, Jackie 2014-11-01 The OSHA ergonomics standard issued in 2000 was repealed within four months through a Congressional resolution that limits future ergonomics rulemaking. This section continues the conversation initiated in Part I, documenting a legacy of struggle for an ergonomics standard through the voices of eight labor, academic, and government key informants. Part I summarized important components of the standard; described the convergence of labor activism, research, and government action that laid the foundation for a standard; and highlighted the debates that characterized the rulemaking process. Part II explores the anti-regulatory political landscape of the 1990s, as well as the key opponents, power dynamics, and legal maneuvers that led to repeal of the standard. This section also describes the impact of the ergonomics struggle beyond the standard itself and ends with a discussion of creative state-level policy initiatives and coalition approaches to prevent work-related musculoskeletal disorders (WMSDs) in today's sociopolitical context. 8. Three Mile Island: a report to the commissioners and to the public. Volume II, Part 1 International Nuclear Information System (INIS) 1979-01-01 This is part one of three parts of the second volume of the Special Inquiry Group's report to the Nuclear Regulatory Commission on the accident at Three Mile Island. The first volume contained a narrative description of the accident and a discussion of the major conclusions and recommendations. This second volume is divided into three parts. Part 1 of Volume II focuses on the pre-accident licensing and regulatory background. This part includes an examination of the overall licensing and regulatory system for nuclear powerplants viewed from different perspectives: the system as it is set forth in statutes and regulations, as described in Congressional testimony, and an overview of the system as it really works. 
In addition, Part 1 includes the licensing, operating, and inspection history of Three Mile Island Unit 2, discussions of relevant regulatory matters, a discussion of specific precursor events related to the accident, a case study of the pressurizer design issue, and an analysis of incentives to declare commercial operation 9. Ansiedade social e abuso de propranolol: relato de caso Directory of Open Access Journals (Sweden) Fontanella Bruno José Barcellos 2003-01-01 Full Text Available Paciente com grave ansiedade social automedicou-se com propranolol durante seis anos, em doses de até 320 mg/d. Além do tratamento psicanalítico que já havia iniciado, foi tratada com tranilcipromina, apresentando melhora parcial do quadro fóbico e do abuso do betabloqueador. Após introdução de paroxetina, houve melhora ainda mais pronunciada. Apesar da automedicação com uma substância potencialmente eficaz em alguns casos, perpetuou-se durante anos um grave padrão fóbico de comportamento. O caso exemplifica as dificuldades de procura de tratamento específico pela população de fóbicos sociais. Levanta-se a hipótese da existência de uma prática crescente de automedicação com betabloqueadores entre fóbicos sociais e pessoas com ansiedade de desempenho, problema cuja relevância para a saúde pública ainda não foi pesquisada. 10. PIC Simulations in Low Energy Part of PIP-II Proton Linac Energy Technology Data Exchange (ETDEWEB) Romanov, Gennady 2014-07-01 The front end of PIP-II linac is composed of a 30 keV ion source, low energy beam transport line (LEBT), 2.1 MeV radio frequency quadrupole (RFQ), and medium energy beam transport line (MEBT). This configuration is currently being assembled at Fermilab to support a complete systems test. The front end represents the primary technical risk with PIP-II, and so this step will validate the concept and demonstrate that the hardware can meet the specified requirements. SC accelerating cavities right after MEBT require high quality and well defined beam after RFQ to avoid excessive particle losses. In this paper we will present recent progress of beam dynamic study, using CST PIC simulation code, to investigate partial neutralization effect in LEBT, halo and tail formation in RFQ, total emittance growth and beam losses along low energy part of the linac. 11. Numerical Simulation of Projectile Impact on Mild Steel Armour Platesusing LS-DYNA, Part II: Parametric Studies OpenAIRE M. Raguraman; A. Deb; N. K. Gupta; D. K. Kharat 2008-01-01 In Part I of the current two-part series, a comprehensive simulation-based study of impact of jacketed projectiles on mild steel armour plates has been presented. Using the modelling procedures developed in Part I, a number of parametric studies have been carried out for the same mild steel plates considered in Part I and reported here in Part II. The current investigation includes determination of ballistic limits of a given target plate for different projectile diameters and impact velociti... 12. Numerical simulation of projectile impact on mild steel armour plates using LS-DYNA, Part II: Parametric studies OpenAIRE Raguraman, M; Deb, A; Gupta, NK; Kharat, DK 2008-01-01 In Part I of the current two-part series, a comprehensive simulation-based study of impact of Jacketed projectiles on mild steel armour plates has been presented. Using the modelling procedures developed in Part I, a number of parametric studies have been carried out for the same mild steel plates considered in Part I and reported here in Part II. 
The current investigation includes determination of ballistic limits of a given target plate for different projectile diameters and impact velociti... 13. Zn(II, Mn(II and Sr(II Behavior in a Natural Carbonate Reservoir System. Part II: Impact of Geological CO2 Storage Conditions Directory of Open Access Journals (Sweden) Auffray B. 2016-07-01 Full Text Available Some key points still prevent the full development of geological carbon sequestration in underground formations, especially concerning the assessment of the integrity of such storage. Indeed, the consequences of gas injection on chemistry and petrophysical properties are still much discussed in the scientific community, and are still not well known at either laboratory or field scale. In this article, the results of an experimental study about the mobilization of Trace Elements (TE during CO2 injection in a reservoir are presented. The experimental conditions range from typical storage formation conditions (90 bar, supercritical CO2 to shallower conditions (60 and 30 bar, CO2 as gas phase, and consider the dissolution of the two carbonates, coupled with the sorption of an initial concentration of 10−5 M of Zn(II, and the consequent release in solution of Mn(II and Sr(II. The investigation goes beyond the sole behavior of TE in the storage conditions: it presents the specific behavior of each element with respect to the pressure and the natural carbonate considered, showing that different equilibrium concentrations are to be expected if a fluid with a given concentration of TE leaks to an upper formation. Even though sorption is evidenced, it does not balance the amount of TE released by the dissolution process. The increase in porosity is clearly evidenced as a linear function of the CO2 pressure imposed for the St-Emilion carbonate. For the Lavoux carbonate, this trend is not confirmed by the 90 bar experiment. A preferential dissolution of the bigger family of pores from the preexisting porosity is observed in one of the samples (Lavoux carbonate while the second one (St-Emilion carbonate presents a newly-formed family of pores. Both reacted samples evidence that the pore network evolves toward a tubular network type. 14. Disfunção Temporomandibular segundo o Nível de Ansiedade em Adolescentes Directory of Open Access Journals (Sweden) Lara Jansiski Motta Full Text Available RESUMOO objetivo do estudo foi determinar a prevalência de sinais e sintomas de disfunção temporomandibular (DTM, segundo o nível de ansiedade de adolescentes da cidade de São Roque-SP. Foi utilizado o Índice de Fonseca para determinar a presença e o grau de severidade da DTM. Para avaliar o nível de ansiedade, foi utilizado o Inventário de Ansiedade Traço-Estado. Os participantes foram 3538 adolescentes entre 10 e 19 anos. Os resultados revelaram que 73,3% dos adolescentes apresentavam DTM e 72,7%, apresentavam ansiedade. Foram observadas associações estatisticamente significativas entre a presença de DTM e a presença de ansiedade, mas apenas com o sexo feminino, e correlação positiva, embora baixa, entre o grau de DTM e o nível de ansiedade. Conclui-se que adolescentes do sexo feminino apresentam maior chance de desenvolver DTM que os do sexo masculino, e quanto maior o nível de ansiedade do adolescente, maior a chance de desenvolver DTM. 15. 
Ansiedade e espiritualidade em estudantes universitários: um estudo transversal Directory of Open Access Journals (Sweden) Erika de Cássia Lopes Chaves 2015-06-01 Full Text Available RESUMOObjetivo:investigar a ansiedade e a espiritualidade de estudantes universitários e a relação entre elas.Método:para a coleta de dados, foi utilizado o Inventário de ansiedade traço-estado (IDATE e a Escala de Espiritualidade de Pinto e Pais-Ribeiro.Resultados:participaram 609 alunos, sendo que 91,5% apresentam níveis moderados e altos de ansiedade-traço; 92,9%, os mesmos níveis de ansiedade-estado e 93,8% alto escore de espiritualidade. O teste de regressão linear múltipla apontou relação signifi cativa entre a ansiedade e a presença de desconfortos físicos, de movimentos pouco comuns e necessidade de tratamento. Os maiores níveis de ansiedade estiveram associados ao sexo feminino, à ausência de atividades de lazer e aos baixos níveis de otimismo da escala de espiritualidade.Conclusão:é importante o desenvolvimento de estratégias de enfrentamento da ansiedade que, por sua vez, podem estar voltadas a fatores protetores, como a espiritualidade. 16. Optimal recombination in genetic algorithms for combinatorial optimization problems: Part II Directory of Open Access Journals (Sweden) Eremeev Anton V. 2014-01-01 Full Text Available This paper surveys results on complexity of the optimal recombination problem (ORP, which consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. In Part II, we consider the computational complexity of ORPs arising in genetic algorithms for problems on permutations: the Travelling Salesman Problem, the Shortest Hamilton Path Problem and the Makespan Minimization on Single Machine and some other related problems. The analysis indicates that the corresponding ORPs are NP-hard, but solvable by faster algorithms, compared to the problems they are derived from. 17. Social class, political power, and the state: their implications in medicine--parts I and II. Science.gov (United States) Navarro, V 1976-01-01 This three part article presents an anlysis of the distribution of power and of the nature of the state in Western industrialized societies and details their implications in medicine. Part I presents a critique of contemporary theories of the Western system of power; discusses the countervailing pluralist and power elite theories, as well as those of bureaucratic and professional control; and concludes with an examination of the Marxist theories of economic determinism, structural determinism, and corporate statism. Part II presents a Marxist theory of the role, nature, and characteristics of state intervention. Part III (which will appear in the next issue of this journal) focuses on the mode of that intervention and the reasons for its growth, with an added analysis of the attributes of state intervention in the health sector, and of the dialectical relationship between its growth and the current fiscal crisis of the state. In all three parts, the focus is on Western European countries and on North America, with many examples and categories from the area of medicine. 18. Part I: Sound color in the music of Gyorgy Kurtag, Part II: "Leopard's Path," thirteen visions for chamber ensemble Science.gov (United States) Iachimciuc, Igor The dissertation is in two parts, a theoretical study and a musical composition. In Part I the music of Gyorgy Kurtag is analyzed from the point of view of sound color. 
A brief description of what is understood by the term sound color, and various ways of achieving specific coloristic effects, are presented in the Introduction. An examination of Kurtag's approaches to the domain of sound color occupies the chapters that follow. The musical examples that are analyzed are selected from Kurtag's different compositional periods, showing a certain consistency in sound color techniques, the most important of which are already present in the String Quartet, Op. 1. The compositions selected for analysis are written for different ensembles, but regardless of the instrumentation, certain principles of the formation and organization of sound color remain the same. Rather than relying on extended instrumental techniques, Kurtag creates a large variety of sound colors using traditional means such as pitch material, register, density, rhythm, timbral combinations, dynamics, texture, spatial displacement of the instruments, and the overall musical context. Each sound color unit in Kurtag's music is a separate entity, conceived as a complete microcosm. Sound color units can either be juxtaposed as contrasting elements, forming sound color variations, or superimposed, often resulting in a Klangfarbenmelodie effect. Some of the same gestural figures (objets trouves) appear in different compositions, but with significant coloristic modifications. Thus, the principle of sound color variations is not only a strong organizational tool, but also a characteristic stylistic feature of the music of Gyorgy Kurtag. Part II, Leopard's Path (2010), for flute, clarinet, violin, cello, cimbalom, and piano, is an original composition inspired by the painting of Jesse Allen, a San Francisco based artist. The composition is conceived as a cycle of thirteen short movements. Ten of these movements are 19. Diagnostic tools in PEM fuel cell research: Part II. Physical/chemical methods Energy Technology Data Exchange (ETDEWEB) Wu, Jinfeng; Zi Yuan, Xiao; Wang, Haijiang; Martin, Jonathan J.; Zhang, Jiujun [Institute for Fuel Cell Innovation, National Research Council (Canada); Blanco, Mauricio [Institute for Fuel Cell Innovation, National Research Council (Canada); Department of Chemical and Biological Engineering, University of British Columbia, Vancouver, BC (Canada) 2008-03-15 To meet the power density, reliability and cost requirements that will enable a widespread use of fuel cells, many research activities focus on an understanding of the thermodynamics as well as the fluid mechanical and electrochemical processes within a fuel cell. To date, a wide range of experimental diagnostics is imperative not only to help a fundamental understanding of fuel cell dynamics but also to provide benchmark-quality data for modeling research. This paper reviews various tools for diagnosing polymer electrolyte membrane (PEM) fuel cells and stacks, and attempts to incorporate the most recent technical advances in PEM fuel cell diagnosis. In Part I of the review we covered electrochemical techniques. In Part II, we review various physical/chemical methods and outline the principle, experimental implementation and data processing of each technique. Capabilities and weaknesses of these techniques are also discussed. (author) 20. 
Biology and Mechanics of Blood Flows Part II: Mechanics and Medical Aspects CERN Document Server Thiriet, Marc 2008-01-01 Biology and Mechanics of Blood Flows presents the basic knowledge and state-of-the-art techniques necessary to carry out investigations of the cardiovascular system using modeling and simulation. Part II of this two-volume sequence, Mechanics and Medical Aspects, refers to the extraction of input data at the macroscopic scale for modeling the cardiovascular system, and complements Part I, which focuses on nanoscopic and microscopic components and processes. This volume contains chapters on anatomy, physiology, continuum mechanics, as well as pathological changes in the vasculature walls including the heart and their treatments. Methods of numerical simulations are given and illustrated in particular by application to wall diseases. This authoritative book will appeal to any biologist, chemist, physicist, or applied mathematician interested in the functioning of the cardiovascular system. 1. Assessing and addressing moral distress and ethical climate Part II: neonatal and pediatric perspectives. Science.gov (United States) Sauerland, Jeanie; Marotta, Kathleen; Peinemann, Mary Anne; Berndt, Andrea; Robichaux, Catherine 2015-01-01 Moral distress remains a pervasive and, at times, contested concept in nursing and other health care disciplines. Ethical climate, the conditions and practices in which ethical situations are identified, discussed, and decided, has been shown to exacerbate or ameliorate perceptions of moral distress. The purpose of this mixed-methods study was to explore perceptions of moral distress, moral residue, and ethical climate among registered nurses working in an academic medical center. Two versions of the Moral Distress Scale in addition to the Hospital Ethical Climate Survey were used, and participants were invited to respond to 2 open-ended questions. Part I reported the findings among nurses working in adult acute and critical care units. Part II presents the results from nurses working in pediatric/neonatal units. Significant differences in findings between the 2 groups are discussed. Subsequent interventions developed are also presented. 2. Light-curing considerations for resin-based composite materials: a review. Part II. Science.gov (United States) Malhotra, Neeraj; Mala, Kundabala 2010-10-01 As discussed in Part I, the type of curing light and curing mode impact the polymerization kinetics of resin-based composite (RBC) materials. Major changes in light-curing units and curing modes have occurred. The type of curing light and mode employed affects the polymerization shrinkage and associated stresses, microhardness, depth of cure, degree of conversion, and color change of RBCs. These factors also may influence the microleakage in an RBC restoration. Apart from the type of unit and mode used, the polymerization of RBCs is also affected by how a light-curing unit is used and handled, as well as the aspects associated with RBCs and the environment. Part II discusses the various clinical issues that should be considered while curing RBC restorations in order to achieve the best possible outcome. 3. 
Transferring diffractive optics from research to commercial applications: Part II - size estimations for selected markets Science.gov (United States) Brunner, Robert 2014-04-01 In a series of two contributions, decisive business-related aspects of the current process status to transfer research results on diffractive optical elements (DOEs) into commercial solutions are discussed. In part I, the focus was on the patent landscape. Here, in part II, market estimations concerning DOEs for selected applications are presented, comprising classical spectroscopic gratings, security features on banknotes, DOEs for high-end applications, e.g., for the semiconductor manufacturing market, and diffractive intra-ocular lenses. The derived market sizes refer to the optical elements themselves, rather than to the enabled instruments. The estimated market volumes are mainly addressed to scientifically and technologically oriented optical engineers to serve as a rough classification of the commercial dimensions of DOEs in the different market segments and do not claim to be exhaustive. 4. Exploring Cancer Therapeutics with Natural Products from African Medicinal Plants, Part II: Alkaloids, Terpenoids and Flavonoids. Science.gov (United States) Nwodo, Justina N; Ibezim, Akachukwu; Simoben, Conrad V; Ntie-Kang, Fidele 2016-01-01 Cancer stands as the second most common cause of disease-related deaths in humans. Resistance of cancer to chemotherapy remains challenging to both scientists and physicians. Medicinal plants are known to contribute significantly to the livelihood of a large part of the African population, to a very large extent through folkloric claims. In this review paper, the potential of naturally occurring anti-cancer agents from African flora has been explored, with suggested modes of action, where such data is available. Literature search revealed plant-derived compounds from African flora showing anti-cancer and/or cytotoxic activities, which have been tested in vitro and in vivo. This corresponds to 400 compounds (from mildly active to very active) covering various compound classes. However, in this part II, we only discuss the three major compound classes, which are: flavonoids, alkaloids and terpenoids. 5. El Cuestionario de Ansiedad Laboral (C.A.L.). Resultados preliminares OpenAIRE González-Romá, Vicente; Espejo Tort, Begoña; Lloret Segura, Susana 1993-01-01 Acceptance of the multidimensional nature of anxiety has led to its abandonment as a unitary explanatory entity. The dimensions of anxiety comprise the interactions between the environment and cognitive and/or physiological and/or motor behaviour, and since the three systems interact with one another, all three dimensions need to be measured. However, in many studies only one of the components of anxiety has been measured, as happens in the field of organi... 6. O estudo bibliométrico do transtorno de ansiedade social em universitários OpenAIRE Sabrina Maura Pereira; Lélio Moura Lourenço 2012-01-01 Social Anxiety Disorder (SAD), or Social Phobia (SP), is characterized by excessive, persistent anxiety in situations of social interaction or performance. The present work aims to analyze the articles indexed in the PubMed and Web of Science databases in the period from 2006 to 2010 and to evaluate the bibliometric indicators of the scientific literature related to social anxiety disorder/social phobia in university students. 
The final sample ... 7. Depresión, autoestima y ansiedad en la tercera edad: un estudio comparativo OpenAIRE Hugo Guadalupe Canto Pech; Eira Karla Castro Rena 2004-01-01 The present study focused on the levels of depression, anxiety and self-esteem in the elderly, especially those who live in nursing homes or who frequently attend day-care centres. From the results obtained, it was observed in general terms that the higher the level of self-esteem, the lower the depression; the higher the level of anxiety, the higher the probability of depression; and the lower the self-esteem, the higher the level of anxiety. When comparing the day-care centres with the nursing homes with regard to... 8. Ansiedad, personalidad y rendimiento académico en alumnado de Educación primaria OpenAIRE Domblas García, Andrés 2016-01-01 The research presented in this doctoral thesis is a descriptive and correlational study set in the school environment. First, a detailed analysis of each of the variables under study is carried out: anxiety, personality and academic performance. Second, the interactions between the studied variables are analyzed: anxiety and personality, anxiety and academic performance, personality and academic performance. Third... 9. Guidelines for the management of gastroenteropancreatic neuroendocrine tumours (including bronchopulmonary and thymic neoplasms). Part II-specific NE tumour types DEFF Research Database (Denmark) Oberg, Kjell; Astrup, Lone Bording; Eriksson, Barbro 2004 Part II of the guidelines contains a description of epidemiology, histopathology, clinical presentation, diagnostic procedure, treatment, and survival for each type of neuroendocrine tumour. We are not only including gastroenteropancreatic tumours but also bronchopulmonary and thymic neuroendocri... 10. Emerging Forms of the Part II of Jonathan Swift's Novel “Gulliver's Travels” Directory of Open Access Journals (Sweden) Svitlana Tikhonenko 2017-11-01 Full Text Available The article is devoted to the study of grotesque forms in Jonathan Swift's novel "Gulliver's Travels" based on the text of part II of the novel "A Voyage to Brobdingnag". On the basis of the selected actual material, displays of the grotesque elements in the semantic field of the work's text are traced. The grotesque world of the novel is the author's model of mankind, in which J. Swift presents his view not only on the state of the modern system of England, but also on the nature of man in general, reveals the peculiarities of the psychology of human nature, especially human socialization. In part II, the author continues to develop a complex and contradictory picture of human existence in front of the reader; the world of giants appears as an ambivalent system in which, in the author's opinion, the features of an ideal society and an ideal ruler are marvelously combined with the ugly face of man and society. 11. Two-loop renormalization in the standard model, part II. Renormalization procedures and computational techniques Energy Technology Data Exchange (ETDEWEB) Actis, S. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Passarino, G. [Torino Univ. (Italy). Dipt. di Fisica Teorica; INFN, Sezione di Torino (Italy) 2006-12-15 In part I general aspects of the renormalization of a spontaneously broken gauge theory have been introduced. Here, in part II, two-loop renormalization is introduced and discussed within the context of the minimal Standard Model. 
Therefore, this paper deals with the transition between bare parameters and fields to renormalized ones. The full list of one- and two-loop counterterms is shown and it is proven that, by a suitable extension of the formalism already introduced at the one-loop level, two-point functions suffice in renormalizing the model. The problem of overlapping ultraviolet divergencies is analyzed and it is shown that all counterterms are local and of polynomial nature. The original program of 't Hooft and Veltman is at work. Finite parts are written in a way that allows for a fast and reliable numerical integration with all collinear logarithms extracted analytically. Finite renormalization, the transition between renormalized parameters and physical (pseudo-)observables, is discussed in part III where numerical results, e.g. for the complex poles of the unstable gauge bosons, are shown. An attempt is made to define the running of the electromagnetic coupling constant at the two-loop level. (orig.) 12. Advanced diagnostic methods in oral and maxillofacial pathology. Part II: immunohistochemical and immunofluorescent methods. Science.gov (United States) Jordan, Richard C K; Daniels, Troy E; Greenspan, John S; Regezi, Joseph A 2002-01-01 The practice of pathology is currently undergoing significant change, in large part due to advances in the analysis of DNA, RNA, and proteins in tissues. These advances have permitted improved biologic insights into many developmental, inflammatory, metabolic, infectious, and neoplastic diseases. Moreover, molecular analysis has also led to improvements in the accuracy of disease diagnosis and classification. It is likely that, in the future, these methods will increasingly enter into the day-to-day diagnosis and management of patients. The pathologist will continue to play a fundamental role in diagnosis and will likely be in a pivotal position to guide the implementation and interpretation of these tests as they move from the research laboratory into diagnostic pathology. The purpose of this 2-part series is to provide an overview of the principles and applications of current molecular biologic and immunologic tests. In Part I, the biologic fundamentals of DNA, RNA, and proteins and methods that are currently available or likely to become available to the pathologist in the next several years for their isolation and analysis in tissue biopsies were discussed. In Part II, advances in immunohistochemistry and immunofluorescence methods and their application to modern diagnostic pathology are reviewed. 13. Seismic risk analysis for General Electric Plutonium Facility, Pleasanton, California. Final report, part II International Nuclear Information System (INIS) 1980-01-01 This report is the second of a two-part study addressing the seismic risk or hazard of the special nuclear materials (SNM) facility of the General Electric Vallecitos Nuclear Center at Pleasanton, California. The Part I companion to this report, dated July 31, 1978, presented the seismic hazard at the site that resulted from exposure to earthquakes on the Calaveras, Hayward, San Andreas and, additionally, from smaller unassociated earthquakes that could not be attributed to these specific faults. However, while this study was in progress, certain additional geologic information became available that could be interpreted in terms of the existence of a nearby fault. 
Although substantial geologic investigations were subsequently deployed, the existence of this postulated fault, called the Verona Fault, remained very controversial. The purpose of the Part II study was to assume the existence of such a capable fault and, under this assumption, to examine the loads that the fault could impose on the SNM facility. This report first reviews the geologic setting with a focus on specifying sufficient geologic parameters to characterize the postulated fault. The report next presents the methodology used to calculate the vibratory ground motion hazard. Because of the complexity of the fault geometry, a slightly different methodology is used here compared to the Part I report. This section ends with the results of the calculation applied to the SNM facility. Finally, the report presents the methodology and results of the rupture hazard calculation. 14. Ansiedade e espiritualidade em estudantes universitários: um estudo transversal OpenAIRE Erika de Cássia Lopes Chaves; Denise Hollanda Iunes; Caroline de Castro Moura; Leonardo César Carvalho; Andréia Maria Silva; Emília Campos de Carvalho 2015-01-01 ABSTRACT Objective: to investigate anxiety and spirituality in university students and the relationship between them. Method: for data collection, the State-Trait Anxiety Inventory (IDATE) and the Pinto and Pais-Ribeiro Spirituality Scale were used. Results: 609 students participated; 91.5% presented moderate or high levels of trait anxiety, 92.9% the same levels of state anxiety, and 93.8% a high spirituality score. The multiple linear regression test indicated a rela... 15. La Validez y la Eficacia de los Ejercicios Respiratorios para Reducir la Ansiedad Escénica en el Aula de Música Directory of Open Access Journals (Sweden) Pablo Ramos Ramos 2013-06-01 Full Text Available The present text reviews the most relevant scientific literature in the field of performance anxiety associated with musical interpretation, both in adulthood and during childhood. In addition, the conclusions of those studies that have analyzed this personality trait in the school setting are presented, and the causes that provoke it are highlighted: pressure in the presence of parents and other pupils, perfectionism, and low resistance to stress. The development of performance anxiety is also related to the loss of self-esteem, which affects the child's education at a general level. In view of this problem, the possible therapies that could be implemented in the classroom are discussed; among all of them, breathing techniques stand out for their effectiveness and their ease of application. Finally, a ten-point guide for action against performance anxiety is established, which the music teacher can take into account for a first intervention in the school setting. 16. La preocupación como estrategia de afrontamiento en pacientes con trastornos de ansiedad generalizada Worry as coping strategy in patients with generalized anxiety disorder Directory of Open Access Journals (Sweden) Giselle Vetere 2011-12-01 Full Text Available 
The following work is part of a research project about coping behaviors in anxiety disorders. In this paper we show the results of a literature review focused on the use of worry as a coping strategy in patients with generalized anxiety disorder. The method used consisted of a bibliographic search of the available studies in the PubMed, Scielo, Lilacs and Ebsco databases using the terms coping strategies, generalized anxiety and concern as keywords. First, we briefly describe the characteristics of the disorder and define the concept of coping and its diverse forms. Then, following the results found in the search we explore the concept of worry and the consequences of its use as a coping strategy in patients with generalized anxiety disorder. Finally, we discuss the implications of the results for the treatment of the disorder. 17. Great expectations: a position description for parents as caregivers: Part II. Science.gov (United States) Sullivan-Bolyai, Susan; Knafl, Kathleen A; Sadler, Lois; Gilliss, Catherine L 2004-01-01 Parents caring for a child with a chronic condition must attend to a myriad of day-to-day management responsibilities and activities. Part I of this two-part series (in the previous issue of Pediatric Nursing) reviewed both the adult and pediatric family caregiving literature within the context of four major categories of responsibilities: (a) managing the illness, which includes hands-on care, monitoring and interpreting signs and symptoms, as well as problem-solving and decision-making processes; (b) identifying, accessing, and coordinating resources, which involves assessing and negotiating community resources including health care providers; (c) maintaining the family unit, including balancing illness and family demands while at the same time attempting to meet the health and developmental needs of each family member; and (d) maintaining self, including physical, emotional, and spiritual health. Part II presents a multifaceted list of parent caregiving management responsibilities and associated activities, and discusses nursing implications. The list was developed to facilitate "caregiving" dialogue between health care providers and families of children with chronic conditions. It is hoped that through such partnerships creative ways of educating, preparing, and supporting caregivers will be generated. 18. Music in the exercise domain: a review and synthesis (Part II) Science.gov (United States) Karageorghis, Costas I.; Priest, David-Lee 2011-01-01 Since a 1997 review by Karageorghis and Terry, which highlighted the state of knowledge and methodological weaknesses, the number of studies investigating musical reactivity in relation to exercise has swelled considerably. 
In this two-part review paper, the development of conceptual approaches and mechanisms underlying the effects of music are explicated (Part I), followed by a critical review and synthesis of empirical work (spread over Parts I and II). Pre-task music has been shown to optimise arousal, facilitate task-relevant imagery and improve performance in simple motoric tasks. During repetitive, endurance-type activities, self-selected, motivational and stimulative music has been shown to enhance affect, reduce ratings of perceived exertion, improve energy efficiency and lead to increased work output. There is evidence to suggest that carefully selected music can promote ergogenic and psychological benefits during high-intensity exercise, although it appears to be ineffective in reducing perceptions of exertion beyond the anaerobic threshold. The effects of music appear to be at their most potent when it is used to accompany self-paced exercise or in externally valid conditions. When selected according to its motivational qualities, the positive impact of music on both psychological state and performance is magnified. Guidelines are provided for future research and exercise practitioners. PMID:22577473 19. Noncardiac findings on cardiac CT. Part II: spectrum of imaging findings. LENUS (Irish Health Repository) Killeen, Ronan P 2012-02-01 Cardiac computed tomography (CT) has evolved into an effective imaging technique for the evaluation of coronary artery disease in selected patients. Two distinct advantages over other noninvasive cardiac imaging methods include its ability to directly evaluate the coronary arteries and to provide a unique opportunity to evaluate for alternative diagnoses by assessing the extracardiac structures, such as the lungs and mediastinum, particularly in patients presenting with the chief symptom of acute chest pain. Some centers reconstruct a small field of view (FOV) cropped around the heart but a full FOV (from skin to skin in the area irradiated) is obtainable in the raw data of every scan so that clinically relevant noncardiac findings are identifiable. Debate in the scientific community has centered on the necessity for this large FOV. A review of noncardiac structures provides the opportunity to make alternative diagnoses that may account for the patient's presentation or to detect important but clinically silent problems such as lung cancer. Critics argue that the yield of biopsy-proven cancers is low and that the follow-up of incidental noncardiac findings is expensive, resulting in increased radiation exposure and possibly unnecessary further testing. In this 2-part review we outline the issues surrounding the concept of the noncardiac read, looking for noncardiac findings on cardiac CT. Part I focused on the pros and cons of the practice of identifying noncardiac findings on cardiac CT. Part II illustrates the imaging spectrum of cardiac CT appearances of benign and malignant noncardiac pathology. 20. From Constraints to Resolution Rules Part II: chains, braids, confluence and T&E Science.gov (United States) Berthier, Denis In this Part II, we apply the general theory developed in Part I to a detailed analysis of the Constraint Satisfaction Problem (CSP). We show how specific types of resolution rules can be defined. In particular, we introduce the general notions of a chain and a braid. 
As in Part I, these notions are illustrated in detail with the Sudoku example - a problem known to be NP-complete and which is therefore typical of a broad class of hard problems. For Sudoku, we also show how far one can go in "approximating" a CSP with a resolution theory and we give an empirical statistical analysis of how the various puzzles, corresponding to different sets of entries, can be classified along a natural scale of complexity. For any CSP, we also prove the confluence property of some Resolution Theories based on braids and we show how it can be used to define different resolution strategies. Finally, we prove that, in any CSP, braids have the same solving capacity as Trial-and-Error (T&E) with no guessing and we comment on this result in the Sudoku case. 1. Music in the exercise domain: a review and synthesis (Part II). Science.gov (United States) Karageorghis, Costas I; Priest, David-Lee 2012-03-01 Since a 1997 review by Karageorghis and Terry, which highlighted the state of knowledge and methodological weaknesses, the number of studies investigating musical reactivity in relation to exercise has swelled considerably. In this two-part review paper, the development of conceptual approaches and mechanisms underlying the effects of music are explicated (Part I), followed by a critical review and synthesis of empirical work (spread over Parts I and II). Pre-task music has been shown to optimise arousal, facilitate task-relevant imagery and improve performance in simple motoric tasks. During repetitive, endurance-type activities, self-selected, motivational and stimulative music has been shown to enhance affect, reduce ratings of perceived exertion, improve energy efficiency and lead to increased work output. There is evidence to suggest that carefully selected music can promote ergogenic and psychological benefits during high-intensity exercise, although it appears to be ineffective in reducing perceptions of exertion beyond the anaerobic threshold. The effects of music appear to be at their most potent when it is used to accompany self-paced exercise or in externally valid conditions. When selected according to its motivational qualities, the positive impact of music on both psychological state and performance is magnified. Guidelines are provided for future research and exercise practitioners. 2. Impact of monovalent cations on soil structure. Part II. Results of two Swiss soils Science.gov (United States) Farahani, Elham; Emami, Hojat; Keller, Thomas 2018-01-01 In this study, we investigated the impact of adding solutions with different potassium and sodium concentrations on dispersible clay, water retention characteristics, air permeability, and soil shrinkage behaviour using two agricultural soils from Switzerland with different clay content but similar organic carbon to clay ratio. Three different solutions (including only Na, only K, and the combination of both) were added to soil samples at three different cation ratio of soil structural stability levels, and the soil samples were incubated for one month. Our findings showed that the amount of readily dispersible clay increased with increasing Na concentrations and with increasing cation ratio of soil structural stability. The treatment with the maximum Na concentration resulted in the highest water retention and in the lowest shrinkage capacity. This was associated with high amounts of readily dispersible clay. 
Air permeability generally increased during incubation due to moderate wetting and drying cycles, but the increase was negatively correlated with readily dispersible clay. In the Swiss soils studied here, readily dispersible clay decreased with increasing K, whereas in the Iranian soil (Part I of our study) it increased with increasing K. This can be attributed to the different clay mineralogy of the studied soils (muscovite in Part I and illite in Part II). 3. Multiobjective Optimization for Fixture Locating Layout of Sheet Metal Part Using SVR and NSGA-II Directory of Open Access Journals (Sweden) Yuan Yang 2017-01-01 Full Text Available Fixture plays a significant role in determining the sheet metal part (SMP) spatial position and restraining its excessive deformation in many manufacturing operations. However, it is still a difficult task to design and optimize the SMP fixture locating layout at present, because there exist multiple conflicting objectives and an excessive computational cost of finite element analysis (FEA) during the optimization process. To this end, a new multiobjective optimization method for SMP fixture locating layout is proposed in this paper based on the support vector regression (SVR) surrogate model and the elitist nondominated sorting genetic algorithm (NSGA-II). By using the ABAQUS™ Python script interface, a parametric FEA model is established. The fixture locating layout is treated as the design variables, while the overall deformation and maximum deformation of the SMP under external forces serve as the multiple objective functions. First, a limited number of training and testing samples are generated by combining Latin hypercube design (LHD) with FEA. Second, two SVR prediction models corresponding to the multiple objectives are established by learning from the limited training samples and are integrated as the multiobjective optimization surrogate model. Third, NSGA-II is applied to determine the Pareto optimal solutions of the SMP fixture locating layout. Finally, a multiobjective optimization for the fixture locating layout of an aircraft fuselage skin case is conducted to illustrate and verify the proposed method.
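The fixture-layout study above (entry 3) trains SVR surrogates on a small set of FEA samples generated by Latin hypercube design and then lets NSGA-II search the surrogates for Pareto-optimal layouts. The sketch below is only a rough, self-contained illustration of that surrogate-plus-Pareto idea using scikit-learn: the two design variables, the random training data standing in for LHD/FEA samples, and the brute-force candidate search used instead of NSGA-II are all simplifying assumptions, not the paper's actual setup.

```python
# Minimal surrogate-assisted multiobjective sketch (assumed setup, not the paper's).
# Two design variables stand in for a fixture locating layout; two SVR models act
# as surrogates for the FEA-computed objectives (overall and maximum deformation).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Stand-in for LHD + FEA training samples: layouts X and their two objectives.
X_train = rng.uniform(0.0, 1.0, size=(40, 2))
f1_train = np.sin(3 * X_train[:, 0]) + X_train[:, 1] ** 2        # "overall deformation"
f2_train = (1 - X_train[:, 0]) ** 2 + 0.5 * X_train[:, 1]        # "maximum deformation"

# One SVR surrogate per objective.
svr_f1 = SVR(kernel="rbf", C=10.0).fit(X_train, f1_train)
svr_f2 = SVR(kernel="rbf", C=10.0).fit(X_train, f2_train)

# Brute-force candidate layouts (a real study would run NSGA-II here instead).
cand = rng.uniform(0.0, 1.0, size=(2000, 2))
F = np.column_stack([svr_f1.predict(cand), svr_f2.predict(cand)])

def pareto_front(F):
    """Return indices of non-dominated points (both objectives minimized)."""
    idx = []
    for i, fi in enumerate(F):
        dominated = np.any(np.all(F <= fi, axis=1) & np.any(F < fi, axis=1))
        if not dominated:
            idx.append(i)
    return np.array(idx)

front = pareto_front(F)
print(f"{len(front)} non-dominated candidate layouts out of {len(cand)}")
```

In a real workflow the surrogate predictions for the Pareto candidates would be re-checked with FEA before any layout is adopted.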
4. Domestic violence perpetrator programs in Europe, Part II: A systematic review of the state of evidence. Science.gov (United States) Akoensi, Thomas D; Koehler, Johann A; Lösel, Friedrich; Humphreys, David K 2013-10-01 In Part II of this article, we present the results of a systematic review of European evidence on the effectiveness of domestic violence perpetrator programs. After searching through 10,446 titles, we discovered only 12 studies that evaluated the effectiveness of a perpetrator program in some systematic manner. The studies applied treatment to a total of 1,586 domestic violence perpetrators, and the sample sizes ranged from 9 to 322. Although the evaluations showed various positive effects after treatment, methodological problems relating to the evaluation designs do not allow attribution of these findings to the programs. Overall, the methodological quality of the evaluations is insufficient to derive firm conclusions and estimate an effect size. Accordingly, one cannot claim that one programmatic approach is superior to another. Evaluation of domestic violence perpetrator treatment in Europe must be improved and programs should become more tailored to the characteristics of the participants. 5. CERN scientists take part in the Tevatron Run II performance review committee CERN Multimedia Maximilien Brice 2002-01-01 Tevatron Run II is under way at Fermilab, exploring the high-energy frontier with upgraded detectors that will address some of the biggest questions in particle physics. Until CERN's LHC switches on, the Tevatron proton-antiproton collider is the world's only source of top quarks. It is the only place where we can search for supersymmetry, for the Higgs boson, and for signatures of additional dimensions of space-time. The US Department of Energy (DOE) recently convened a high-level international review committee to examine Fermilab experts' first-phase plans for the accelerator complex. Pictured here with a dipole magnet in CERN's LHC magnet test facility are the four CERN scientists who took part in the DOE's Tevatron review. Left to right: Francesco Ruggiero, Massimo Placidi, Flemming Pedersen, and Karlheinz Schindl. Further information: CERN Courier 43 (1) 6. Ansiedade e depressao entre homens e mulheres submetidos a intervencao coronaria percutanea Directory of Open Access Journals (Sweden) Rejane Kiyomi Furuya 2013-12-01 A descriptive, cross-sectional, correlational study that aimed to verify the association between the presence of anxiety and depression after hospital discharge in patients undergoing percutaneous coronary intervention (PCI), according to sex. Fifty-nine patients who had undergone PCI and were in outpatient follow-up during the first seven months after hospital discharge were evaluated. The Hospital Anxiety and Depression Scale (HADS) was used to assess symptoms of anxiety and depression. The chi-square test, with a significance level of 5%, was used to test possible associations between the variables anxiety, depression and sex. The results indicated a larger number of women with depression, and the association between the variables sex and depression was statistically significant. Regarding anxiety, cases were more frequent among men, and the association between the variables sex and anxiety was not statistically significant. 7. Advances in Knowledge Discovery and Data Mining 21st Pacific Asia Conference, PAKDD 2017 Held in Jeju, South Korea, May 23-26, 2017. Proceedings Part I, Part II. Science.gov (United States) 2017-06-27 The Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) is a leading international conference ... in the areas of knowledge discovery and data mining (KDD). We had three keynote speeches, delivered by Sang Cha from Seoul National University 8. The effect of tobacco ingredients on smoke chemistry. Part II: casing ingredients. Science.gov (United States) Baker, Richard R; Pereira da Silva, José R; Smith, Graham 2004-01-01 This is the second part of a study in which the effects of adding a range of ingredients to tobacco on the chemistry of cigarette mainstream smoke are assessed. The examination of smoke chemistry has concentrated on those constituents in smoke that regulatory authorities in the USA and Canada believe to be relevant to smoking-related diseases. In this part of the study the effects of 29 casing ingredients and three humectants have been assessed at the maximum levels typically used on cigarettes by British American Tobacco. 
This brings the total number of ingredients assessed in Parts I and II of this study to 482. The casing ingredients were added at levels of up to 68 mg on the cigarettes. Their effects on smoke constituents were generally larger than the effects of flavouring ingredients, which were added at parts per million levels. Many of the casing ingredient mixtures either had no statistically significant effect on the level of the analytes investigated in smoke relative to a control cigarette, or they produced decreases of up to 44% in some cases. Those analytes that were increased in smoke are highlighted in this paper. The largest increases were for formaldehyde levels, up to 26 microg (73%) in one case, observed from casing mixtures containing sugar. This is most likely due to the generation of formaldehyde by pyrolysis of sugars. Occasional small increases were also observed for other analytes. However, the statistical significance of many of these increases was not present when the long-term variability of the analytical method was taken into account. The significance and possible reasons for the increases are discussed. 9. Prediction of periventricular leukomalacia. Part II: Selection of hemodynamic features using computational intelligence. Science.gov (United States) Samanta, Biswanath; Bird, Geoffrey L; Kuijpers, Marijn; Zimmerman, Robert A; Jarvik, Gail P; Wernovsky, Gil; Clancy, Robert R; Licht, Daniel J; Gaynor, J William; Nataraj, Chandrasekhar 2009-07-01 The objective of Part II is to analyze the dataset of extracted hemodynamic features (Case 3 of Part I) through computational intelligence (CI) techniques for identification of potential prognostic factors for periventricular leukomalacia (PVL) occurrence in neonates with congenital heart disease. The extracted features (Case 3 dataset of Part I) were used as inputs to CI based classifiers, namely, multi-layer perceptron (MLP) and probabilistic neural network (PNN) in combination with genetic algorithms (GA) for selection of the most suitable features predicting the occurrence of PVL. The selected features were next used as inputs to a decision tree (DT) algorithm for generating easily interpretable rules of PVL prediction. Prediction performance for the two CI based classifiers, MLP and PNN coupled with GA, is presented for different numbers of selected features. The best prediction performances were achieved with 6 and 7 selected features. The prediction success was 100% in training and the best ranges of sensitivity (SN), specificity (SP) and accuracy (AC) in test were 60-73%, 74-84% and 71-74%, respectively. The identified features when used with the DT algorithm gave best SN, SP and AC in the ranges of 87-90% in training and 80-87%, 74-79% and 79-82% in test. Among the variables selected in CI, systolic and diastolic blood pressures, and pCO(2) figured prominently, similar to Part I. Decision tree based rules for prediction of PVL occurrence were obtained using the CI selected features. The proposed approach combines the generalization capability of the CI based feature selection approach and the generation of easily interpretable classification rules of the decision tree. The combination of CI techniques with DT gave substantially better test prediction performance than using CI and DT separately.
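The PVL study above (entry 9) wraps classifiers in a feature-selection loop (a genetic algorithm in the paper) and then hands the selected features to a decision tree so that the final prediction rules stay interpretable. The following Python sketch illustrates that pipeline only in outline: the data are synthetic, the feature names are invented, and an exhaustive search over small subsets stands in for the paper's GA.

```python
# Hedged sketch: wrapper feature selection + interpretable decision-tree rules.
# Synthetic data; an exhaustive small-subset search stands in for the paper's GA.
import numpy as np
from itertools import combinations
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
feature_names = ["sys_bp", "dia_bp", "pCO2", "pO2", "pH", "heart_rate"]  # invented
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

def subset_score(cols):
    # Score a candidate feature subset by cross-validated MLP accuracy.
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    return cross_val_score(clf, X[:, cols], y, cv=3).mean()

# Evaluate all 3-feature subsets (small enough here; the paper uses a GA instead).
best_cols, best_score = None, -np.inf
for cols in combinations(range(len(feature_names)), 3):
    s = subset_score(list(cols))
    if s > best_score:
        best_cols, best_score = list(cols), s

print("selected features:", [feature_names[c] for c in best_cols],
      f"(cv accuracy {best_score:.2f})")

# Fit a shallow decision tree on the selected features and print readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X[:, best_cols], y)
print(export_text(tree, feature_names=[feature_names[c] for c in best_cols]))
```

Sensitivity and specificity, as reported in the abstract, would then be evaluated on a held-out test set rather than on the cross-validation folds used here.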
10. Errores numéricos: ¿Cómo afectan a las personas con ansiedad matemática? Directory of Open Access Journals (Sweden) Macarena Suárez-Pellicioni 2014-05-01 Full Text Available How does the brain of a person with math anxiety respond to mathematics? Our study shows that students with high math anxiety present a larger electroencephalographic component, called error-related negativity (ERN), than those with low anxiety. This difference emerges on errors in numerical tasks, which suggests that people with high anxiety are hypersensitive to committing such errors. This finding provides new knowledge about the brain bases of math anxiety and suggests that this hypersensitivity to numerical errors could be a determining factor both in the origin and in the maintenance of this anxiety. 11. [Education in our time: competency or aptitude? The case for medicine. Part II]. Science.gov (United States) Viniegra-Velázquez, Leonardo Part II is focused on participatory education (PE), a distinctive way to understand and practice education in contrast to passive education. The core of PE is to develop everyone's own cognitive potentialities, frequently mutilated, neglected or ignored. The epistemological and experiential bases of PE are defined: the concept of incisive and creative criticism, the idea of knowledge as each person's own construct and life experience as the main focus of reflection and cognition. PE aims towards individuals with unprecedented cognitive and creative faculties, capable of approaching a more inclusive and hospitable world. The last part criticizes the fact that medical education has remained within the passive education paradigm. The key role of cognitive aptitudes, both methodological and practical (clinical aptitude), in the progress of medical education and practice is emphasized. As a conclusion, the know-how of education is discussed, aiming towards a better world away from human and planetary degradation. Copyright © 2017 Hospital Infantil de México Federico Gómez. Published by Masson Doyma México S.A. All rights reserved. 12. Discover Health Services Near You! The North Dakota Story: Part II. Science.gov (United States) Safratowich, Michael; Markland, Mary J; Rieke, Judith L 2009-07-01 Since the 2003 launch of NC Health Info, the National Library of Medicine has encouraged the development of Go Local databases. A team of Go Local enthusiasts at North Dakota's only medical school library wanted to obtain NLM funding and build a resource for their rural state. Although short on staff, money, and time, the team found a way to realize a Go Local database that serves the state's residents and helps them "Discover Health Services Near You!" A team approach and collaboration with health providers and organizations worked well in this small rural state. North Dakota's Go Local project offers a low-cost model that stresses collaboration, teamwork and technology. Part I which appeared in the last issue describes the rural setting, explains how the project was conceived, and the processes necessary to begin building the database. Part II which appears in this issue details how records were created including developing the input style guide and indexing decisions, the NLM testing and review process, the maintenance and auditing process, and publicity and promotion of the project. 13. Resistance and Elastic Stiffness of RHS "T" Joints: Part II - Combined Axial Brace and Chord Loading Directory of Open Access Journals (Sweden) R.M.M.P. de Matos Full Text Available Abstract This paper deals with the behaviour of welded "T" joints between RHS sections submitted to tension brace loading combined with chord axial loading. 
In the companion paper (part I) a finite element model and a study without axial load in the chord, focusing on the joint behaviour as a function of the significant geometrical variables, were presented. In this part II paper, tension loading on the brace is incremented up to joint failure, but is combined with different chord load levels in tension or compression, which are kept constant for each case. The same geometries and geometric variables as in the companion paper are used, and therefore the influence of these features, together with the chord load level (in tension or compression), on the connection's response is evaluated. The force-displacement curves from the different geometries and chord load levels are analysed and compared, with special attention to the influence of the chord load on the joint resistance and stiffness. Finally, a comparison of the numerical results with the (Eurocode 3, 2005) and the newer (ISO 14346, 2013) provisions is presented and discussed. 14. Adaptive Core Simulation Employing Discrete Inverse Theory - Part II: Numerical Experiments International Nuclear Information System (INIS) Abdel-Khalik, Hany S.; Turinsky, Paul J. 2005-01-01 Use of adaptive simulation is intended to improve the fidelity and robustness of important core attribute predictions such as core power distribution, thermal margins, and core reactivity. Adaptive simulation utilizes a selected set of past and current reactor measurements of reactor observables, i.e., in-core instrumentation readings, to adapt the simulation in a meaningful way. The companion paper, ''Adaptive Core Simulation Employing Discrete Inverse Theory - Part I: Theory,'' describes in detail the theoretical background of the proposed adaptive techniques. This paper, Part II, demonstrates several computational experiments conducted to assess the fidelity and robustness of the proposed techniques. The intent is to check the ability of the adapted core simulator model to predict future core observables that are not included in the adaption or core observables that are recorded at core conditions that differ from those at which adaption is completed. Also, this paper demonstrates successful utilization of an efficient sensitivity analysis approach to calculate the sensitivity information required to perform the adaption for millions of input core parameters. Finally, this paper illustrates a useful application for adaptive simulation - reducing the inconsistencies between two different core simulator code systems, where the multitudes of input data to one code are adjusted to enhance the agreement between both codes for important core attributes, i.e., core reactivity and power distribution. Also demonstrated is the robustness of such an application. 15. Diagnosing DSM-IV--Part II: Eysenck (1986) and the essentialist fallacy. Science.gov (United States) Wakefield, J C 1997-07-01 In Part I (Wakefield, 1997, Behaviour Research and Therapy, 35, 633-649) of this two-article series, I used the harmful dysfunction analysis of the concept of disorder (Wakefield, 1992a, American Psychologist, 47, 373-388) to 'diagnose' a problem with DSM-IV. I argued that DSM-IV diagnostic criteria often violate the 'dysfunction' requirement by invalidly classifying harms not caused by dysfunctions as disorders. 
In Part II, I examine Eysenck's (Eysenck, 1986, Contemporary directions in psychopathology: Toward the DSM-IV) argument that DSM commits a 'categorical fallacy' and should be replaced by dimensional diagnoses based on Eysenckian personality traits. I argue that Eysenck's proposed diagnostic criteria violate the 'harm' requirement by invalidly classifying symptomless conditions as disorders. Eysenck commits an 'essentialist fallacy'; he misconstrues 'disorder' as an essentialist theoretical concept when in fact it is a hybrid theoretical-practical or 'cause-effect' concept. He thus ignores the harmful effects essential to disorder that are captured in DSM's symptom-based categories. 16. Interview-Based Qualitative Research in Emergency Care Part II: Data Collection, Analysis and Results Reporting Science.gov (United States) Ranney, Megan L.; Meisel, Zachary; Choo, Esther K.; Garro, Aris; Sasson, Comilla; Morrow, Kathleen 2015-01-01 Qualitative methods are increasingly being used in emergency care research. Rigorous qualitative methods can play a critical role in advancing the emergency care research agenda by allowing investigators to generate hypotheses, gain an in-depth understanding of health problems or specific populations, create expert consensus, and develop new intervention and dissemination strategies. In Part I of this two-article series, we provided an introduction to general principles of applied qualitative health research and examples of its common use in emergency care research, describing study designs and data collection methods most relevant to our field (observation, individual interviews, and focus groups). Here in Part II of this series, we outline the specific steps necessary to conduct a valid and reliable qualitative research project, with a focus on interview-based studies. These elements include building the research team, preparing data collection guides, defining and obtaining an adequate sample, collecting and organizing qualitative data, and coding and analyzing the data. We also discuss potential ethical considerations unique to qualitative research as it relates to emergency care research. PMID:26284572 17. Interview-based Qualitative Research in Emergency Care Part II: Data Collection, Analysis and Results Reporting. Science.gov (United States) Ranney, Megan L; Meisel, Zachary F; Choo, Esther K; Garro, Aris C; Sasson, Comilla; Morrow Guthrie, Kate 2015-09-01 Qualitative methods are increasingly being used in emergency care research. Rigorous qualitative methods can play a critical role in advancing the emergency care research agenda by allowing investigators to generate hypotheses, gain an in-depth understanding of health problems or specific populations, create expert consensus, and develop new intervention and dissemination strategies. In Part I of this two-article series, we provided an introduction to general principles of applied qualitative health research and examples of its common use in emergency care research, describing study designs and data collection methods most relevant to our field (observation, individual interviews, and focus groups). Here in Part II of this series, we outline the specific steps necessary to conduct a valid and reliable qualitative research project, with a focus on interview-based studies. These elements include building the research team, preparing data collection guides, defining and obtaining an adequate sample, collecting and organizing qualitative data, and coding and analyzing the data. 
We also discuss potential ethical considerations unique to qualitative research as it relates to emergency care research. © 2015 by the Society for Academic Emergency Medicine. 18. Tobacco control and gender in south-east Asia. Part II: Singapore and Vietnam. Science.gov (United States) Morrow, Martha; Barraclough, Simon 2003-12-01 In the World Health Organization's Western Pacific Region, being born male is the single greatest risk marker for tobacco use. While the literature demonstrates that risks associated with tobacco use may vary according to sex, gender refers to the socially determined roles and responsibilities of men and women, who initiate, continue and quit using tobacco for complex and often different reasons. Cigarette advertising frequently appeals to gender roles. Yet tobacco control policy tends to be gender-blind. Using a broad, gender-sensitivity framework, this contradiction is explored in four Western Pacific countries. Part I of the study presented the rationale, methodology and design of the study, discussed issues surrounding gender and tobacco, and analysed developments in Malaysia and the Philippines (see the previous issue of this journal). Part II deals with Singapore and Vietnam. In all four countries gender was salient for the initiation and maintenance of smoking. Yet, with a few exceptions, gender was largely unrecognized in control policy. Suggestions for overcoming this weakness in order to enhance tobacco control are made. 19. ¿Tienen ansiedad hacia las matemáticas los futuros matemáticos? Directory of Open Access Journals (Sweden) Rosa Nortes Martínez-Artero 2014-01-01 Full Text Available Mathematics is a difficult subject to teach and to learn, and the professionals who devote themselves to it should be people with low mathematics anxiety. The aim of the present work is to provide new data on students' anxiety towards mathematics, in this case students of the Mathematics degree, through the application of two 5-point Likert anxiety scales (1 being the lowest value and 5 the highest) to a sample of 149 students from different Spanish universities gathered in Murcia at the 13th National Meeting of Mathematics Students (24-28 July 2012). The mathematics anxiety of these future mathematicians is 1.767 on the Auzmendi subscale and 2.007 according to the Fennema-Sherman scale, with 1.714 on the general subscale, 2.093 for problem solving and 2.501 for examinations. The novelty of the study is that it is restricted to students of Mathematics degrees and that it contrasts their anxiety by age and by future profession, the correlation between the last mark obtained in a Mathematics course and anxiety (Fennema-Sherman) being -0.250. As conclusions we obtain that the correlation between the two scales is 0.648, that women show more anxiety than men (Fennema-Sherman scale), that students aged 21 or over show more anxiety than those under 21 (Fennema-Sherman general subscale), and that future mathematics teachers show greater anxiety than non-teachers (on both scales). In all cases the anxiety level is low, as these are students who will be professionally involved with mathematics in the future, although it is higher than desirable, and this study needs to be extended to other future teachers, not specifically of mathematics, such as primary school teachers. 20. Proceso de alta de una unidad de cuidados intensivos. 
Medición de la ansiedad y factores relacionados OpenAIRE Ricart Basagaña, Mª Teresa 2015-01-01 Anxiety is a diagnosis frequently associated with patients in intensive care units (ICU). Although the literature provides sufficient evidence to warrant taking the problem of anxiety and its derived complications into account, we still do not know to what extent anxiety has an impact in our cultural context, nor which factors determine that some people present higher anxiety levels than others when they are discharged from our ICU... 1. Rare or remarkable microfungi from Oaxaca (south Mexico)--Part II. Science.gov (United States) Ale-Agha, N; Jensen, M; Brassmann, M; Kautz, S; Eilmus, S; Ballhorn, D J 2008-01-01 Microfungi were collected in southern Mexico in the vicinity of Puerto Escondido, Oaxaca in 2007. In 2006, samples were gathered from Acacia myrmecophytes [(Remarkable microfungi from Oaxaca of Acacia species) Part I]. In the present investigation [Part II], we collected microfungi from different parts of a variety of wild and cultivated higher plants belonging to the families Anacardiaceae, Caricaceae, Fabaceae, Moraceae, and Nyctaginacae. The microfungi found here live as parasites or saprophytes. Interestingly, the species Colletotrichum lindemuthianum (Sacc. and Magn.) Briosi and Cavara has repeatedly been used to cause fungal infections of Phaseolus lunatus leaves in laboratory experiments. We could now find the same fungus as parasite on the same host plants under field conditions showing that results obtained in the laboratory are also relevant in nature. Most of the fungal species collected belong to the classes Ascomycotina, Basidiomycotina and Deuteromycotina. Until now, some of the microfungi identified in this study have been rarely observed before or have been reported for the first time in Mexico, for example: Pestalotia acaciae Thüm. on Acacia collinsii Safford; Corynespora cassiicola (Berk. and M.A. Curtis) C.T. Wei on Carica papaya L.; Botryosphaeria ribis Grossenb. and Duggar and Cercosporella leucaenae (Raghu Ram and Mallaiah) U. Braun (new for Mexico) and Camptomeris leucaenae (F. Stevens and Dalbey) Syd. (new for Mexico) on Leucaena leucocephala (Lam.) de Wit.; Oidium clitoriae Narayanas. and K. Ramakr. and Phakopsora cf. pachyrhizi Sydow and Sydow (new for Mexico) on Clitoria ternatea L.; Botryosphaeria obtusa (Schw.) Shoemaker on Prosopis juliflora (Sw.) DC.; Cylindrocladium scoparium Morg. on Ficus benjamina L.; Acremonium sp. on Bougainvillea sp. All specimens are located in the herbarium ESS. Mycotheca Parva collection G.B. Feige and N. Ale-Agha. 2. On the Processing of Spalling Experiments. Part II: Identification of Concrete Fracture Energy in Dynamic Tension Science.gov (United States) Lukić, Bratislav B.; Saletti, Dominique; Forquin, Pascal 2017-12-01 This paper presents the second part of a study aimed at investigating the fracture behavior of concrete under high strain rate tensile loading. The experimental method together with the identified stress-strain response of three tests conducted on ordinary concrete have been presented in the paper entitled Part I (Forquin and Lukić in Journal of Dynamic Behavior of Materials, 2017. https://doi.org/10.1007/s40870-017-0135-1). In the present paper, Part II, the investigation is extended towards directly determining the specific fracture energy of each observed fracture zone by visualizing the dynamic cracking process with a temporal resolution of 1 µs.
Having access to temporal displacement fields of the sample surface, it is possible to identify the fracture opening displacement (FOD) and the fracture opening velocity of any principal (open) and secondary (closed) fracture at each measurement instance, whether or not it leads to complete physical failure of the sample. Finally, local Stress-FOD curves were obtained for each observed fracture zone, as opposed to previous works where indirect measurements were used. The obtained results indicated a much lower specific fracture energy compared to the results often found in the literature. Furthermore, numerical simulations were performed with a damage law to evaluate the validity of the proposed experimental data processing and to compare it with the approach most often used in previous works. The results showed that the present method can reliably predict the specific fracture energy needed to open one macro-fracture, and suggested that indirect measurement techniques can lead to an overestimate of specific fracture energy due to the stringent assumption of linear elasticity up to the peak and the lack of access to the real post-peak change of axial stress.
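The specific fracture energy described above is, in essence, the area under a local stress versus fracture-opening-displacement curve. The snippet below is a minimal illustrative sketch (not the authors' processing chain) of how such a quantity could be obtained from a digitized Stress-FOD curve by trapezoidal integration; the sample values are hypothetical.

```python
import numpy as np

# Hypothetical local Stress-FOD data for one fracture zone:
# fod = fracture opening displacement in metres, stress in Pa.
fod = np.array([0.0, 5e-6, 10e-6, 20e-6, 40e-6, 80e-6])
stress = np.array([4.0e6, 3.2e6, 2.1e6, 1.1e6, 0.4e6, 0.0])

# Specific fracture energy G_f = integral of stress d(FOD), in J/m^2,
# evaluated here with the trapezoidal rule.
g_f = float(np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(fod)))
print(f"Specific fracture energy: {g_f:.1f} J/m^2")
```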
3. Viabilidade do emprego de cinza de casca de arroz natural em concreto estrutural (parte II: durabilidade Directory of Open Access Journals (Sweden) Geraldo Cechella Isaia Full Text Available Abstract: Wastes incorporated into construction materials should be used, whenever possible, without processing, to avoid increasing environmental impacts and additional costs. Rice husk ash (CCA) is a pozzolan that must be ground beforehand, to increase its fineness and reactivity with cement, when used as a cementitious material. This work studies natural rice husk ash (CCAN) without processing as a partial replacement of 15% of cement, by mass, for use in structural concrete, comminuted by grinding together with the aggregates in the mixer drum. Part I of this research, already published, presented the results for microstructure, mechanical strength and shrinkage, also for a 25% content; in this Part II the durability test data are shown (carbonation, chloride penetration, resistivity, water absorption, oxygen permeability, capillary absorption and alkali-silica reaction, ASR), compared with the reference concrete with 100% cement and also with previously ground rice husk ash (CCAM). The results show that 15% CCAN is feasible for use in concrete because it performs better than the reference concrete when pozzolanic cement is used, and close to or even better than the CCAM mixes for most of the variables studied. It is concluded that 15% CCAN in structural concrete is viable and brings greater sustainability. 4. Personalized translational epilepsy research - Novel approaches and future perspectives: Part II: Experimental and translational approaches. Science.gov (United States) Bauer, Sebastian; van Alphen, Natascha; Becker, Albert; Chiocchetti, Andreas; Deichmann, Ralf; Deller, Thomas; Freiman, Thomas; Freitag, Christine M; Gehrig, Johannes; Hermsen, Anke M; Jedlicka, Peter; Kell, Christian; Klein, Karl Martin; Knake, Susanne; Kullmann, Dimitri M; Liebner, Stefan; Norwood, Braxton A; Omigie, Diana; Plate, Karlheinz; Reif, Andreas; Reif, Philipp S; Reiss, Yvonne; Roeper, Jochen; Ronellenfitsch, Michael W; Schorge, Stephanie; Schratt, Gerhard; Schwarzacher, Stephan W; Steinbach, Joachim P; Strzelczyk, Adam; Triesch, Jochen; Wagner, Marlies; Walker, Matthew C; von Wegner, Frederic; Rosenow, Felix 2017-11-01 Despite the availability of more than 15 new "antiepileptic drugs", the proportion of patients with pharmacoresistant epilepsy has remained constant at about 20-30%. Furthermore, no disease-modifying treatments shown to prevent the development of epilepsy following an initial precipitating brain injury, or to reverse established epilepsy, have been identified to date. This is likely in part due to the polyetiologic nature of epilepsy, which in turn requires personalized medicine approaches. Recent advances in imaging, pathology, genetics, and epigenetics have led to new pathophysiological concepts and the identification of monogenic causes of epilepsy. In the context of these advances, the First International Symposium on Personalized Translational Epilepsy Research (1st ISymPTER) was held in Frankfurt on September 8, 2016, to discuss novel approaches and future perspectives for personalized translational research. These included new developments and ideas in a range of experimental and clinical areas such as deep phenotyping, quantitative brain imaging, EEG/MEG-based analysis of network dysfunction, tissue-based translational studies, innate immunity mechanisms, microRNA as treatment targets, functional characterization of genetic variants in human cell models and rodent organotypic slice cultures, personalized treatment approaches for monogenic epilepsies, blood-brain barrier dysfunction, therapeutic focal tissue modification, computational modeling for target and biomarker identification, and cost analysis in (monogenic) disease and its treatment. This report on the meeting proceedings is aimed at stimulating much-needed investments of time and resources in personalized translational epilepsy research. This Part II includes the experimental and translational approaches and a discussion of the future perspectives, while the diagnostic methods, EEG network analysis, biomarkers, and personalized treatment approaches were addressed in Part I [1]. Copyright © 2017 5. Sensibilidad y especificidad del cuestionario de preocupación y ansiedad para la detección del trastorno de ansiedad generalizada en la edad avanzada OpenAIRE Nuevo Benítez, Roberto 2005-01-01 This work aims to verify, through the analysis of ROC (receiver operating characteristic) curves, the capacity of the Cuestionario de Preocupa... to identify the presence of generalized anxiety disorder (GAD) in older people. 6.
Ansiedade e depressão em familiares de pacientes internados em unidade de cuidados intensivos Ansiedad y depresión en familiares de pacientes internados en una unidad de cuidados intensivos Anxiety and depressions in relatives of patients admitted in intensive care units Directory of Open Access Journals (Sweden) Marina Rumiko Maruiti 2008-01-01 Full Text Available OBJECTIVE: To identify the occurrence of symptoms of anxiety and/or depression in relatives of patients admitted to an intensive care unit, and to correlate the presence of these symptoms with the relative's gender and age and with the length of the patient's hospital stay. METHODS: The sample comprised 39 relatives of critically ill patients. The Hospital Anxiety and Depression Scale was used for data collection. RESULTS: 11 (28.2%) possible cases of anxiety, 17 (43.6%) probable cases of anxiety, 14 (35.9%) possible cases of depression and 7 (17.9%) probable cases of depression were identified. FINAL CONSIDERATIONS: Emotional support and meeting the needs of relatives should be priorities in the nursing care plan, in order to prevent these disorders. 7. Neutronics and thermohydraulics of the reactor C.E.N.E. Part II; Analisis neutronico y termohidraulico del reactor C.E.N.E. Parte II Energy Technology Data Exchange (ETDEWEB) Caro, R. 1976-07-01 This report includes the neutronics, thermohydraulics and shielding analysis of the 10 HWt swimming-pool reactor C.E.N.E. Each of these chapters gives a short description of the theoretical model used, along with the theoretical-versus-experimental checks carried out, whenever possible, with the reactors JEN-I and JEN-II of the Junta de Energia Nuclear. (Author) 11 refs. 8.
Ansiedade no período pré-operatório de cirurgias de mama: estudo comparativo entre pacientes com suspeita de câncer e a serem submetidas a procedimentos cirúrgicos estéticos Ansiedad en el período preoperatorio de cirugías de mama: estudio comparativo entre pacientes con sospecha de cáncer a ser sometidas a procedimientos quirúrgicos estéticos Preoperative anxiety in surgeries of the breast: a comparative study between patients with suspected breast cancer and that undergoing cosmetic surgery OpenAIRE Maria Luiza Melo Alves; Adriana Jucá Pimentel; Álvaro Antônio Guaratini; José Álvaro Marques Marcolino; Judymara Lauzi Gozzani; Ligia Andrade da Silva Telles Mathias 2007-01-01 BACKGROUND AND OBJECTIVES: Anxiety assessment is not part of the routine pre-anesthetic evaluation (PAE), so special situations in which the patient's emotional state may be altered can go unnoticed by the anesthesiologist. This study aimed to compare, at the time of the outpatient PAE, risk factors, intensity and prevalence of anxiety in patients with suspected breast cancer and in patients about to undergo cosmetic breast surgery. METHODS: After the... 9. A thermoelectric power generating heat exchanger: Part II – Numerical modeling and optimization International Nuclear Information System (INIS) Sarhadi, Ali; Bjørk, Rasmus; Lindeburg, Niels; Viereck, Peter; Pryds, Nini 2016-01-01 Highlights: • A comprehensive model was developed to optimize the integrated TEG-heat exchanger. • The developed model was validated with the experimental data. • The effect of using different interface materials on the output power was assessed. • The influence of TEG arrangement on the power production was investigated. • Optimized geometrical parameters and proper interface materials were suggested. - Abstract: In Part I of this study, the performance of an experimental integrated thermoelectric generator (TEG)-heat exchanger was presented. In the current study, Part II, the obtained experimental results are compared with those predicted by a finite element (FE) model. In the simulation of the integrated TEG-heat exchanger, the thermal contact resistance between the TEG and the heat exchanger is modeled assuming either an ideal thermal contact or using a combined Cooper–Mikic–Yovanovich (CMY) and parallel plate gap formulation, which takes into account the contact pressure, roughness and hardness of the interface surfaces as well as the air gap thermal resistance at the interface. The combined CMY and parallel plate gap model is then further developed to simulate the thermal contact resistance for the case of an interface material. The numerical results show good agreement with the experimental data, with an average deviation of 17% for the case without interface material and 12% in the case of including additional material at the interfaces. The model is then employed to evaluate the power production of the integrated system using different interface materials, including graphite, aluminum (Al), tin (Sn) and lead (Pb) in the form of thin foils. The numerical results show that lead foil at the interface has the best performance, with an improvement in power production of 34% compared to graphite foil. Finally, the model predicts that, for a certain flow rate, increasing the parallel TEG channels for the integrated systems with 4, 8, and 12 TEGs... 10. Effects of hypobaric pressure on human skin: implications for cryogen spray cooling (part II).
Science.gov (United States) Aguilar, Guillermo; Franco, Walfre; Liu, Jie; Svaasand, Lars O; Nelson, J Stuart 2005-02-01 Clinical results have demonstrated that dark purple port wine stain (PWS) birthmarks respond favorably to laser-induced photothermolysis after the first three to five treatments. Nevertheless, complete blanching is rarely achieved and the lesions stabilize at a red-pink color. In a feasibility study (Part I), we showed that local hypobaric pressure on PWS human skin prior to laser irradiation induced significant lesion blanching. The objective of the present study (Part II) is to investigate the effects of hypobaric pressures on the efficiency of cryogen spray cooling (CSC), a technique that assists laser therapy of PWS and other dermatoses. Experiments were carried out within a suction cup and vacuum chamber to study the effect of hypobaric pressure on: (1) the interaction of cryogen sprays with human skin; (2) spray atomization; and (3) the thermal response of a model skin phantom. A high-speed camera was used to acquire digital images of spray impingement on in vivo human skin and of spray cones generated at different hypobaric pressures. Subsequently, liquid cryogen was sprayed onto a skin phantom at atmospheric and 17, 34, 51, and 68 kPa (5, 10, 15, and 20 in Hg) hypobaric pressures. A fast-response temperature sensor measured sub-surface phantom temperature as a function of time. Measurements were used to solve an inverse heat conduction problem to calculate surface temperatures, heat flux, and overall heat extraction at the skin phantom surface. Under hypobaric pressures, cryogen spurts produced no skin indentation and only minimal frost formation. Sprays also showed shorter jet lengths and better atomization. Lower minimum surface temperatures and higher overall heat extraction from skin phantoms were reached. The combined effects of hypobaric pressure result in more efficient cryogen evaporation that enhances heat extraction and, therefore, improves the epidermal protection provided by CSC. (c) 2005 Wiley-Liss, Inc. 11. Reproduction in the space environment: Part II. Concerns for human reproduction Science.gov (United States) Jennings, R. T.; Santy, P. A. 1990-01-01 Long-duration space flight and eventual colonization of our solar system will require successful control of reproductive function and a thorough understanding of factors unique to space flight and their impact on gynecologic and obstetric parameters. Part II of this paper examines the specific environmental factors associated with space flight and the implications for human reproduction. Space environmental hazards discussed include radiation, alteration in atmospheric pressure and breathing gas partial pressures, prolonged toxicological exposure, and microgravity. The effects of countermeasures necessary to reduce cardiovascular deconditioning, calcium loss, muscle wasting, and neurovestibular problems are also considered. In addition, the impact of microgravity on male fertility and gamete quality is explored. Due to current constraints, human pregnancy is now contraindicated for space flight. However, a program to explore effective countermeasures to current constraints and develop the required health care delivery capability for extended-duration space flight is suggested, as is a program of Earth- and space-based research to provide further answers to reproductive questions. 12.
State of the Science of Spirituality and Palliative Care Research Part II: Screening, Assessment, and Interventions. Science.gov (United States) Balboni, Tracy A; Fitchett, George; Handzo, George F; Johnson, Kimberly S; Koenig, Harold G; Pargament, Kenneth I; Puchalski, Christina M; Sinclair, Shane; Taylor, Elizabeth J; Steinhauser, Karen E 2017-09-01 The State of the Science in Spirituality and Palliative Care was convened to address the current landscape of research at the intersection of spirituality and palliative care and to identify critical next steps to advance this field of inquiry. Part II of the SOS-SPC report addresses the state of extant research and identifies critical research priorities pertaining to the following questions: 1) How do we assess spirituality? 2) How do we intervene on spirituality in palliative care? And 3) How do we train health professionals to address spirituality in palliative care? Findings from this report point to the need for screening and assessment tools that are rigorously developed, clinically relevant, and adapted to a diversity of clinical and cultural settings. Chaplaincy research is needed to form professional spiritual care provision in a variety of settings, and outcomes should be assessed to ascertain the impact on key patient, family, and clinical staff outcomes. Intervention research requires rigorous conceptualization and assessment. Intervention development must be attentive to clinical feasibility, incorporate the perspectives and needs of patients, families, and clinicians, and be targeted to diverse populations with spiritual needs. Finally, spiritual care competencies for various clinical care team members should be refined. Reflecting those competencies, training curricula and evaluation tools should be developed, and the impact of education on patient, family, and clinician outcomes should be systematically assessed. Published by Elsevier Inc. 13. Testing and Analysis of a Composite Non-Cylindrical Aircraft Fuselage Structure. Part II; Severe Damage Science.gov (United States) Przekop, Adam; Jegley, Dawn C.; Lovejoy, Andrew E.; Rouse, Marshall; Wu, Hsi-Yung T. 2016-01-01 The Environmentally Responsible Aviation Project aimed to develop aircraft technologies enabling significant fuel burn and community noise reductions. Small incremental changes to the conventional metallic alloy-based 'tube and wing' configuration were not sufficient to achieve the desired metrics. One airframe concept identified by the project as having the potential to dramatically improve aircraft performance was a composite-based hybrid wing body configuration. Such a concept, however, presented inherent challenges stemming from, among other factors, the necessity to transfer wing loads through the entire center fuselage section, which accommodates a pressurized cabin confined by flat or nearly flat panels. This paper discusses a finite element analysis and the testing of a large-scale hybrid wing body center section structure developed and constructed to demonstrate that the Pultruded Rod Stitched Efficient Unitized Structure concept can meet these challenging demands of next-generation airframes. Part II of the paper considers the final test to failure of the test article in the presence of intentionally inflicted severe discrete-source damage under the wing up-bending loading condition.
Finite element analysis results are compared with measurements acquired during the test and demonstrate that the hybrid wing body test article was able to redistribute and support the required design loads in a severely damaged condition. 14. Coordinator(a) de Servicios Clinicos. Parte I (Unidad I-IV). Parte II (Unidad V-VI). Guia. Documento de Trabajo (Clinical Services Coordinator. Part I. Units I-IV. Part II. Units V-VI. Guide. Working Document). Science.gov (United States) Puerto Rico State Dept. of Education, Hato Rey. Area for Vocational and Technical Education. This guide is intended for instructing secondary students in the occupation of clinical services coordinator in a hospital. The first part contains four units on the following subjects: the occupation of clinical services coordinator; interpersonal relationships; ethical/legal aspects; and communications (telephone, intercom, and others). For each… 15. Ansiedad y calidad de vida en la mujer con cáncer de mama Directory of Open Access Journals (Sweden) Eustolia Velázquez Leyva 2015-12-01 Full Text Available Anxiety arises in women with breast cancer (CaMa) from the moment of diagnosis. For this reason, the present study aims to measure the relation of anxiety, as an intrapersonal stressor, to the quality of life of women with breast cancer in a hospital in Sonora. It is a quantitative, correlational, non-experimental study; sampling was non-probabilistic, with a significance level of 0.05 and a power of 0.80, and a sample of 65 individuals. Descriptive and inferential statistics were used, the Pearson correlation test was applied, the Hamilton scale was used to measure anxiety, and the WHOQOL-BREF to determine quality of life. The mean age was 52.43 years, and anxiety had a significant negative relation with quality of life (r = -0.270, p < 0.01). It is concluded that it is important to develop interventions that promote a better quality of life and reduce anxiety in women with breast cancer. 16. Estrategias de aprendizaje y ansiedad ante los exámenes en estudiantes universitarios Directory of Open Access Journals (Sweden) Luis Alberto Furlan 2009-01-01 Several studies agree that high test anxiety is associated with poor study skills and with the use of superficial information-processing strategies, with reciprocal influences between these variables. From this perspective, the use of learning strategies was assessed in 816 university students with high, medium or low test anxiety. Additionally, the relations between the four dimensions of anxiety and the learning strategies were analyzed. Students with high anxiety more frequently used rehearsal strategies and academic help-seeking, while those with low anxiety used reflective study strategies. Lack of confidence correlated negatively with reflective study, rehearsal, academic help-seeking, and regulation of time and effort. Conversely, worry was positively associated with three of these strategies. 17. Towards multi-resolution global climate modeling with ECHAM6-FESOM. Part II: climate variability Science.gov (United States) Rackow, T.; Goessling, H. F.; Jung, T.; Sidorenko, D.; Semmler, T.; Barbi, D.; Handorf, D.
2018-04-01 This study forms Part II of two papers describing ECHAM6-FESOM, a newly established global climate model with a unique multi-resolution sea ice-ocean component. While Part I deals with the model description and the mean climate state, here we examine the internal climate variability of the model under constant present-day (1990) conditions. We (1) assess the internal variations in the model in terms of objective variability performance indices, (2) analyze variations in global mean surface temperature and put them in the context of variations in the observed record, with particular emphasis on the recent warming slowdown, (3) analyze and validate the most common atmospheric and oceanic variability patterns, (4) diagnose the potential predictability of various climate indices, and (5) put the multi-resolution approach to the test by comparing two setups that differ only in oceanic resolution in the equatorial belt, where one ocean mesh keeps the coarse 1° resolution applied in the adjacent open-ocean regions and the other mesh is gradually refined to 0.25°. Objective variability performance indices show that, in the considered setups, ECHAM6-FESOM performs overall favourably compared to five well-established climate models. Internal variations of the global mean surface temperature in the model are consistent with observed fluctuations and suggest that the recent warming slowdown can be explained as a once-in-one-hundred-years event caused by internal climate variability; periods of strong cooling in the model ('hiatus' analogs) are mainly associated with ENSO-related variability and, to a lesser degree, with PDO shifts, with the AMO playing a minor role. Common atmospheric and oceanic variability patterns are simulated largely consistently with their real counterparts. Typical deficits also found in other models at similar resolutions remain, in particular too weak non-seasonal variability of SSTs over large parts of the ocean and episodic periods of almost absent... 18. Miedos comunes en niños y adolescentes : relación con la sensibilidad a la ansiedad, el rasgo de ansiedad, la afectividad negativa y la depresión OpenAIRE Valiente, Rosa M.; Sandín, Bonifacio; Chorot, Paloma 2002-01-01 In the present study we examined the relations between common fears and anxiety sensitivity, trait anxiety, negative affectivity and depression in a non-clinical sample of children and adolescents. A large sample (N = 1080)... 19. Developing guidelines for economic evaluation of environmental impacts in EIAs. Part II: Case studies and dose-response literature International Nuclear Information System (INIS) 2005-01-01 This Part II of the report contains full versions of the case studies for air, water and land (Chapters 2-4), which were only summarised in Part I. In addition, during the work the research team has collected a large amount of literature and information on dose-response relationships for air and water pollution relevant to China. This information is included as Chapters 5 and 6 20. Developing guidelines for economic evaluation of environmental impacts in EIAs. Part II: Case studies and dose-response literature Energy Technology Data Exchange (ETDEWEB) NONE 2005-07-01 This Part II of the report contains full versions of the case studies for air, water and land (Chapters 2-4), which were only summarised in Part I.
In addition, during the work the research team has collected a large amount of literature and information on dose-response relationships for air and water pollution relevant to China. This information is included as Chapters 5 and 6. 1. Acuity and case management: a healthy dose of outcomes, part II. Science.gov (United States) Craig, Kathy; Huber, Diane L 2007-01-01 This is the second of a 3-part series presenting two effective applications, acuity and dosage, that describe how the business case for case management (CM) can be made. In Part I, dosage and acuity concepts were explained as client need-severity, CM intervention-intensity, and CM activity-dose prescribed by amount, frequency, duration, and breadth of activities. Part I also featured a specific exemplar, the CM Acuity Tool, and described how to use acuity to identify and score the complexity of a CM case. Appropriate dosage prescription of CM activity was discussed. Part II further explains dosage and presents two acuity instruments, the Acuity Tool and AccuDiff. Details are provided that show how these applications produce opportunities for better communication about CM cases and for more accurate measurement of the right content that genuinely reflects the essentials of CM practice. The information contained in the 3-part series applies to all CM practice settings and contains ideas and recommendations useful to CM generalists, specialists, and supervisors, plus business and outcomes managers. The Acuity Tools Project was developed from frontline CM practice in one large, national telephonic CM company. Dosage: A literature search failed to find research into dosage of a behavioral intervention. The Huber-Hall model was developed and tested in a longitudinal study of CM models in substance abuse treatment and reported in the literature. Acuity: A structured literature search and needs assessment launched the development of the suite of acuity tools. A gap analysis identified that an instrument to assign and measure case acuity specific to CM activities was needed. Clinical experts, quality specialists, and business analysts (n = 7) monitored the development and testing of the tools, acuity concepts, scores, differentials, and their operating principles and evaluated the validity of the Acuity Tools' content related to CM activities. During the pilot phase of... 2. The Historiography of British Imperial Education Policy, Part II: Africa and the Rest of the Colonial Empire Science.gov (United States) Whitehead, Clive 2005-01-01 Part II of this historiographical study examines British education policy in Africa, and in the many crown colonies, protectorates, and mandated territories around the globe. Up until 1920, the British government took far less interest in the development of schooling in Africa and the rest of the colonial empire than it had in India, and education was… 3. Factores sociodemográficos que influyen en la ansiedad ante la muerte en estudiantes de medicina Directory of Open Access Journals (Sweden) Jaime Boceta Osuna 2017-07-01 Conclusions: Sex and belief in a religion influence death anxiety. Coping with death anxiety should be specifically addressed in undergraduate medical training programs. 4. An assessment of the Arctic Ocean in a suite of interannual CORE-II simulations.
Part II: Liquid freshwater Science.gov (United States) Wang, Qiang; Ilicak, Mehmet; Gerdes, Rüdiger; Drange, Helge; Aksenov, Yevgeny; Bailey, David A.; Bentsen, Mats; Biastoch, Arne; Bozec, Alexandra; Böning, Claus; Cassou, Christophe; Chassignet, Eric; Coward, Andrew C.; Curry, Beth; Danabasoglu, Gokhan; Danilov, Sergey; Fernandez, Elodie; Fogli, Pier Giuseppe; Fujii, Yosuke; Griffies, Stephen M.; Iovino, Doroteaciro; Jahn, Alexandra; Jung, Thomas; Large, William G.; Lee, Craig; Lique, Camille; Lu, Jianhua; Masina, Simona; Nurser, A. J. George; Rabe, Benjamin; Roth, Christina; Salas y Mélia, David; Samuels, Bonita L.; Spence, Paul; Tsujino, Hiroyuki; Valcke, Sophie; Voldoire, Aurore; Wang, Xuezhu; Yeager, Steve G. 2016-03-01 The Arctic Ocean simulated in 14 global ocean-sea ice models in the framework of the Coordinated Ocean-ice Reference Experiments, phase II (CORE-II) is analyzed in this study. The focus is on the Arctic liquid freshwater (FW) sources and freshwater content (FWC). The models agree on the interannual variability of liquid FW transport at the gateways, where the ocean volume transport determines the FW transport variability. The variation of liquid FWC is induced by both the surface FW flux (associated with sea ice production) and lateral liquid FW transport, which are in phase when averaged on decadal time scales. The liquid FWC shows an increase starting from the mid-1990s, caused by the reduction of both sea ice formation and liquid FW export, with the former being more significant in most of the models. The mean state of the FW budget is less consistently simulated than the temporal variability. The model ensemble means of liquid FW transport through the Arctic gateways compare well with observations. On average, the models have too high a mean FWC, weaker upward trends of FWC in the recent decade than observed, and low consistency in the temporal variation of the FWC spatial distribution, which needs to be further explored for the purpose of model development.
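For reference, the liquid freshwater content budgeted above is commonly defined as the vertical integral of the salinity deficit relative to a reference salinity (often 34.8). The snippet below is a small illustrative sketch of that definition applied to a single salinity profile; the profile values and the reference salinity are assumptions for demonstration, not data from the CORE-II models.

```python
import numpy as np

def liquid_freshwater_content(salinity, layer_thickness, s_ref=34.8):
    """Liquid freshwater content (in metres) of one water column:
    FWC = sum over depth of (s_ref - S) / s_ref * dz, counting only
    layers fresher than the reference salinity."""
    salinity = np.asarray(salinity, dtype=float)
    dz = np.asarray(layer_thickness, dtype=float)
    deficit = np.clip((s_ref - salinity) / s_ref, 0.0, None)
    return float(np.sum(deficit * dz))

# Hypothetical Arctic upper-ocean profile: fresher near the surface.
salinity = [29.0, 31.0, 33.0, 34.5, 34.9]     # practical salinity per layer
thickness = [20.0, 30.0, 50.0, 100.0, 200.0]  # layer thickness in metres
print(f"FWC of this column: {liquid_freshwater_content(salinity, thickness):.1f} m")
```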
5. Industry Wage Surveys: Banking and Life Insurance, December 1976. Part I--Banking. Part II--Life Insurance. Bulletin 1988. Science.gov (United States) Barsky, Carl This report presents the results of a survey conducted by the Bureau of Labor Statistics to determine wages and related benefits for (1) the banking industry and (2) employees in home offices and regional head offices of life insurance carriers. Part 1 discusses banking industry characteristics and presents data for tellers and selected… 6. La Validez y la Eficacia de los Ejercicios Respiratorios para Reducir la Ansiedad Escénica en el Aula de Música OpenAIRE Pablo Ramos Ramos 2013-01-01 This text reviews the most relevant scientific literature on performance anxiety associated with musical performance, both in adulthood and during childhood. In addition, it presents the conclusions of studies that have analyzed this personality trait in the school environment and highlights its causes: pressure from the presence of parents and other students, perfectionism, and low resistance to stress. Furthermore, the development of... 7. Diferencias entre los niveles de ansiedad en estudiantes de pregrado de ingeniería de la Universidad de Antioquia, 2017 OpenAIRE González Correa, Alexander; Hernández Ramírez, Eliana María 2017-01-01 The study of academic dropout is of interest to all governments, because education is an essential tool in a country's development. In Colombia, three factors associated with dropout have been identified: vocational orientation, economic capacity, and academic performance (RA). The latter can be influenced by evaluative, personal and educational-context aspects. Mental health is part of the personal aspects, with anxiety being one of ... 8. O conceito de ansiedade na análise do comportamento OpenAIRE Coêlho, Nilzabeth Leite; Tourinho, Emmanuel Zagury 2008-01-01 The concept of anxiety has been employed in behavior analysis under the control of different events or relations. In this article, we offer a review of the ways in which behavior analysis has conceived the phenomenon of anxiety theoretically and conceptually, and of the relations highlighted in those elaborations. We begin with a description of the current uses of the concept of anxiety, noting that they vary as to the role attributed to physiological changes, to the definition of the rela... 9. Asertividad y ansiedad en universitarios mexicanos y norteamericanos. Un estudio piloto Directory of Open Access Journals (Sweden) Ángel Manuel Ramírez Peredo 1990-01-01 Using a survey administered to three groups of university students from both sides of the Mexico-United States border (Mexicans, Chicanos and Americans), this work attempts to measure assertiveness, which is a response to anxiety-provoking situations and consists of defending one's personal rights while maintaining an attitude of respect for the rights of others. The results indicate a high level of anxiety and a notable decrease in assertiveness in the Chicano group; moreover, in all three groups women show higher anxiety levels, most markedly among the Mexican-Americans. The authors conclude by indicating the need for further studies on the subject. 10. Associação entre ansiedade e hipermobilidade articular: estudos com diferentes amostras OpenAIRE Simone Bianchi Sanches 2014-01-01 Introduction: Anxiety can manifest through physical and autonomic symptoms. Anxiety disorders are generally described by an interaction of somatic symptoms and subjective signs, which increases the importance of a broader understanding of how these factors are related and occur together with psychiatric and non-psychiatric disorders. Thus, anxiety may be associated with several medical conditions, among them joint hypermobility. Joint hyperm... 11. Quality control of outpatient imaging examinations in North Rhine-Westphalia. Part II International Nuclear Information System (INIS) Krug, B.; Boettge, M.; Zaehringer, M.; Reinecke, T.; Coburger, S.; Harnischmacher, U.; Luengen, M.; Lauterbach, K.W.; Lehmacher, W.; Lackner, K. 2003 Purpose: In the state of North Rhine-Westphalia (NRW), Germany, a survey was conducted on radiologic examinations ordered by general practitioners (GPs). Part II of this study aims to determine the quality of the process and outcome. The reference standard is the assessment of both radiologists and physicians without board certification in radiology working at a university hospital and in outpatient facilities. Materials and Methods: All GPs in NRW were asked to cooperate. Participating GPs filled out a questionnaire for each patient. The patients recorded the symptoms prompting the imaging examinations.
The radiologists or other physicians performing the examinations were asked to provide the images and written reports and to complete a questionnaire. A file was created for each of the 394 patients with image documentation of at least one examination. Each file, which included medical history, physical findings, imaging documentation and written report, was sequentially forwarded to a board-certified radiologist and to a physician without board certification in radiology working in a university hospital and in an outpatient facility. All physicians were requested to complete a structured questionnaire for each file. Results: The referral diagnoses were rated as medically plausible in 81%, the indications for imaging found correct in 76%, the examination techniques considered appropriate in 69%, the clinical question answered in 63%, the interpretation judged medically correct in 50% and all incidental findings documented in 49%. In retrospect, 32% of the examinations were judged superfluous. The sequence of multiple examinations performed on a particular patient was rated as appropriate in 51%. The interpretation revealed specialty-related differences. The plausibility of the referral diagnoses had a significant impact on the appropriateness of subsequent diagnostic investigations. Marked deficits were shown by sonography, performance by non-radiologists, self... 12. Study of diffuse H II regions potentially forming part of the gas streams around Sgr A* Science.gov (United States) Armijos-Abendaño, J.; López, E.; Martín-Pintado, J.; Báez-Rubio, A.; Aravena, M.; Requena-Torres, M. A.; Martín, S.; Llerena, M.; Aldás, F.; Logan, C.; Rodríguez-Franco, A. 2018-05-01 We present a study of diffuse extended ionized gas towards three clouds located in the Galactic Centre (GC). One line of sight (LOS) is towards the 20 km s⁻¹ cloud (LOS-0.11) in the Sgr A region, another LOS is towards the 50 km s⁻¹ cloud (LOS-0.02), also in Sgr A, while the third is towards the Sgr B2 cloud (LOS+0.693). The emission from the ionized gas is detected from Hnα and Hmβ radio recombination lines (RRLs). Henα and Hemβ RRL emission is detected with the same n and m as those from the hydrogen RRLs only towards LOS+0.693. RRLs probe gas with positive and negative velocities towards the two Sgr A sources. The Hmβ to Hnα ratios reveal that the ionized gas emission arises under local thermodynamic equilibrium conditions in these regions. We find a He to H mass fraction of 0.29±0.01, consistent with the typical GC value, supporting the idea that massive stars have increased the He abundance compared to its primordial value. Physical properties are derived for the studied sources. We propose that the negative velocity component of both Sgr A sources is part of gas streams considered previously to model the GC cloud kinematics. Massive stars associated with what are presumably the H II regions closest to LOS-0.11 (positive-velocity gas), LOS-0.02, and LOS+0.693 could be the main sources of ultraviolet photons ionizing the gas. The negative velocity components of both Sgr A sources might be ionized by the same massive stars, but only if they are in the same gas stream.
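As a quick illustration of how a helium-to-hydrogen mass fraction like the 0.29 quoted above relates to a number-density ratio, the short sketch below converts between the two, assuming only hydrogen and helium contribute to the mass (atomic masses of roughly 1 and 4); the input ratio is a made-up value, not the paper's measurement.

```python
def he_mass_fraction(y_number_ratio):
    """Mass fraction of He for a He/H number ratio y, ignoring heavier elements."""
    return 4.0 * y_number_ratio / (1.0 + 4.0 * y_number_ratio)

def he_number_ratio(mass_fraction):
    """Inverse relation: He/H number ratio for a given He mass fraction."""
    return mass_fraction / (4.0 * (1.0 - mass_fraction))

# A He/H number ratio of about 0.10 corresponds to a mass fraction near 0.29.
y = 0.102
print(f"Y = {he_mass_fraction(y):.2f}")
print(f"y for Y = 0.29: {he_number_ratio(0.29):.3f}")
```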
13. The 183-WSL Fast Rain Rate Retrieval Algorithm. Part II: Validation Using Ground Radar Measurements Science.gov (United States) Laviola, Sante; Levizzani, Vincenzo 2014-01-01 The Water vapour Strong Lines at 183 GHz (183-WSL) algorithm is a method for the retrieval of rain rates and precipitation type classification (convective/stratiform) that makes use of the water vapor absorption lines centered at 183.31 GHz of the Advanced Microwave Sounding Unit module B (AMSU-B) and of the Microwave Humidity Sounder (MHS) flying on the NOAA-15 to NOAA-18 and NOAA-19/MetOp-A satellite series, respectively. The characteristics of this algorithm were described in Part I of this paper together with comparisons against analogous precipitation products. The focus of Part II is the analysis of the performance of the 183-WSL technique based on surface radar measurements. The ground truth dataset consists of 2.5 years of rainfall intensity fields from the NIMROD European radar network, which covers North-Western Europe. The investigation of the 183-WSL retrieval performance is based on a twofold approach: 1) the dichotomous statistic is used to evaluate the capabilities of the method to identify rain and no-rain clouds; 2) the accuracy statistic is applied to quantify the errors in the estimation of rain rates. The results reveal that the 183-WSL technique shows good skill in the detection of rain/no-rain areas and in the quantification of rain rate intensities. The categorical analysis shows annual values of the POD, FAR and HK indices varying in the ranges 0.80-0.82, 0.33-0.36 and 0.39-0.46, respectively. The RMSE value is 2.8 millimeters per hour for the whole period, despite an overestimation in the retrieved rain rates. Of note is the distribution of the 183-WSL monthly mean rain rate with respect to radar: the seasonal fluctuations of the average rainfall measured by radar are reproduced by the 183-WSL. However, the retrieval method appears to suffer in winter conditions, especially when the soil is partially frozen and the surface emissivity drastically changes. This fact is verified by observing the discrepancy distribution diagrams, where the 183-WSL
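The dichotomous verification mentioned above (POD, FAR, HK) is computed from a 2x2 rain/no-rain contingency table. The snippet below is a small, generic sketch of those scores plus RMSE; the counts and rain-rate arrays are invented placeholders, not NIMROD or 183-WSL data.

```python
import numpy as np

def dichotomous_scores(hits, false_alarms, misses, correct_negatives):
    """Standard categorical scores from a 2x2 rain/no-rain contingency table."""
    pod = hits / (hits + misses)                      # probability of detection
    far = false_alarms / (hits + false_alarms)        # false alarm ratio
    pofd = false_alarms / (false_alarms + correct_negatives)
    hk = pod - pofd                                   # Hanssen-Kuipers discriminant
    return pod, far, hk

def rmse(estimated, reference):
    est, ref = np.asarray(estimated), np.asarray(reference)
    return float(np.sqrt(np.mean((est - ref) ** 2)))

# Placeholder numbers for illustration only.
pod, far, hk = dichotomous_scores(hits=800, false_alarms=400, misses=200,
                                  correct_negatives=8600)
print(f"POD={pod:.2f}  FAR={far:.2f}  HK={hk:.2f}")
print(f"RMSE={rmse([1.2, 3.4, 0.0], [1.0, 2.8, 0.4]):.2f} mm/h")
```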
14. Seismic detection. Part II. Nuclear events. Volume II. 1973--1974 (a bibliography with abstracts). Report for 1973--74 International Nuclear Information System (INIS) Habercom, G.E. Jr. 1976-03-01 Methods of seismic detection of nuclear events are investigated in these Government-sponsored research reports. (This updated bibliography contains 124 abstracts, none of which are new entries to the previous edition.) For general seismic detection, see Part 1, NTIS/PS-76/0206 15. Neuro-Oftalmologia: sistema sensorial -- Parte II Revisão 1997 -- 1999 Neuro-Ophthalmology: sensorial system - Part II. Review 1997 - 1999 Directory of Open Access Journals (Sweden) Marco Aurélio Lana-Peixoto 2002-03-01 This is the second part of a review of the literature on the sensory visual system. The author selects papers published between 1997 and 1999 related to neuroretinitis, compressive optic neuropathy, optic nerve tumors, pseudotumor cerebri, hereditary optic neuropathies, optic nerve hypoplasia, optic disc drusen, toxic optic neuropathy, traumatic optic neuropathy, other optic neuropathies and retinal diseases, diseases of the optic chiasm and optic tract, as well as geniculate and retrogeniculate disorders, including cortical visual disturbances. The selected papers are presented and discussed with respect to their conclusions, scope and relation to previously established knowledge. 16. Control of uncertain systems by feedback linearization with neural networks augmentation. Part II. Controller validation by numerical simulation Directory of Open Access Journals (Sweden) Adrian TOADER 2010-09-01 Full Text Available The paper was conceived in two parts. Part I, previously published in this journal, highlighted the main steps of adaptive output feedback control for non-affine uncertain systems having a known relative degree. The main paradigm of this approach was feedback linearization (dynamic inversion) with neural network augmentation. Meanwhile, based on new contributions of the authors, a new paradigm, that of the robust servomechanism problem solution, has been added to the controller architecture. The current Part II of the paper presents the validation of the controller hereby obtained, using the longitudinal channel of a hovering VTOL-type aircraft as the mathematical model. 17. Uncertainty estimation with a small number of measurements, part II: a redefinition of uncertainty and an estimator method Science.gov (United States) Huang, Hening 2018-01-01 This paper is the second (Part II) in a series of two papers (Part I and Part II). Part I quantitatively discussed the fundamental limitations of the t-interval method for uncertainty estimation with a small number of measurements. This paper (Part II) reveals that the t-interval is an 'exact' answer to a wrong question; it is actually misused in uncertainty estimation. This paper proposes a redefinition of uncertainty, based on the classical theory of errors and the theory of point estimation, and a modification of the conventional approach to estimating measurement uncertainty. It also presents an asymptotic procedure for estimating the z-interval. The proposed modification is to replace the t-based uncertainty with an uncertainty estimator (mean- or median-unbiased). The uncertainty estimator method is an approximate answer to the right question in uncertainty estimation. The modified approach provides realistic estimates of uncertainty, regardless of whether the population standard deviation is known or unknown, or whether the sample size is small or large. As an application example of the modified approach, this paper presents a resolution to the Du-Yang paradox (i.e. Paradox 2), one of the three paradoxes caused by the misuse of the t-interval in uncertainty estimation.
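To make the contrast discussed above concrete, the sketch below computes, for a small hypothetical sample, the conventional t-based expanded uncertainty of the mean alongside the corresponding z-based (normal) interval; the data and the 95% coverage level are assumptions for illustration, not the author's proposed estimator.

```python
import numpy as np
from scipy import stats

# Hypothetical repeated measurements of the same quantity.
x = np.array([10.03, 10.08, 9.97, 10.05, 10.02])
n = x.size
mean = x.mean()
s = x.std(ddof=1)          # sample standard deviation
sem = s / np.sqrt(n)       # standard error of the mean

coverage = 0.95
t_half_width = stats.t.ppf(0.5 + coverage / 2, df=n - 1) * sem
z_half_width = stats.norm.ppf(0.5 + coverage / 2) * sem

print(f"mean = {mean:.3f}, standard uncertainty (SEM) = {sem:.3f}")
print(f"95% t-interval half-width: {t_half_width:.3f}")
print(f"95% z-interval half-width: {z_half_width:.3f}")
```

With only a handful of measurements the t half-width is noticeably wider than the z half-width, which is exactly the regime the paper examines.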
18. Prevalência de Sintomas de Ansiedade e Depressão em Estudantes de Medicina Directory of Open Access Journals (Sweden) Tatheane Couto de Vasconcelos Full Text Available Objectives: To determine the prevalence of symptoms of anxiety and depression in medical students and to assess associated factors. Methods: Cross-sectional study with 234 students who answered an electronic questionnaire with sociodemographic variables and the Hospital Anxiety and Depression Scale (Ehad). Results: Regarding anxiety, the mean Ehad score was 6.7 (SD ± 3.4), with 34.3% (80) presenting false-positive anxiety symptoms and 19.7% (46) showing symptoms suggestive of the disorder. Regarding depression, the mean Ehad score was 4.4 (SD ± 3.1), with 19.3% (45) presenting false-positive depression symptoms and 5.6% (13) showing symptoms suggestive of the disorder. In the univariate analysis, the use of psychoactive drugs was associated with the presence of anxiety symptoms; for depression symptoms, coming from the RMR was a protective factor, whereas illicit drug use was associated with risk. Conclusion: The prevalence of symptoms of anxiety and depression associated with the use of psychoactive and illicit drugs, respectively, indicates the need for early prevention and diagnostic measures. 19. Relación entre ansiedad y actitud hacia los feminicidios Directory of Open Access Journals (Sweden) Paloma María Guadalupe Montiel Merino 2014-01-01 Full Text Available Initially, hearing about the death of a woman after she had been deprived of her liberty and sexually assaulted sounded like an isolated case in which circumstances had "facilitated" that event. The number and frequency of such events made it necessary to give a specific signifier to such a specific signified: feminicide. Social and psychological interest led to investigating how these feminicides influence the attitude and emotions of women in Ciudad Juárez, specifically their anxiety levels; the present work aims to report the results obtained in that investigation. In order to establish the relation between attitude toward feminicides and anxiety, two study groups were taken: female psychology students (n=181) and female maquila workers (n=186), to whom two measurement instruments were administered (the State-Trait Anxiety Inventory and an Inventory of Attitude toward Feminicides) in order to: (a) determine the levels of anxiety (state-trait) and of attitude toward feminicides in each study group, (b) correlate those levels, and (c) compare the results of the two groups. No significant differences were found between the groups regarding attitude toward feminicides; however, a significant difference was found, with higher state-trait anxiety levels among the maquila workers than among the psychology students. 20. Contributions to North American Ethnology, Volume II, Part II: The Klamath Indians of southwestern Oregon: dictionary of the Klamath language Science.gov (United States) Gatschet, Albert Samuel; Powell, John Wesley 1890-01-01 The present Dictionary, divided in two parts, contains the lexical portion of an Oregonian language never before reduced to writing. In view of the numerous obstacles and difficulties encountered in the preparation of such a work, a few hints upon its origin and tendencies will be of service in directing the studies of those who wish to acquire a more intimate knowledge of this energetic and well developed western language. 1.
Comparación de dos procedimientos de inducción colectiva de ansiedad OpenAIRE Nuevo, Roberto; Cabrera, Isabel; Montorio Cerrato, Ignacio; Márquez González, María 2008-01-01 Objective: This study compares the results of the group administration of two anxiety-induction procedures. Method: 62 psychology students were randomly assigned to two experimental anxiety-induction conditions, using a between-groups design with repeated pre-post measures. Condition A involved watching scenes from anxiety-inducing films, and condition B the reading of self-referential sentences with threatening content combined with listening to a ... 2. Ansiedad manifiesta en jóvenes adolescentes con sobrepeso y obesidad OpenAIRE Edith Pompa Guajardo; Cecilia Meza Peña 2014-01-01 Obesity is a serious health problem in Mexico and is accompanied by a high number of psychological comorbidities, so this study set out to assess the presence of anxiety in an adolescent population in relation to their weight and height. A total of 601 adolescents of both sexes took part in the study; they completed the Children's Manifest Anxiety Scale, provided sociodemographic data and had their height and weight measured. The results are consistent ... 3. Genes candidatos para la comorbilidad entre trastornos de ansiedad y trastornos adictivos. OpenAIRE Gallego Moreno, Xavier 2008-01-01 This doctoral thesis consisted of the study of the mechanisms involved in the comorbidity between drug abuse and anxiety disorders, as well as of the brain regions responsible for the coexistence of these two disorders in transgenic models. We focused on two large gene families that could act as a genetic interface in the comorbidity between anxiety and substance-abuse disorders: neurotrophins, for their role in neurodevelopment and in... 4. Tratamento do transtorno de ansiedade social em crianças e adolescentes OpenAIRE Isolan, Luciano; Pheula, Gabriel; Manfro, Gisele Gus 2007-01-01 BACKGROUND: Social anxiety disorder is a disabling and highly prevalent disorder in children and adolescents, with a lifetime prevalence, according to DSM-IV criteria, ranging from 0.7% to 3.5%. If untreated, it can interfere with emotional, social and school functioning. OBJECTIVES: To evaluate the current evidence for the efficacy and effectiveness of pharmacological and psychotherapeutic interventions in the treatment of social anxiety disorder in childhood and adolescence. METHODS: Searches were... 5. Ansiedade em situações de prova: evidências de validade de duas escalas OpenAIRE Karino, Camila Akemi; Laros, Jacob A. 2014-01-01 Competition, social and personal pressure and the possibility of failure are some of the factors that can make a test situation a stressful, anxiety-generating event. In this context, this study aimed to make available in Brazil two instruments for assessing test anxiety and to demonstrate evidence of the validity of these scales. A total of 1,878 high school students from public and private schools in Brasília took part in the study. Two anxiety instruments... 6. Ansiedad, síndrome de piernas inquietas y onicofagia en estudiantes de medicina.
OpenAIRE Pedraz-Petrozzi, Bruno; Pilco-Inga, Jorge; Vizcarra-Pasapera, Joaquín; Osada-Liy, Jorge; Ruiz-Grosso, Paulo; Vizcarra-Escobar, Darwin 2015-01-01 Objectives: To describe the frequency of anxiety, onychophagia and restless legs syndrome (RLS) in medical students and to explore the relation between them. Materials and methods: 315 students from the first to the fifth year of medicine at a private university in Lima took part. The Beck Anxiety Inventory (BAI), a Likert scale for onychophagia, the RLS Epidemiological Studies Inventory (García-Borreguero) and the RLS Inventory (Grupo Internacional de SP...) were administered. 7. Nuclear power plant simulators for operator licensing and training. Part I. The need for plant-reference simulators. Part II. The use of plant-reference simulators International Nuclear Information System (INIS) Rankin, W.L.; Bolton, P.A.; Shikiar, R.; Saari, L.M. 1984-05-01 Part I of this report presents the technical justification for the use of plant-reference simulators in the licensing and training of nuclear power plant operators and examines alternatives to the use of plant-reference simulators. The technical rationale is based on research on the use of simulators in other industries, psychological learning and testing principles, expert opinion and user opinion. Part II discusses the central considerations in using plant-reference simulators for licensing examination of nuclear power plant operators and for incorporating simulators into nuclear power plant training programs. Recommendations are presented for the administration of simulator examinations in operator licensing that reflect the goal of maximizing both reliability and validity in the examination process. A series of organizational tasks that promote the acceptance, use, and effectiveness of simulator training as part of the onsite training program is delineated. 8. Fermionic quantum systems. Part I: Phase transitions in quantum dots. Part II: Nuclear matter on a lattice Science.gov (United States) Muller, Hans-Michael 1999-11-01 In the first part I perform Hartree-Fock calculations to show that quantum dots (i.e., two-dimensional systems of up to twenty interacting electrons in an external parabolic potential) undergo a gradual transition to a spin-polarized Wigner crystal with increasing magnetic field strength. The phase diagram and ground state energies have been determined. I tried to improve the ground state of the Wigner crystal by introducing a Jastrow ansatz for the wave function and performing a variational Monte Carlo calculation. The existence of so-called magic numbers was also investigated. Finally, I also calculated the heat capacity associated with the rotational degree of freedom of deformed many-body states and suggested an experimental method to detect Wigner crystals. The second part of the thesis investigates infinite nuclear matter on a cubic lattice. The exact thermal formalism describes nucleons with a Hamiltonian that accommodates on-site and next-neighbor parts of the central, spin-exchange and isospin-exchange interaction. Using auxiliary field Monte Carlo methods, I show that energy and basic saturation properties of nuclear matter can be reproduced. A first-order phase transition from an uncorrelated Fermi gas to a clustered system is observed by computing mechanical and thermodynamical quantities such as compressibility, heat capacity, entropy and grand potential. The structure of the clusters is investigated with the help of two-body correlations.
I compare symmetry energy and first sound velocities with the literature and find reasonable agreement. I also calculate the energy of pure neutron matter and search for a similar phase transition, but the survey is restricted by the infamous Monte Carlo sign problem. Also, a regularization scheme to extract potential parameters from scattering lengths and effective ranges is investigated. 9. Part I: β-delayed fission, laser spectroscopy and shape-coexistence studies with astatine beams; Part II: Delineating the island of deformation in the light gold isotopes by means of laser spectroscopy CERN Document Server Andreyev, Andrei 2013-01-01 Part I: β-delayed fission, laser spectroscopy and shape-coexistence studies with astatine beams; Part II: Delineating the island of deformation in the light gold isotopes by means of laser spectroscopy 10. Relación entre los trastornos por ansiedad y alteraciones del oído interno Directory of Open Access Journals (Sweden) Heydy Luz Chica Urzola 2010-01-01 Anxiety is a normal adaptive process in circumstances that generate stress or represent a challenge for the person experiencing it, and it can become maladaptive under some circumstances. As part of the symptomatic spectrum of anxiety, some of its somatic manifestations relate to balance and the inner ear. A relation between neuropsychiatric and otorhinolaryngological symptomatology is frequently found, as international statistics indicate. In Colombia there are no specific data on the matter, although information on individual pathologies is available. The approach to these patients should be comprehensive, with as much clinical and paraclinical information available as possible according to the diagnostic suspicion, and realistic achievement indicators should be established with the patient on a case-by-case basis. The causality, or the order of appearance, of the predominantly psychiatric versus the otological symptomatology is variable and in many cases appears circular. Viewing these symptoms only from psychiatry or only from otorhinolaryngology is often an insufficient approach and requires the integration of bodies of knowledge. To that end, explanatory theories have been proposed in several directions and new diagnostic categories have even been added. Treatment covers a range of possibilities that includes physical conditioning therapy and vestibular rehabilitation, psychotherapy and pharmacotherapy, within which the use of selective serotonin reuptake inhibitors is recommended. 11. Relação entre ansiedade, depressão e desesperança entre grupos de idosos Relación entre ansiedad, depresión y desesperanza entre grupos de ancianos Relations between anxiety, depression and hopelessness among elderly groups Directory of Open Access Journals (Sweden) Katya Luciane de Oliveira 2006-08-01 Aging is a stage of life that remains little known and little studied compared with other phases of human development. This research sought to explore the relation between anxiety, depression and hopelessness among groups of elderly people. 79 elderly people took part in this study, coming from a senior center, a medication dispensing post and a nursing home. A questionnaire was used to collect characterization data on the elderly, and the Beck scales were used to measure symptoms of anxiety, depression and hopelessness.
The results showed a statistically significant relation between anxiety, depression and hopelessness in these older adults. The institutionalized group showed a higher incidence of anxious, depressive and hopeless symptoms than the other two groups. 12. Determinar aportes de la Escala de Ansiedad de Spence en una población infantil: Su relación con los trastornos temporomandibulares OpenAIRE Nucciarone, Milena; Rimoldi, Marta Lidia; Ruiz, Miriam Ester; Levalle, María José; Lambruschini, Vanessa Alejandra; Beti, María Mónica; Hernández, Sandra Fabiana; Jáuregui, Rossana Miriam; Molinari, María Emelina; Capece, María del Carmen; Llanos, Antonella; Maurer, Florencia 2017-01-01 Anxiety is one of the most important and frequent psychological problems in childhood. We speak of anxiety when it interferes with the normal course of children's lives, and also when its manifestations are very intense. Its symptoms can be classified as: separation anxiety disorder, panic, social phobia, obsessive-compulsive disorder, generalized anxiety, and fears. The aim of this work was to analyze the results of the Spence anxiety scale... 13. Peak-summer East Asian rainfall predictability and prediction part II: extratropical East Asia Science.gov (United States) Yim, So-Young; Wang, Bin; Xing, Wen 2016-07-01 Part II of the present study focuses on northern East Asia (NEA: 26°N-50°N, 100°-140°E), exploring the source and limit of the predictability of the peak summer (July-August) rainfall. Prediction of NEA peak summer rainfall is extremely challenging because of the exposure of the NEA to midlatitude influence. By examining four coupled climate models' multi-model ensemble (MME) hindcast during 1979-2010, we found that the domain-averaged MME temporal correlation coefficient (TCC) skill is only 0.13. It is unclear whether the dynamical models' poor skills are due to limited predictability of the peak-summer NEA rainfall. In the present study we attempted to address this issue by applying the predictable mode analysis method using 35-year observations (1979-2013).
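The TCC skill quoted in this record is simply the temporal (Pearson) correlation between predicted and observed July-August rainfall anomalies at each grid point, averaged over the NEA domain. A minimal sketch of that metric follows; the array names and shapes are hypothetical and are not taken from the paper.

```python
import numpy as np

def domain_averaged_tcc(pred, obs):
    """Domain-averaged temporal correlation coefficient (TCC) skill.

    pred, obs: arrays of shape (n_years, n_lat, n_lon) holding predicted and
    observed seasonal rainfall anomalies (hypothetical layout).
    """
    pa = pred - pred.mean(axis=0)   # anomalies about the hindcast climatology
    oa = obs - obs.mean(axis=0)
    num = (pa * oa).sum(axis=0)     # covariance term at each grid point
    den = np.sqrt((pa ** 2).sum(axis=0) * (oa ** 2).sum(axis=0))
    tcc = num / den                 # Pearson correlation per grid point
    return float(np.nanmean(tcc))   # average over the domain

# Usage with placeholder data: 35 hindcast years on a coarse grid.
rng = np.random.default_rng(0)
pred = rng.standard_normal((35, 13, 21))
obs = 0.4 * pred + rng.standard_normal((35, 13, 21))
print(round(domain_averaged_tcc(pred, obs), 2))
```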
Four empirical orthogonal modes of variability and associated major potential sources of variability are identified: (a) an equatorial western Pacific (EWP)-NEA teleconnection driven by EWP sea surface temperature (SST) anomalies, (b) a western Pacific subtropical high and Indo-Pacific dipole SST feedback mode, (c) a central Pacific-El Nino-Southern Oscillation mode, and (d) a Eurasian wave train pattern. Physically meaningful predictors for each principal component (PC) were selected based on analysis of the lead-lag correlations with the persistent and tendency fields of SST and sea-level pressure from March to June. A suite of physical-empirical (P-E) models is established to predict the four leading PCs. The peak summer rainfall anomaly pattern is then objectively predicted by using the predicted PCs and the corresponding observed spatial patterns. A 35-year cross-validated hindcast over the NEA yields a domain-averaged TCC skill of 0.36, which is significantly higher than the MME dynamical hindcast (0.13). The estimated maximum potential attainable TCC skill averaged over the entire domain is around 0.61, suggesting that the current dynamical prediction models leave considerable room for improvement 14. Impacts of Realistic Urban Heating. Part II: Air Quality and City Breathability Science.gov (United States) Nazarian, Negin; Martilli, Alberto; Norford, Leslie; Kleissl, Jan 2018-03-01 Urban morphology and inter-building shadowing result in a non-uniform distribution of surface heating in urban areas, which can significantly modify the urban flow and thermal field. In Part I, we found that in an idealized three-dimensional urban array, the spatial distribution of the thermal field is correlated with the orientation of surface heating with respect to the wind direction (i.e. leeward or windward heating), while the dispersion field changes more strongly with the vertical temperature gradient in the street canyon. Here, we evaluate these results more closely and translate them into metrics of "city breathability," with large-eddy simulations coupled with an urban energy-balance model employed for this purpose. First, we quantify breathability by (i) calculating the pollutant concentration at the pedestrian level (horizontal plane at z ≈ 1.5-2 m) and averaged over the canopy, and (ii) examining the air exchange rate at the horizontal and vertical ventilating faces of the canyon, such that the in-canopy pollutant advection is distinguished from the vertical removal of pollution. Next, we quantify the change in breathability metrics as a function of previously defined buoyancy parameters, horizontal and vertical Richardson numbers (Ri_h and Ri_v, respectively), which characterize realistic surface heating. We find that, unlike the analysis of airflow and thermal fields, consideration of the realistic heating distribution is not crucial in the analysis of city breathability, as the pollutant concentration is mainly correlated with the vertical temperature gradient (Ri_v) as opposed to the horizontal (Ri_h) or bulk (Ri_b) thermal forcing. Additionally, we observe that, due to the formation of the primary vortex, the air exchange rate at the roof level (the horizontal ventilating faces of the building canyon) is dominated by the mean flow. Lastly, since Ri_h and Ri_v depend on the meteorological factors (ambient air temperature, wind speed, and 15. Market Analysis and Consumer Impacts Source Document. Part II.
Review of Motor Vehicle Market and Consumer Expenditures on Motor Vehicle Transportation Science.gov (United States) 1980-12-01 This source document on motor vehicle market analysis and consumer impacts consists of three parts. Part II consists of studies and reviews on: motor vehicle sales trends; motor vehicle fleet life and fleet composition; car buying patterns of the busi... 16. Part I. Inviscid, swirling flows and vortex breakdown. Part II. A numerical investigation of the Lundgren turbulence model International Nuclear Information System (INIS) Buntine, J.D. 1994-01-01 Part I. A study of the behaviour of an inviscid, swirling fluid is performed. This flow can be described by the Squire-Long equation if the constraints of time-independence and axisymmetry are invoked. The particular case of flow through a diverging pipe is selected and a study is conducted to determine over what range of parameters a solution exists. The work is performed with a view to understanding how the phenomenon of vortex breakdown develops. Experiments and previous numerical studies have indicated that the flow is sensitive to boundary conditions, particularly at the pipe inlet. A "quasi-cylindrical" amplification of the Squire-Long equation is compared with the more complete model and shown to be able to account for most of its behaviour. An advantage of this latter representation is the relatively undetailed description of the flow geometry it requires in order to calculate a solution. "Criticality", or the ability of small disturbances to propagate upstream, is related to results of the quasi-cylindrical and axisymmetric flow models. This leads to an examination of claims made by researchers such as Benjamin and Hall concerning the interrelationship between "failure" of the quasi-cylindrical model and the occurrence of a "critical" flow state. Lundgren developed an analytical model for homogeneous turbulence based on a collection of contracting spiral vortices each embedded in an axisymmetric strain field. Using asymptotic approximations he was able to deduce the Kolmogorov k^(-5/3) behaviour for inertial scales in the turbulence energy spectrum. Pullin & Saffman have enlarged upon his work to make a number of predictions about the behaviour of turbulence described by the model. This work investigates the model numerically. The first part considers how the flow description compares with numerical simulations using the Navier-Stokes equations 17. Propriedades termofísicas de soluções-modelo similares a sucos: parte II Thermophysical properties of model solutions similar to juice: part II Directory of Open Access Journals (Sweden) Sílvia Cristina Sobottka Rolim de Moura 2005-09-01 Full Text Available Thermophysical properties, density and viscosity of model solutions similar to juices were determined experimentally. The results were compared with values predicted by mathematical models (STATISTICA 6.0) and with values from the literature, as a function of chemical composition. To define the model solutions, a star experimental design was used, keeping the acid content fixed at 1.5% and varying the water (82-98.5%), carbohydrate (0-15%) and fat (0-1.5%) contents. Density was determined with a pycnometer. Viscosity was determined with a Brookfield model LVF viscometer.
Thermal conductivity was calculated from the thermal diffusivity and specific heat (presented in Part I of this work, MOURA [7]) and from the density. The results for each property were analyzed by means of response surfaces. Significant results were found for the properties, showing that the fitted models represent the changes in the thermal and physical properties of the juices as composition and temperature change. 18. Nutrição em Unidade de Cuidados Intensivos - Parte II Directory of Open Access Journals (Sweden) Cecília Mendonça 1996-07-01 Full Text Available ABSTRACT: In this Part II of the article "Nutrição em Unidade de Cuidados Intensivos" (Nutrition in the Intensive Care Unit), the authors favour the enteral route for administering nutrients to patients with a functioning gastrointestinal tract. The article covers when and how to start enteral feeding and the contraindications associated with its use, the feeding tubes employed and the placement techniques. When enteral feeding is indicated for prolonged periods, it may be important to discuss the advantages of performing a gastrostomy or enterostomy. Misplacement of the tube in the tracheobronchial tree, aspiration and diarrhoea are the main complications of enteral feeding. Enteral feeding can be delivered intermittently or continuously. Gastric residuals should be assessed regularly and, if necessary, prokinetic agents can be used. Special enteral formulas are discussed, aimed in particular at patients with COPD, renal failure and diabetes. The problems of refeeding and overfeeding are addressed. Finally, the indications for and monitoring of parenteral nutrition are discussed. 19. Characterization of sugar cane bagasse: part II: fluid dynamic characteristics; Caracterizacion del bagazo de la cana de azucar: parte II: caracteristicas fluidodinamicas Energy Technology Data Exchange (ETDEWEB) Alarcon, Guillermo A.
Roca [Universidad de Oriente (CEEFE/UO), Santiago de Cuba (Cuba). Centro de Estudios de Eficiencia Energetica], Emails: [email protected], [email protected]; Sanchez, Caio Glauco [Universidade Estadual de Campinas (FEM/UNICAMP), SP (Brazil). Fac. de Engenharia Mecanica], Email: [email protected]; Gomez, Edgardo Olivares [Universidade Estadual de Campinas (NIPE/UNICAMP), SP (Brazil). Nucleo Interdisciplinar de Planejamento Energetico], Emails: [email protected], [email protected]; Cortez, Luis Augusto Barbosa [Universidade Estadual de Campinas (NIPE/FEAGRI/UNICAMP), SP (Brazil). Fac. de Engenharia Agricola. Nucleo Interdisciplinar de Planejamento Energetico], Email: [email protected] 2006-07-01 This paper is the second part of a general study about physic-geometrical and fluid-dynamics characteristic of the sugarcane bagasse particles. These properties has relevant importance on the dimensions and operation of the equipment for transport and treatment of solid particles. Was used the transport column method for the determination of the drag velocity and later on the drag coefficient of the sugarcane bagasse particles was calculated. Both, the installation and experimental technique used for materials of these characteristics are simple and innovations tools, but rigorous conceptually, thus the results obtained are reliable. Were used several sugarcane bagasse fractions of particles of known mean diameter. The properties determined were expressed as a function of Reynolds and Archimedes a dimensional criteria. The best considered model from statistical analysis (model from equation 8) was statistically validated for determined ranges of Reynolds and Archimedes. These empirical equations can be used to determine these properties in the range and conditions specified and also for modeling some processes where these fractions are employed. (author) 20. Technical Information on the Carbonation of the EBR-II Reactor, Summary Report Part 1: Laboratory Experiments and Application to EBR-II Secondary Sodium System Energy Technology Data Exchange (ETDEWEB) Steven R. Sherman 2005-04-01 Residual sodium is defined as sodium metal that remains behind in pipes, vessels, and tanks after the bulk sodium metal has been melted and drained from such components. The residual sodium has the same chemical properties as bulk sodium, and differs from bulk sodium only in the thickness of the sodium deposit. Typically, sodium is considered residual when the thickness of the deposit is less than 5-6 cm. This residual sodium must be removed or deactivated when a pipe, vessel, system, or entire reactor is permanently taken out of service, in order to make the component or system safer and/or to comply with decommissioning regulations. As an alternative to the established residual sodium deactivation techniques (steam-and-nitrogen, wet vapor nitrogen, etc.), a technique involving the use of moisture and carbon dioxide has been developed. With this technique, sodium metal is converted into sodium bicarbonate by reacting it with humid carbon dioxide. Hydrogen is emitted as a by-product. This technique was first developed in the laboratory by exposing sodium samples to humidified carbon dioxide under controlled conditions, and then demonstrated on a larger scale by treating residual sodium within the Experimental Breeder Reactor II (EBR-II) secondary cooling system, followed by the primary cooling system, respectively. The EBR-II facility is located at the Idaho National Laboratory (INL) in southeastern Idaho, U.S.A. 
This report is Part 1 of a two-part report. It is divided into three sections. The first section describes the chemistry of carbon dioxide-water-sodium reactions. The second section covers the laboratory experiments that were conducted in order to develop the residual sodium deactivation process. The third section discusses the application of the deactivation process to the treatment of residual sodium within the EBR-II secondary sodium cooling system. Part 2 of the report, under separate cover, describes the application of the technique to residual sodium 1. Overlooked Talent Sources and Corporate Strategies for Affirmative Action. Part II Science.gov (United States) Iacobelli, John L.; Muczyk, Jan P. 1975-01-01 Part Two of the two-part article describes corporate strategies for affirmative action in order to obtain the most qualified individuals available for professional positions among minority and female candidates. (Author/BP) 2. Engineering studies on joint bar integrity, part II: finite element analysis Science.gov (United States) 2014-04-02 This paper is the second in a two-part series describing research sponsored by the Federal Railroad Administration (FRA) to study the structural integrity of joint bars. In Part I, observations from field surveys of joint bar inspections cond... 3. THE FOOTWEAR DESIGNING SESSION USING CRISPIN DYNAMICS ENGINEER. PART II: Creating the parts, Estimating the material consumption, Grading Directory of Open Access Journals (Sweden) IOVAN-DRAGOMIR Alina 2015-05-01 The diversification and customization of products are important characteristics of the modern economy and especially of the fashion industry. Because of this, the lifetime of a footwear product is very short, which makes it necessary to cut design and production time. With the classic methodology, designing footwear is a very complex and laborious activity, because it requires many graphic executions by manual means, which consume a lot of the producer's time. With CRISPIN Dynamics, one can visualize a range of designs on-screen, work out the costs of a new style and even cut out sample shoe components. Reliance on manual skills is largely eliminated, so the staff can work creatively, but with increased accuracy and productivity. One can even send designs to a distant office or manufacturing centre in a matter of minutes. This paper presents the basic functions of CRISPIN Dynamics CAD Suite Engineer for footwear design. The process of new product development has six steps: digitized form of the medium copy, last flattening, model drawing, creation and management of individual parts, estimation of material consumption, and multiplying (grading) the designed footwear product's pattern. This product has been developed for shoemakers who wish to ensure that their business remains competitive by increasing the efficiency, speed and accuracy of pattern development and grading. 4. Intervenciones musicales para la ansiedad odontológica en pacientes pediátricos y adultos Directory of Open Access Journals (Sweden) Marta Sanjuán Navais 2016-02-01 Full Text Available Dental anxiety has been identified as a common problem in both children and adults. Dental anxiety refers to a state of apprehension that something terrible is going to happen in connection with dental treatment, combined with a sense of loss of control. One in six adults is reported to suffer some form of dental anxiety, and in children the figure is between 5.7% and 19.5%.
Patients with dental anxiety tend to neglect dental care, which poses a problem for both patients and dentists. A dentist-patient relationship dominated by severe anxiety can lead to misdiagnosis and inadequate treatment. Different treatment options exist for dental anxiety, ranging from explanation of the treatment process to pharmacological strategies, biofeedback, hypnosis and behavioural interventions. Music is thought to reduce anxiety through a relaxing or distracting effect, which in turn decreases the activity of the neuroendocrine and sympathetic nervous systems. 5. Análise fatorial do Questionário de Ansiedade Social para Adultos Directory of Open Access Journals (Sweden) Marcia Fortes Wagner 2017-01-01 Full Text Available Social anxiety disorder, or social phobia, is one of the most common psychological disorders, and a correct diagnosis requires assessment instruments with adequate psychometric properties. This article aims to describe the exploratory factor analysis of the Social Anxiety Questionnaire for Adults, a new instrument for assessing social anxiety disorder. The sample consisted of 894 subjects, 768 from a non-clinical population and 126 from a clinical population. This is an instrumental, quantitative, cross-sectional study. The Social Anxiety Questionnaire for Adults showed good psychometric properties and stability of the structure and nature of its five factors, with excellent internal consistency. It is concluded that the instrument has a clearly defined factor structure and is suitable for measuring social anxiety. 6. Atenção, ansiedade e raiva em dependentes químicos Directory of Open Access Journals (Sweden) Scheffer, Morgana 2009-01-01 Full Text Available This study aimed to analyze and compare complex diffuse attention, concentrated attention, complex concentrated attention, anxiety and anger across three groups: (1) controls; (2) cocaine/crack dependents; and (3) alcohol and cocaine/crack dependents. It is a cross-sectional, comparative, case-control study based on a non-random sample. Forty-nine male individuals aged 18 to 57 years took part and were assessed with the General Battery of Mental Functions 1 and 2, the State-Trait Anger Expression Inventory and the Beck Anxiety Inventory. The mean time of drug abstinence was 33.05 days (SD = 19.52). The results showed no significant differences in attention between the groups. However, there were differences in anxiety and anger levels between the drug-dependent groups and the controls. It is concluded that these individuals showed no cognitive impairment of attention, but did show emotional alterations in anxiety and anger. 7. Protocolo para el manejo de las crisis de ansiedad en el servicio de urgencias Directory of Open Access Journals (Sweden) Francisco Alejandro Múnera Galarza 1996-01-01 Full Text Available A protocol is presented to guide emergency physicians through the process of screening evaluation, initial control, diagnostic workup, definitive management and referral of patients presenting with anxiety attacks. 8. Evaluación neurofuncional del tallo cerebral Parte II: Reflejo mandibular = Neurofunctional evaluation of brain stem. II. Mandibular reflex Directory of Open Access Journals (Sweden) Leon Sarmiento, Fidias E.
2011-09-01 Full Text Available The mandibular (masseter) reflex has unique nerve connections, different from those of other human monosynaptic reflexes, and allows the brainstem to be evaluated easily and efficiently by means of mechanical, electrical or magnetic stimulation. Several studies have demonstrated the participation of brainstem interneurons in this reflex and its modulation by supraspinal structures, which are a fundamental part of its motor integration. The mandibular reflex is useful for assessing trigemino-trigeminal involvement in polyneuropathies such as diabetes, in neuromyopathies such as multiple sclerosis, and in patients with movement disorders, with or without oromandibular dysfunction. Neurofunctional evaluation of this craniofacial reflex helps identify brainstem sensorimotor integration and possible alterations of these reflex pathways caused by abnormalities of the central or peripheral nervous system. Its proper execution and interpretation, clinical and neurological, allows various neurorehabilitation protocols to be applied in a more personalized way, in order to help improve the quality of life of individuals with involvement of these neural pathways. 9. Programming an interim report on the SETL project. Part I: generalities. Part II: the SETL language and examples of its use Energy Technology Data Exchange (ETDEWEB) Schwartz, J T 1975-06-01 A summary of work during the past several years on SETL, a new programming language drawing its dictions and basic concepts from the mathematical theory of sets, is presented. The work was started with the idea that a programming language modeled after an appropriate version of the formal language of mathematics might allow a programming style with some of the succinctness of mathematics, and that this might ultimately enable one to express and experiment with more complex algorithms than are now within reach. Part I discusses the general approach followed in the work. Part II focuses directly on the details of the SETL language as it is now defined. It describes the facilities of SETL, includes short libraries of miscellaneous and of code optimization algorithms illustrating the use of SETL, and gives a detailed description of the manner in which the set-theoretic primitives provided by SETL are currently implemented. (RWR) 10. DIVWAG Model Documentation. Volume II. Programmer/Analyst Manual. Part 1. Science.gov (United States) 1976-07-01 [The indexed text consists only of OCR fragments of the manual's routine listings and flowcharts (routines GCLAST, GUPDAT and DMPTOE; unit geometry and target acquisition sample output from the Ground Combat Model); no readable abstract is recoverable.] 11. Flor de Citrus aurantium e ansiedade pré-operatória Directory of Open Access Journals (Sweden) Mahmood Akhlaghi 2011-12-01 Full Text Available BACKGROUND AND OBJECTIVES: Reducing anxiety before an operation is very important. The preoperative visit and the use of premedication are the most popular methods for achieving this goal, but the role of anxiolytic premedication remains uncertain, and postoperative side effects can result from routine premedication.
Citrus aurantium is used as an alternative medicine in some countries to treat anxiety, and the anxiolytic role of this medicinal plant was recently established in a study conducted in an animal model. The objective of this study was to evaluate the anxiolytic effect of Citrus aurantium blossom on preoperative anxiety. METHODS: Sixty ASA I patients undergoing minor surgery were studied. In a randomized, double-blind design, two groups of 30 patients received one of the following oral premedications two hours before induction of anesthesia: (1) Citrus aurantium distillate 1 mL.kg-1 (Group C); (2) saline solution 1 mL.kg-1 as placebo (Group P). Anxiety was measured before and after premedication with the State-Trait Anxiety Inventory (STAI) and the Amsterdam Preoperative Anxiety and Information Scale (APAIS) before the operation. RESULTS: After premedication, both STAI and APAIS scores were reduced in Group C (p < 0.05), whereas no significant changes were seen in Group P. CONCLUSIONS: Citrus aurantium may prove effective in reducing preoperative anxiety before minor surgery. 12. Abordaje psicoterapéutico grupal de ansiedad ante la Muerte en el adulto mayor institucionalizado desde el modelo integrativo OpenAIRE Campos Peralta, Diana Janneth 2015-01-01 In this research project, a group psychotherapeutic intervention based on the integrative model was developed and applied, capable of reducing levels of death anxiety in older adults. The sample comprised 12 patients of the Hogar Cristo Rey in the city of Cuenca, of both sexes, literate, aged between 65 and 80 years. To measure levels of death anxiety, the Revised Death Anxiety Scale was applied, administered... 13. Problemas de ansiedad en niños y adolescentes y su relación con variables cognitivas disfuncionales OpenAIRE Valderrama-Martos, Lidia 2016-01-01 This research aims to analyze anxiety disorders in children and adolescents from Málaga and its province, as well as their relationship with emotional factors (anxiety sensitivity and trait anxiety) and with dysfunctional cognitive variables that are important in the origin and maintenance of these disorders. To this end, a sample of 1,483 children and adolescents from the general population of Málaga was assessed (enrolled in primary education, secondary education, Bach... 14. Traço e estado de ansiedade de nutrizes com indicadores de hipogalactia e nutrizes com galactia normal OpenAIRE Aragaki, Ilva Marico Mizumoto; Silva, Isília Aparecida; Santos, Jair Lício Ferreira dos 2006-01-01 This study aimed to identify and compare trait and state anxiety on the 10th postpartum day, and state anxiety on the 30th puerperal day, in primiparous and multiparous nursing mothers who showed indicators of hypogalactia and in mothers with normal lactation, and to examine possible relationships between the mothers' state anxiety at these two moments and the presence of hypogalactia indicators. It is an exploratory/descriptive study whose data were obtained from 168 nursing mothers and their infants, by means... 15. Sintomatologia de Depressão e Ansiedade em Estudantes de uma Universidade Privada do Rio Grande do Sul OpenAIRE Maríndia Brandtner; Marucia P. Bardagi 2009-01-01 This study assessed symptoms of anxiety and depression in 200 university students, first-year and final-year, at a private university in Rio Grande do Sul.
Participants completed the BAI (Beck Anxiety Inventory) and the BDI (Beck Depression Inventory) in group administrations in classrooms. The main results indicated high comorbidity between depression and anxiety, higher levels of anxiety and depression among women than among men, and significantly higher rates... 16. AUTOMOTIVE DIESEL MAINTENANCE 2. UNIT XV, UNDERSTANDING DC GENERATOR PRINCIPLES (PART II). Science.gov (United States) Human Engineering Inst., Cleveland, OH. THIS MODULE OF A 25-MODULE COURSE IS DESIGNED TO DEVELOP AN UNDERSTANDING OF MAINTENANCE PROCEDURES FOR DIRECT CURRENT GENERATORS USED ON DIESEL POWERED EQUIPMENT. TOPICS ARE SPECIAL GENERATOR CIRCUITS, GENERATOR TESTING, AND GENERATOR POLARITY. THE MODULE CONSISTS OF A SELF-INSTRUCTIONAL PROGRAMED TRAINING FILM "DC GENERATORS II--GENERATOR… 17. Studies on the sediments of Vembanad Lake, Kerala state: Part II - Distribution of phosphorus Digital Repository Service at National Institute of Oceanography (India) Murty, P.S.N.; Veerayya, M. to the release of these elements into the waters and (iii) mixing with marine sediments containing low concentrations of Mn and Co. The enrichment of Ni and Cu can be due to their association with (i) organic matter of the sediments, (ii) ferric and manganese... 18. 40 CFR Appendix II to Part 1068 - Emission-Related Parameters and Specifications Science.gov (United States) 2010-07-01 ... dimension. 4. Camshaft timing. a. Valve opening—intake exhaust (degrees from top-dead center or bottom-dead center). b. Valve closing—intake exhaust (degrees from top-dead center or bottom-dead center). c. Valve... bottom-dead center). II. Intake Air System. 1. Roots blower/supercharger/turbocharger calibration. 2... 19. Movement Analysis Applied to the Basketball Jump Shot--Part II. Science.gov (United States) Martin, Thomas P. 1981-01-01 The jump shot is one of the most important shots in the game of basketball. The movement analysis of the jump shot designates four phases: (1) preparatory position; (2) movement phase I (crouch); (3) movement phase II (jump); and (4) follow-through. (JN) 20. Mammalian Toxicity of Munitions Compounds Phase III: Effects of Life-Time Exposure Part II: Trinitroglycerin Science.gov (United States) 1978-11-01 [The indexed text consists only of OCR fragments of a pathology-findings table (parasitism/pinworm infestation, eosinophilic peritonitis, periarteritis nodosa, focal myocarditis, panarteritis; organs listed include mesentery, trachea, lung, heart, arteries and stomach); no readable abstract is recoverable.] 1. Revisitando a Simon & Martens: la ansiedad competitiva en deportes de iniciación Directory of Open Access Journals (Sweden) Yago Ramis 2013-01-01 This work was conceived with the intention of recovering and updating a classic article in sport psychology, that of Simon and Martens from 1979. Like its predecessor, our study compares competitive trait anxiety in sports and recreational activities classified according to two grouping variables: Collaboration, which distinguishes individual from collective activities, and Skill, which separates activities requiring habitual skills from those requiring perceptual skills. A total of 643 athletes and 140 castellers (human-tower builders) completed the Competitive Anxiety Scale-2, and their scores were compared by analysis of variance as a function of the Collaboration and Skill variables and their interaction.
The results indicate that activities involving habitual skills show higher levels on the Somatic Anxiety and Worry factors. In addition, participants in individual sports report higher levels of Concentration Disruption than those taking part in collective sports or activities. An interactive effect of the Collaboration and Skill variables on Worry is also detected. An additional comparison was made between athletes and castellers, taken as a non-sport evaluative activity, revealing that the castellers' level of Somatic Anxiety is equivalent to that of the athletes, but that athletes report significantly higher levels on the Worry and Concentration Disruption variables. Finally, the importance of knowing the characteristics of each sporting discipline is discussed with a view to specific work with coaches and parents on anxiety prevention. 2. Ansiedade ao tratamento odontológico em atendimento de urgência Directory of Open Access Journals (Sweden) Kanegane Kazue 2003-01-01 OBJECTIVE: To assess the frequency of patients with anxiety or fear of dental treatment in an emergency service. METHODS: The study included 252 patients aged 18 or over who attended the emergency service of a dental school in São Paulo, SP, between August and November 2001. Anxiety was assessed with the Modified Dental Anxiety Scale (MDAS) and the Gatchel Fear Scale. The group studied answered questions about the time elapsed since the last visit to the dentist and since the onset of symptoms, schooling, family income and previous history of trauma. The results were analyzed with statistical tests (chi-square and Fisher's exact test). RESULTS: According to the MDAS, 28.2% of individuals had some degree of anxiety, with women considered more anxious than men (chi2 = 0.01); and 14.3% of patients had a high degree of fear according to the Gatchel Fear Scale. In 44.4% of the sample the delay in seeking relief of symptoms was more than seven days. Anxious women sought care more quickly and in greater numbers. A previous traumatic experience had occurred in 46.5% of the anxious patients. It was not possible to relate schooling or family income to anxiety and/or fear. CONCLUSIONS: Anxious patients, especially women, are frequent in emergency dental care. Previous traumatic experience proved important in the development of anxiety towards dental care. 3. Ansiedade ao tratamento odontológico em atendimento de urgência Directory of Open Access Journals (Sweden) Kazue Kanegane 2003-12-01 Full Text Available (duplicate record; the abstract is identical to that of item 2 above).
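As a rough illustration of the kind of contingency-table analysis named in item 2 (chi-square and Fisher's exact test), the sketch below runs both tests on a 2x2 table of sex versus anxiety; the counts are made up for the example and are not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = women / men, columns = anxious / not anxious.
table = np.array([[45, 90],
                  [26, 91]])

chi2, p_chi2, dof, expected = chi2_contingency(table)
odds_ratio, p_fisher = fisher_exact(table)

print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_chi2:.3f}")
print(f"Fisher's exact test: odds ratio = {odds_ratio:.2f}, p = {p_fisher:.3f}")
```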
4. Medical Malpractice in Dermatology-Part II: What To Do Once You Have Been Served with a Lawsuit. Science.gov (United States) Shah, Vidhi V; Kapp, Marshall B; Wolverton, Stephen E 2016-12-01 Facing a malpractice lawsuit can be a daunting and traumatic experience for healthcare practitioners, with most clinicians naïve to the legal landscape. It is crucial for physicians to know and understand the malpractice system and his or her role once challenged with litigation. We present part II of a two-part series addressing the most common medicolegal questions that cause a great deal of anxiety. Part I focused upon risk-management strategies and prevention of malpractice lawsuits, whereas part II provides helpful suggestions and guidance for the physician who has been served with a lawsuit complaint. Herein, we address the best approach concerning what to do and what not to do after receipt of a legal claim, during the deposition, and during the trial phases. We also discuss routine concerns that may arise during the development of the case, including the personal, financial, and career implications of a malpractice lawsuit and how these can be best managed. The defense strategies discussed in this paper are not a guide separate from legal representation to winning a lawsuit, but may help physicians prepare for and cope with a medical malpractice lawsuit. This article is written from a US perspective, and therefore not all of the statements made herein will be applicable in other countries. Within the USA, medical practitioners must be familiar with their own state and local laws and should consult with their own legal counsel to obtain advice about specific questions. 5. Not-for-profit versus for-profit health care providers--Part II: Comparing and contrasting their records. Science.gov (United States) Rotarius, Timothy; Trujillo, Antonio J; Liberman, Aaron; Ramirez, Bernardo 2006-01-01 The debate over which health care providers are most capably meeting their responsibilities in serving the public's interest continues unabated, and the comparisons of not-for-profit (NFP) versus for-profit (FP) hospitals remain at the epicenter of the discussion. From the perspective of available factual information, which of the two sides to this debate is correct? This article is part II of a 2-part series on comparing and contrasting the performance records of NFP health care providers with their FP counterparts. Although it is demonstrated that both NFP and FP providers perform virtuous and selfless feats on behalf of America's public, it is also shown that both camps have been accused of being involved in potentially willful clinical and administrative missteps.
Part I provided the background information (eg, legal differences, perspectives on social responsibility, and types of questionable and fraudulent behavior) required to adequately understand the scope of the comparison issue. Part II offers actual comparisons of the 2 organizational structures using several disparate factors such as specific organizational behaviors, approach to the health care priorities of cost and quality, and business-focused goals of profits, efficiency, and community benefit. 6. Social anxiety and negative early life events in university students Eventos negativos na infância e ansiedade social em estudantes universitários Directory of Open Access Journals (Sweden) Cynthia Binelli 2012-06-01 Full Text Available INTRODUCTION: There is substantial evidence regarding the impact of negative life events during childhood on the aetiology of psychiatric disorders. We examined the association between negative early life events and social anxiety in a sample of 571 Spanish University students. METHODS: In a cross-sectional survey conducted in 2007, we collected data through a semistructured questionnaire of sociodemographic variables, personal and family psychiatric history, and substance abuse. We assessed five early negative life events: (i) the loss of someone close, (ii) emotional abuse, (iii) physical abuse, (iv) family violence, and (v) sexual abuse. All participants completed the Liebowitz Social Anxiety Scale. RESULTS: Mean (SD) age was 21 (4.5) years, 75% were female, the mean LSAS score was 40 (SD = 22), 14.2% had a family psychiatric history and 50.6% had negative life events during childhood. Linear regression analyses, after controlling for age, gender, and family psychiatric history, showed a positive association between family violence and social anxiety score (p = 0.03). None of the remaining stressors produced a significant increase in LSAS score (p > 0.05). CONCLUSION: University students with high levels of social anxiety presented a higher prevalence of negative early life events. Thus, childhood family violence could be a risk factor for social anxiety in such a population.
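The adjusted association reported in item 6 corresponds to an ordinary least-squares regression of the LSAS score on the early-life stressor plus the listed covariates. A minimal sketch of that kind of model follows; the column names and the synthetic data are hypothetical and only illustrate the structure of the analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 571  # same order of magnitude as the survey; the data are entirely synthetic
df = pd.DataFrame({
    "family_violence": rng.integers(0, 2, n),       # early-life stressor (0/1)
    "age": rng.integers(18, 30, n),
    "female": rng.integers(0, 2, n),
    "family_psych_history": rng.integers(0, 2, n),
})
# Synthetic LSAS scores with a positive family-violence effect built in.
df["lsas"] = 30 + 8 * df["family_violence"] + 0.3 * df["age"] + rng.normal(0, 20, n)

# OLS of the LSAS score on the stressor, controlling for the covariates.
model = smf.ols("lsas ~ family_violence + age + female + family_psych_history",
                data=df).fit()
print(model.params["family_violence"], model.pvalues["family_violence"])
```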
Each section contains a discussion related to the topic at hand and answers to all the exercises. Chapter topics include: (1) symmetry, congruence, and the Pythagorean… 9. A study of flame spread in engineered cardboard fuelbeds: Part II: Scaling law approach Science.gov (United States) Brittany A. Adam; Nelson K. Akafuah; Mark Finney; Jason Forthofer; Kozo Saito 2013-01-01 In this second part of a two part exploration of dynamic behavior observed in wildland fires, time scales differentiating convective and radiative heat transfer is further explored. Scaling laws for the two different types of heat transfer considered: Radiation-driven fire spread, and convection-driven fire spread, which can both occur during wildland fires. A new... 10. A Course in Information Technology in Secondary Schools--Part II. Science.gov (United States) Thompson, D. K. 1983-01-01 Part 1 (SE 532 887) focused on the need for a secondary school information technology course. This part provides and describes content appropriate for the course, focusing on the three main themes of the course. Among the topics considered are technology/change, information in post-industrial society, population explosion, automated office, and… 11. Hyposplenism: a comprehensive review. Part II: clinical manifestations, diagnosis, and management. Science.gov (United States) William, Basem M; Thawani, Nitika; Sae-Tia, Sutthichai; Corazza, Gino R 2007-04-01 In the first part of this review, we described the physiological basis of splenic function and hypofunction. We also described the wide spectrum of diseases that can result in functional hyposplenism. In the second part of this review, we will be discussing the clinical picture, including complications, diagnostic methods, and management of hyposplenism. 12. Instrumentation: Photodiode Array Detectors in UV-VIS Spectroscopy. Part II. Science.gov (United States) Jones, Dianna G. 1985-01-01 A previous part (Analytical Chemistry; v57 n9 p1057A) discussed the theoretical aspects of diode ultraviolet-visual (UV-VIS) spectroscopy. This part describes the applications of diode arrays in analytical chemistry, also considering spectroelectrochemistry, high performance liquid chromatography (HPLC), HPLC data processing, stopped flow, and… 13. Empirical Psycho-Aesthetics and Her Sisters: Substantive and Methodological Issues--Part II Science.gov (United States) Konecni, Vladimir J. 2013-01-01 Empirical psycho-aesthetics is approached in this two-part article from two directions. Part I, which appeared in the Winter 2012 issue of "JAE," addressed definitional and organizational issues, including the field's origins, its relation to "sister" disciplines (experimental philosophy, cognitive neuroscience of art, and neuroaesthetics), and… 14. Field Surveys, IOC Valleys. Biological Resources Survey, Dry Lake Valley, Nevada. Volume II, Part I. Science.gov (United States) 1981-08-01 development of possible cultural resource mitigation mea- sures; and i! E-TR-48-1I-I o Native American consultations. The results of these additional tasks...dog (Cynomys parvidens) UT E Black footed ferret (Mustela nigripes) UT E Bald eagle (Haliaeetus leucocephalus) UT, NV E American peregrine falcon...Myocaster coypus River otter Lutra canadensis Other Animals Mountain beaver Aplodontia rufa Protected Pika Ochotona princeps Protected Douglas squirrel 15. Neutronics and thermohydraulics of the reactor C.E.N.E. Part II International Nuclear Information System (INIS) Caro, R. 
1976-01-01 In this report the analysis of neutronics thermohydraulics and shielding of the 10 HWt swimming pool reactor C.E.N.E is included. In each of these chapters is given a short description of the theoretical model used, along with the theoretical versus experimental checking carried out, whenever possible, with the reactors JEN-I and JEN-II of Junta de Energia Nuclear. (Author) 11 refs 16. Executive Summary: Parts on Demand Project (CATT Phase II) - Volume 1 of 2 National Research Council Canada - National Science Library Gates, Robert 1996-01-01 .... The GAO has profiled DoD challenges in: maintaining an aging aircraft fleet, reducing response time and cost of providing the spare parts, implementing commercial inventory management practices, reinventing buying and contracting practices... 17. Fundamental Limits of Blind Deconvolution Part II: Sparsity-Ambiguity Trade-offs OpenAIRE Choudhary, Sunav; Mitra, Urbashi 2015-01-01 Blind deconvolution is an ubiquitous non-linear inverse problem in applications like wireless communications and image processing. This problem is generally ill-posed since signal identifiability is a key concern, and there have been efforts to use sparse models for regularizing blind deconvolution to promote signal identifiability. Part I of this two-part paper establishes a measure theoretically tight characterization of the ambiguity space for blind deconvolution and unidentifiability of t... 18. The motion planning problem and exponential stabilization of a heavy chain. Part II OpenAIRE Piotr Grabowski 2008-01-01 This is the second part of paper [P. Grabowski, The motion planning problem and exponential stabilization of a heavy chain. Part I, to appear in International Journal of Control], where a model of a heavy chain system with a punctual load (tip mass) in the form of a system of partial differential equations was interpreted as an abstract semigroup system and then analysed on a Hilbert state space. In particular, in [P. Grabowski, The motion planning problem and exponential stabilization of a h... 19. Strategic planning for hotel operations: The Ritz-Carlton Hotel Company (Part II). Science.gov (United States) Shriver, S J 1993-01-01 The Ritz-Carlton Hotel Company won the Malcolm Baldrige National Quality Award in 1992. One key to its success is its strategic planning process. In this second part of a two-part article, Stephen Shriver concludes his review of the Ritz-Carlton's approach to strategic planning. Shriver begins by outlining some key steps in plan development and goes on to describe how the Ritz-Carlton disseminates, implements, and evaluates the plan. 20. Intelligence-led crime scene processing. Part II : Intelligence and crime scene examination. OpenAIRE Ribaux, O.; Baylon, A.; Lock, E.; Delémont, O.; Roux, C.; Zingg, C.; Margot, P. 2010-01-01 A better integration of the information conveyed by traces within intelligence-led framework would allow forensic science to participate more intensively to security assessments through forensic intelligence (part I). In this view, the collection of data by examining crime scenes is an entire part of intelligence processes. This conception frames our proposal for a model that promotes to better use knowledge available in the organisation for driving and supporting crime scene examinatio... 1. Designer ligands. Part 14. 
Novel Mn(II), Ni(II) and Zn(II) complexes of benzamide- and biphenyl-derived ligands CSIR Research Space (South Africa) Wellington, Kevin W 2009-01-01 Full Text Available Manganese (II), nickel (II) and zinc (II) complexes have been prepared using various benzamide- and biphenyl-derived ligands; their structures have been investigated using infrared spectroscopy and it is apparent that, depending on the ligand... 2. Carcinogenicity of residual fuel oils by nonbiological laboratory methods: annotated bibliography. Part I. Laboratory methods of analysis. Part II. Analysis results Energy Technology Data Exchange (ETDEWEB) Cichorz, R. S. 1976-04-09 Recent emphases have been directed by Federal government regulatory agencies and other research groups on the carcinogenic effects of certain aromatic hydrocarbon components in naturally occurring petroleum products. These are used in plant operations, and underline the importance of evaluating environments. Since Rocky Flats Plant uses large quantities of fuel oil, the author was prompted to undertake a search of the chemical literature. Articles and accounts of studies were reviewed on nonbiological laboratory methods for determining the carcinogenicity of residual fuel oils and related high-boiling petroleum fractions. The physical and chemical methods involve the separation or measurement (or both) of polynuclear aromatic constituents which generally are responsible for the carcinogenic effects. Thus, the author suggests that the total carcinogenic activity of any petroleum product may not be due to a specific potent carcinogen, but rather to the cumulative effect of several individually weak carcinogens. The literature search is presented as an annotated bibliography, current as of January 1, 1975, and includes significant parts of the studies along with the total number of other references found when the citation was examined in its entirety. Part I deals with laboratory chemical and physical methods of determining carcinogenicity or polynuclear aromatic hydrocarbons (or both) in residual fuel oils and contains ten entries. Part II includes the results of testing specific fuel oils for carcinogenic constituents and contains eleven entries. An author index and subject categories are included. 3. AVALIAÇÃO DA ANSIEDADE ODONTOLÓGICA DE CRIANÇAS SUBMETIDAS AO TRATAMENTO ODONTOLÓGICO OpenAIRE Ribas, Tatiane Araújo; Guimarães, Vanessa Passos; Losso, Estela Maris 2016-01-01 Although the literature reports an association between maternal and child anxiety in preschool children, this relationship is not established in older children. The objective of this work was to assess the level of dental anxiety of school-age children and of their mothers, checking whether there is a correlation between maternal and child anxiety, and whether dental anxiety is related to the procedure performed (with or without the use of local anesthesia). For this purpose, the questionnaire... 4. Depresión y ansiedad estado/rasgo en internos adscritos al "Programa de Inducción al Tratamiento Penitenciario" en Bucaramanga, Colombia Directory of Open Access Journals (Sweden) Ana Fernanda Uribe Rodríguez 2012-12-01 The article describes the characteristics of state/trait depression and anxiety and their prevalence in inmates enrolled in the "Programa de Inducción al Tratamiento Penitenciario" of the Instituto Nacional Penitenciario y Carcelario (INPEC) in Bucaramanga, Colombia.
The sample comprised 112 inmates with a mean age of 33 years, who were administered the State-Trait Depression Inventory (IDER) and the State-Trait Anxiety Inventory (STAI). The results indicate that 43.1% committed their first transgression of the norm between 8 and 18 years of age, and 74.1% reported use of psychoactive substances, while the records of depressive manifestations show that 16.7% rated depression as a state and 43.68% as a trait. Anxiety, in turn, appeared as a state in 8.03% and as a trait in 85.7%. Accordingly, there was a higher proportion of people with previous experiences who developed symptomatic pictures than of people for whom the situation of incarceration constitutes a cause or trigger. 5. Spinal sonography in newborns and infants - part II: spinal dysraphism and tethered cord. Science.gov (United States) Deeg, K-H; Lode, H-M; Gassner, I 2008-02-01 Patients with cutaneous markers in the lumbo-sacral region as well as infants with bladder and bowel dysfunction, orthopedic anomalies and progressive neurological dysfunction are at risk for spinal dysraphism and tethered cord. Three types of spinal dysraphism can be distinguished: Type I - open spinal dysraphisms with a non-skin covered back mass; type II - closed spinal dysraphisms with a skin covered back mass; type III - occult spinal dysraphisms without a back mass. All spinal dysraphisms can be associated with a tethered cord, characterized by a low position of the conus medullaris below L3. Type I dysraphisms are meningomyeloceles and myeloceles, which are associated with CHIARI-II malformations characterized by the low position of the cerebellar vermis within the foramen magnum. Type II dysraphisms are lipomyeloceles, lipomyelomeningoceles, posterior meningoceles and myelocystoceles. Lipomeningoceles and lipomyelomeningoceles are characterized by a subcutaneous echogenic mass which communicates with the spinal canal and may cause tethered cord. Posterior meningoceles are dorsal cystic space-occupying lesions without internal neural tissue. Myelocystoceles are characterized by a cystic dorsal mass which communicates with a dilated central canal characteristic of syringo-hydromyelia. Type III dysraphisms without a back mass are frequently associated with cutaneous markers in the lumbo-sacral region. Sonographically, dermal sinus tracts, diastematomyelia, tight filum and lipoma of the filum terminale and the caudal regression syndrome have to be distinguished. Dermal sinuses are characterized by an echogenic tract from the skin to the spinal canal, often associated with a spinal dermoid. Diastematomyelia is characterized by a complete or partial duplication of the spinal cord which can only be shown on axial images. Tight filum terminale or lipoma of the filum terminale is characterized by a thick echogenic filum with a diameter of more than 2 mm, and a conus 6. Supuestos resueltos de contabilidad II. parte 1ª. curso 2013-2014 OpenAIRE Osés García, Javier 2013-01-01 This document contains solved and annotated exercises on the topics that make up the teaching plan of the course COMPTABILITAT II of the Grau en Administració i Direcció d'Empreses taught at the Facultat Economia i Empresa of the Universitat de Barcelona. All the exercises included have appeared, in courses prior to 2012-2013, in one of the continuous assessment tests or final evaluation exams of the course. We believe that this publication is a...
7. Supuestos resueltos de contabilidad II. parte 2ª. curso 2013-2014 OpenAIRE Osés García, Javier 2013-01-01 Este documento contiene ejercicios solucionados y comentados sobre los temas que componen el Plan Docente de la asignatura COMPTABILITAT II del Grau en Administració i Direcció d’Empreses que se imparte en la Facultat Economia i Empresa de la Universitat de Barcelona. Todos los ejercicios contenidos han aparecido en cursos anteriores al 2012-2013 en alguna de las pruebas de evaluación continuada o exámenes de evaluación final de la asignatura. Entendemos que esta publicación es una h... 8. Modelling reversibility of central European mountain lakes from acidification: Part II - the Tatra Mountains Czech Academy of Sciences Publication Activity Database Kopáček, Jiří; Cosby, B. J.; Majer, V.; Stuchlík, E.; Veselý, J. 2003-01-01 Roč. 7, č. 4 (2003), s. 510-524 ISSN 1027-5606 Grant - others:EC(XE) AL PE II EV5V-CT92-0205 - PECO; EU(XE) MOLAR ENV4-CT95-0007; EC(XE) EMERGE EVK1-CT-1999-00032; EC(XE) RECOVER 2010 EVK1-CT-1999-00018 Institutional research plan: CEZ:AV0Z6017912 Keywords : atmospheric deposition * water chemistry * recovery Subject RIV: DA - Hydrology ; Limnology Impact factor: 0.948, year: 2003 9. JOGOS COOPERATIVOS E RELAXAMENTO RESPIRATÓRIO: EFEITO SOBRE CRAVING E ANSIEDADE Directory of Open Access Journals (Sweden) João Euclides Fernandes Braga Full Text Available RESUMO Introdução: O uso de substâncias psicoativas é uma prática milenar e universal, que acompanha a história da humanidade. Na atualidade, o crack se alastrou pelo mundo por ter maior potencial de dependência comparado a outras drogas, visto que os usuários apresentam dificuldade para interromper o uso do crack, enfrentar o craving e a ansiedade. É essencial que haja uma abordagem multidisciplinar e integral do usuário, com a utilização de técnicas cognitivo-comportamentais que enfoquem as estratégias de prevenção de recaída. Nesse contexto, os jogos cooperativos (JC e o relaxamento respiratório (RR constituem possíveis estratégias para manejo terapêutico. Objetivo: Avaliar a utilização dos JC e do RR como estratégias de enfrentamento do craving e da ansiedade em usuários de crack em situação de dependência. Método: Trata-se de uma pesquisa exploratória, quase experimental, com abordagem quantitativa, desenvolvida em uma unidade de desintoxicação para dependência química do estado da Paraíba. Resultados: A amostra foi constituída por 40 colaboradores dependentes de crack, com idade superior a 18 anos. Para avaliação dos efeitos dos JC e do RR sobre o craving e a ansiedade foram utilizados os seguintes instrumentos: Cocaine Craving Questionnaire-Brief (CCQ-B e o Inventário de Ansiedade Traço-Estado (IDATE-E. Os resultados demonstraram que os JC e o RR reduziram os escores do craving total e da ansiedade. Quanto ao fator F1, apenas os JC apresentaram resultados satisfatórios. Conclusão: Os JC e o RR demonstraram eficácia como estratégias de enfrentamento do craving e ansiedade em usuários de crack em situação de dependência nas condições em que o estudo foi desenvolvido. 10. Prevalencia de ansiedad y depresión en pacientes de hemodiálisis Directory of Open Access Journals (Sweden) Lídia Gómez Vilaseca Full Text Available Introducción: Los pacientes en hemodiálisis tienen síntomas y trastornos emocionales como ansiedad y depresión. 
Son pocos los estudios que valoren el diagnóstico mediante la Hospital Anxiety and Depression Scale (HADS); nuestro objetivo es conocer la prevalencia de la ansiedad y depresión en pacientes con enfermedad renal crónica en programa de hemodiálisis. Metodología: Estudio transversal durante el primer trimestre del 2012. Realizado en el servicio de hemodiálisis del hospital de Palamós. Se incluyeron pacientes en programa crónico de hemodiálisis que llevaban como mínimo un mes. Se registró la edad, sexo, talla, peso, índice de masa corporal, índice de Charlson, tiempo en hemodiálisis y número de fármacos. Se utilizó la escala HADS (versión española de Caro-Ibáñez). Resultados: Se analizaron 49 pacientes, 25% fueron mujeres, la edad media 67,2 años, I. Charlson 4,6 (DE:4,5), tiempo en HD 39,9 meses (DE:43,8), IMC 26,9 (DE:4,5), turno de mañana 52,9 % y tarde 50,9 %. La sintomatología depresiva representa 42,9% (IC95% 33,7%-60,6%) y la ansiosa 32,7 % (IC95% 21,2%-46,6%) según la escala HADS. La ansiedad presenta relación estadísticamente significativa con el índice de masa corporal inferior y sexo femenino, la depresión con una edad más elevada, índice de masa corporal inferior y el turno de la mañana. Conclusiones: Existe una alta prevalencia de ansiedad y depresión en pacientes con enfermedad renal crónica en hemodiálisis. Un índice de masa corporal bajo se relaciona con la ansiedad y depresión, la mayor edad con la depresión y la ansiedad es más frecuente en mujeres. Nuestro estudio sugiere que es necesario un mayor control, seguimiento y tratamiento de las alteraciones emocionales en pacientes con enfermedad renal crónica. 11. Ansiedade em Provas: um Estudo na Obtenção da Licença para Dirigir Directory of Open Access Journals (Sweden) Aline Hessel de Araújo Full Text Available Resumo Situações de avaliação geram ansiedade e dentre elas está a prova prática de direção para a obtenção da licença para dirigir; essa ansiedade pode perturbar o desempenho e impedir a obtenção da habilitação. O presente estudo visou analisar: (a) a fundamentação teórica que embasa a intervenção terapêutica em casos de ansiedade em avaliações e provas; e (b) o processo terapêutico de uma cliente que procurou terapia comportamental após tentativas fracassadas na obtenção da licença para dirigir. A relevância da análise funcional da ansiedade e dos repertórios de enfrentamento da ansiedade foi considerada. Em seguida, um estudo de caso foi relatado: uma paciente com um histórico de seis reprovações no teste prático para a obtenção da licença para dirigir e níveis altos de ansiedade. Esse estudo demonstrou que um preparo apropriado e a intervenção terapêutica contribuíram para reduzir a ansiedade e promover a condição necessária para a obtenção da licença para dirigir. Estudos adicionais deverão ser realizados de modo a se obter uma melhor compreensão da relação entre a ansiedade e o desempenho, especialmente no que diz respeito ao processo de obtenção de uma licença para dirigir. 12. Transactive System: Part II: Analysis of Two Pilot Transactive Systems using Foundational Theory and Metrics Energy Technology Data Exchange (ETDEWEB) Lian, Jianming [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sun, Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kalsi, Karanjit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Widergren, Steven E. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Wu, Di [Pacific Northwest National Lab.
(PNNL), Richland, WA (United States); Ren, Huiying [Pacific Northwest National Lab. (PNNL), Richland, WA (United States) 2018-01-24 This document is the second of a two-part report. Part 1 reviewed several demonstrations of transactive control and compared them in terms of their payoff functions, control decisions, information privacy, and mathematical solution concepts. It was suggested in Part 1 that these four listed components should be adopted for meaningful comparison and design of future transactive systems. Part 2 proposes qualitative and quantitative metrics that will be needed to compare alternative transactive systems. It then uses the analysis and design principles from Part 1 while conducting more in-depth analysis of two transactive demonstrations: the American Electric Power (AEP) gridSMART Demonstration, which used a double-auction market mechanism, and a consensus method like that used in the Pacific Northwest Smart Grid Demonstration. Ultimately, metrics must be devised and used to meaningfully compare alternative transactive systems. One significant contribution of this report is an observation that the decision function used for thermostat control in the AEP gridSMART Demonstration has superior performance if its decision function is recast to more accurately reflect the power that will be used for thermostatic control under alternative market outcomes. 13. Arbitrary order 2D virtual elements for polygonal meshes: part II, inelastic problem Science.gov (United States) Artioli, E.; Beirão da Veiga, L.; Lovadina, C.; Sacco, E. 2017-10-01 The present paper is the second part of a twofold work, whose first part is reported in Artioli et al. (Comput Mech, 2017. doi: 10.1007/s00466-017-1404-5), concerning a newly developed Virtual element method (VEM) for 2D continuum problems. The first part of the work proposed a study for the linear elastic problem. The aim of this part is to explore the features of the VEM formulation when material nonlinearity is considered, showing that the accuracy and easiness of implementation discovered in the analysis inherent to the first part of the work are still retained. Three different nonlinear constitutive laws are considered in the VEM formulation. In particular, the generalized viscoelastic model, the classical Mises plasticity with isotropic/kinematic hardening and a shape memory alloy constitutive law are implemented. The versatility with respect to all the considered nonlinear material constitutive laws is demonstrated through several numerical examples, also remarking that the proposed 2D VEM formulation can be straightforwardly implemented as in a standard nonlinear structural finite element method framework. 14. Ansiedade, a criança e os pais Ansiedad, los niños y los padres Children, parents and anxiety Directory of Open Access Journals (Sweden) Eduardo Toshiyuki Moro 2004-10-01 Full Text Available JUSTIFICATIVA E OBJETIVOS: A ansiedade pré-operatória na criança é caracterizada por tensão, apreensão, nervosismo e preocupação e pode ser expressa de diversas formas. Alterações de comportamento no pós-operatório como enurese noturna, distúrbios alimentares, apatia, insônia, pesadelos e sono agitado podem ser resultado desta ansiedade. Em algumas crianças, estas alterações persistem por até um ano. O objetivo deste trabalho é avaliar os aspectos envolvidos com a ansiedade que afeta a criança e os pais durante o período que antecede a cirurgia, bem como as intervenções, farmacológicas ou não, para reduzi-la.
CONTEÚDO: O artigo aborda a ligação entre a ansiedade pré-operatória em crianças e as alterações de comportamento que podem ocorrer no período pós-operatório, bem como a influência de variáveis como idade, temperamento, experiência hospitalar prévia e dor. Medidas para reduzir a ansiedade pré-operatória na criança como a presença dos pais durante a indução da anestesia ou programas de informação e a utilização de medicação pré-anestésica também são revisadas. CONCLUSÕES: O período que antecede a cirurgia acompanha-se de grande carga emocional para toda família, sobretudo para a criança. Um pré-operatório turbulento significa, para muitas crianças, alterações de comportamento que se manifestam de forma variada e por períodos prolongados em algumas vezes. A presença dos pais durante a indução da anestesia e programas de preparação pré-operatórios para a criança e para os pais podem ser úteis para casos selecionados, levando em conta a idade, temperamento e experiência hospitalar prévia. A medicação pré-anestésica com benzodiazepínicos, em especial o midazolam, é claramente o método mais eficaz para redução da ansiedade pré-operatória em crianças e das alterações de comportamento por ela induzidas.JUSTIFICATIVA Y OBJETIVOS: La ansiedad pre-operatoria en 15. Optimisation of energy absorbing liner for equestrian helmets. Part II: Functionally graded foam liner International Nuclear Information System (INIS) Cui, L.; Forero Rueda, M.A.; Gilchrist, M.D. 2009-01-01 The energy absorbing liner of safety helmets was optimised using finite element modelling. In this present paper, a functionally graded foam (FGF) liner was modelled, while keeping the average liner density the same as in a corresponding reference single uniform density liner model. Use of a functionally graded foam liner would eliminate issues regarding delamination and crack propagation between interfaces of different density layers which could arise in liners with discrete density variations. As in our companion Part I paper [Forero Rueda MA, Cui L, Gilchrist MD. Optimisation of energy absorbing liner for equestrian helmets. Part I: Layered foam liner. Mater Des [submitted for publication 16. Light detection and ranging measurements of wake dynamics. Part II: two-dimensional scanning DEFF Research Database (Denmark) Trujillo, Juan-José; Bingöl, Ferhat; Larsen, Gunner Chr. 2011-01-01 the instantaneous transversal wake position which is quantitatively compared with the prediction of the Dynamic Wake Meandering model. The results, shown for two 10-min time series, suggest that the conjecture of the wake behaving as a passive tracer is a fair approximation; this corroborates and expands...... the results of one-dimensional measurements already presented in the first part of this paper. Consequently, it is now possible to separate the deterministic and turbulent parts of the wake wind field, thus enabling capturing the wake in the meandering frame of reference. The results correspond, qualitatively... 17. Electromagnetism, magnetic monopoles and matter-waves in space-time algebra (part II) International Nuclear Information System (INIS) Daviau, C. 
1989-01-01 The formalism of space-time algebra of Hestenes is used: - in the first part to write the equations of electromagnetism of Maxwell and Louis de Broglie, when magnetic monopoles exist; - second to explain equivalence between the equations of Dirac and Hestenes, and to extend this equivalence to Lochak's theory of magnetic monopoles; - to establish that monopoles can exist with very small magnetic charge; - in this second part, to compare waves of fermions and electromagnetism, to associate an electromagnetic field to Dirac's waves and to join the equation of Maxwell - de Broglie to the equation of Dirac - Hestenes [fr 18. FROM ZERO-DIMENSIONAL TO 2-DIMENSIONAL CARBON NANOMATERIALS - part II: GRAPHENE Directory of Open Access Journals (Sweden) Cătălin IANCU 2012-05-01 Full Text Available As was presented in the first part of this review paper, lately, many theoretical and experimental studies have been carried out to develop one of the most interesting aspects of the science and nanotechnology which is called carbon-related nanomaterials. In this review paper are presented some of the most exciting and important developments in the synthesis, properties, and applications of low-dimensional carbon nanomaterials. In this part of the paper are presented the synthesis techniques used to produce the two-dimensional carbon nanomaterials (including graphene, and also the most important properties and potential applications of graphene. 19. A Quantum Computational Semantics for Epistemic Logical Operators. Part II: Semantics Science.gov (United States) Beltrametti, Enrico; Dalla Chiara, Maria Luisa; Giuntini, Roberto; Leporini, Roberto; Sergioli, Giuseppe 2014-10-01 By using the abstract structures investigated in the first Part of this article, we develop a semantics for an epistemic language, which expresses sentences like "Alice knows that Bob does not understand that π is irrational". One is dealing with a holistic form of quantum computational semantics, where entanglement plays a fundamental role; thus, the meaning of a global expression determines the contextual meanings of its parts, but generally not the other way around. The epistemic situations represented in this semantics seem to reflect some characteristic limitations of the real processes of acquiring information. Since knowledge is not generally closed under logical consequence, the unpleasant phenomenon of logical omniscience is here avoided. 20. Women's experiences of victimizing sexualization, Part II: Community and longer term personal impacts. Science.gov (United States) Smith, S K 1997-01-01 This is the second of a two-part article describing the results of a qualitative study on women's experiences of victimizing sexualization. Ten adult women described their experiences of harmful learning about themselves as female and sexual. A four-part thematic description of women's experiences of victimizing sexualization was derived. This article reports on two of the major categories: community and cultural characteristics and longer term personal impacts. Findings of the study support the feminist position that the enactment of gender itself at social and cultural levels sometimes places women at risk for victimization. 1. Pharmacotherapy of intraocular pressure - part II. Carbonic anhydrase inhibitors, prostaglandin analogues and prostamides. 
Science.gov (United States) Costagliola, Ciro; dell'Omo, Roberto; Romano, Mario R; Rinaldi, Michele; Zeppa, Lucia; Parmeggiani, Francesco 2009-12-01 The second part of this two-part review (please see Expert Opinion on Pharmacotherapy 10(16)) reports the characteristics of other antiglaucoma medications: systemic (acetazolamide) and topical (dorzolamide and brinzolamide) carbonic anhydrase inhibitors, which suppress aqueous humour formation; and prostaglandin analogues (latanoprost and travoprost) and prostamides (bimatoprost), which raise aqueous humour outflow. The pharmacologic properties of each compound and its efficacy in the medical treatment of glaucoma, mainly the primary open-angle form, are discussed briefly, focusing on the clinical evidence supporting their use. 2. Bloqueio do nervo supraescapular: procedimento importante na prática clínica. Parte II OpenAIRE Fernandes, Marcos Rassi; Barbosa, Maria Alves; Sousa, Ana Luiza Lima; Ramos, Gilson Cassem 2012-01-01 O bloqueio do nervo supraescapular é um método de tratamento reprodutível, confiável e extremamente efetivo no controle da dor no ombro. Esse método tem sido amplamente utilizado por profissionais na prática clínica, como reumatologistas, ortopedistas, neurologistas e especialistas em dor, na terapêutica de enfermidades crônicas, como lesão irreparável do manguito rotador, artrite reumatoide, sequelas de AVC e capsulite adesiva, o que justifica a presente revisão (Parte II). O objetivo deste ... 3. Efeitos da ansiedade sobre a pressão arterial em mulheres com hipertensão Directory of Open Access Journals (Sweden) Chaves Eliane Corrêa 2004-01-01 Full Text Available Estudo descritivo, associativo, que objetiva conhecer a relação da ansiedade com os níveis de pressão arterial em mulheres hipertensas e da ansiedade com o tempo de tratamento da hipertensão. Foram pesquisadas 78 mulheres em tratamento para hipertensão no InCor, mediante o Inventário de Ansiedade de Spielberger - IDATE, e a pressão arterial verificada, utilizando-se da medida indireta, obtida pelo método auscultatório. Os dados foram submetidos à análise estatística, com nível de significância de 5%. A amostra apresentou traço e estado de ansiedade moderados e médias de pressão acima do normal, compatível com hipertensão estágio 1. Não houve diferença estatisticamente significante entre pressão arterial e níveis de ansiedade e entre o tempo de tratamento para hipertensão e níveis de ansiedade. 4. AN ENGLISH-AMHARIC DICTIONARY OF EVERYDAY USAGE, PART II, (L-Z). Science.gov (United States) LESLAU, WOLF THIS VOLUME, (L-Z), COMPRISES THE SECOND HALF OF THE FIRST MODERN ENGLISH-AMHARIC DICTIONARY. THIS TWO-PART DICTIONARY HAS BEEN PREPARED FOR THE STUDENT FAMILIAR WITH THE SCRIPT AND GRAMMAR OF AMHARIC, THE NATIONAL LANGUAGE OF ETHIOPIA. THE SELECTIONS, LIMITED IN SCOPE, ARE BASED ON EDUCATED COLLOQUIAL AND ARE PRESENTED IN CONTEXTUAL SENTENCES.… 5. MATLAB-based Applications for Image Processing and Image Quality Assessment – Part II: Experimental Results Directory of Open Access Journals (Sweden) L. Krasula 2012-04-01 Full Text Available The paper provides an overview of some possible usage of the software described in the Part I. It contains the real examples of image quality improvement, distortion simulations, objective and subjective quality assessment and other ways of image processing that can be obtained by the individual applications. 6. Automatic Dictionary Construction; Part II of Scientific Report No. ISR-18, Information Storage and Retrieval...
Science.gov (United States) Cornell Univ., Ithaca, NY. Dept. of Computer Science. Part Two of the eighteenth report on Salton's Magical Automatic Retriever of Texts (SMART) project is composed of three papers: The first: "The Effect of Common Words and Synonyms on Retrieval Performance" by D. Bergmark discloses that removal of common words from the query and document vectors significantly increases precision and that… 7. The Multi-Disciplinary Graduate Program in Educational Research. Final Report, Part II; Methodoloqical Trilogy. Science.gov (United States) Lazarsfeld, Paul F., Ed. Part two of a seven-section, final report on the Multi-Disciplinary Graduate Program in Educational Research, this document contains discussions of quantification and reason analysis. Quantification is presented as a language consisting of sentences (graphs and tables), words, (classificatory instruments), and grammar (rules for constructing and… 8. The Concept of Time in Rehabilitation and Psychosocial Adaptation to Chronic Illness and Disability: Part II Science.gov (United States) Livneh, Hanoch 2013-01-01 The first part of this article focused on providing the reader with a general overview of the concept of time with special emphasis on understanding time's role in the structure of personality theories and their associated therapeutic approaches, as well as linking the discussion to the understanding of time in the context of psychosocial… 9. The σ-stark effect of rotational transitions : Part II: The microwave spectrum of methyl alcohol NARCIS (Netherlands) Dijkerman, H.A.; Dymanus, A. 1962-01-01 The method described in Part I is applied to absorption lines of methyl alcohol in the microwave region. Recorded ΔM = ± 1 Stark patterns of absorption lines with linear Stark effect are compared with calculated patterns. The ΔM = ± 1 Stark patterns of 7 absorption lines with second order Stark 10. Tile Patterns with LOGO--Part II: Tile Patterns from Rep Tiles Using LOGO. Science.gov (United States) Clason, Robert G. 1991-01-01 Described is a recursive LOGO method for dissecting polygons into congruent parts (rep tiles) similar to the original polygon, thereby producing unexpected patterns. A list of descriptions for such dissections is included along with suggestions for modifications that allow extended student explorations into tile patterns. (JJK) 11. Algunas aplicaciones de las calculadoras programables en el análisis estructural. II parte OpenAIRE Alfonso Ramírez Rivera 2011-01-01 En el anterior número se presentaron algunos algoritmos que pueden programarse fácilmente en una calculadora y que pueden utilizarse en el análisis de estructuras a flexión y en muchos otros casos. Esta segunda parte complementa la expuesta en el primer número de esta publicación. 12. Algunas aplicaciones de las calculadoras programables en el análisis estructural. II parte Directory of Open Access Journals (Sweden) Alfonso Ramírez Rivera 1982-01-01 Full Text Available En el anterior número se presentaron algunos algoritmos que pueden programarse fácilmente en una calculadora y que pueden utilizarse en el análisis de estructuras a flexión y en muchos otros casos. Esta segunda parte complementa la expuesta en el primer número de esta publicación. 13. Two fluid space-time discontinuous Galerkin finite element method. Part II: Applications NARCIS (Netherlands) Sollie, W.E.H.; van der Vegt, Jacobus J.W. 
2009-01-01 The numerical method for two fluid flow computations presented in Sollie, Bokhove \\& van der Vegt, Two Fluid Space-Time Discontinuous Galerkin Finite Element Method. Part I: Numerical Algorithm is applied to a number of one and two dimensional single and two fluid test problems, including a magma - 14. Nurses' Home Health Experience. Part II: The Unique Demands of Home Visits. Science.gov (United States) Stulginsky, Maryfran McKenzie 1993-01-01 In the second of two parts, six health nurses explore how home care nurses deal with issues surrounding home care's practice setting. They discuss the need to build trust and support, set limits, use common sense, remain flexible, deal with distractions, and use time wisely. (JOW) 15. Elastic and Piezoelectric Properties of Boron Nitride Nanotube Composites. Part II; Finite Element Model Science.gov (United States) Kim, H. Alicia; Hardie, Robert; Yamakov, Vesselin; Park, Cheol 2015-01-01 This paper is the second part of a two-part series where the first part presents a molecular dynamics model of a single Boron Nitride Nanotube (BNNT) and this paper scales up to multiple BNNTs in a polymer matrix. This paper presents finite element (FE) models to investigate the effective elastic and piezoelectric properties of (BNNT) nanocomposites. The nanocomposites studied in this paper are thin films of polymer matrix with aligned co-planar BNNTs. The FE modelling approach provides a computationally efficient way to gain an understanding of the material properties. We examine several FE models to identify the most suitable models and investigate the effective properties with respect to the BNNT volume fraction and the number of nanotube walls. The FE models are constructed to represent aligned and randomly distributed BNNTs in a matrix of resin using 2D and 3D hollow and 3D filled cylinders. The homogenisation approach is employed to determine the overall elastic and piezoelectric constants for a range of volume fractions. These models are compared with an analytical model based on Mori-Tanaka formulation suitable for finite length cylindrical inclusions. The model applies to primarily single-wall BNNTs but is also extended to multi-wall BNNTs, for which preliminary results will be presented. Results from the Part 1 of this series can help to establish a constitutive relationship for input into the finite element model to enable the modeling of multiple BNNTs in a polymer matrix. 16. Institutional Advancement: A Marketing Perspective. Part II: A Status Report, 1978-79. Science.gov (United States) Moriarty, Daniel F. This follow-up report examines the status of the recruitment and retention strategies implemented by Triton College in 1978 as part of an effort to utilize the marketing concept in identifying and meeting changing educational needs. The report first provides operational definitions for "institutional advancement,""marketing concept,""promotion,"… 17. Hip protectors: recommendations for conducting clinical trials--an international consensus statement (part II) DEFF Research Database (Denmark) Cameron, I D; Robinovitch, S; Birge, S 2010-01-01 While hip protectors are effective in some clinical trials, many, including all in community settings, have been unable to demonstrate effectiveness. This is due partly to differences in the design and analysis. The aim of this report is to develop recommendations for subsequent clinical research.... 18. A history of the autonomic nervous system: part II: from Reil to the modern era. 
Science.gov (United States) Oakes, Peter C; Fisahn, Christian; Iwanaga, Joe; DiLorenzo, Daniel; Oskouian, Rod J; Tubbs, R Shane 2016-12-01 The history of the study of the autonomic nervous system is rich. At the beginning of the nineteenth century, scientists were beginning to more firmly grasp the reality of this part of the human nervous system. The evolution of our understanding of the autonomic nervous system has a rich history. Our current understanding is based on centuries of research and trial and error. 19. Uni-directional waves over slowly varying bottom, part II: Deformation of travelling waves NARCIS (Netherlands) Pudjaprasetya, S.R.; Pudjaprasetya, S.R.; van Groesen, Embrecht W.C. 1996-01-01 A new Korteweg-de Vries type of equation for uni-directional waves over slowly varying bottom has been derived in Part I. The equation retains the Hamiltonian structure of the underlying complete set of equations for surface waves. For flat bottom it reduces to the standard Korteweg-de Vries 20. Facilitating age diversity in organizations – part II: managing perceptions and interactions NARCIS (Netherlands) Hertel, Guido; van der Heijden, Beatrice; de Lange, Annet H.; Deller, Jürgen 2013-01-01 Purpose – Due to demographic changes in most industrialized countries, the average age of working people is continuously increasing, and the workforce is becoming more age-diverse. This review, together with the earlier JMP Special Issue “Facilitating age diversity in organizations – part I: 1. Facilitating age diversity in organizations ‐ part II: managing perceptions and interactions NARCIS (Netherlands) Annet de Lange; Jürgen Deller; Beatrice van der Heijden; Guido Hertel 2013-01-01 Purpose ‐ Due to demographic changes in most industrialized countries, the average age of working people is continuously increasing, and the workforce is becoming more age-diverse. This review, together with the earlier JMP Special Issue "Facilitating age diversity in organizations ‐ part I: 2. Tales from Academia: History of anthropology in the Netherlands. Part II NARCIS (Netherlands) Vermeulen, H.F.; Kommers, J.H.M. 2002-01-01 This book in two parts aims to provide a comprehensive overview of the history of cultural, social and physical anthropology in The Netherlands. Experienced anthropologists were invited to describe the history of their own departments and specialisations. The forty-four authors present detailed 3. The decision to extract: part II. Analysis of clinicians' stated reasons for extraction. Science.gov (United States) Baumrind, S; Korn, E L; Boyd, R L; Maxwell, R 1996-04-01 In a recently reported study, the pretreatment records of each subject in a randomized clinical trial of 148 patients with Class I and Class II malocclusions presenting for orthodontic treatment were evaluated independently by five experienced clinicians (drawn from a panel of 14). The clinicians displayed a higher incidence of agreement with each other than had been expected with respect to the decision as to whether extraction was indicated in each specific case. To improve our understanding of how clinicians made their decisions on whether to extract or not, the records of a subset of 72 subjects randomly selected from the full sample of 148, have now been examined in greater detail. In 21 of these cases, all five clinicians decided to treat without extraction. Among the remaining 51 cases, there were 202 decisions to extract (31 unanimous decision cases and 20 split decision cases). 
The clinicians cited a total of 469 reasons to support these decisions. Crowding was cited as the first reason in 49% of decisions to extract, followed by incisor protrusion (14%), need for profile correction (8%), Class II severity (5%), and achievement of a stable result (5%). When all the reasons for extraction in each clinician's decision were considered as a group, crowding was cited in 73% of decisions, incisor protrusion in 35%, need for profile correction in 27%, Class II severity in 15% and posttreatment stability in 9%. Tooth size anomalies, midline deviations, reduced growth potential, severity of overjet, maintenance of existing profile, desire to close the bite, periodontal problems, and anticipation of poor cooperation accounted collectively for 12% of the first reasons and were mentioned in 54% of the decisions, implying that these considerations play a consequential, if secondary, role in the decision-making process. All other reasons taken together were mentioned in fewer than 20% of cases. In this sample at least, clinicians focused heavily on appearance 4. Integrating model of the Project Independence Evaluation System. Volume VI. Data documentation. Part II Energy Technology Data Exchange (ETDEWEB) Allen, B J 1979-02-01 This documentation describes the PIES Integrating Model as it existed on January 1, 1978. This Volume VI of six volumes is data documentation, containing the standard table data used for the Administrator's Report at the beginning of 1978, along with the primary data sources and the office responsible. It also contains a copy of a PIES Integrating Model Report with a description of its content. Following an overview chapter, Chapter II, Supply and Demand Data Tables and Sources for the Mid-range Scenario for Target Years 1985 and 1990, data on demand, price, and elasticity; coal; imports; oil and gas; refineries; synthetics, shale, and solar/geothermal; transportation; and utilities are presented. The following data on alternate scenarios are discussed: low and high demand; low and high oil and gas supply; refinery and oil and gas data assuming a 5% annual increase in real world oil prices. Chapter IV describes the solution output obtained from an execution of PIES. 5. Ceramic materials for porcelain veneers: part II. Effect of material, shade, and thickness on translucency. Science.gov (United States) Barizon, Karine T L; Bergeron, Cathia; Vargas, Marcos A; Qian, Fang; Cobb, Deborah S; Gratton, David G; Geraldeli, Saulo 2014-10-01 Information regarding the differences in translucency among new ceramic systems is lacking. The purpose of this study was to compare the relative translucency of the different types of ceramic systems indicated for porcelain veneers and to evaluate the effect of shade and thickness on translucency. Disk specimens 13 mm in diameter and 0.7-mm thick were fabricated for the following 9 materials (n=5): VITA VM9, IPS Empress Esthetic, VITA PM9, Vitablocks Mark II, Kavo Everest G-Blank, IPS Empress CAD, IPS e.max CAD, IPS e.maxPress, and Lava Zirconia. VITA VM9 served as the positive control and Lava as the negative control. The disks were fabricated with the shade that corresponds to A1. For IPS e.maxPress, additional disks were made with different shades (BL2, BL4, A1, B1, O1, O2, V1, V2, V3), thickness (0.3 mm), and translucencies (high translucency, low translucency). Color coordinates (CIE L∗ a∗ b∗) were measured with a tristimulus colorimeter. 
The translucency parameter was calculated from the color difference of the material on a black versus a white background (a short illustrative calculation of this parameter appears at the end of this listing). One-way ANOVA, the post hoc Tukey honestly significant difference, and the Ryan-Einot-Gabriel-Welsch multiple range tests were used to analyze the data (α=.05). Statistically significant differences in the translucency parameter were found among the porcelains: PM9, Empress Esthetic > Empress CAD > Mark II, Everest, e.max CAD > e.max Press > Lava. Significant differences also were noted when different shades and thickness were compared. Ceramic systems designed for porcelain veneers present varying degrees of translucency. The thickness and shade of lithium disilicate ceramic affect its translucency. Shade affects the translucency parameter less than thickness. Copyright © 2014 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved. 6. Converting Eucalyptus biomass into ethanol: Financial and sensitivity analysis in a co-current dilute acid process. Part II International Nuclear Information System (INIS) Gonzalez, R.; Treasure, T.; Phillips, R.; Jameel, H.; Saloni, D.; Wright, J.; Abt, R. 2011-01-01 The technical and financial performance of high yield Eucalyptus biomass in a co-current dilute acid pretreatment followed by enzymatic hydrolysis process was simulated using WinGEMS® and Excel®. Average ethanol yield per dry Mg of Eucalyptus biomass was approximately 347.6 L of ethanol (with average carbohydrate content in the biomass around 66.1%) at a cost of 0.49 L⁻¹ of ethanol, a cash cost of ~0.46 L⁻¹ and a CAPEX of 1.03 L⁻¹ of ethanol. The main cost drivers are: biomass, enzyme, tax, fuel (gasoline), depreciation and labor. Profitability of the process is very sensitive to biomass cost, carbohydrate content (%) in biomass and enzyme cost. Biomass delivered cost was simulated and financially evaluated in Part I; here in Part II the conversion of this raw material into cellulosic ethanol using the dilute acid process is evaluated. (author) 7. Practical recommendations for fertility preservation in women by the FertiPROTEKT network. Part II: fertility preservation techniques. Science.gov (United States) von Wolff, Michael; Germeyer, A; Liebenthron, J; Korell, M; Nawroth, F 2018-01-01 In addition to guidelines focusing on scientific evidence, practical recommendations on fertility preservation are also needed. A selective literature search was performed based on the clinical and scientific experience of the authors. This article (Part II) focuses on fertility preservation techniques. Part I, also published in this journal, provides information on disease prognosis, disease-specific therapy, and risks for loss of fertility. Ovarian stimulation including double stimulation and freezing of oocytes is the best-established therapy providing live birth chances in women preservation in women provides realistic chances of becoming pregnant. The choice of technique needs to be based on the time required, the woman's age, its risks and efficacy, and the individual preference of the patient. 8. Steady flow in a model of the human carotid bifurcation. Part II--laser-Doppler anemometer measurements. Science.gov (United States) Bharadvaj, B K; Mabon, R F; Giddens, D P 1982-01-01 The evidence for hypothesizing a relationship between hemodynamics and atherogenesis as well as the motivation for selecting the carotid bifurcation for extensive fluid dynamic studies has been discussed in Part I of this two-paper sequence.
Part II deals with velocity measurements within the bifurcation model described by Fig. 1 and Table 1 of the previous paper. A plexiglass model conforming to the dimensions of the average carotid bifurcation was machined and employed for velocity measurements with a laser-Doppler anemometer (LDA). The objective of this phase of the study was to obtain quantitative information on the velocity field and to estimate levels and directions of wall shear stress in the region of the bifurcation. 9. Ansiedade e depressão na DPOC: O conhecimento actual, questões não respondidas e investigação necessária Directory of Open Access Journals (Sweden) J. Maurer 2009-07-01 superior em estádios mais avançados da DPOC, chegando a atingir taxas de 62% em doentes a fazer oxigenoterapia de longa duração. Também em doentes a recuperar de uma exacerbação, a percentagem de depressão e ansiedade aumentam para níveis próximos dos 50%.Os inquéritos utilizados na detecção de sintomas de ansiedade e depressão foram o PRIME-MD, Beck Depression Inventory – II e Beck Anxiety Inventory. O primeiro questionário apresenta um valor preditivo positivo bom na detecção destas afecções.A depressão pode ser um factor preditivo de fadiga, dispneia e descondicionamento físico e mortalidade, em doentes com insuficiência cardíaca ou DPOC. Inclusive, possui um papel preponderante nas decisões do doente em estádio terminal da DPOC, que quando deprimido opta na maioria dos casos pela não ressuscitação.A ansiedade e a depressão não tratadas aumentam a incapacidade física, a morbilidade e o consumo de recursos médicos. Os doentes, médicos e o sistema de saúde são muitas vezes responsáveis pela baixa taxa de diagnóstico destas alterações na DPOC.Existem vários trabalhos que comprovam a eficácia da intervenção farmacológica e não farmacológica no controlo destas comorbilidades em doentes com DPOC, contudo apenas uma pequena percentagem recebe tratamento eficaz.Os autores concluem que é necessária maior investigação nesta área, detecção precoce e tratamento da ansiedade e depressão nos doentes com DPOC. 10. Personalidad, ansiedad estado-rasgo e ingreso a la universidad en alumnos preuniversitarios Directory of Open Access Journals (Sweden) Isabel Niño de Guzmán 2000-12-01 Full Text Available Estudio correlacional que identifica las dimensiones de personalidad, el tipo de ansiedad y las características sociodemográficas de alumnos de un centropreuniversitario. Se trabajó con 318 participantes de ambos sexos (43.8% mujeres y 56.2% hombres, que entre 16 y 19 años. Se los evaluó con los siguientes instrumentos: (a NEO PI-R de Costa y McCrae (1992, (b Inventario de Ansiedad Estado-Rasgo de Spielberger (IDARE, 1975 y (c reporte de los tutores. Los resultados confirman la presencia de una estructura factorial básica de cinco dimensiones. Asimismo, revelan correlaciones significativas entre el C.I., características de personalidad asociadas al factor conciencia e ingreso a la universidad. Entre las facetas que correlacionan con el ingreso destaca la autodisciplina. Finalmente, se discuten los resultados. 11. Bienestar psicológico y ansiedad competitiva: el papel de las estrategias de afrontamiento Directory of Open Access Journals (Sweden) Enrique Cantón Chirivella 2016-01-01 Full Text Available El interés por el estudio del bienestar psicológico (Ryff, 1989 y su relación con las estrategias de afrontamiento se ha producido también en el ámbito deportivo. 
En esta investigación, se evalúan la ansiedad competitiva, el bienestar psicológico y las estrategias de afrontamiento utilizadas en la competición por 213 deportistas, de cuatro deportes diferentes. Los resultados permiten predecir, a través de los análisis de regresión, la posibilidad de experimentar ansiedad y bienestar psicológico en el contexto competitivo, regulado por el papel mediador de las estrategias de afrontamiento dirigidas a la tarea. 12. Optimismo, ansiedad-estado y autoconfianza en jóvenes jugadores de balonmano Directory of Open Access Journals (Sweden) Francisco J. Ortín-Montero 2013-10-01 Full Text Available Este estudio analiza la relación entre optimismo, ansiedad competitiva y autoconfianza en una muestra de 133 jugadores adolescentes de balonmano. Para dicho análisis se administraron los cuestionarios LOT-R en su versión española de Otero-López, Luengo, Romero, Triñanes, Gómez y Castro (1998), y el CSAI-2 (Competitive State Anxiety Inventory-2) de Martens, Burton, Vealy, Bump y Smith (1990). Los resultados indican que los deportistas con perfil optimista sienten menos ansiedad estado, tanto cognitiva como fisiológica, encontrando en esta última resultados estadísticamente significativos. Por otro lado, los sujetos optimistas muestran mayores niveles de autoconfianza. 13. Part I. The role of metabolism in N-methylthiobenzamide-induced pneumotoxicity. Part II. The role of the sympathetic nervous system in methylcyclopentadienyl manganese tricarbonyl-induced pneumotoxicity International Nuclear Information System (INIS) Penney, D.A. 1984-01-01 Part I. This is an investigation of the role of metabolism in the induction of lung injury by N-methylthiobenzamide (NMTB). N-methylthiobenzamide S-oxide (NMTBSO), a metabolite of NMTB, was prepared and found to produce lung injury that was qualitatively identical to that of NMTB. 1-methyl-1-phenyl-3-benzoylthiourea (MPBTU) protected rodents from lethal doses of either NMTB or NMTBSO. MPBTU also blocked the increases in pulmonary 14C-thymidine incorporation induced by these compounds. Both NMTB and NMTBSO were found to undergo oxidation when incubated with either lung or liver microsomes and an NADPH-generating system. The in vitro microsomal oxidation of NMTB and NMTBSO was markedly inhibited by addition of MPBTU. These data suggest that oxidation of NMTB is required for the expression of NMTB-induced pneumotoxicity. Part II. Methylcyclopentadienyl Manganese Tricarbonyl (MMT) has been used as an antiknock additive in unleaded gasoline. Rats treated with MMT exhibit severe convulsions accompanied by hemorrhagic pulmonary edema. The purpose of this study was to investigate the possible role of neurogenic mechanisms in MMT-induced hemorrhagic pulmonary edema 14. Mixed ligand complexes of alkaline earth metals: Part XII. Mg(II), Ca(II), Sr(II) and Ba(II) complexes with 5-chlorosalicylaldehyde and salicylaldehyde or hydroxyaromatic ketones Directory of Open Access Journals (Sweden) MITHLESH AGRAWAL 2002-04-01 Full Text Available The reactions of alkaline earth metal chlorides with 5-chlorosalicylaldehyde and salicylaldehyde, 2-hydroxyacetophenone or 2-hydroxypropiophenone have been carried out in 1 : 1 : 1 mole ratio and the mixed ligand complexes of the type MLL'(H2O)2 (where M = Mg(II), Ca(II), Sr(II) and Ba(II), HL = 5-chlorosalicylaldehyde and HL' = salicylaldehyde, 2-hydroxyacetophenone or 2-hydroxypropiophenone) have been isolated. These complexes were characterized by TLC, conductance measurements, IR and 1H-NMR spectra. 15.
Eating one's words, part II: The embodied mind and reflective function in anorexia nervosa--theory. Science.gov (United States) Skårderud, Finn 2007-07-01 Anorexia nervosa as a psychiatric disorder presents itself through the concreteness of symptoms. Emotions are experienced as a corporeality here-and-now. In a companion article, Part I, different 'body metaphors' are described and categorised. The human body functions as metaphor, and in anorexia nervosa there is a striking closeness between emotions and different bodily experiences. This is interpreted as impaired 'reflective function', referring to the capacity to make mental representations, and is proposed as a central psychopathological feature. The psychodynamic concepts 'concretised metaphors' and 'psychic equivalence' are discussed as useful tools to better understand such compromised symbolic capacity. Psychotherapy in anorexia nervosa can be described as a relational process where concretised metaphors will be developed into genuine linguistic ones. Part III in this series of articles presents an outline for psychotherapy for anorexia nervosa. 2007 John Wiley & Sons, Ltd and Eating Disorders Association 16. Imaging of juvenile spondyloarthritis. Part II: Ultrasonography and magnetic resonance imaging Directory of Open Access Journals (Sweden) Iwona Sudoł-Szopińska 2017-09-01 Full Text Available Juvenile spondyloarthropathies are mainly manifested by symptoms of peripheral arthritis and enthesitis. Early involvement of sacroiliac joints and spine is exceptionally rare in children; this usually happens in adulthood. Conventional radiographs visualize late inflammatory lesions. Early diagnosis is possible with the use of ultrasonography and magnetic resonance imaging. The first part of the article presented classifications and radiographic presentation of juvenile spondyloarthropathies. This part discusses changes seen on ultrasonography and magnetic resonance imaging. In patients with juvenile spondyloarthropathies, these examinations are conducted to diagnose inflammatory lesions in peripheral joints, tendon sheaths, tendons and bursae. Moreover, magnetic resonance also shows subchondral bone marrow edema, which is considered an early sign of inflammation. Ultrasonography and magnetic resonance imaging do not show specific lesions for any rheumatic disease. Nevertheless, they are conducted for early diagnosis, treatment monitoring and identifying complications. This article presents a spectrum of inflammatory changes and discusses the diagnostic value of ultrasonography and magnetic resonance imaging. 17. Computational and experimental prediction of dust production in pebble bed reactors, Part II Energy Technology Data Exchange (ETDEWEB) Hiruta, Mie; Johnson, Gannon [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Rostamian, Maziar, E-mail: [email protected] [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Potirniche, Gabriel P. [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States); Ougouag, Abderrafi M. 
[Idaho National Laboratory, 2525 N Fremont Avenue, Idaho Falls, ID 83401 (United States); Bertino, Massimo; Franzel, Louis [Department of Physics, Virginia Commonwealth University, Richmond, VA 23284 (United States); Tokuhiro, Akira [Department of Mechanical Engineering, University of Idaho, 1776 Science Center Drive, Idaho Falls, ID 83401 (United States) 2013-10-15 Highlights: • Custom-built high temperature, high pressure tribometer is designed. • Two different wear phenomena at high temperatures are observed. • Experimental wear results for graphite are presented. • The graphite wear dust production in a typical Pebble Bed Reactor is predicted. -- Abstract: This paper is the continuation of Part I, which describes the high temperature and high pressure helium environment wear tests of graphite–graphite in frictional contact. In the present work, it has been attempted to simulate a Pebble Bed Reactor core environment as compared to Part I. The experimental apparatus, which is a custom-designed tribometer, is capable of performing wear tests at PBR relevant higher temperatures and pressures under a helium environment. This environment facilitates prediction of wear mass loss of graphite as dust particulates from the pebble bed. The experimental results of high temperature helium environment are used to anticipate the amount of wear mass produced in a pebble bed nuclear reactor. 18. Computational and experimental prediction of dust production in pebble bed reactors, Part II Energy Technology Data Exchange (ETDEWEB) Mie Hiruta; Gannon Johnson; Maziar Rostamian; Gabriel P. Potirniche; Abderrafi M. Ougouag; Massimo Bertino; Louis Franzel; Akira Tokuhiro 2013-10-01 This paper is the continuation of Part I, which describes the high temperature and high pressure helium environment wear tests of graphite–graphite in frictional contact. In the present work, it has been attempted to simulate a Pebble Bed Reactor core environment as compared to Part I. The experimental apparatus, which is a custom-designed tribometer, is capable of performing wear tests at PBR relevant higher temperatures and pressures under a helium environment. This environment facilitates prediction of wear mass loss of graphite as dust particulates from the pebble bed. The experimental results of high temperature helium environment are used to anticipate the amount of wear mass produced in a pebble bed nuclear reactor. 19. Nanotechnology and its Relationship to Interventional Radiology. Part II: Drug Delivery, Thermotherapy, and Vascular Intervention. LENUS (Irish Health Repository) Power, Sarah 2010-09-16 Nanotechnology can be defined as the design, creation, and manipulation of structures on the nanometer scale. This two-part review is intended to acquaint the interventionalist with the field of nanotechnology, and provide an overview of potential applications, while highlighting advances relevant to interventional radiology. Part 2 of the article concentrates on drug delivery, thermotherapy, and vascular intervention. In oncology, advances in drug delivery allow for improved efficacy, decreased toxicity, and greater potential for targeted therapy. Magnetic nanoparticles show potential for use in thermotherapy treatments of various tumours, and the effectiveness of radiofrequency ablation can be enhanced with nanoparticle chemotherapy agents. In vascular intervention, much work is focused on prevention of restenosis through developments in stent technology and systems for localised drug delivery to vessel walls. 
Further areas of interest include applications for thrombolysis and haemostasis. 20. Nanotechnology and its relationship to interventional radiology. Part II: Drug Delivery, Thermotherapy, and Vascular Intervention. LENUS (Irish Health Repository) Power, Sarah 2012-02-01 Nanotechnology can be defined as the design, creation, and manipulation of structures on the nanometer scale. This two-part review is intended to acquaint the interventionalist with the field of nanotechnology, and provide an overview of potential applications, while highlighting advances relevant to interventional radiology. Part 2 of the article concentrates on drug delivery, thermotherapy, and vascular intervention. In oncology, advances in drug delivery allow for improved efficacy, decreased toxicity, and greater potential for targeted therapy. Magnetic nanoparticles show potential for use in thermotherapy treatments of various tumours, and the effectiveness of radiofrequency ablation can be enhanced with nanoparticle chemotherapy agents. In vascular intervention, much work is focused on prevention of restenosis through developments in stent technology and systems for localised drug delivery to vessel walls. Further areas of interest include applications for thrombolysis and haemostasis. 1. Treatment planning and dental rehabilitation of periodontally compromised partially edentulous patient: a case report - part II. Science.gov (United States) Brezavšcek, Miha; Lamott, Ulrich; Att, Wael 2014-01-01 When planning a prosthetic rehabilitation of a periodontally compromised case, the clinician is often confronted with difficulties and dilemmas related to selecting the appropriate treatment that would provide long-term successful outcomes in function and esthetics. In such cases, a correct diagnosis and prognosis of the intraoral situation supported by evidence-based dentistry is the basis for the establishment of a proper treatment strategy. In this second part of a two-part treatment planning series, a systematic approach of patient examination and prognosis of each tooth is presented. Furthermore, different removable and fixed treatment possibilities are described and the rationale governing the decision-making process is revealed. The execution of the final treatment plan as specified by the concept of comprehensive dental care is outlined, and the final outcome is discussed according to the literature. 2. Musculoskeletal disorders associated with HIV infection and AIDS. Part II: Non-infectious musculoskeletal conditions Energy Technology Data Exchange (ETDEWEB) Tehranzadeh, Jamshid [Department of Radiological Sciences, University of California, Irvine, CA (United States); Department of Radiological Sciences, Orange, CA (United States); Ter-Oganesyan, Ramon R. [College of Medicine, University of California, Irvine, CA (United States); Steinbach, Lynne S. [Department of Radiological Sciences, University of California, San Francisco (United States) 2004-06-01 This section of a two-part series on musculoskeletal disorders associated with HIV infection and AIDS reviews the non-infectious musculoskeletal conditions. In the first part, the infectious conditions were reviewed. The non-infectious conditions include polymyositis, drug-induced myopathy, myositis ossificans, adhesive capsulitis, avascular necrosis, bone marrow abnormalities, and hypertrophic osteoarthropathy. 
Inflammatory and reactive arthropathies are more prevalent in HIV-positive individuals, and a separate section is dedicated to these conditions, including Reiter's syndrome, psoriatic arthritis, HIV-associated arthritis, painful articular syndrome, and acute symmetric polyarthritis. Lastly, we include a discussion of HIV-related neoplastic processes that affect the musculoskeletal system, namely Kaposi's sarcoma and non-Hodgkin's lymphoma. (orig.) 3. Recent progress and continuing challenges in bio-fuel cells. Part II: Microbial. Science.gov (United States) Osman, M H; Shah, A A; Walsh, F C 2010-11-15 Recent key developments in microbial fuel cell technology are reviewed. Fuel sources, electron transfer mechanisms, anode materials and enhanced O(2) reduction are discussed in detail. A summary of recently developed microbial fuel cell systems, including performance measurements, is conveniently provided in tabular form. The current challenges involved in developing practical bio-fuel cell systems are described, with particular emphasis on a fundamental understanding of the reaction environment, the performance and stability requirements, modularity and scalability. This review is the second part of a review of bio-fuel cells. In Part 1 a general introduction to bio-fuel cells, including their operating principles and applications, was provided and enzymatic fuel cell technology was reviewed. Copyright © 2010 Elsevier B.V. All rights reserved. 4. HIERARCHICAL METHODOLOGY FOR MODELING HYDROGEN STORAGE SYSTEMS PART II: DETAILED MODELS Energy Technology Data Exchange (ETDEWEB) Hardy, B; Donald L. Anton, D 2008-12-22 There is significant interest in hydrogen storage systems that employ a media which either adsorbs, absorbs or reacts with hydrogen in a nearly reversible manner. In any media based storage system the rate of hydrogen uptake and the system capacity is governed by a number of complex, coupled physical processes. To design and evaluate such storage systems, a comprehensive methodology was developed, consisting of a hierarchical sequence of models that range from scoping calculations to numerical models that couple reaction kinetics with heat and mass transfer for both the hydrogen charging and discharging phases. The scoping models were presented in Part I [1] of this two part series of papers. This paper describes a detailed numerical model that integrates the phenomena occurring when hydrogen is charged and discharged. A specific application of the methodology is made to a system using NaAlH{sub 4} as the storage media. 5. Information theory in systems biology. Part II: protein-protein interaction and signaling networks. Science.gov (United States) Mousavian, Zaynab; Díaz, José; Masoudi-Nejad, Ali 2016-03-01 By the development of information theory in 1948 by Claude Shannon to address the problems in the field of data storage and data communication over (noisy) communication channel, it has been successfully applied in many other research areas such as bioinformatics and systems biology. In this manuscript, we attempt to review some of the existing literatures in systems biology, which are using the information theory measures in their calculations. As we have reviewed most of the existing information-theoretic methods in gene regulatory and metabolic networks in the first part of the review, so in the second part of our study, the application of information theory in other types of biological networks including protein-protein interaction and signaling networks will be surveyed. 
Copyright © 2015 Elsevier Ltd. All rights reserved. 6. Musculoskeletal disorders associated with HIV infection and AIDS. Part II: Non-infectious musculoskeletal conditions International Nuclear Information System (INIS) Tehranzadeh, Jamshid; Ter-Oganesyan, Ramon R.; Steinbach, Lynne S. 2004-01-01 This section of a two-part series on musculoskeletal disorders associated with HIV infection and AIDS reviews the non-infectious musculoskeletal conditions. In the first part, the infectious conditions were reviewed. The non-infectious conditions include polymyositis, drug-induced myopathy, myositis ossificans, adhesive capsulitis, avascular necrosis, bone marrow abnormalities, and hypertrophic osteoarthropathy. Inflammatory and reactive arthropathies are more prevalent in HIV-positive individuals, and a separate section is dedicated to these conditions, including Reiter's syndrome, psoriatic arthritis, HIV-associated arthritis, painful articular syndrome, and acute symmetric polyarthritis. Lastly, we include a discussion of HIV-related neoplastic processes that affect the musculoskeletal system, namely Kaposi's sarcoma and non-Hodgkin's lymphoma. (orig.) 7. Intelligence-led crime scene processing. Part II: Intelligence and crime scene examination. Science.gov (United States) Ribaux, Olivier; Baylon, Amélie; Lock, Eric; Delémont, Olivier; Roux, Claude; Zingg, Christian; Margot, Pierre 2010-06-15 A better integration of the information conveyed by traces within intelligence-led framework would allow forensic science to participate more intensively to security assessments through forensic intelligence (part I). In this view, the collection of data by examining crime scenes is an entire part of intelligence processes. This conception frames our proposal for a model that promotes to better use knowledge available in the organisation for driving and supporting crime scene examination. The suggested model also clarifies the uncomfortable situation of crime scene examiners who must simultaneously comply with justice needs and expectations, and serve organisations that are mostly driven by broader security objectives. It also opens new perspective for forensic science and crime scene investigation, by the proposal to follow other directions than the traditional path suggested by dominant movements in these fields. (c) 2010 Elsevier Ireland Ltd. All rights reserved. 8. Protective clothing for pesticide operators: part II--data analysis of fabric characteristics. Science.gov (United States) Shaw, Anugrah; Schiffelbein, Paul 2016-01-01 Development of objective measurements is an important requirement for establishing performance-based standards for protective clothing used while handling pesticide. This study, the second in a two-part series, reports on the work completed to evaluate the performance of approximately 100 fabrics that are either used or have the potential to be used for garments worn by operators while applying pesticides. Part I, published separately, provides an overview of these issues and describes research undertaken to select a test chemical for use in subsequent studies. The goals of this study were first to develop a comprehensive approach to evaluate the performance of garments currently being used by pesticide operators, and second, to use the laboratory and field data in the development of performance specifications. 9. 'Forms of energy', an intermediary language on the road to thermodynamics? Part II Science.gov (United States) Kaper, Wolter H.; Goedhart, Martin J. 
2002-02-01 In secondary education, 'energy' is often introduced by distinguishing different 'forms of energy' for different phenomena. Of these forms of energy, only kinetic and potential energy are accepted in current science. The question has been raised whether 'forms of energy' should be eliminated from secondary school science curricula. As a contribution to this discussion we have analysed 'forms of energy' language for inconsistencies and limitations of validity in Part I. In this second part, results are presented of two teaching experiments at university level, each involving five students. In these experiments, attempts are made to build on students' 'forms of energy' language as well as to challenge its limitations. Details of student and teacher reasoning are presented. The conclusion is drawn that 'forms of energy' language must be reformulated before it can be evaluated with reference to experience. A reformulation in terms of 'value' (cf. Scheler 1997) proved to be productive. 10. A review on fault classification methodologies in power transmission systems: Part-II Directory of Open Access Journals (Sweden) Avagaddi Prasad 2018-05-01 Full Text Available The vast extent of power systems and their applications calls for improved techniques for fault classification in power transmission systems, to increase system efficiency and to avoid major damage. For this purpose, the technical literature proposes a large number of methods. The paper analyzes the technical literature, summarizing the most important methods that can be applied to fault classification in power transmission systems. Part 2 of the article, “A review on fault classification methodologies in power transmission systems”, discusses the advanced technologies developed by various researchers for fault classification in power transmission systems. Keywords: Transmission line protection, Protective relaying, Soft computing techniques 11. Visual servoing in medical robotics: a survey. Part II: tomographic imaging modalities--techniques and applications. Science.gov (United States) Azizian, Mahdi; Najmaei, Nima; Khoshnam, Mahta; Patel, Rajni 2015-03-01 Intraoperative application of tomographic imaging techniques provides a means of visual servoing for objects beneath the surface of organs. The focus of this survey is on therapeutic and diagnostic medical applications where tomographic imaging is used in visual servoing. To this end, a comprehensive search of the electronic databases was completed for the period 2000-2013. Existing techniques and products are categorized and studied, based on the imaging modality and their medical applications. This part complements Part I of the survey, which covers visual servoing techniques using endoscopic imaging and direct vision. The main challenges in using visual servoing based on tomographic images have been identified. 'Supervised automation of medical robotics' is found to be a major trend in this field and ultrasound is the most commonly used tomographic modality for visual servoing. Copyright © 2014 John Wiley & Sons, Ltd. 12. Advances in metabolome information retrieval: turning chemistry into biology. Part II: biological information recovery. Science.gov (United States) Tebani, Abdellah; Afonso, Carlos; Bekri, Soumeya 2017-08-25 This work reports the second part of a review intending to give the state of the art of major metabolic phenotyping strategies.
It particularly deals with inherent advantages and limits regarding data analysis issues and biological information retrieval tools along with translational challenges. This Part starts with introducing the main data preprocessing strategies of the different metabolomics data. Then, it describes the main data analysis techniques including univariate and multivariate aspects. It also addresses the challenges related to metabolite annotation and characterization. Finally, functional analysis including pathway and network strategies are discussed. The last section of this review is devoted to practical considerations and current challenges and pathways to bring metabolomics into clinical environments. 13. Multiobjective Optimization for Fixture Locating Layout of Sheet Metal Part Using SVR and NSGA-II OpenAIRE Yuan Yang; Zhongqi Wang; Bo Yang; Zewang Jing; Yonggang Kang 2017-01-01 Fixture plays a significant role in determining the sheet metal part (SMP) spatial position and restraining its excessive deformation in many manufacturing operations. However, it is still a difficult task to design and optimize SMP fixture locating layout at present because there exist multiple conflicting objectives and excessive computational cost of finite element analysis (FEA) during the optimization process. To this end, a new multiobjective optimization method for SMP fixture locating... 14. Development of an Adaptable Monitoring Package for Marine Renewable Energy Projects Part II: Hydrodynamic Performance OpenAIRE Joslin, James; Rush, Ben; Stewart, Andrew; Polagye, Brian 2014-01-01 The Adaptable Monitoring Package (AMP), along with a remotely operated vehicle (ROV) and custom tool skid, is being developed to support near-field (≤10 meters) monitoring of hydrokinetic energy converters. The AMP is intended to support a wide range of environmental monitoring in harsh oceanographic conditions, at a cost in line with other aspects of technology demonstrations. This paper, which is the second in a two part series, covers the hydrodynamic analysis of the AMP and deployment ROV... 15. EFSUMB Guidelines on Interventional Ultrasound (INVUS), Part II Diagnostic Ultrasound-Guided Interventional Procedures (Long Version) DEFF Research Database (Denmark) Sidhu, P. S.; Brabrand, K.; Cantisani, V. 2015-01-01 This is the second part of the series on interventional ultrasound guidelines of the Federation of Societies for Ultrasound in Medicine and Biology (EFSUMB). It deals with the diagnostic interventional procedure. General points are discussed which are pertinent to all patients, followed by organ......-specific imaging that will allow the correct pathway and planning for the interventional procedure. This will allow for the appropriate imaging workup for each individual interventional procedure (Long version).... 16. How to succeed in science: a concise guide for young biomedical scientists. Part II: making discoveries OpenAIRE Yewdell, Jonathan W. 2008-01-01 Making discoveries is the most important part of being a scientist, and also the most fun. Young scientists need to develop the experimental and mental skill sets that enable them to make discoveries, including how to recognize and exploit serendipity when it strikes. Here, I provide practical advice to young scientists on choosing a research topic, designing, performing and interpreting experiments and, last but not least, on maintaining your sanity in the process. 17. How to succeed in science: a concise guide for young biomedical scientists. 
Part II: making discoveries. Science.gov (United States) Yewdell, Jonathan W 2008-06-01 Making discoveries is the most important part of being a scientist, and also the most fun. Young scientists need to develop the experimental and mental skill sets that enable them to make discoveries, including how to recognize and exploit serendipity when it strikes. Here, I provide practical advice to young scientists on choosing a research topic, designing, performing and interpreting experiments and, last but not least, on maintaining your sanity in the process. 18. FORMATION OF TOILET HABITS IN CHILDREN IN MOSCOW. RETROSPECTIVE STUDY RESULTS. PART II Directory of Open Access Journals (Sweden) G. A. Karkashadze 2013-01-01 Full Text Available The results of the first Russian study of toilet habits formation in children have been obtained. The article was planned to be published in 2 subsequent parts due to the extensiveness of the material. This article is the 2nd part*. It presents and comments on the remaining part of results in the form of the connection between main parameters and characteristics of toilet habits training processes and physiological, psychological and social factors; it also presents the discussion and conclusions. Comparative data (with foreign studies is given. A multitude of both physiological and social factors affect the process of children’s toilet habits training. The following physiological factors have been revealed: stool frequency, physiological involuntary night urination, peculiarities of falling asleep and pernicious habits – processes, which reflect the intestinal motility regulation and defecation states, urination control and neuropsychic activity. The selected training strategy and tactics, style of communication with a child also affect the training process. The most influential family-social factors in terms of toilet habits training processes are: two- or one-parent family, mother’s education and twins in the family. 19. Microstructural Features Controlling Mechanical Properties in Nb-Mo Microalloyed Steels. Part II: Impact Toughness Science.gov (United States) Isasti, Nerea; Jorge-Badiola, Denis; Taheri, Mitra L.; Uranga, Pello 2014-10-01 The present paper is the final part of a two-part paper where the influence of coiling temperature on the final microstructure and mechanical properties of Nb-Mo microalloyed steels is described. More specifically, this second paper deals with the different mechanisms affecting impact toughness. A detailed microstructural characterization and the relations linking the microstructural parameters and the tensile properties have already been discussed in Part I. Using these results as a starting point, the present work takes a step forward and develops a methodology for consistently incorporating the effect of the microstructural heterogeneity into the existing relations that link the Charpy impact toughness to the microstructure. In conventional heat treatments or rolling schedules, the microstructure can be properly described by its mean attributes, and the ductile-brittle transition temperatures measured by Charpy tests can be properly predicted. However, when different microalloying elements are added and multiphase microstructures are formed, the influences of microstructural heterogeneity and secondary hard phases have to be included in a modified equation in order to accurately predict the DB transition temperature in Nb and Nb-Mo microalloyed steels. 20. 
Anti-Hypertensive Herbs and their Mechanisms of Action: Part II Directory of Open Access Journals (Sweden) M. Akhtar eAnwar 2016-03-01 Full Text Available Traditional medicine has a history extending back thousands of years, and during the intervening time humans have identified the healing properties of a very broad range of plants. Globally, the use of herbal therapies to treat and manage cardiovascular disease (CVD) is on the rise. This is the second part of our comprehensive review, in which we discuss the mechanisms of plants and herbs used for the treatment and management of high blood pressure. As in the first part, the PubMed and ScienceDirect databases were utilized, and the following keywords and phrases were used as inclusion criteria: hypertension, high blood pressure, herbal medicine, complementary and alternative medicine, endothelial cells, nitric oxide, vascular smooth muscle cell (VSMC) proliferation, hydrogen sulfide, nuclear factor kappa-B, oxidative stress and epigenetics/epigenomics. Each of the aforementioned keywords was combined with the plant or herb in question, and where possible with its constituent molecule(s). This part deals in particular with plants that are used, albeit less frequently, for the treatment and management of hypertension. We then discuss the interplay between herbs/prescription drugs and herbs/epigenetics in the context of this disease. The review concludes with a recommendation for more rigorous, well-developed clinical trials to concretely determine the beneficial impact of herbs and plants on hypertension and on disease-free living. 1. PIO I-II tendencies case study. Part 1. Mathematical modeling Directory of Open Access Journals (Sweden) Adrian TOADER 2010-03-01 Full Text Available In this paper, a study is performed from the perspective of giving a method to reduce the conservatism of the well-known PIO (Pilot-Induced Oscillation) criteria in predicting the susceptibility of an aircraft to this very harmful phenomenon. There are three interacting components of a PIO – the pilot, the vehicle, and the trigger (in fact, the hazard). The study, conceived in two parts, aims to underline the importance of the human pilot model involved in the analysis. In this first part it is shown, following classical sources, how the LQG theory of control and estimation is used to obtain a complex model of the human pilot. The approach is based on the argument, experimentally proven, that the human behaves “optimally” in some sense, subject to inherent psychophysical limitations. The validation of such a model is accomplished based on the experimental model of a VTOL-type aircraft. Then, the procedure of inserting typical saturation nonlinearities in the open-loop transfer function is presented. A second part of the paper will illustrate PIO tendencies evaluation by means of a grapho-analytic method.
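The LQG pilot-modelling approach described in the preceding entry rests on computing an optimal state-feedback gain from a quadratic cost. The snippet below is a minimal, generic illustration of that step (a continuous-time LQR gain for an arbitrary two-state plant); the matrices are placeholders, not the aircraft or pilot model used in the cited study:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder 2-state plant x' = A x + B u (not the model from the paper).
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])   # state weighting in the quadratic cost
R = np.array([[0.1]])      # control-effort weighting

P = solve_continuous_are(A, B, Q, R)   # solve the algebraic Riccati equation
K = np.linalg.solve(R, B.T @ P)        # optimal gain for u = -K x
print("LQR gain K =", K)
```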
2. Evaluación de ansiedad ante exámenes: datos de aplicación y fiabilidad de un cuestionario CAEX [Assessment of test anxiety: application and reliability data for a CAEX questionnaire] OpenAIRE Valero Aguayo, Luis 1999-01-01 A questionnaire for the assessment of test-anxiety problems is presented. Data were obtained from university students just before they sat their corresponding exams. Subjects reported their physiological, cognitive and behavioural responses, as well as their anxiety about different types of tests. Data are presented on the groups, items, reliability, and differences between the frequency and anxiety scales used. 3. RELACIÓN ENTRE VARIABLES MOTIVACIONALES Y ANSIEDAD EN JUGADORES DE BALONMANO [Relationship between motivational variables and anxiety in handball players] Directory of Open Access Journals (Sweden) Marta Leyton Román 2015-04-01 Full Text Available Traditionally, the postulates of Self-Determination Theory (SDT) distinguished three broad types of motivation: intrinsic motivation, extrinsic motivation and amotivation (Deci and Ryan, 2000). However, the most recent contributions to the theory (Vansteenkiste, Niemiec and Soenens, 2010) favour a grouping into autonomous motivation (comprising intrinsic motivation and identified regulation), controlled motivation (comprising introjected and external regulation) and amotivation. The SDT developed over recent decades holds that human behaviour is fundamentally motivated by three basic psychological needs (BPN): autonomy, competence and social relatedness (Deci and Ryan, 2000). Players, moreover, are exposed to a variety of game situations in which symptoms of anxiety can appear. Worries about performance and a lack of ability to concentrate are known as cognitive anxiety. Somatic anxiety refers to perceptions of bodily symptoms (Martens, Burton, Vealey, Bump and Smith, 1990). Self-confidence refers to a person's belief that they can do what they set out to do (Feltz, 1994). Objective: to analyse the relationships between types of motivation, basic psychological needs and pre-competition anxiety in handball players. 4. Correlações entre ansiedade e depressão no desempenho cognitivo de idosos [Correlations of anxiety and depression with cognitive performance in older adults] Directory of Open Access Journals (Sweden) Regina Maria Fernandes Lopes 2014-01-01 Full Text Available It has been observed in recent years that the human body undergoes a natural ageing process, producing functional changes and reduced vitality, and thereby favouring the onset of diseases associated with this period of life. The main risk factors for the development of depression in older adults include genetic factors, stressful life events, the cognitive decline associated with ageing and neurobiological alterations, with prevalence rates of major depressive disorder in old age of 2% to 5%. Older adults with depressive symptoms and anxiety symptoms showed more severe cognitive deficits. The aim of this study was to verify whether there is a significant correlation between older adults' cognitive performance and symptoms of depression, anxiety and age. A total of 231 older adults took part in this study; the design was a cross-sectional quantitative study. The Mini-Mental State Examination (MMSE), the Geriatric Depression Scale (GDS) and the Beck Anxiety Inventory (BAI) were used. The results showed that age was statistically significantly and negatively correlated with the MMSE score (r = -0.205, p < 0.005), demonstrating that cognitive performance decreased with age. Older adults with depressive symptoms and severe anxiety symptoms had lower MMSE scores.
5. Efectividad de la auriculoterapia en el tratamiento de la ansiedad en el adulto mayor [Effectiveness of auriculotherapy in the treatment of anxiety in older adults] Directory of Open Access Journals (Sweden) María Onelia Díaz Rivadeneira 2015-05-01 Full Text Available An experimental study was carried out to determine the effectiveness of auriculotherapy, compared with conventional treatment, in older adult patients with anxiety referred to the Traditional and Natural Medicine clinic of the “Julio Antonio Mella” polyclinic, Camagüey, Cuba, between January and December 2013, after prior psychiatric assessment. The universe comprised 520 older adult patients diagnosed with generalized anxiety referred from the psychiatry clinic of the aforementioned institution. The sample consisted of 60 of them who, in addition, had not taken psychotropic drugs within the previous six weeks. Two groups were formed: a control group (30) receiving conventional treatment and a study group (30) receiving auriculotherapy, with subjects assigned to each group by simple randomisation. The study group received an auriculotherapy treatment scheme and the control group received conventional pharmacological treatment. The results showed that the age group most affected by anxiety was 70 to 74 years and that women predominated. Insomnia, irritability and memory difficulties were the symptoms most frequently observed in the study. At the end of treatment, symptoms decreased more appreciably in the group treated with auriculotherapy. Auriculotherapy was more effective in the treatment of anxiety and is a practically harmless technique that can reduce the use of psychotropic drugs. 6. Information Extraction from Large-scale WSNs: Approaches and Research Issues Part II: Query-Based and Macroprogramming Approaches Directory of Open Access Journals (Sweden) Tessa DANIEL 2008-07-01 Full Text Available Regardless of the application domain and deployment scope, the ability to retrieve information is critical to the successful functioning of any wireless sensor network (WSN) system. In general, information extraction procedures can be categorized into three main approaches: agent-based, query-based and macroprogramming-led. Whilst query-based systems are the most popular, macroprogramming techniques provide a more general-purpose approach to distributed computation. Finally, the agent-based approaches tailor the information extraction mechanism to the type of information needed and the configuration of the network it needs to be extracted from. This suite of three papers (Part I-III) offers an extensive survey of the literature in the area of WSN information extraction, covering in Part I and Part II the three main approaches above. Part III highlights the open research questions and issues faced by deployable WSN system designers and discusses the potential benefits of both in-network processing and complex querying for large-scale wireless informational systems.
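The query-based, in-network processing idea surveyed in the preceding entry can be pictured as each node combining its own reading with the partial results of its children before forwarding a single value up the routing tree. The sketch below simulates that idea in plain Python over a hypothetical static tree; the node names, readings and aggregate are illustrative assumptions, not the API of any particular WSN platform:

```python
# Hypothetical routing tree (parent -> children) plus one reading per node.
tree = {"sink": ["n1", "n2"], "n1": ["n3", "n4"], "n2": [], "n3": [], "n4": []}
reading = {"sink": 21.0, "n1": 23.5, "n2": 19.8, "n3": 25.1, "n4": 22.4}

def aggregate_max(node):
    """Each node forwards only the max of its subtree (in-network aggregation)."""
    partials = [aggregate_max(child) for child in tree[node]]
    return max([reading[node]] + partials)

# The sink receives one aggregated value instead of every raw reading.
print("MAX(temperature) =", aggregate_max("sink"))
```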
7. Avaliação do Sentimento de Ansiedade Frente ao Atendimento Odontológico [Assessment of feelings of anxiety about dental care] Directory of Open Access Journals (Sweden) Patrícia Aleixo dos Santos 2015-01-01 Full Text Available The present study aimed to assess the anxiety and behaviour of adult individuals regarding visits to the dentist. To this end, a questionnaire was administered to a sample of 984 individuals of both sexes, aged 14 to 93 years, addressing fear and/or anxiety, the frequency of dental visits, and the dental procedures performed at the last visit. The results showed no statistically significant difference between the sexes (male 23.81%; female 27.7%) in stating that they were not afraid of the dentist. Among those who were afraid, 9.04% had had unpleasant experiences during treatment; 4.98% feared the noise of the high-speed handpiece; 6.20% the anaesthesia; and 3.46% all of the items mentioned. It can be concluded that greater emphasis should be given to manifestations of dental anxiety and fear, since individuals are reluctant to admit their fears, neglecting and avoiding the philosophy of prevention in oral health. 8. Depressão, ansiedade e estresse em crianças trabalhadoras migrantes [Depression, anxiety and stress in migrant working children] Directory of Open Access Journals (Sweden) Noriega, José Angel Vera 2009-01-01 Full Text Available The aim of this study was to describe the symptoms of depression, anxiety and stress in a group of children migrating to the agricultural fields of the State of Sonora, Mexico. Participants were 358 children aged between 8 and 14 years on 16 farms where they worked alongside their parents, all belonging to different ethnic groups of Mexico. The children listened to and answered three psychological measures with the support of a psychologist. The results indicate that sex, age and number of migrations are three factors that affect mean depression, anxiety and stress. Nevertheless, while the results indicate the existence of risk, there are no data suggesting pathology on any of the three measures. It was observed that the age at which migration began and the number of migrations from the place of residence to the workplace increase proportionally with levels of stress and anxiety, but are not related to the depression score. 9. O estudo bibliométrico do transtorno de ansiedade social em universitários [A bibliometric study of social anxiety disorder in university students] Directory of Open Access Journals (Sweden) Sabrina Maura Pereira 2012-01-01 Full Text Available Social anxiety disorder (SAD), or social phobia (SP), is characterized by excessive, persistent anxiety in situations of social interaction or performance. The present work analyses the articles indexed in the PubMed and Web of Science databases between 2006 and 2010 and evaluates the bibliometric indicators of the scientific literature on social anxiety disorder/social phobia in university students. The final sample consisted of 13 articles addressing the topic studied. These articles present a quantitative methodology and use the SIAS, SPS, LSAS, SPIN and SPQS instruments to assess SP. The results indicated that SAD exists among university students and that it interferes with subjects' self-perception, leading them to evaluate their performance negatively and critically. Further analyses in other databases are suggested so that the national and international scientific output can be examined systematically.
10. Hybrid infrared scene projector (HIRSP): a high dynamic range infrared scene projector, part II Science.gov (United States) Cantey, Thomas M.; Bowden, Mark; Cosby, David; Ballard, Gary 2008-04-01 This paper is a continuation of the merging of two dynamic infrared scene projector technologies to provide a unique and innovative solution for the simulation of high dynamic temperature ranges for testing infrared imaging sensors. The paper presents some of the challenges and performance issues encountered in implementing this unique projector system into a Hardware-in-the-Loop (HWIL) simulation facility. The projection system combines the technologies of a Honeywell BRITE II extended voltage range emissive resistor array device and an optically scanned laser diode array projector (LDAP). The high apparent temperature simulations are produced from the luminescent infrared radiation emitted by the high-power laser diodes. The hybrid infrared projector system is being integrated into an existing HWIL simulation facility and is used to provide real-world high-radiance imagery to an imaging infrared unit under test. The performance and operation of the projector are presented, demonstrating the merit and success of the hybrid approach. The high dynamic range capability simulates apparent temperature signatures from a 250 Kelvin background up to a maximum of 850 Kelvin. This is a large increase in radiance projection over current infrared scene projection capabilities. 11. Investigation of mixed mode - I/II fracture problems - Part 1: computational and experimental analyses Directory of Open Access Journals (Sweden) O. Demir 2016-01-01 Full Text Available In this study, to investigate and understand the nature of fracture behavior properly under in-plane mixed mode (Mode-I/II) loading, three-dimensional fracture analyses and experiments of compact tension shear (CTS) specimens are performed under different mixed mode loading conditions. Al 7075-T651 aluminum machined from rolled plates in the L-T rolling direction (crack plane perpendicular to the rolling direction) is used in this study. Results from finite element analyses, together with the fracture loads and crack deflection angles obtained from the experiments, are presented. To simulate the real conditions in the experiments, contacts are defined between the contact surfaces of the loading devices, specimen and loading pins. Modeling, meshing and the solution of the problem involving the whole assembly, i.e., loading devices, pins and the specimen, with contact mechanics are performed using ANSYS. Then, the CTS specimen is analyzed separately using a submodeling approach, in which three-dimensional enriched finite elements are used in the FRAC3D solver to calculate the resulting stress intensity factors along the crack front. Having performed the detailed computational and experimental studies on the CTS specimen, a new specimen type together with its loading device is also proposed that has smaller dimensions compared to the regular CTS specimen. Experimental results for the new specimen are also presented.
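For mixed-mode loading of the kind studied in the preceding entry, measured crack deflection angles are often compared against closed-form criteria. As a generic illustration only (the classical maximum tangential stress criterion, not necessarily the criterion used in the cited work), the kink angle can be computed directly from the mode mixity K_II/K_I:

```python
import numpy as np

def mts_kink_angle(K_I, K_II):
    """Crack deflection angle (degrees) from the maximum tangential stress criterion."""
    if K_II == 0.0:
        return 0.0                      # pure mode I: crack grows straight ahead
    m = K_II / K_I
    theta = 2.0 * np.arctan((1.0 - np.sqrt(1.0 + 8.0 * m**2)) / (4.0 * m))
    return np.degrees(theta)            # approaches -70.5 deg as mode II dominates

# Example mode mixities (illustrative values only).
for ratio in (0.2, 0.5, 1.0, 5.0):
    print(f"K_II/K_I = {ratio:>4}: kink angle ~ {mts_kink_angle(1.0, ratio):6.1f} deg")
```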
12. Diseño de una intervención de enfermería para disminuir la ansiedad perioperatoria y el dolor postoperatorio del paciente quirúrgico [Design of a nursing intervention to reduce perioperative anxiety and postoperative pain in surgical patients] OpenAIRE Mora Alins, Sofía 2015-01-01 Objective: to assess the efficacy of a nursing intervention in reducing perioperative anxiety and postoperative pain in patients scheduled for surgery at Barbastro hospital. Methodology: quasi-experimental before-and-after study. The resulting representative sample is 376 patients, to be distributed as follows: 188 will form the intervention group, who will be given an informative questionnaire, and the other 188 will form the control group. ... 13. The evolution of the temperature field during cavity collapse in liquid nitromethane. Part II: Reactive case OpenAIRE Michael, Louisa; Nikiforakis, Nikolaos 2017-01-01 We study the effect of cavity collapse in non-ideal explosives as a means of controlling their sensitivity. The main aim is to understand the origin of localised temperature peaks (hot spots) that play a leading-order role at early ignition stages. Thus, we perform 2D and 3D numerical simulations of shock-induced single gas-cavity collapse in nitromethane. Ignition is the result of a complex interplay between fluid dynamics and exothermic chemical reaction. In part I of this work we focused on th... 14. The role of the illegality factor in the taxation of income. Part II OpenAIRE Čerka, Paulius; Gudynienė, Lina 2012-01-01 The taxation of illegal income is quite common in many foreign countries, but this practice is not yet applicable in Lithuania, though the recent moves of Lithuania’s Finance Minister, who admitted that all income should be taxed regardless of its source, show her positive attitude towards the taxation of illegal income. The article promotes the idea that all personal income, regardless of its source, should be taxed. The article is divided into two parts: the first one, which is not published he... 15. Institute for Defense Analyses Tactical Warfare (TACWAR) Model. Program Maintenance Manual. Part II. Science.gov (United States) 1977-09-06 [The abstract of this scanned record is OCR-garbled; the only legible fragment is part of an abbreviation list: attacker diverted to SAM suppression; CAS, close air support; CASA, close-air-support attacker; CASD, close-air-support defender; CASE, close-air-support escort.] 16. Global optimization of truss topology with discrete bar areas-Part II: Implementation and numerical results DEFF Research Database (Denmark) Achtziger, Wolfgang; Stolpe, Mathias 2009-01-01 on the implementation details but also establish finite convergence of the branch-and-bound method. The algorithm is based on solving a sequence of continuous non-convex relaxations which can be formulated as quadratic programs according to the theory in Part I. The quadratic programs to be treated within the branch-and-bound search all have the same feasible set and differ from each other only in the objective function. This is one reason for making the resulting branch-and-bound method very efficient. The paper closes with several large-scale numerical examples. These examples are, to the knowledge of the authors, by far... 17. Coupled Vibration of Unshrouded Centrifugal Compressor Impellers. Part II: Computation of Vibration Behavior Directory of Open Access Journals (Sweden) Dirk Hagelstein 2000-01-01 The increased use of small gas turbines and turbochargers in different technical fields has led to the development of highly-loaded centrifugal compressors with extremely thin blades. Due to high rotational speed and the correspondingly high centrifugal loads, the shape of the impeller hub must also be optimized.
This has led to a reduction of the thickness of the impeller disc in the outlet region. The thin parts of the impeller are very sensitive and may be damaged by the excitation of dangerous blade vibrations. 18. El reduccionismo científico y el control de las conciencias. Parte II [Scientific reductionism and the control of consciences. Part II] OpenAIRE Leonardo Viniegra Velázquez 2014-01-01 This second part analyses the ways in which scientific activity is subordinated to what is designated the logic of power and domination, by giving absolute priority to facts over ideas and by favouring knowledge that can be capitalized on through technological innovation, which is decisive for the profitability and competitiveness of large corporations (the profit interests that govern the planet) and is the basis of the mechanisms of political and social control of consciences and of... OpenAIRE Tatiana María Blanco Álvarez 2015-01-01 : To establish the relationship between vulnerability factors and death anxiety in sexual offenders held in the Centro de Atención Institucional Adulto Mayor. Methodology: mixed-methods study with a correlational, cross-sectional scope. Instruments: Templer Death Anxiety Scale, State-Trait Anxiety Inventory, Geriatric Anxiety Scale. Sample: 103 older adults from the Asociación Gerontológica Costarricense and 80 persons deprived of liberty... 20. Relación entre ansiedad escénica, perfeccionismo y calificaciones en estudiantes del Título Superior de Música [Relationship between performance anxiety, perfectionism and grades among students of the Higher Degree in Music] OpenAIRE Francisco Javier Zarza-Alzugaray; Óscar Casanova-López; José Elías Robles-Rubio 2016-01-01 Performance anxiety is one of the main problems that musicians, both students and professionals, have to face. Perfectionism is a construct associated with social phobia problems such as performance anxiety. The association between perfectionism and performance anxiety has not received particular attention in Spanish research and teaching. Objectives: to study the presence of levels of performance anxiety in a sample of students of the Conservatorio Superio... 1. Interaction of Zn(II) with hematite nanoparticles and microparticles: Part 2. ATR-FTIR and EXAFS study of the aqueous Zn(II)/oxalate/hematite ternary system. Science.gov (United States) Ha, Juyoung; Trainor, Thomas P; Farges, François; Brown, Gordon E 2009-05-19 Sorption of Zn(II) to hematite nanoparticles (HN) (av diam=10.5 nm) and microparticles (HM) (av diam=550 nm) was studied in the presence of oxalate anions (Ox2-(aq)) in aqueous solutions as a function of total Zn(II)(aq) to total Ox2-(aq) concentration ratio (R=[Zn(II)(aq)]tot/[Ox2-(aq)]tot) at pH 5.5. Zn(II) uptake is similar in extent for both the Zn(II)/Ox/HN and Zn(II)/Ox/HM ternary systems and the Zn(II)/HN binary system at [Zn(II)(aq)](tot)system than for the Zn(II)/Ox/HM ternary and the Zn(II)/HN and Zn(II)/HM binary systems at [Zn(II)(aq)]tot>4 mM. In contrast, Zn(II) uptake for the Zn(II)/HM binary system is a factor of 2 greater than that for the Zn(II)/Ox/HM and Zn(II)/Ox/HN ternary systems and the Zn(II)/HN binary system at [Zn(II)(aq)]totternary system at both R values examined (0.16 and 0.68), attenuated total reflectance Fourier transform infrared (ATR-FTIR) results are consistent with the presence of inner-sphere oxalate complexes and outer-sphere ZnOx(aq) complexes, and/or type A ternary complexes. In addition, extended X-ray absorption fine structure (EXAFS) spectroscopic results suggest that type A ternary surface complexes (i.e., >O2-Zn-Ox) are present.
In the Zn(II)/Ox/HN ternary system at R=0.15, ATR-FTIR results indicate the presence of inner-sphere oxalate and outer-sphere ZnOx(aq) complexes; the EXAFS results provide no evidence for inner-sphere Zn(II) complexes or type A ternary complexes. In contrast, ATR-FTIR results for the Zn/Ox/HN sample with R = 0.68 are consistent with a ZnOx(s)-like surface precipitate and possibly type B ternary surface complexes (i.e., >O2-Ox-Zn). EXAFS results are also consistent with the presence of ZnOx(s)-like precipitates. We ascribe the observed increase of Zn(II)(aq) uptake in the Zn(II)/Ox/HN ternary system at [Zn(II)(aq)]tot>or=4 mM relative to the Zn(II)/Ox/HM ternary system to formation of a ZnOx(s)-like precipitate at the hematite nanoparticle/water interface. 2. Quality control for digital mammography: Part II recommendations from the ACRIN DMIST trial International Nuclear Information System (INIS) Yaffe, Martin J.; Bloomquist, Aili K.; Mawdsley, Gordon E. 2006-01-01 The Digital Mammography Imaging Screening Trial (DMIST), conducted under the auspices of the American College of Radiology Imaging Network (ACRIN), is a clinical trial designed to compare the accuracy of digital versus screen-film mammography in a screening population [E. Pisano et al., ACRIN 6652--Digital vs. Screen-Film Mammography, ACRIN (2001)]. Part I of this work described the Quality Control program developed to ensure consistency and optimal operation of the digital equipment. For many of the tests, there were no failures during the 24 months imaging was performed in DMIST. When systems failed, they generally did so suddenly rather than through gradual deterioration of performance. In this part, the utility and effectiveness of those tests are considered. This suggests that after verification of proper operation, routine extensive testing would be of minimal value. A recommended set of tests is presented including additional and improved tests, which we believe meet the intent and spirit of the Mammography Quality Standards Act regulations to ensure that full-field digital mammography systems are functioning correctly, and consistently producing mammograms of excellent image quality 3. Os principais delineamentos na Epidemiologia – Ensaios Clínicos (Parte II Directory of Open Access Journals (Sweden) Luciano Santos Pinto Guimarães 2014-01-01 4. Genetic and epigenetic features in radiation sensitivity. Part II: implications for clinical practice and radiation protection International Nuclear Information System (INIS) Bourguignon, Michel H.; Gisone, Pablo A.; Perez, Maria R.; Michelin, Severino; Dubner, Diana; Giorgio, Marina di; Carosella, Edgardo D. 2005-01-01 5. Structural Characterization of Lecithin-Stabilized Tetracosane Lipid Nanoparticles. Part II: Suspensions. Science.gov (United States) Schmiele, M; Busch, S; Morhenn, H; Schindler, T; Schmutzler, T; Schweins, R; Lindner, P; Boesecke, P; Westermann, M; Steiniger, F; Funari, Sérgio S; Unruh, T 2016-06-23 Using photon correlation spectroscopy, transmission electron microscopy, microcalorimetry, wide-angle X-ray scattering (WAXS), and small-angle X-ray and neutron scattering (SAXS, SANS), the structure of 1,2-dimyristoyl-sn-glycero-3-phosphocholine (DMPC)-stabilized colloidal tetracosane suspensions was studied from the molecular level to the microscopic scale as a function of the temperature. The platelike nanocrystals exhibit for tetracosane an unusual orthorhombic low-temperature crystal structure. 
The corresponding WAXS pattern can be reproduced with a predicted orthorhombic unit cell (space group Pca21), which usually occurs only for much longer even-numbered n-alkanes. Special emphasis was placed on the structure of the DMPC stabilizer layer covering the nanocrystals. Their structure was investigated by SAXS and SANS, using suspensions with different neutron scattering contrasts. As for the emulsions in Part I , the crystallized nanoparticles are covered by a DMPC monolayer. Their significant smaller thickness of 10.5 Å (for the emulsions in Part I : 16 Å) could be related to a more tilted orientation of the DMPC molecules to cover the expanded surface of the crystallized nanoparticles. 6. A computational chemistry study on friction of h-MoS₂. Part II. Friction anisotropy. Science.gov (United States) Onodera, Tasuku; Morita, Yusuke; Nagumo, Ryo; Miura, Ryuji; Suzuki, Ai; Tsuboi, Hideyuki; Hatakeyama, Nozomu; Endou, Akira; Takaba, Hiromitsu; Dassenoy, Fabrice; Minfray, Clotilde; Joly-Pottuz, Lucile; Kubo, Momoji; Martin, Jean-Michel; Miyamoto, Akira 2010-12-09 In this work, the friction anisotropy of hexagonal MoS(2) (a well-known lamellar compound) was theoretically investigated. A molecular dynamics method was adopted to study the dynamical friction of two-layered MoS(2) sheets at atomistic level. Rotational disorder was depicted by rotating one layer and was changed from 0° to 60°, in 5° intervals. The superimposed structures with misfit angle of 0° and 60° are commensurate, and others are incommensurate. Friction dynamics was simulated by applying an external pressure and a sliding speed to the model. During friction simulation, the incommensurate structures showed extremely low friction due to cancellation of the atomic force in the sliding direction, leading to smooth motion. On the other hand, in commensurate situations, all the atoms in the sliding part were overcoming the atoms in counterpart at the same time while the atomic forces were acted in the same direction, leading to 100 times larger friction than incommensurate situation. Thus, lubrication by MoS(2) strongly depended on its interlayer contacts in the atomic scale. According to part I of this paper [Onodera, T., et al. J. Phys. Chem. B 2009, 113, 16526-16536], interlayer sliding was source of friction reduction by MoS(2) and was originally derived by its material property (interlayer Coulombic interaction). In addition to this interlayer sliding, the rotational disorder was also important to achieve low friction state. 7. The multigene families of actinoporins (part II): Strategies for heterologous production in Escherichia coli. Science.gov (United States) Valle, A; Hervis, Y P; Socas, L B P; Canet, L; Faheem, M; Barbosa, J A R G; Lanio, M E; Pazos, I F 2016-08-01 The sea anemone venom contains pore-forming proteins (PFP) named actinoporins, due to their purification from organisms belonging to Actiniaria order and its ability to form pores in sphingomyelin-containing membranes. Actinoporins are generally basic, monomeric and single-domain small proteins (∼20 kDa) that are classified as α-type PFP since the pore formation in membranes occur through α-helical elements. Different actinoporin isoforms have been isolated from most of the anemones species, as was analyzed in the first part of this review. Several actinoporin full-length genes have been identified from genomic-DNA libraries or messenger RNA. 
Since the actinoporins lack carbohydrates and disulfide bridges, their expression in bacterial systems is suitable. The actinoporins heterologous expression in Escherichia coli simplifies their production, replaces the natural source reducing the ecological damage in anemone populations, and allows the production of site-specific mutants for the study of the structure-function relationship. In this second part of the review, the strategies for heterologous production of actinoporins in Escherichia coli are analyzed, as well as the different approaches used for their purification. The activity of the recombinant proteins with respect to the wild-type is also reviewed. Copyright © 2016 Elsevier Ltd. All rights reserved. 8. A Review of CAM for Procedural Pain in Infancy: Part II. Other Interventions Directory of Open Access Journals (Sweden) Jennie C. I. Tsao 2008-01-01 Full Text Available This article is the second in a two-part series reviewing the empirical evidence for complementary and alternative medicine (CAM approaches for the management of pain related to medical procedures in infants up to 6 weeks of age. Part I of this series investigated the effects of sucrose with or without non-nutritive sucking (NNS. The present article examines other CAM interventions for procedural pain including music-based interventions, olfactory stimulation, kangaroo care and swaddling. Computerized databases were searched for relevant studies including prior reviews and primary trials. Preliminary support was revealed for the analgesic effects of the CAM modalities reviewed. However, the overall quality of the evidence for these approaches remains relatively weak. Additional well-designed trials incorporating rigorous methodology are required. Such investigations will assist in the development of evidence-based guidelines on the use of CAM interventions either alone or in concert with conventional approaches to provide safe, reliable analgesia for infant procedural pain. 9. Nuclear fission, today and tomorrow. From renaissance to technological breakthrough (generation IV) - Part II International Nuclear Information System (INIS) Van Goethem, Georges 2010-01-01 This paper is an overview of the current Euratom FP-7 research and training actions in innovative nuclear fission reactors and fuel cycle technologies, including partitioning and transmutation. It is based on the more than 40 invited lectures that were delivered by research project coordinators and by keynote speakers at the FISA-2009 Conference, organised by the European Commission DG Research/Euratom. The education and training programmes in nuclear fission and radiation protection are also discussed, aiming at continuously increasing the level of nuclear competences across the EU. It is necessary to consider the most recent nuclear fission technologies (Generations of Nuclear Power Plants): - GEN II: safety and reliability of nuclear facilities and energy independence; - GEN III: continuous improvement of safety and reliability, and increased industrial competitiveness in a growing energy market; - GEN IV: for increased sustainability, and proliferation resistance. The focus in this paper is on the design objectives and research issues associated to Generations IV systems that have been agreed upon internationally. Their benefits are discussed according to a series of ambitious criteria or technology goals established at the international level. 
One will have to produce not only electricity at lower costs but also heat at very high temperatures, while exploiting a maximum of fissile and fertile matters, and recycling all actinides, under safe and reliable conditions. Scientific viability studies and technological performance tests for each Generation IV system are now being carried out in many laboratories world-wide, in line with the intergovernmental GIF agreement. The ultimate phase of commercial deployment is foreseen for 2040. (orig.) 10. The evolution of the temperature field during cavity collapse in liquid nitromethane. Part II: reactive case Science.gov (United States) Michael, L.; Nikiforakis, N. 2018-02-01 This work is concerned with the effect of cavity collapse in non-ideal explosives as a means of controlling their sensitivity. The main objective is to understand the origin of localised temperature peaks (hot spots) which play a leading order role at the early stages of ignition. To this end, we perform two- and three-dimensional numerical simulations of shock-induced single gas-cavity collapse in liquid nitromethane. Ignition is the result of a complex interplay between fluid dynamics and exothermic chemical reaction. In the first part of this work, we focused on the hydrodynamic effects in the collapse process by switching off the reaction terms in the mathematical formulation. In this part, we reinstate the reactive terms and study the collapse of the cavity in the presence of chemical reactions. By using a multi-phase formulation which overcomes current challenges of cavity collapse modelling in reactive media, we account for the large density difference across the material interface without generating spurious temperature peaks, thus allowing the use of a temperature-based reaction rate law. The mathematical and physical models are validated against experimental and analytic data. In Part I, we demonstrated that, compared to experiments, the generated hot spots have a more complex topological structure and that additional hot spots arise in regions away from the cavity centreline. Here, we extend this by identifying which of the previously determined high-temperature regions in fact lead to ignition and comment on the reactive strength and reaction growth rate in the distinct hot spots. We demonstrate and quantify the sensitisation of nitromethane by the collapse of the isolated cavity by comparing the ignition times of nitromethane due to cavity collapse and the ignition time of the neat material. The ignition in both the centreline hot spots and the hot spots generated by Mach stems occurs in less than half the ignition time of the neat material. We compare 11. Analysis of the absorptive behavior of photopolymer materials. Part II. Experimental validation Science.gov (United States) Li, Haoyu; Qi, Yue; Tolstik, Elen; Guo, Jinxin; Sheridan, John T. 2015-01-01 In the first part of this paper, a model describing photopolymer materials, which incorporates both the physical electromagnetic and photochemical effects taking place, was developed. This model is now validated by applying it to fit experimental data for two different types of photopolymer materials. The first photopolymer material, acrylamide/polyvinyl alcohol, is studied when four photosensitizers are used, i.e. Erythrosine B, Eosin Y, Phloxine B and Rose Bengal. The second type of photopolymer material involves phenanthrenequinone in a polymethylmethacrylate matrix. 
Using our model, the values of physical parameters, are extracted by numerical fitting experimentally obtained normalized transmittance growth curves. Experimental data sets for different exposure intensities, dye concentrations, and exposure geometries are studied. The advantages of our approach are demonstrated and it is shown that the parameters proposed by us to quantify the absorptive behavior in our model are both physical and can be estimated. 12. Generational influences in academic emergency medicine: structure, function, and culture (Part II). Science.gov (United States) Mohr, Nicholas M; Smith-Coggins, Rebecca; Larrabee, Hollynn; Dyne, Pamela L; Promes, Susan B 2011-02-01 13. Exhaust Gas Temperature Measurements in Diagnostics of Turbocharged Marine Internal Combustion Engines Part II Dynamic Measurements Directory of Open Access Journals (Sweden) Korczewski Zbigniew 2016-01-01 Full Text Available The second part of the article describes the technology of marine engine diagnostics making use of dynamic measurements of the exhaust gas temperature. Little-known achievements of Prof. S. Rutkowski of the Naval College in Gdynia (now: Polish Naval Academy in this area are presented. A novel approach is proposed which consists in the use of the measured exhaust gas temperature dynamics for qualitative and quantitative assessment of the enthalpy flux of successive pressure pulses of the exhaust gas supplying the marine engine turbocompressor. General design assumptions are presented for the measuring and diagnostic system which makes use of a sheathed thermocouple installed in the engine exhaust gas manifold. The corrected thermal inertia of the thermocouple enables to reproduce a real time-history of exhaust gas temperature changes. 14. Study of a phase change energy storage using spherical capsules. Part II: Numerical modelling Energy Technology Data Exchange (ETDEWEB) Bedecarrats, J.P.; Castaing-Lasvignottes, J.; Strub, F.; Dumas, J.P. [Laboratoire de Thermique, Energetique et Procedes, Universite de Pau et des Pays de l' Adour, Avenue de l' Universite, BP 1155, 64013 Pau cedex (France) 2009-10-15 The objective of this work is the numerical study of an industrial process of energy storage which consists in the use of a cylindrical tank filled with encapsulated phase change materials (PCM). A particularity is present in this kind of processes; it concerns the delay of the crystallization of the PCM, called supercooling phenomenon. The development of the model for cold storage with heat transfer fluid flowing enables a detailed analysis of this process. The effects of different parameters on the behaviour of the tank, such as the inlet temperature, the flow rate, are examined when the tank is in vertical position. There is substantial agreement between the prediction and the experimental values already presented in part I. (author) 15. The Objective Structured Clinical Examination (OSCE): AMEE Guide No. 81. Part II: organisation & administration. Science.gov (United States) Khan, Kamran Z; Gaunt, Kathryn; Ramachandran, Sankaranarayanan; Pushkar, Piyush 2013-09-01 The organisation, administration and running of a successful OSCE programme need considerable knowledge, experience and planning. Different teams looking after various aspects of OSCE need to work collaboratively for an effective question bank development, examiner training and standardised patients' training. Quality assurance is an ongoing process taking place throughout the OSCE cycle. 
In order for the OSCE to generate reliable results it is essential to pay attention to each and every element of quality assurance, as poorly standardised patients, untrained examiners, poor quality questions and inappropriate scoring rubrics each will affect the reliability of the OSCE. The validity will also be influenced if the questions are not realistic and mapped against the learning outcomes of the teaching programme. This part of the Guide addresses all these important issues in order to help the reader setup and quality assure their new or existing OSCE programmes. 16. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency. Science.gov (United States) Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin 2014-01-01 This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the 17. An Update on the Hazards and Risks of Forensic Anthropology, Part II: Field and Laboratory Considerations. Science.gov (United States) Roberts, Lindsey G; Dabbs, Gretchen R; Spencer, Jessica R 2016-01-01 This paper focuses on potential hazards and risks to forensic anthropologists while working in the field and laboratory in North America. Much has changed since Galloway and Snodgrass published their seminal article addressing these issues. The increased number of forensic practitioners combined with new information about potential hazards calls for an updated review of these pathogens and chemicals. 
Discussion of pathogen hazards (Brucella, Borrelia burgdorferi, Yersinia pestis, Clostridium tetani and West Nile virus) includes important history, exposure routes, environmental survivability, early symptoms, treatments with corresponding morbidity and mortality rates, and decontamination measures. Additionally, data pertaining to the use of formaldehyde in the laboratory environment have resulted in updated safety regulations, and these are highlighted. These data should inform field and laboratory protocols. The hazards of working directly with human remains are discussed in a companion article, "An Update on the Hazards and Risks of Forensic Anthropology, Part I: Human Remains." © 2015 American Academy of Forensic Sciences. 18. OH-initiated oxidation of benzene - Part II. Influence of elevated NOx concentrations DEFF Research Database (Denmark) Klotz, B; Volkamer, R; Hurley, MD 2002-01-01 The present work extends part I of this series of papers, in which we investigated the phenol yields in the OH-initiated oxidation of benzene under conditions of low to moderate NOx concentrations, to elevated NOx levels. The products of the OH-initiated oxidation of benzene in 700-760 Torr of N2/O2 diluent at 297 +/- 4 K were investigated in 3 different photochemical reaction chambers. In situ spectroscopic techniques were employed for the detection of products, and the initial concentrations of benzene, NOx, and O2 were widely varied (by factors of 6300, 1500, and 13, respectively). In contrast to results from previous studies, a pronounced dependence of the product distribution on the NOx concentration was observed. The phenol yield decreases from approximately 50-60% in the presence of low NOx concentrations to lower values at elevated (10 000 ppb) NOx concentrations. In the presence of high... 19. Electrically controlled fluorescence quenching of quantum dots on monolayer Molybdenum Disulfide - Part II Science.gov (United States) Klots, Andrey; Prasai, Dhiraj; Newaz, A. K. M.; Niezgoda, Scott; Orfield, Noah; Rosenthal, Sandra; Jennings, Kane; Bolotin, Kirill 2015-03-01 In the second part of this talk, we investigate the mechanisms that enable energy exchange between semiconductor quantum dots (QDs) and two-dimensional (2D) materials. First, we study possible contributions from multiple mechanisms such as charge transfer, metallic screening, mechanical strain, and Förster resonant energy transfer (FRET). By implementing different 2D materials (graphene, MoS2, hexagonal boron nitride) and varying their thickness and the QD emission wavelengths, we demonstrate that QD fluorescence quenching is dominated by FRET. Next, we study the dependence of the FRET rate on electrostatic doping of 2D materials, focusing on the case of monolayer MoS2. We develop a simple model, which shows that moderate [...] QD photoluminescence intensity. Finally, we demonstrate that FRET can be used as an efficient spectroscopic tool that probes states in 2D materials that are not accessible via conventional absorption spectroscopy. 20. The nuclear engineering programmes at the Royal Military College of Canada. Part II Energy Technology Data Exchange (ETDEWEB) Bonin, H.W. [Royal Military College of Canada, Dept. of Chemistry and Chemical Engineering, Kingston, Ontario (Canada)] 2002-08-01 The coverage of the activities within the nuclear science and engineering programmes at RMC reveals the dynamism of the College, which is still growing at a fast rate.
Being the only completely bilingual university in Canada and a true national institution gathering students and staff from all parts of the country, RMC continues in its mission to support the Canadian Forces, the Department of National Defence, the people of Canada and Canadian industry, which includes the nuclear sector. It is in this spirit that the staff has been actively involved with organizations such as the Canadian Nuclear Society and the Canadian Nuclear Association, having hosted four of the Student conferences and three major topical conferences of the CNS. 1. Managing Returnable Containers Logistics - A Case Study Part II - Improving Visibility through Using Automatic Identification Technologies Directory of Open Access Journals (Sweden) Gretchen Meiser 2011-05-01 Full Text Available This case study is the result of a project conducted on behalf of a company that uses its own returnable containers to transport purchased parts from suppliers. The objective of this project was to develop a proposal to enable the company to more effectively track and manage its returnable containers. The research activities in support of this project included (1) the analysis and documentation of the physical flow and the information flow associated with the containers and (2) the investigation of new technologies to improve the automatic identification and tracking of containers. This paper explains the automatic identification technologies and the important criteria for selection. A companion paper details the flow of information and containers within the logistics chain, and it identifies areas for improving the management of the containers. 2. STEVENS-JOHNSON SYNDROME - TOXIC EPIDERMAL NECROLYSIS IN CHILDREN. PART II. SYSTEMIC, LOCAL TREATMENT Directory of Open Access Journals (Sweden) V.F. Zhernosek 2011-01-01 Full Text Available The second part of the article concerning Stevens-Johnson syndrome - toxic epidermal necrolysis (SJS-TEN) is devoted to the treatment of this disease. Modern approaches to the use of systemic agents - antibacterial and antiviral drugs, analgesics and sedatives, and anticoagulants - are discussed in detail. Rules for the use of these drugs, depending on the patient's state and the etiology of SJS-TEN, are set out. The basic principles of fluid therapy for rehydration and prevention of dehydration are also presented. Particular attention is paid to local therapy - treatment of the mucous membranes and skin lesions. Key words: Stevens-Johnson syndrome, toxic epidermal necrolysis, children, antibiotic therapy, topical treatment. 3. The nuclear engineering programmes at the Royal Military College of Canada. Part II International Nuclear Information System (INIS) Bonin, H.W. 2002-01-01 The coverage of the activities within the nuclear science and engineering programmes at RMC reveals the dynamism of the College, which is still growing at a fast rate. Being the only completely bilingual university in Canada and a true national institution gathering students and staff from all parts of the country, RMC continues in its mission to support the Canadian Forces, the Department of National Defence, the people of Canada and Canadian industry, which includes the nuclear sector. It is in this spirit that the staff has been actively involved with organizations such as the Canadian Nuclear Society and the Canadian Nuclear Association, having hosted four of the Student conferences and three major topical conferences of the CNS. 4.
The Road to a Court of Appeal—Part II: Distinguishing Features and Establishment DEFF Research Database (Denmark) Butler, Graham 2015-01-01 The creation of a new court takes a considerable effort from a number of branches of the State in formulating the correct path for its establishment to proceed, and it has long-lasting effects on the judicial system of the state. In this article, the history of a Court of Appeal is set out, before discussing the referendum [...] to amend the Constitution to allow for it. This is followed by looking at some of the provisions of the Amendment Bill that was put before both the Oireachtas and the people, before looking at three distinguishing features of the Bill, and finally discussing its establishment in 2014, along with analysis of the road taken. By mapping the sequence of events that led to the creation of the new court, the complexity that goes into large-scale judicial restructuring can begin to be fully appreciated. This is the second and concluding part of the article, covering the distinguishing features and establishment [...]. 5. How Clean Are Hotel Rooms? Part II: Examining the Concept of Cleanliness Standards. Science.gov (United States) Almanza, Barbara A; Kirsch, Katie; Kline, Sheryl Fried; Sirsat, Sujata; Stroia, Olivia; Choi, Jin Kyung; Neal, Jay 2015-01-01 Hotel room cleanliness is assessed by observation and not by microbial assessment, even though recent reports suggest that infections may be acquired while staying in hotel rooms. Exploratory research in the first part of the authors' study was conducted to determine whether contamination of hotel rooms occurs and whether visual assessments are accurate indicators of hotel room cleanliness. Data suggested the presence of microbial contamination that was not reflected in visual assessments. Unfortunately, no standards exist for interpreting microbiological data and other indicators of cleanliness in hotel rooms. The purpose of the second half of the authors' study was to examine cleanliness standards in other industries to see if they might suggest standards for hotels. Results of the authors' study indicate that standards from other related industries do not provide analogous criteria, but they do provide suggestions for further research. 6. The Systemic Products as a Source of Competitive Advantage on Healthcare Sector Example. Part II Directory of Open Access Journals (Sweden) Izabela SZTANGRET 2015-12-01 Full Text Available In the healthcare sector, different healthcare providers, such as home care, primary care, pharmacies and hospital clinics, but also financial institutions, collaborate in order to increase value for patients, such as better health status, more comprehensive services, high quality of services, and an increased feeling of safety. By creating flexible value networks, healthcare providers and additional actors create value through collaboration. The purpose of this article is to identify the specific character of the systemic healthcare product created in the synergy relations of medical entities in the area of new ways of meeting customers' needs. A critical analysis of the literature in the studied field is conducted in the article; furthermore, a qualitative method of empirical study (case study) and a quantitative one (online questionnaire) are applied for practical illustration of the described processes and phenomena. The article is the second part of the study. 7. Cutaneous involvement in the deep mycoses: A review. Part II - Systemic mycoses.
Science.gov (United States) Carrasco-Zuber, J E; Navarrete-Dechent, C; Bonifaz, A; Fich, F; Vial-Letelier, V; Berroeta-Mauriziano, D 2016-12-01 In the second part of this review on the deep mycoses, we describe the main systemic mycoses-paracoccidioidomycosis, coccidioidomycosis, histoplasmosis, mucormycosis, and cryptococcosis-and their cutaneous manifestations. Skin lesions are only occasionally seen in deep systemic mycoses either directly, when the skin is the route of entry for the fungus, or indirectly, when the infection has spread from a deeper focus. These cutaneous signs are often the only clue to the presence of a potentially fatal infection. As with the subcutaneous mycoses, early diagnosis and treatment is important, but in this case, even more so. Copyright © 2016 AEDV. Publicado por Elsevier España, S.L.U. All rights reserved. 8. Molecular biology - Part II: Beneficial liaisons: Radiobiology meets cellular and molecular biology International Nuclear Information System (INIS) Stevenson, Mary Ann; Coleman, C. Norman 1997-01-01 9. A stable partitioned FSI algorithm for rigid bodies and incompressible flow. Part II: General formulation Science.gov (United States) Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.; Tang, Qi 2017-08-01 A stable partitioned algorithm is developed for fluid-structure interaction (FSI) problems involving viscous incompressible flow and rigid bodies. This added-mass partitioned (AMP) algorithm remains stable, without sub-iterations, for light and even zero mass rigid bodies when added-mass and viscous added-damping effects are large. The scheme is based on a generalized Robin interface condition for the fluid pressure that includes terms involving the linear acceleration and angular acceleration of the rigid body. Added mass effects are handled in the Robin condition by inclusion of a boundary integral term that depends on the pressure. Added-damping effects due to the viscous shear forces on the body are treated by inclusion of added-damping tensors that are derived through a linearization of the integrals defining the force and torque. Added-damping effects may be important at low Reynolds number, or, for example, in the case of a rotating cylinder or rotating sphere when the rotational moments of inertia are small. In this second part of a two-part series, the general formulation of the AMP scheme is presented including the form of the AMP interface conditions and added-damping tensors for general geometries. A fully second-order accurate implementation of the AMP scheme is developed in two dimensions based on a fractional-step method for the incompressible Navier-Stokes equations using finite difference methods and overlapping grids to handle the moving geometry. The numerical scheme is verified on a number of difficult benchmark problems. 10. AICRG, Part II: Crestal bone loss associated with the Ankylos implant: loading to 36 months. Science.gov (United States) Chou, Cherng-Tzeh; Morris, Harold F; Ochi, Shigeru; Walker, Lori; DesRosiers, Deborah 2004-01-01 11. Ink dating part II: Interpretation of results in a legal perspective. Science.gov (United States) Koenig, Agnès; Weyermann, Céline 2018-01-01 The development of an ink dating method requires an important investment of resources in order to step from the monitoring of ink ageing on paper to the determination of the actual age of a questioned ink entry. 
This article aimed to develop and evaluate the potential of three interpretation models for dating ink entries from a legal perspective: (1) the threshold model, comparing analytical results to tabulated values in order to determine the maximal possible age of an ink entry; (2) the trend tests, focusing on the "ageing status" of an ink entry; and (3) the likelihood ratio calculation, comparing the probabilities of observing the results under at least two alternative hypotheses. This is the first report showing ink dating interpretation results on a ballpoint pen ink reference population. In the first part of this paper, three ageing parameters were selected as promising from the population of 25 ink entries aged from 4 to 304 days: the quantity of phenoxyethanol (PE), the difference between the PE quantities contained in a naturally aged sample and an artificially aged sample (R_NORM), and the solvent loss ratio (R%). In the current part, each model was tested using the three selected ageing parameters. Results showed that threshold definition remains a simple model that is easily applicable in practice, but that the risk of false positives cannot be completely avoided without significantly reducing the feasibility of the ink dating approaches. The trend tests from the literature showed unreliable results, and an alternative had to be developed, which yielded encouraging results. The likelihood ratio calculation added a degree of certainty to the ink dating conclusion in comparison to the threshold approach. The proposed model remains quite simple to apply, but it should be further developed in order to yield reliable results in practice. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved. 12. ALTERNATIVE BINDERS TO BENTONITE FOR IRON ORE PELLETIZING: PART II: EFFECTS ON METALLURGICAL AND CHEMICAL PROPERTIES Directory of Open Access Journals (Sweden) Osman Sivrikaya 2014-07-01 Full Text Available This study was started to find alternative binders to bentonite and to recover the low preheated and fired mechanical strengths of pellets bonded with organic binders. Bentonite is considered a chemical impurity for pellet chemistry due to its acid constituents (SiO2 and Al2O3). The addition of silica-alumina-bearing binders is especially detrimental for iron ore concentrates with high acidic content. Organic binders are the most studied alternatives since they are free of silica. Although they yield pellets with good wet strength, they have found limited application in industry since they fail to give sufficient physical and mechanical strength to preheated and fired pellets. It was investigated how the insufficient preheated and fired pellet strengths can be improved when organic binders are used as the binder. The addition of a slag-bonding/strength-increasing constituent (free of acidic contents) into the pellet feed was proposed to provide pellet strength when organic binders are used. The addition of boron compounds such as colemanite, tincal, borax pentahydrate and boric acid, together with organic binders such as CMC, starch, dextrin and some other organic-based binders, into magnetite and hematite pellet mixtures was tested.
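Stepping back to the ink-dating entry (item 11) above: of the three interpretation models it discusses, the likelihood-ratio approach weighs how probable an observed ageing parameter (for example the solvent loss ratio R%) is under two competing hypotheses about the age of the entry. The sketch below assumes, purely for illustration, that the parameter is normally distributed under each hypothesis with summaries taken from a reference population; the numbers and function names are invented and the distributions actually used by the authors may differ.

```python
from math import exp, pi, sqrt

def normal_pdf(x, mean, sd):
    return exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * sqrt(2.0 * pi))

def likelihood_ratio(observed, h1, h2):
    """LR > 1 supports H1 (e.g. 'the entry is younger than the reference age'),
    LR < 1 supports H2 (e.g. 'the entry is older')."""
    return normal_pdf(observed, *h1) / normal_pdf(observed, *h2)

# Hypothetical reference-population summaries of the solvent loss ratio R% (mean, sd).
H1_young = (55.0, 10.0)   # assumed distribution of R% for "young" entries
H2_old   = (25.0, 8.0)    # assumed distribution of R% for "old" entries

lr = likelihood_ratio(48.0, H1_young, H2_old)
print(f"LR = {lr:.1f}: the observation is about {lr:.0f}x more probable under H1 than under H2")
```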
After the first part of this study determined that the addition of boron compounds is beneficial in recovering the low physical and mechanical pellet qualities, this second part presents the metallurgical and chemical properties (reducibility, swelling index, microstructure, mineralogy and chemical content) of pellets produced with combined binders (an organic binder plus a boron compound). The metallurgical and chemical test results showed that good-quality product pellets can be produced with combined binders when compared with bentonite-bonded pellets. Hence, the suggested combined binders can be used in place of bentonite in iron ore pelletizing without compromising the pellet chemistry. 13. Understanding HIV infection for the design of a therapeutic vaccine. Part II: Vaccination strategies for HIV. Science.gov (United States) de Goede, A L; Vulto, A G; Osterhaus, A D M E; Gruters, R A 2015-05-01 HIV infection leads to a gradual loss of CD4(+) T lymphocytes, compromising immune competence and leading to progression to AIDS. Effective treatment with combined antiretroviral drugs (cART) decreases the viral load below detectable levels but is not able to eliminate the virus from the body. The success of cART is frustrated by the requirement of expensive lifelong adherence, accumulating drug toxicities and chronic immune activation resulting in an increased risk of several non-AIDS disorders, even when viral replication is suppressed. Therefore, there is a strong need for therapeutic strategies that provide an alternative to cART. Immunotherapy, or therapeutic vaccination, aims to increase existing immune responses against HIV or to induce de novo immune responses. These immune responses should provide a functional cure by controlling viral replication and preventing disease progression in the absence of cART. The key difficulty in the development of an HIV vaccine is our ignorance of the immune responses that control viral replication, and thus of how these responses can be elicited and how they can be monitored. Part one of this review provides an extensive overview of the (patho)physiology of HIV infection. It describes the structure and replication cycle of HIV, the epidemiology and pathogenesis of HIV infection, and the innate and adaptive immune responses against HIV. Part two of this review discusses therapeutic options for HIV. Prevention modalities and antiretroviral therapy are briefly touched upon, after which an extensive overview of vaccination strategies for HIV is provided, including the choice of immunogens and delivery strategies. Copyright © 2014. Published by Elsevier Masson SAS. 14. Marine Hydrokinetic Energy Site Identification and Ranking Methodology Part II: Tidal Energy Energy Technology Data Exchange (ETDEWEB) Kilcher, Levi [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Thresher, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States)]; Tinnesand, Heidi [National Renewable Energy Lab. (NREL), Golden, CO (United States)] 2016-10-01 Marine hydrokinetic energy is a promising and growing piece of the renewable energy sector that offers high predictability and additional energy sources for a diversified energy economy. This report investigates the market opportunities for tidal energy along the U.S. coastlines. It is part two of a two-part investigation into the United States' two largest marine hydrokinetic resources (wave and tidal).
Tidal energy technology is still an emerging form of renewable energy for which large-scale grid-connected project costs are currently poorly defined. Ideally, device designers would like to know the resource conditions at economical project sites so they can optimize device designs. On the other hand, project developers need detailed device cost data to identify sites where projects are economical. That is, device design and siting are, to some extent, a coupled problem. This work describes a methodology for identifying likely deployment locations based on a set of criteria that tidal energy experts in industry, academia, and national laboratories agree are likely to be important factors for all technology types. Several factors that will affect tidal project costs and siting have not been considered here -- including permitting constraints, conflicting use, seasonal resource variability, extreme event likelihood, and distance to ports -- because consistent data are unavailable or technology-independent scoring could not be identified. As the industry continues to mature and converge around a subset of device archetypes with well-defined costs, more precise investigations of project siting that include these factors will be possible. For now, these results provide a high-level guide pointing to the regions where markets and resource will one day support commercial tidal energy projects. 15. Molecular biology - Part II: Beneficial liaisons: Radiobiology meets cellular and molecular biology International Nuclear Information System (INIS) Stevenson, Mary Ann; Coleman, C. Norman 1996-01-01 16. Parity violation induced by weak neutral currents in atomic physics. Part II International Nuclear Information System (INIS) Bouchiat, M.A.; Bouchiat, C. 1975-01-01 The first part of this paper gives a detailed account of the evaluation of the electric dipole amplitude induced in alkali one-photon S-S transitions, by the parity violating electron-nucleus short range potential associated with the weak neutral currents. Two methods are presented: the first involves an explicit sum over the contributions of the P-states admixed with the S-states and incorporates the best information available on S-P electric dipole amplitudes. The second method, mathematically more elegant, avoids with the help of Green's function techniques any explicit sum over the P states, and, provided that some spin-orbit corrections are neglected, leads to a fairly simple formula involving Coulomb integrals tabulated in the literature and the interpolated quantum defects for S and P waves. The second part is devoted to a description of possible ways to detect parity violation induced in radiative S-S transitions, with a brief discussion of physical processes which could be a source of experimental difficulty. The last section of the paper deals with a theoretical analysis of the influence of a static electric field on the radiative S-S transitions. An evaluation of the induced electric dipole amplitude in the case of cesium indicates that it will compete with the magnetic dipole amplitude for electric fields larger than 10 V/cm. An interference effect between these two amplitudes gives rise to an electronic polarization in the final atomic state proportional to the vector product of the static electric field by the photon momentum which, in a typical case, could be as large as 64%; the measurement of this interesting and rather peculiar effect will lead to a determination of the sign of the magnetic dipole amplitude. 
Moreover, parity violation could manifest itself through a dependence of this electron polarization on the state of circular polarization of the incident photon. 17. Ansiedad ante la muerte en enfermeras de Atención Sociosanitaria: datos y significados [Death anxiety among social healthcare nurses: data and meanings] Directory of Open Access Journals (Sweden) 18. Modelo computacional para suporte à decisão em áreas irrigadas. Parte II: testes e aplicação / Computer model for decision support in irrigated areas. Part II: tests and application Directory of Open Access Journals (Sweden) Paulo A. Ferreira 2006-12-01 19. The Narrative Reproduction of Contemporary Montenegrin Identity in the Process of Euro-Atlantic Integrations (Part II) Directory of Open Access Journals (Sweden) Branko Banović 2016-02-01 If we conceptualize reality as a large narrative we "build ourselves into" as social beings, and consider social activities and identities as narratively mediated, the full extent of the capacity of narratives in the creation, shaping, transmission and reconstruction of contemporary social identities, as well as in the reproduction of the concept of nation in everyday life, becomes apparent. The imagined Euro-Atlantic future of Montenegro demands certain narrative interpretations of the past, which in later stages tend to become meta-narratives susceptible to consensus. The linkage of significant historical events to the process of Euro-Atlantic integration of Montenegro is performed through different meta-discursive practices, most often through ceremonial evocations of memories of significant events from the recent as well as the more distant history of Montenegro. In this context, celebrations of Statehood Day and Independence Day are especially important, as they serve as reminders of the decisions of the Congress of Berlin, the Podgorica Assembly, the antifascist struggle of World War II and the independence of Montenegro attained through the referendum held in 2006. The clearly defined key points, along with the logical coherence the narrative is based on, provide the narrative with a certain "flexibility" which enables it to take in new elements. Narrative interpretations of the past have a significant role in the reproduction of the nation, as well as in the shaping and consolidation of a desirable national identity, while the established narrative continuity between the past, present and imagined Euro-Atlantic future of Montenegro emerges as the "official" mediator in the reproduction of contemporary Montenegrin identity in the process of Euro-Atlantic integrations. In order to fully comprehend this narrative, it is advisable to conceptualize it in both a synchronic and a diachronic perspective, as can be shown in two charts. 20. Indicadores de ansiedad en el DFH y rasgos de personalidad en niños: un estudio de validez [Anxiety indicators in the human figure drawing (DFH) and personality traits in children: a validity study] Directory of Open Access Journals (Sweden) Marcos Antonio Batista 2014-01-01
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5112578272819519, "perplexity": 19238.8998089941}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863901.24/warc/CC-MAIN-20180521004325-20180521024325-00408.warc.gz"}
https://www.scribd.com/document/64225489/Bayesian-Macro
# Bayesian Macroeconometrics

Marco Del Negro (Federal Reserve Bank of New York) and Frank Schorfheide* (University of Pennsylvania, CEPR and NBER)

April 18, 2010. Prepared for the Handbook of Bayesian Econometrics.

*Correspondence: Marco Del Negro: Research Department, Federal Reserve Bank of New York, 33 Liberty Street, New York NY 10045: [email protected]. Frank Schorfheide: Department of Economics, 3718 Locust Walk, University of Pennsylvania, Philadelphia, PA 19104-6297. Email: [email protected]. The views expressed in this chapter do not necessarily reflect those of the Federal Reserve Bank of New York or the Federal Reserve System. Ed Herbst and Maxym Kryshko provided excellent research assistance. We are thankful for the feedback received from the editors of the Handbook, John Geweke, Gary Koop, and Herman van Dijk, as well as comments by Giorgio Primiceri, Dan Waggoner, and Tao Zha.

Contents

1 Introduction
  1.1 Challenges for Inference and Decision Making
  1.2 How Can Bayesian Analysis Help?
  1.3 Outline of this Chapter

2 Vector Autoregressions
  2.1 A Reduced-Form VAR
  2.2 Dummy Observations and the Minnesota Prior
  2.3 A Second Reduced-Form VAR
  2.4 Structural VARs
  2.5 Further VAR Topics

3 VARs with Reduced-Rank Restrictions
  3.1 Cointegration Restrictions
  3.2 Bayesian Inference with Gaussian Prior for β
  3.3 Further Research on Bayesian Cointegration Models

4 Dynamic Stochastic General Equilibrium Models
  4.1 A Prototypical DSGE Model
  4.2 Model Solution and State-Space Form
  4.3 Bayesian Inference
  4.4 Extensions I: Indeterminacy
  4.5 Extensions II: Stochastic Volatility
  4.6 Extension III: General Nonlinear DSGE Models
  4.7 DSGE Model Evaluation
  4.8 DSGE Models in Applied Work

5 Time-Varying Parameters Models
  5.1 Models with Autoregressive Coefficients
  5.2 Models with Markov-Switching Parameters
  5.3 Applications of Bayesian TVP Models

6 Models for Data-Rich Environments
  6.1 Restricted High-Dimensional VARs
  6.2 Dynamic Factor Models

7 Model Uncertainty
  7.1 Posterior Model Probabilities and Model Selection
  7.2 Decision Making and Inference with Multiple Models
  7.3 Difficulties in Decision-Making with Multiple Models

obtained from theoretical considerations. Some questions require high-dimensional empirical models. Answers to some questions.
such as what are the main driving forces of business cycles.1 Challenges for Inference and Decision Making Unfortunately. 2010 1 1 Introduction One of the goals of macroeconometric analysis is to provide quantitative answers to substantive macroeconomic questions. for instance. require at least a minimal set of restrictions. because of changes in economic policies. The study of international comovements is often based on highly parameterized multicountry vector autoregressive models. High-dimensional models are also necessary in applications in which it is reasonable to believe that parameters evolve over time. Other questions. Thus. For instance. Thus. such as whether gross domestic product (GDP) will decline over the next two quarters. can be obtained with univariate time-series models by simply exploiting serial correlations. Many macroeconomists have a strong preference for models with a high degree of theoretical coherence such as dynamic stochastic general equilibrium (DSGE) models. 1.Del Negro. For instance. that allow the identification of structural disturbances in a multivariate time-series model. In these models. macroeconometricians often face a shortage of observations necessary for providing precise answers. documenting the uncertainty associated with empirical findings or predictions is of first-order importance for scientific reporting. but they do demand identification restrictions that are not selfevident and that are highly contested in the empirical literature. macroeconometricians might be confronted with questions demanding a sophisticated theoretical model that is able to predict how agents adjust their behavior in response to new economic policies. the analysis of domestic business cycles might involve processing information from a large cross section of macroeconomic and financial variables. sample information alone is often insufficient to enable sharp inference about model parameters and implications. Schorfheide – Bayesian Macroeconometrics: April 18. an unambiguous measurement of the quantitative response of output and inflation to an unexpected reduction in the federal funds rate remains elusive. Finally. such as changes in monetary or fiscal policy. Other questions do not necessarily require a very densely parameterized empirical model. decision rules of economic agents are derived from assumptions . In any sample of realistic size. In the context of time-varying coefficient models. rational expectations.2 How Can Bayesian Analysis Help? In Bayesian inference. and competitive equilibrium. to the extent that the prior is based on nonsample information. Thus. but a theoretically coherent model is required for the analysis of a particular economic policy. Thus. likelihood functions for empirical models with a strong degree of theoretical coherence tend to be more restrictive than likelihood functions associated with atheoretical models. This combination of information sets is prominently used in the context of DSGE model inference in Section 4. this means that the functional forms and parameters of equations that describe the behavior of economic agents are tightly restricted by optimality and equilibrium conditions. there will be a shortage of information for determining the model coefficients. Schorfheide – Bayesian Macroeconometrics: April 18. Examples include the vector autoregressions (VARs) with time-varying coefficients in Section 5 and the multicountry VARs considered in Section 6. 
a prior distribution is updated by sample information contained in the likelihood function to form a posterior distribution. it provides the ideal framework for combining different sources of information and thereby sharpening inference in macroeconometric analysis. but only gradually.Del Negro. Many macroeconometric models are richly parameterized. or that they change frequently. These sources might include microeconometric panel studies that are informative about aggregate elasticities or long-run averages of macroeconomic variables that are not included in the likelihood function because the DSGE model under consideration is too stylized to be able to explain their cyclical fluctuations. Through informative prior distributions. leading to very imprecise inference and diffuse predictive distributions. 2010 2 about agents’ preferences and production technologies and some fundamental principles such as intertemporal optimization. A challenge arises if the data favor the atheoretical model and the atheoretical model generates more accurate forecasts. Bayesian DSGE model inference can draw from a wide range of data sources that are (at least approximately) independent of the sample information. but by a potentially large amount. Such assumptions can be conveniently imposed by treating the sequence of model parameters as a . In practice. it is often appealing to conduct inference under the assumption that either coefficient change only infrequently. 1. which might be undesirable. the lack of identification poses no conceptual problem in a Bayesian framework. which enter the likelihood function. Schorfheide – Bayesian Macroeconometrics: April 18. Predictive distributions of future observations such as aggregate output. Unfortunately. and an orthogonal matrix Ω. as long as the joint prior distribution of reduced-form and nonidentifiable parameters is proper. 2010 3 stochastic process. Ω is not identifiable based on the sample information. inflation. variance. In this sense.Del Negro. To the extent that the substantive analysis requires a researcher to consider multiple theoretical and empirical frameworks. which does not enter the likelihood function. so is the joint posterior distribution. which is of course nothing but a prior distribution that can be updated with the likelihood function. namely as random variables. Identification issues also arise in the context of DSGE models. Since shocks and parameters are treated symmetrically in a Bayesian framework. one could of course set many coefficients equal to zero or impose the condition that the same coefficient interacts with multiple regressors. However. which can be easily incorporated through probability distributions for those coefficients that are “centered” at the desired restrictions but that have a small. the conditional distribution of Ω given the reduced-form parameters will not be updated. accounting for these two sources of uncertainty simultaneously is conceptually straightforward. An extreme version of lack of sample information arises in the context of structural VARs. An important and empirically successful example of such a prior is the Minnesota prior discussed in Section 2. which are studied in Section 2. To reduce the number of parameters in a high-dimensional VAR. yet nonzero. meaning that the total probability mass is one. In this case. Conceptually more appealing is the use of soft restrictions. 
it does pose a challenge: it becomes more important to document which aspects of the prior distribution are not updated by the likelihood function and to recognize the extreme sensitivity of those aspects to the specification of the prior distribution. such hard restrictions rule out the existence of certain spillover effects. Thus. These distributions need to account for uncertainty about realizations of structural shocks as well as uncertainty associated with parameter estimates. Bayesian analysis allows the researcher to assign probabilities to competing model specifications . and its conditional posterior is identical to the conditional prior. and interest rates are important for macroeconomic forecasts and policy decisions. Structural VARs can be parameterized in terms of reduced-form parameters. In general. This idea is explored in more detail in Section 4. to center a prior distribution on a more flexible reference model. While many macroeconomic time series are well described by stochastic trend . distinguishing between reduced-form and structural VARs. With posterior model probabilities in hand. Bayesian methods offer a rich tool kit for linking structural econometric models to more densely parameterized reference models. For instance. The DSGE models discussed in Section 4 provide an example. Schorfheide – Bayesian Macroeconometrics: April 18. As an empirical illustration. 2010 4 and update these probabilities in view of the data. one could use the restrictions associated with the theoretically coherent DSGE model only loosely.3 Outline of this Chapter Throughout this chapter. we devote the remainder of Section 2 is devoted to a discussion of advanced topics such as inference in restricted or overidentified VARs. Reduced-form VARs essentially summarize autocovariance properties of vector time series and can also be used to generate multivariate forecasts. changes in monetary policy unanticipated by the public. Nonetheless. in practice posterior model probabilities often favor more flexible. we will emphasize multivariate models that can capture comovements of macroeconomic time series. 1.Del Negro. More useful for substantive empirical work in macroeconomics are so-called structural VARs. Throughout this chapter. nonstructural time-series models such as VARs. inference and decisions can be based on model averages (section 7). Predictions of how economic agents would behave under counterfactual economic policies never previously observed require empirical models with a large degree of theoretical coherence. After discussing various identification schemes and their implementation. Section 3 is devoted to VARs with explicit restrictions on the long-run dynamics. we will encounter a large number of variants of VARs (sections 2 and 3) and DSGE models (section 4) that potentially differ in their economic implications. We will begin with a discussion of vector autoregressive models in Section 2. As mentioned earlier. that is. Much of the structural VAR literature has focused on studying the propagation of monetary policy shocks. in which the innovations do not correspond to one-step-ahead forecast errors but instead are interpreted as structural shocks. we measure the effects of an unanticipated change in monetary policy using a four-variable VAR. Schorfheide – Bayesian Macroeconometrics: April 18. Eichenbaum. This observation is consistent with a widely used version of the neoclassical growth model (King. Plosser. output and investment data. and Rebelo (1988)). 
as pointed out by Sims and Uhlig (1991).S. written as so-called vector error correction models (VECM). One can impose such common trends in a VAR by restricting some of the eigenvalues of the characteristic polynomial to unity. uses them to regularize or smooth the likelihood function of a cointegration model in areas of the parameter space in which it is very nonelliptical. Modern dynamic macroeconomic theory implies fairly tight cross-equation restrictions for vector autoregressive processes.Del Negro. Plosser. instead of using priors as a tool to incorporate additional information. While frequentist analysis of nonstationary time-series models requires a different set of statistical tools. and in Section 4 we turn to Bayesian inference with DSGE models. Most of the controversies are related to the specification of prior distributions. in many countries the ratio (or log difference) of aggregate consumption and investment is stationary. The term DSGE model is typically used to refer to a broad class that spans the standard neoclassical growth model discussed in King. Moreover. and Evans (2005). However. these stochastic trends are often common to several time series. we also discuss an important strand of the literature that. agents potentially face uncertainty with respect to total factor productivity. Nonetheless. the DSGE model generates a joint probability distribution for the endogenous model variables such . given the specification of preferences and technology. for instance. VARs with eigenvalue restrictions. For example. the shape of the likelihood function is largely unaffected by the presence of unit roots in autoregressive models. or the nominal interest rate set by a central bank. This uncertainty is generated by exogenous stochastic processes or shocks that shift technology or generate unanticipated deviations from a central bank’s interest-rate feedback rule. Conditional on the specified distribution of the exogenous shocks. the Bayesian literature has experienced a lively debate about how to best analyze VECMs. 2010 5 models. A common feature of these models is that the solution of intertemporal optimization problems determines the decision rules. and Rebelo (1988) as well as the monetary model with numerous real and nominal frictions developed by Christiano. in which the exogenous technology process follows a random walk. have been widely used in applied work after Engle and Granger (1987) popularized the concept of cointegration. We will focus on the use of informative priors in the context of an empirical model for U. Our prior is based on the balancedgrowth-path implications of a neoclassical growth model. Schorfheide – Bayesian Macroeconometrics: April 18. we augment the VAR models of Section 2 and the DSGE models of Section 4 with time-varying parameters. output and hours worked data. consumption. one has to take into account that agents are aware that parameters are not constant over time and hence adjust their decision rules accordingly. We study empirical models for so-called data-rich environments in Section 6. As an illustration. We distinguish between models in which parameters evolve according to a potentially nonstationary autoregressive law of motion and models in which parameters evolve according to a finite-state Markov-switching (MS) process. investment. or they might be caused by the introduction of new economic policies or the formation of new institutions. These changes might be a reflection of inherent nonlinearities of the business cycle. 
which in the context of a DSGE model could be interpreted as the most important economic state variables. Section 4 discusses inference with linearized as well as nonlinear DSGE models and reviews various approaches for evaluating the empirical fit of DSGE models. such as the number of lags in a VAR. Parsimonious empirical models for large data sets can be obtained in several ways. an additional layer of complication arises. discussed in Section 5. the presence of timevariation in coefficients. the importance of certain types of propagation mechanisms in DSGE models. we will encounter uncertainty about model specifications. or the number of factors in a dynamic factor model. A . We consider restricted large-dimensional vector autoregressive models as well as dynamic factor models (DFMs).Del Negro. The dynamics of macroeconomic variables tend to change over time. These factors are typically unobserved and follow some vector autoregressive law of motion. When solving for the equilibrium law of motion. Because of the rapid advances in information technologies. macroeconomists now have access to and the ability to process data sets with a large cross-sectional as well as a large time-series dimension. Throughout the various sections of the chapter. Much of the empirical work with DSGE models employs Bayesian methods. and inflation. The key challenge for econometric modeling is to avoid a proliferation of parameters. Thus.S. The latter class of models assumes that the comovement between variables is due to a relatively small number of common factors. we conduct inference with a simple stochastic growth model based on U. If time-varying coefficients are introduced in a DSGE model. 2010 6 as output. Such changes can be captured by econometric models with time-varying parameters (TVP). Sims (1980) proposed that VARs should replace large-scale macroeconometric models inherited from the 1960s.) ) to denote the j’th column (row) of a matrix A. more generally. we sometimes drop the time subscripts and abbreviate Y1:T by Y . VARs have been used for macroeconomic forecasting and policy analysis to investigate the sources of business-cycle fluctuations and to provide a benchmark against which modern dynamic macroeconomic theories can be evaluated. we say that (X. tr[A] is the trace of the square matrix A. If no ambiguity arises. we follow the Appendix of this Handbook. and vec(A) stacks the columns of A. in Section 4 it will become evident that the equilibrium law of motion of many dynamic stochastic equilibrium models can be well approximated by a VAR. we let A = tr[A A]. I{x ≥ a} is the indicator function equal to one if x ≥ a and equal to zero otherwise. Finally. With respect to notation for probability distributions. p(Y |θ) is the likelihood function. . We use Yt0 :t1 to denote the sequence of observations or random variables {yt0 . The remainder of this section is organized as follows. |A| is its determinant. p(θ) is the density associated with the prior distribution. ν). decision making under model uncertainty is provided in Section 7. . ν) has an Inverted Wishart distribution. We derive the likelihood function of a reduced-form VAR in Section 2. If X|Σ ∼ M Np×q (M. Σ) ∼ M N IW (M. If A is a vector.2 discusses how to use dummy observations to construct prior distributions and reviews the widely . Section 2. Since then. . We use iid to abbreviate independently and identically distributed. In fact. 
which were largely inconsistent with the notion that economic agents take the effect of today’s choices on tomorrow’s utility into account. VARs appear to be straightforward multivariate generalizations of univariate autoregressive models. Here ⊗ is the Kronecker product. 2 Vector Autoregressions At first glance. S. they turn out to be one of the key empirical tools in modern macroeconomics.Del Negro. At second sight.1. yt1 }. . Finally.j) (A(j. θ often serves as generic parameter vector. and p(θ|Y ) the posterior density. 2010 7 treatment of Bayesian model selection and. P. Moreover. Σ ⊗ P ) is matricvariate Normal and Σ ∼ IWq (S. We use A(. a word on notation. because the latter imposed incredible restrictions. √ then A = A A is its length. Schorfheide – Bayesian Macroeconometrics: April 18. We use I to denote the identity matrix and use a subscript indicating the dimension if necessary. 2010 8 used Minnesota prior. Insert Figure 1 Here 2. The evolution of yt is described by the p’th order difference equation: yt = Φ1 yt−1 + . Let yt be an n × 1 random vector that takes values in Rn . Φc ] . we consider a reduced-form VAR that is expressed in terms of deviations from a deterministic trend. In Section 2. over the period from 1964:Q1 to 2006:Q4: percentage deviations of real GDP from a linear time trend. designed to capture the joint dynamics of multiple time series. Figure 1 depicts the evolution of three important quarterly macroeconomic time series for the U. . (1) We refer to (1) as the reduced-form representation of a VAR(p).S. . The joint density of Y1:T . where n = 3 in our empirical illustration. .3. annualized inflation rates computed from the GDP deflator. yT . To characterize the conditional distribution of yt given its history. .Del Negro. We shall proceed under the assumption that the conditional distribution of yt is Normal: ut ∼ iidN (0. These series are obtained from the FRED database maintained by the Federal Reserve Bank of St. Louis.1 A Reduced-Form VAR Vector autoregressions are linear time-series models. because the ut ’s are simply one-step-ahead forecast errors and do not have a specific economic interpretation. . . conditional on Y1−p:0 and the coefficient matrices Φ and . Section 2. .4 is devoted to structural VARs in which innovations are expressed as functions of structural shocks with a particular economic interpretation. (2) We are now in a position to characterize the joint distribution of a sequence of observations y1 . We will subsequently illustrate the VAR analysis using the three series plotted in Figure 1. Finally. Let k = np + 1 and define the k × n matrix Φ = [Φ1 . Schorfheide – Bayesian Macroeconometrics: April 18. Section 2. + Φp yt−p + Φc + ut . Σ). . and the effective federal funds rate. one has to make a distributional assumption for ut . . an unanticipated change in monetary policy. Φp . for example.5 provides some suggestions for further reading. . (7) (6) ˆ ˆ Φ is the maximum-likelihood estimator (MLE) of Φ. . Σ. the T × n matrices Y    y1  .  .   . Draws from this posterior can be easily obtained by direct Monte Carlo sampling. . S. . and S is a matrix with sums of squared residuals. Σ): 1 ˆ p(Y |Φ. we abbreviate p(Y1:T |Φ. Σ) ∝ |Σ|−T /2 exp − tr[Σ−1 S] 2 1 ˆ ˆ × exp − tr[Σ−1 (Φ − Φ) X X(Φ − Φ)] . Σ. nsim : ˆ 1. Σ. . T − k) distribution. (3) The conditional likelihood function can be conveniently expressed if the VAR is written as a multivariate linear regression model in matrix notation: Y = XΦ + U. .  . Σ)|Y ∼ M N IW Φ. . 1]. 
Σ) ∝ |Σ|−(n+1)/2 . ˆ ˆ ˆ S = (Y − X Φ) (Y − X Φ). It can be factorized as T p(Y1:T |Φ. . . xt = [yt−1 . Draw Σ(s) from an IW (S.  . Y1−p:t−1 ). . If we combine the likelihood function with the improper prior p(Φ.1: Direct Monte Carlo Sampling from Posterior of VAR Parameters For s = 1. (X X)−1 .   Y =  . Y1−p:0 ) by p(Y |Φ. yt−p . is called (conditional) likelihood function when it is viewed as function of the parameters. . we can deduce immediately that the posterior distribution is of the form ˆ ˆ (Φ. T − k .  xT uT (4) (5) In a slight abuse of notation. (8) Detailed derivations for the multivariate Gaussian linear regression model can be found in Zellner (1971). Y1−p:0 ) = t=1 p(yt |Φ. . .Del Negro. U =  . X =   . Here. 2 where ˆ Φ = (X X)−1 X Y.   yT and U and the T × k matrix X are defined as    x1 u1   . Algorithm 2. 2010 9 Σ. Schorfheide – Bayesian Macroeconometrics: April 18. Suppose T ∗ dummy observations are collected in matrices Y ∗ and X ∗ . T − k). the sample size shrinks to 96 observations. and each equation of a VAR with p = 4 lags has 13 coefficients. and Sims (1984). which dates back to Litterman (1980) and Doan. inflation. Σ(s) ⊗ (X X)−1 ). In . X ] . S. Using the same arguments that lead to (8). T ∗ − k) prior for Φ and Σ. Notice that all three series are fairly persistent. Informative prior distributions can compensate for lack of sample information. Y ] . ¯ ¯ ¯ ¯ ¯ then we deduce that the posterior of (Φ. Provided that T ∗ > k+n ¯ and X ∗ X ∗ is invertible. or observations generated from introspection. This insight dates back at least to Theil and Goldberger (1961). in which yt is composed of output deviations. and we will subsequently discuss alternatives to the improper prior used so far. observations generated by simulating a macroeconomic model. and the Euro Area on post-1982 data. depicted in Figure 1. Σ) · |Σ|−(n+1)/2 can be interpreted as a M N IW (Φ. Consider our lead example. 10 An important challenge in practice is to cope with the dimensionality of the parameter matrix Φ. S. Litterman. Now imagine estimating a two-country VAR for the U. we deduce that up to a constant the product p(Y ∗ |Φ. after the disinflation under Fed Chairman Paul Volcker. Our exposition follows the more recent description in Sims and Zha (1998). Schorfheide – Bayesian Macroeconometrics: April 18. 2. with the exception that for now we focus on a reduced-form rather than on a structural VAR. and we use the likelihood function associated with the VAR to relate the dummy observations to the parameters Φ and Σ.2 Dummy Observations and the Minnesota Prior Prior distributions can be conveniently represented by dummy observations. If the sample is restricted to the post-1982 period. Consider the data depicted in Figure 1. Σ) is M N IW (Φ. 2010 ˆ 2. the use of dummy observations leads to a conjugate prior. and interest rates. These dummy observations might be actual observations from other countries. where Φ and S are obtained ˆ ˆ from Φ and S in (7) by replacing Y and X with Y ∗ and X ∗ . Thus. (X X)−1 . ¯ ¯ ¯ ¯ ˆ ˆ Y = [Y ∗ . Our sample consists of 172 observations.Del Negro. which doubles the number of parameters. (X ∗ X ∗ )−1 . and let Φ and S be the analogue of Φ and S in (7). Prior and likelihood are conjugate if the posterior belongs to the same distributional family as the prior distribution. Now let T = T + T ∗ . the prior distribution is proper. X = [X ∗ . Draw Φ(s) from the conditional distribution M N (Φ. 
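The (extraction-garbled) passage above works out the standard result that, with the VAR written as Y = XΦ + U and the improper prior p(Φ, Σ) ∝ |Σ|^-(n+1)/2, the posterior of (Φ, Σ) is matricvariate normal-inverted Wishart centered at Φ_hat = (X'X)^-1 X'Y with scale matrix S = (Y - XΦ_hat)'(Y - XΦ_hat), so that posterior draws can be generated by direct Monte Carlo sampling (Algorithm 2.1). A minimal numerical sketch of that sampler follows; the function and variable names are mine, and the data are assumed to have already been arranged into the Y and X matrices of the regression form.

```python
import numpy as np
from scipy.stats import invwishart

def draw_var_posterior(Y, X, n_sim=1000):
    """Direct Monte Carlo sampling from the MNIW posterior of a reduced-form VAR
    under the improper prior p(Phi, Sigma) ~ |Sigma|^{-(n+1)/2}.
    Y is T x n (observations), X is T x k (lagged y's plus an intercept)."""
    T, n = Y.shape
    k = X.shape[1]
    XtX_inv = np.linalg.inv(X.T @ X)
    Phi_hat = XtX_inv @ X.T @ Y                    # MLE / posterior mean of Phi
    S = (Y - X @ Phi_hat).T @ (Y - X @ Phi_hat)    # sums of squared residuals
    draws = []
    for _ in range(n_sim):
        # Step 1: Sigma ~ inverted Wishart with scale S and T - k degrees of freedom.
        Sigma = invwishart.rvs(df=T - k, scale=S)
        # Step 2: Phi | Sigma ~ matricvariate normal, i.e.
        # vec(Phi) ~ N(vec(Phi_hat), Sigma kron (X'X)^{-1}).
        phi_vec = np.random.multivariate_normal(
            Phi_hat.flatten(order="F"), np.kron(Sigma, XtX_inv))
        draws.append((phi_vec.reshape((k, n), order="F"), Sigma))
    return draws
```

Each (Φ, Σ) draw can then be mapped into forecasts or impulse responses, which is how such output is typically used.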
A widely used prior in the VAR literature is the so-called Minnesota prior.S. Schorfheide – Bayesian Macroeconometrics: April 18. Let Y−τ :0 be a presample. The dummy observations are interpreted as observations from the regression model (4). In turn. the rows of U are normally distributed. We begin with dummy observations that generate a prior distribution for Φ1 . The Minnesota prior is typically specified conditional on several hyperparameters. While it is fairly straightforward to choose prior means and variances for the elements of Φ.Del Negro. For instance. there are nk(nk + 1)/2 of them.t . The use of dummy observations provides a parsimonious way of introducing plausible correlations between parameters. To simplify the exposition. would be fairly well described by a random-walk model of the form yi. possibly with the exception of post-1982 inflation rates. 2010 11 fact. it tends to be difficult to elicit beliefs about the correlation between elements of the Φ matrix. Thus.t−1 + ηi. the dummy observations are plugged into (4): λ1 s1 0 0 λ1 s2 = λ1 s1 0 0 0 0 0 Φ+ u11 u12 u21 u22 . alternatively. We will pursue the latter route for the following reason. the univariate behavior of these series. For illustrative purposes. we will specify the rows of the matrices Y ∗ and X ∗ . through dummy observations. if some series have very little serial correlation because they have been transformed to induce stationarity – for example log output has been converted into output growth – then an iid approximation might be preferable. and let y and s be n × 1 vectors of means and standard deviations. 0 = λ1 s1 φ21 + u12 . we will discuss how DSGE model restrictions could be used to construct a prior. suppose that n = 2 and p = 2. The random-walk approximation is taken for convenience and could be replaced by other representations. After all. The remaining hyperparameters are stacked in the 5 × 1 vector λ with elements λi . The idea behind the Minnesota prior is to center the distribution of Φ at a value that implies a random-walk behavior for each of the components of yt . In Section 4. we can rewrite the first row of (9) as λ1 s1 = λ1 s1 φ11 + u11 .t = yi. At the same time. The Minnesota prior can be implemented either by directly specifying a distribution for Φ or. (9) λ1 s2 0 0 0 According to the distributional assumption in (2). setting all these correlations to zero potentially leads to a prior that assigns a lot of probability mass to parameter combinations that imply quite unreasonable dynamics for the endogenous variables yt . A prior for the covariance matrix Σ. and they tend to improve VAR forecasting performance. Suppose we assume that φj ∼ N (0. The sums-of-coefficients dummy observations.1 The prior for Φ2 is implemented with the dummy observations 0 0 0 0 = 0 0 λ1 s1 2λ2 0 0 0 0 λ1 s2 2λ2 0 0 Φ + U. can be obtained by stacking the observations s1 0 λ3 times. which is consistent with the beliefs of many applied macroeconomists. 2010 and interpret it as φ11 ∼ N (1. λ2 ). They favor unit roots and cointegration.t /sj . If we define φj = φj sj and xj. The remaining sets of dummy observations provide a prior for the intercept Φc and will generate some a priori correlation between the coefficients. then the transformed parameters ˜ ˜ interact with regressors that have the same scale. The sj terms that appear in the definition of the dummy observations achieve j this scale adjustment. (13) Consider the regression yt = φ1 x1. .t +ut . the same value y i is likely to be a good forecast of yi. 
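The dummy-observation construction described here can be assembled mechanically. Below is a small sketch of the first block of Minnesota-prior dummies, the random-walk block scaled by λ1 and the presample standard deviations as in equation (9); the remaining blocks (lag decay, sums-of-coefficients, co-persistence) would be stacked underneath in the same fashion. The function name and the example numbers are illustrative only, and the regressor ordering assumed is [first lag, ..., p'th lag, intercept].

```python
import numpy as np

def minnesota_rw_dummies(s, lam1, p):
    """First block of Minnesota-prior dummy observations: centers each variable's own
    first-lag coefficient at 1 (random walk), with tightness controlled by lam1.
    s is the vector of presample standard deviations; the VAR has p lags plus an intercept."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    k = n * p + 1
    Y_star = np.diag(lam1 * s)           # n x n block of dummy "observations"
    X_star = np.zeros((n, k))
    X_star[:, :n] = np.diag(lam1 * s)    # attaches the dummies to the own first lag
    return Y_star, X_star

# Example: two variables with presample standard deviations 1.2 and 0.8, lambda_1 = 5, p = 2 lags.
Y_star, X_star = minnesota_rw_dummies([1.2, 0.8], lam1=5.0, p=2)
print(Y_star)
print(X_star)
```

Stacking these rows on top of the actual data and re-running the regression is what turns the dummy observations into a conjugate MNIW prior.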
regardless of the value of other variables: λ4 y 1 0 0 λ4 y 2 = λ4 y 1 0 0 λ4 y 2 λ4 y 1 0 0 0 Φ + U. 1 1 φ21 ∼ N (0.t is sj . j of the matrix Φ. (12) 0 s2 = 0 0 0 0 0 0 0 0 0 0 Φ+U (11) λ4 y 2 0 The co-persistence dummy observations. (10) where the hyperparameter λ2 is used to scale the prior standard deviations for coefficients associated with yt−l according to l−λ2 .Del Negro. The hyperparameter λ1 controls the tightness of the prior. ut ∼ iidN (0.t . j of Σ. introduced in Doan. and suppose that the standard ˜ deviation of xj. λ2 /s2 ). Litterman. Σ22 /(λ2 s2 )). Schorfheide – Bayesian Macroeconometrics: April 18. 1). 1 1 12 φij denotes the element i. and Σij corresponds to element i. then φj ∼ N (0.t are at the level y i . Σ11 /(λ2 s2 )). yt tends to persist at that level: λ5 y 1 λ5 y 2 1 = λ 5 y 1 λ 5 y 2 λ5 y 1 λ5 y 2 λ 5 Φ + U. centered at a matrix that is diagonal with elements equal to the presample variance of yt .t = xj. proposed by Sims (1993) reflect the belief that when all lagged yt ’s are at the level y. and Sims (1984).t +φ2 x2. capture the view that when lagged values of a variable yi. If the prior distribution is constructed based on T ∗ dummy observations. which is commonly done in hierarchical Bayes models. we let T = T ∗ + T . A potential drawback of the dummy-observation prior is that one is forced to treat all equations symmetrically when specifying a prior. Methods for relaxing this restriction and alternative approaches of implementing the Minnesota prior (as well as other VAR priors) are discussed in Kadiyala and Karlsson (1997). if the prior variance for the lagged inflation terms in the output equation is 10 times larger than the prior variance for the coefficients on lagged interest rate terms. then it also has to be 10 times larger in the inflation equation and the interest rate equation. an empirical Bayes approach of choosing λ based on the marginal likelihood function pλ (Y ) = p(Y |Φ. s. including the intercept. Schorfheide – Bayesian Macroeconometrics: April 18. Y = [Y ∗ . The hyper¯ parameters (¯. These two sets of dummy observations introduce correlations in prior beliefs about all coefficients. From a a practitioner’s view. If λ = 0. the more weight is placed on various components of the Minnesota prior vis-´-vis the likelihood function. S ∗ (S) is y ¯ ˆ ¯ ¯ obtained from S in (7) by replacing Y and X with Y ∗ and X ∗ (Y and X). Y ] . then an analytical expression for the marginal likelihood can be obtained by using the normalization constants for the MNIW distribution (see Zellner (1971)): pλ (Y ) = (2π) −nT /2 ¯ ¯ n ¯ T −k |X X|− 2 |S|− 2 |X ∗ X ∗ |− 2 |S ∗ | n ∗ −k −T 2 ¯ 2 2 ¯ n(T −k) 2 n(T ∗ −k) 2 n ¯ i=1 Γ[(T − k + 1 − i)/2] . Σ) (14) tends to work well for inference as well as for forecasting purposes. then all the dummy observations are zero. one could specify a prior distribution for λ and integrate out the hyperparameter. A more detailed discussion of selection versus averaging is provided in Section 7. The larger the elements of λ.Del Negro. and the VAR is estimated under an improper prior. VAR inference tends to be sensitive to the choice of hyperparameters. . Σ|λ)d(Φ. Instead of conditioning on the value of λ that maximizes the marginal likelihood function pλ (Y ). in a given equation. Σ)p(Φ. 2010 13 The strength of these beliefs is controlled by λ4 and λ5 . We will provide an empirical illustration of this hyperparameter selection approach in Section 2.4. n Γ[(T ∗ − k + 1 − i)/2] i=1 (15) ¯ ¯ ¯ As before. In other words. X ] . 
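The four sets of dummy observations described above are easy to assemble mechanically. The sketch below collects them into matrices (Y*, X*); the ordering of the X* columns (lag 1 through lag p, then the intercept), the treatment of lambda_3 as an integer replication count, and the function interface are our implementation choices rather than part of the prior's definition.

```python
import numpy as np

def minnesota_dummies(ybar, sbar, lam, p):
    """Dummy observations (Y*, X*) implementing the Minnesota prior blocks above.
    ybar, sbar: presample means and standard deviations (length n);
    lam: dict {1: lambda_1, ..., 5: lambda_5}."""
    ybar = np.asarray(ybar, dtype=float)
    sbar = np.asarray(sbar, dtype=float)
    n = len(ybar)
    k = n * p + 1
    Ys, Xs = [], []
    # random-walk dummies for Phi_1,...,Phi_p with lag decay l**lambda_2
    for l in range(1, p + 1):
        Ys.append(np.diag(lam[1] * sbar) if l == 1 else np.zeros((n, n)))
        X_blk = np.zeros((n, k))
        X_blk[:, (l - 1) * n:l * n] = np.diag(lam[1] * sbar * l ** lam[2])
        Xs.append(X_blk)
    # covariance (Sigma) dummies: diag(sbar) stacked lambda_3 times
    for _ in range(int(lam[3])):
        Ys.append(np.diag(sbar))
        Xs.append(np.zeros((n, k)))
    # sums-of-coefficients dummies (lambda_4)
    Ys.append(np.diag(lam[4] * ybar))
    Xs.append(np.hstack([np.diag(lam[4] * ybar)] * p + [np.zeros((n, 1))]))
    # co-persistence dummy (lambda_5)
    Ys.append(lam[5] * ybar.reshape(1, n))
    Xs.append(np.hstack([lam[5] * ybar.reshape(1, n)] * p + [np.full((1, 1), lam[5])]))
    return np.vstack(Ys), np.vstack(Xs)
```

Stacking (Y*, X*) on top of the actual data and applying the direct Monte Carlo sampler sketched earlier then delivers posterior draws under the Minnesota prior.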
the prior covariance matrix for the coefficients in all equations has to be proportional to (X ∗ X ∗ )−1 . For instance. λ) enter through the dummy observations X ∗ and Y ∗ . and X = [X ∗ . . 2010 14 2. Γ0 +Γ1 t. as long as the prior for Φ and Σ conditional on Γ is M N IW .Del Negro. Lj t Wt (Φ) = I− j=1 Φj . the posterior of I− j=1 Φj Lj (yt − Γ0 − Γ1 t) = ut . . Thus. let Y (Γ) be the T × n matrix with rows (yt − Γ0 − Γ1 t) and X(Γ) be the T × (pn) matrix with rows [(yt−1 − Γ0 − Γ1 (t − 1)) . studied. This alternative specification makes it straightforward to separate beliefs about the deterministic trend component from beliefs about the persistence of fluctuations around this trend. (16) Here Γ0 and Γ1 are n×1 vectors. . Y1−p:0 ) ∝ exp − 1 2 T (18) (zt (Φ) − Wt (Φ)Γ) Σ−1 (zt (Φ) − Wt (Φ)Γ) . Using this operator. yt = Φ1 yt−1 + . this unconditional mean also depends on the autoregressive coefficients Φ1 . Thus. . captures stochastic fluctuations around the deterministic trend. Γ. Γ. zt (Φ) = Wt (Φ)Γ + ut and the likelihood function can be rewritten as p(Y1:T |Φ. The first term. in Villani (2009): yt = Γ0 + Γ1 t + yt . . Φp . Moreover. I− j=1 Φj Lj t with the understanding that = t − j. Σ. Suppose we define Φ = [Φ1 . for instance. . Φp ] and Γ = [Γ1 . However. Σ)|Γ is of the M N IW form. one can use the following representation. Y1−p:0 ) 1 ∝ |Σ|−T /2 exp − tr Σ−1 (Y (Γ) − X(Γ)Φ) (Y (Γ) − X(Γ)Φ) 2 (Φ. . t=1 . whereas the second part. (yt−p − Γ0 − Γ1 (t − p)) ]. Let L denote the temporal lag operator such that Lj yt = yt−j . . These fluctuations could either be stationary or nonstationary. then the conditional likelihood function associated with (16) is p(Y1:T |Φ. ut ∼ iidN (0. . one can rewrite (16) as p (17) . Now define p p p zt (Φ) = I− j=1 Φj Lj yt . . the law of motion of yt . . . . Schorfheide – Bayesian Macroeconometrics: April 18. Alternatively. captures the deterministic trend of yt . Γ2 ] . + Φp yt−p + ut . . Σ). Σ.3 A Second Reduced-Form VAR The reduced-form VAR in (1) is specified with an intercept term that determines the unconditional mean of yt if the VAR is stationary. 2. 1). ut ∼ iidN (0. Schorfheide – Bayesian Macroeconometrics: April 18.2: Gibbs Sampling from Posterior of VAR Parameters For s = 1. If φ1 = 1. Conditional on φc . Σ(s) . 1). 1 − ξ] and φc ∼ N (φc . Σ(s) ) from the MNIW distribution of (Φ. Schotman and van Dijk (1991) make the case that the representation (20) is more appealing. . as the parameter is now capturing the drift in a unit-root process instead of determining the long-run mean of yt . 2010 15 Thus. λ2 ). and Kohn (This Volume) discuss evidence that in many instances the so-called centered parameterization of (20) can increase the efficiency of MCMC algorithms. it is assumed that ξ > 0 to impose stationarity. Y ). In empirical work researchers often treat parameters as independent and might combine (19) with a prior distribution that implies φ1 ∼ U [0. In turn. whereas the first allows only for fluctuations around a constant mean. this prior for φ1 and φc E[y has the following implication.2 Since the initial level of the latent process y0 is ˜ unobserved. which is an example of a so-called Markov chain Monte Carlo (MCMC) algorithm discussed in detail in Chib (This Volume): Algorithm 2. in practice it is advisable to specify a proper prior for γ0 in (20). characterized by (20). E[y 2 Giordani. . Thus. we consider the special case of two univariate AR(1) processes: yt = φ1 yt−1 + φc + ut . Draw Γ(s) from the Normal distribution of Γ|(Φ(s) . . 
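Because the dummy-observation prior is conjugate, the marginal likelihood in (15) is available in closed form and can be computed as a ratio of two MNIW normalizing constants: one for the dummy observations augmented by the data and one for the dummy observations alone. The sketch below follows that route; it should reproduce (15) up to the grouping of constants, and the function names are ours.

```python
import numpy as np
from scipy.special import multigammaln

def log_mniw_const(Y, X):
    """log of ∫ p(Y|Phi,Sigma) |Sigma|^{-(n+1)/2} d(Phi,Sigma) for data (Y, X)."""
    T, n = Y.shape
    k = X.shape[1]
    Phi_hat = np.linalg.solve(X.T @ X, X.T @ Y)
    S = (Y - X @ Phi_hat).T @ (Y - X @ Phi_hat)
    return (-0.5 * n * (T - k) * np.log(np.pi)
            - 0.5 * n * np.linalg.slogdet(X.T @ X)[1]
            + multigammaln(0.5 * (T - k), n)
            - 0.5 * (T - k) * np.linalg.slogdet(S)[1])

def log_marginal_likelihood(Y, X, Ystar, Xstar):
    """Marginal likelihood (15): dummy observations with and without the data."""
    Ybar = np.vstack([Ystar, Y])
    Xbar = np.vstack([Xstar, X])
    return log_mniw_const(Ybar, Xbar) - log_mniw_const(Ystar, Xstar)
```

One can evaluate this quantity on a grid of hyperparameter values, as is done for lambda_1 in the empirical illustration below, and either condition on the maximizer or average across values.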
ut ∼ iidN (0. γ0 in (20) is nonidentifiable if φ1 = 1. . the interpretation of φc in model (19) changes drastically. Since the expected value of I t ] = φc /(1 − φ1 ). the (conditional) posterior distribution of Γ is also Normal. The second process. if the goal of the empirical analysis is to determine the evidence in favor of the hypothesis that φ1 = 1. For the subsequent argument. yt = γ0 + γ1 t + yt . To illustrate the subtle difference between the VAR in (1) and the VAR in (16). nsim : 1.Del Negro. Posterior inference can then be implemented via Gibbs sampling. allows for stationary fluctuations around a linear time trend. it is straightforward to verify that as long as the prior distribution of Γ conditional on Φ and Σ is matricvariate Normal. the prior mean and variance of the population mean I t ] increases (in absolute value) as φ1 −→ 1 − ξ. . If |φ1 | < 1 both AR(1) processes are stationary. (19) (20) yt = φ1 yt−1 + ut . Y ). Pitt. Σ)|(Γ(s−1) . Draw (Φ(s) . 2010 16 this prior generates a fairly diffuse distribution of yt that might place little mass on values of yt that appear a priori plausible. We will consider two ways of adding economic content to the VAR specified in (1). Shocks to these equations can in turn be interpreted as monetary policy shocks or as innovations to aggregate supply and demand. With these dummy observations. the co-persistence dummy observations discussed in Section 2. and γ1 = 0 – avoids the problem of an overly diffuse data distribution. but the notion of an aggregate demand or supply function is obscure. money demand equation.4 Structural VARs Reduced-form VARs summarize the autocovariance properties of the data and provide a useful forecasting tool. More generally. these models are specified in terms of preferences of economic agents and production technologies. γ0 ∼ N (γ 0 . A second way of adding economic content to VARs exploits the close connection between VARs and modern dynamic stochastic general equilibrium models. researchers often assume that shocks to the aggregate supply and demand equations are independent of each other. aggregate supply equation. the implied prior distribution of the population mean of yt conditional on φ1 takes the form I t ]|φ1 ∼ N (y. The optimal solution of agents’ decision problems combined with an equilibrium concept leads to an autoregressive law of motion for . 1 − ξ]. As we will see in Section 4. Treating the parameters of Model (20) as independent – for example. φ1 ∼ U [0. one can turn (1) into a dynamic simultaneous equations model by premultiplying it with a matrix A0 .Del Negro. it is natural to assume that the monetary policy shocks are orthogonal to the other innovations. but they lack economic interpretability. a monetary policy rule might be well defined. for instance. at least the location reE[y mains centered at y regardless of φ1 . (λ5 (1 − φ1 ))−2 ). First. To the extent that the monetary policy rule captures the central bank’s systematic reaction to the state of the economy. In the context of a DSGE model. monetary policy rule. such that the equations could be interpreted as. In this case I t ] has a priori mean γ 0 and variance λ2 for E[y every value of φ1 . 2. For researchers who do prefer to work with Model (19) but are concerned about a priori implausible data distributions. While the scale of the distribution of E[y I t ] is still dependent on the autoregressive coefficient. λ2 ).2 are useful. and aggregate demand equation. Schorfheide – Bayesian Macroeconometrics: April 18. To summarize. 
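Step 2 of Algorithm 2.2 requires a draw from the Normal conditional posterior of Gamma. A minimal sketch of this step is given below; it assumes a Normal prior for the stacked vector [Gamma_0; Gamma_1] with mean gamma0 and covariance V0 (our notation), conditions on the first p observations, and takes the lag matrices Phi_1,...,Phi_p and Sigma as given. The (Phi, Sigma) block of the Gibbs sampler can reuse the MNIW draw of Algorithm 2.1 applied to the detrended data y_t - Gamma_0 - Gamma_1 t.

```python
import numpy as np

def draw_gamma(Y, Phi_lags, Sigma, gamma0, V0, rng):
    """One draw of Gamma = [Gamma_0; Gamma_1] from its Normal conditional posterior
    in the trend specification (16)-(18), given Phi_1,...,Phi_p and Sigma."""
    T, n = Y.shape
    p = len(Phi_lags)
    I_n = np.eye(n)
    Phi_sum = sum(Phi_lags)
    Sig_inv = np.linalg.inv(Sigma)
    prec = np.linalg.inv(V0)                 # prior precision
    rhs = np.linalg.solve(V0, gamma0)        # prior precision times prior mean
    for t in range(p, T):
        tt = t + 1                           # calendar time index of y_t
        # z_t(Phi) = y_t - sum_j Phi_j y_{t-j}
        z_t = Y[t] - sum(Phi_lags[j] @ Y[t - j - 1] for j in range(p))
        # W_t(Phi) = [ I - sum_j Phi_j ,  t*I - sum_j (t - j) Phi_j ]
        W_t = np.hstack([I_n - Phi_sum,
                         tt * I_n - sum((tt - j - 1) * Phi_lags[j] for j in range(p))])
        prec += W_t.T @ Sig_inv @ W_t
        rhs += W_t.T @ Sig_inv @ z_t
    cov = np.linalg.inv(prec)
    return cov @ rhs + np.linalg.cholesky(cov) @ rng.standard_normal(2 * n)
```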
preferences. We now express the one-step-ahead forecast errors as a linear = Σtr Ω t . 2010 17 the endogenous model variables.4. Σ)p(Φ. Σ). The identification problem arises precisely from the absence of Ω in this likelihood function.4. Σ. One reason for this independence assumption is that many researchers view the purpose of DSGE models as that of generating the observed comovements between macroeconomic variables through well-specified economic propagation mechanisms. denoted by p(Y |Φ. Σtr refers to the unique lower-triangular Cholesky factor of Σ with nonnegative diagonal elements. Σ).1 Reduced-Form Innovations and Structural Shocks A straightforward calculation shows that we need to impose additional restrictions to identify a structural VAR. or fiscal policy. or policies. Schorfheide – Bayesian Macroeconometrics: April 18. one can think of a structural VAR either as a dynamic simultaneous equations model. preferences. 2. our structural VAR is parameterized in terms of the reduced-form parameters Φ and Σ (or its Cholesky factor Σtr ) and the orthogonal matrix Ω.4. monetary policy. Σ)p(Ω|Φ. Φ. Φ has to satisfy the restriction Σ = Φ Φ . (21) Here. . the likelihood function here is the same as the likelihood function of the reduced-form VAR in (6). rather than from correlated exogenous shocks. (22) Since the distribution of Y depends only on the covariance matrix Σ and not on its factorization Σtr ΩΩ Σtr . these kinds of dynamic macroeconomic theories suggest that the one-step-ahead forecast errors ut in (1) are functions of orthogonal fundamental innovations in technology. These shocks are typically assumed to be independent of each other. Thus. Thus.2. We adopt the latter view in Section 2.1 and consider the former interpretation in Section 2. The second equality ensures that the covariance matrix of ut is preserved. that is. in which the forecast errors are explicitly linked to such fundamental innovations. Economic fluctuations are generated by shocks to technology. Ω) = p(Y |Φ. The joint distribution of data and parameters is given by p(Y. and Ω is an n×n orthogonal matrix.Del Negro. in which each equation has a particular structural interpretation. or as an autoregressive model. Let combination of structural shocks ut = Φ t t be a vector of orthogonal structural shocks with unit variances. Algorithm 2. (23) Thus. Φ. and Moon and Schorfheide (2009).Del Negro. . For the remainder of this subsection. 1 (25) . reduces to a point mass. Σ). for instance.3: Posterior Sampler for Structural VARs For s = 1. Integrating the joint density with respect to Ω yields p(Y. nsim : 1. it is assumed that the eigenvalues of Φ1 are all less than one in absolute value. Σ(s) ). Christiano. Σ)p(Φ. Ω|Y ) can in principle be obtained in two steps. the relationship between the forecast errors ut and the structural shocks surveys. given the reduced-form parameters. Most authors use dogmatic priors for Ω such that the conditional distribution of Ω. . Kadane (1974). Σ. Σ)p(Ω|Φ. Σ|Y ). Schorfheide – Bayesian Macroeconometrics: April 18. Σ) = p(Y |Φ. much of the literature on structural VARs reduces to arguments about the appropriate choice of p(Ω|Φ. . Σ)dΩ (24) The conditional distribution of the nonidentifiable parameter Ω does not get updated in view of the data. conditional on Ω. Φ. . the calculation of the posterior distribution of the reduced-form parameters is not affected by the presence of the nonidentifiable matrix Ω. Φ. Priors for Ω are typically referred to as identification schemes because. 
Not surprisingly. see. Eichenbaum. Σ). p = 1. Σ) = p(Ω|Φ. Draw Ω(s) from the conditional prior distribution p(Ω|Φ(s) . This eigenvalue restriction guarantees that the VAR can be written as infinite-order moving average (MA(∞)): ∞ t is uniquely determined. we consider a simple bivariate VAR(1) without intercept. Cochrane (1994). Σ(s) ) from the posterior p(Φ. p(Y. To present various identification schemes that have been employed in the literature. Σ) = p(Y. Σ)p(Ω|Φ. Φ. 2. The conditional posterior density of Ω can be calculated as follows: p(Ω|Y. Poirier (1998). Σ). and Evans (1999). that is. We can deduce immediately that draws from the joint posterior distribution p(Φ. 2010 18 We proceed by examining the effect of the identification problem on the calculation of posterior distributions. and Φc = 0. This is a well-known property of Bayesian inference in partially identified models. Draw (Φ(s) . and Stock and Watson (2001) provide detailed yt = j=0 Φj Σtr Ω t . we set n = 2. In the stationary bivariate VAR(1). rotating the two vectors by 180 degrees simply changes the sign of the impulse responses to both shocks. each triplet (Φ(s) .Del Negro. Handling these nonlinear transformations of the VAR parameters in a Bayesian framework is straightforward. s = 1. 1 (i) (27) Thus. Σ(s) . Notice that Ω(ϕ) = −Ω(ϕ + π). . macroeconomists are often interested in so-called variance decompositions. Then we can define the contribution of the i’th structural shock to the variance of yt as ∞ Γ(i) yy = j=0 Φj Σtr ΩI (i) Ω Σtr (Φj ) .0 ](jj) /[Γyy. Switching from ξ = 1 to ξ = −1 changes . the (unconditional) covariance matrix is given by ∞ Γyy = j=0 Φj Σtr ΩΩ Σtr (Φj ) . 1}: Ω(ϕ. . In addition. 2010 We will refer to the sequence of partial derivatives ∂yt+j = Φj Σtr Ω. . The determinant of Ω equals ξ. . 19 (26) as the impulse-response function. nsim . A variance decomposition measures the fraction that each of the structural shocks contributes to the overall variance of a particular element of yt . it is straightforward to compute posterior moments and credible sets. Using (26) or (27). because one can simply postprocess the output of the posterior sampler (Algorithm 2. Schorfheide – Bayesian Macroeconometrics: April 18. π]. Thus. and the two vectors are orthogonal. can be converted into a draw from the posterior distribution of impulse responses or variance decompositions. the set of orthogonal matrices Ω can be conveniently characterized by an angle ϕ and a parameter ξ ∈ {−1.3). Based on these draws. i is equal to one and all other elements are equal to zero. 1 Let I i be the matrix for which element i.t explained by shock i is [Γyy. . Each column represents a vector of unit length in R2 . ξ) = cos ϕ −ξ sin ϕ sin ϕ ξ cos ϕ (28) where ϕ ∈ (−π. 1. . 1 ∂ t j = 0. For n = 2. .0 ](jj) . Ω(s) ). Variance decompositions based on h-step-ahead forecast error covariance matrices j h j j=0 Φ1 Σ(Φ ) can be constructed in the same manner. the fraction of the variance of yj. and output growth: yt = [πt .2 (Long-Run Identification): Now suppose yt is composed of inflation. for instance.t in cases (i) and (ii). the desired long-run response is given by − Φ1 ) = (I [(I − Φ1 )−1 Σtr ](2. The long-run response of the loglevel of output to a monetary policy shock can be obtained from the infinite sum of growth-rate responses implies that j ∞ j=0 Φ1 ∞ ˜ j=0 ∂∆ ln yt+j /∂ R. This identification scheme has been used. 
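Because Omega does not enter the likelihood function, Algorithm 2.3 amounts to appending a draw from the identification scheme to each reduced-form draw. The sketch below takes an iterable of (Phi, Sigma) posterior draws as input; `draw_omega` is a placeholder for whatever conditional prior p(Omega|Phi, Sigma) the researcher specifies, and the second function illustrates the point-mass (short-run) scheme of Example 2.1 in the bivariate case. Function names and the sign convention are our own choices.

```python
import numpy as np

def structural_var_draws(rf_draws, draw_omega):
    """Convert reduced-form draws (Phi, Sigma) into structural draws
    (Phi, Sigma_tr, Omega) by drawing Omega from p(Omega | Phi, Sigma)."""
    draws = []
    for Phi, Sigma in rf_draws:
        Sigma_tr = np.linalg.cholesky(Sigma)   # lower-triangular Cholesky factor
        Omega = draw_omega(Phi, Sigma)
        draws.append((Phi, Sigma_tr, Omega))
    return draws

def short_run_omega(Phi, Sigma):
    """Example 2.1: a dogmatic prior placing probability one on a diagonal
    Omega with entries +1 and -1 (the sign pattern fixes the normalization)."""
    return np.diag([1.0, -1.0])
```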
We now use the following identification restriction: unanticipated changes in monetary policy shocks do not raise output in the long run.1) (ϕ. for instance. t z. after imposing the identi- fication and normalization restrictions.t ] .t . ξ). ∆ ln yt ] . we need to determine the ϕ and ξ . we ˜ maintain the assumption that business-cycle fluctuations are generated by monetary policy and technology shocks. by Nason and Cogley (1994) and Schorfheide (2000). For instance. A short-run identification scheme was used in the seminal work by Sims (1980). R. and the vector ˜ t consists of innovations to technology. considering responses to expansionary monetary policy and technology shocks. Rt ] and y =[ z. Rt . Such a restriction on Ω is typically referred to as a short-run identification scheme. but now reverse the ordering: t = [ R. Since by construction Σtr ≥ 0. output 11 increases in response to z. and (iv) ϕ = π and ξ = −1. 2010 20 the sign of the impulse responses to the second shock. That is.t . following an earlier literature.t ] . since Σtr ≥ 0.t . πt . Identification can be achieved by imposing restric- tions on the informational structure. Likewise. (ii) ϕ = 0 and ξ = −1. R. Σ) assigns probability one to the matrix Ω that is diagonal with elements 1 and -1. (iii) ϕ = π and ξ = 1. We will now consider three different identification schemes that restrict Ω conditional on Φ and Σ. yt = [˜t .j) (A(j. Example 2. Thus. (29) where A(. The former could be defined as shocks that lower interest rates upon impact. the prior p(Ω|Φ. Since the stationarity assumption −1 . z. As in the previous example. Schorfheide – Bayesian Macroeconometrics: April 18.) Ω(. It is common in the literature to normalize the direction of the impulse response by. To obtain the orthogonal matrix Ω.) ) is the j’th column (row) of a matrix A.t . interest rates fall in response 22 to a monetary policy shock in cases (ii) and (iii). yt . Example 2.t . Boivin and Giannoni (2006b) assume in a slightly richer setting that the private sector does not respond to monetary policy shocks contemporaneously. and that the federal funds rate.Del Negro.1 (Short-Run Identification): Suppose that yt is composed of output deviations from trend. This assumption can be formalized by considering the following choices of ϕ and ξ in (28): (i) ϕ = 0 and ξ = 1. and monetary policy. and Uhlig (2005) propose to be more o agnostic in the choice of Ω. the sign will be different. Eichenbaum. and Vigfusson (2007) and Chari. similar to Example 2. ∆ ln yt ] and ˜ [ R. ξ) ≥ 0 and is referred to as a sign-restriction identification scheme.1) (ϕ. z. 2010 21 such that the expression in (29) equals zero. structural VARs identified with long-run schemes often lead to imprecise estimates of the impulse response function and to inference that is very sensitive to lag length choice and prefiltering of the observations. A long-run identification scheme was initially used by Blanchard and Quah (1989) to identify supply and demand disturbances in a bivariate VAR. By rotating the vector Ω(.t . Σ) in the two preceding examples were degenerate. ξ) are composed of orthonormal vectors. However. To implement this normalization. Canova and De Nicol´ (2002). where we used ϕ = 0 and ξ = −1 regardless of Φ and Σ. Schorfheide – Bayesian Macroeconometrics: April 18. Σ) remains a point mass. output rises. ξ) such that the long-run effect (29) of a monetary policy shock on output is zero. once the normalization has been imposed. and McGrattan (2008). ξ) by 180 degrees. 
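Given a structural draw (Phi, Sigma_tr, Omega), the impulse responses in (26), variance decompositions based on (27), and the rotation matrix (28) reduce to simple matrix computations. The sketch below (function names ours, deterministic terms ignored) uses the companion form to generate the moving-average coefficients of a VAR(p).

```python
import numpy as np

def companion(Phi_lags):
    """Stack Phi_1,...,Phi_p into the companion-form matrix."""
    n, p = Phi_lags[0].shape[0], len(Phi_lags)
    F = np.zeros((n * p, n * p))
    F[:n, :] = np.hstack(Phi_lags)
    F[n:, :-n] = np.eye(n * (p - 1))
    return F

def impulse_responses(Phi_lags, Sigma_tr, Omega, horizon):
    """IRFs dy_{t+j}/d eps_t' = Phi^j Sigma_tr Omega, cf. (26)."""
    n = Phi_lags[0].shape[0]
    F = companion(Phi_lags)
    impact = Sigma_tr @ Omega
    irf = np.zeros((horizon + 1, n, n))
    Fj = np.eye(F.shape[0])
    for j in range(horizon + 1):
        irf[j] = Fj[:n, :n] @ impact
        Fj = Fj @ F
    return irf

def variance_decomposition(irf, h):
    """Share of the h-step forecast-error variance of each variable
    attributable to each (unit-variance, orthogonal) structural shock."""
    contrib = (irf[:h] ** 2).sum(axis=0)          # variables x shocks
    return contrib / contrib.sum(axis=1, keepdims=True)

def rotation(phi, xi=1):
    """The 2 x 2 orthogonal matrix Omega(phi, xi) in (28); its determinant is xi."""
    return np.array([[np.cos(phi), -xi * np.sin(phi)],
                     [np.sin(phi),  xi * np.cos(phi)]])
```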
this implies that Σtr Ω(. we ˜ ˜ can find a second angle ϕ such that the long-run response in (29) equals zero. We could use the same normalization as in Example 2. While the shapes of the response functions are the same for each of these pairs. that is. Example 2. it only changes the sign of the response to the second shock.1. This point dates back to Sims (1972) and a detailed discussion in the structural VAR context can be found in Leeper and Faust (1997). The priors for Ω|(Φ.3 (Sign-Restrictions): As before. the usefulness of long-run restrictions has been debated in the papers by Christiano. It will become clear subsequently that sign restrictions only partially identify impulse responses in the sense that they deliver (nonsingleton) sets. ξ) that is perpendicular to [(I − Φ1 )−1 Σtr ](2.1) (ϕ.1 by considering the effects of expansionary technology shocks (the level of output rises in the long run) and expansionary monetary policy shocks (interest rates fall in the short run). Unlike in Example 2.Del Negro. In addition. we can find four pairs (ϕ. Faust (1998).1. Since long-run effects of shocks in dynamic systems are intrinsically difficult to measure. here the choice depends on Φ and Σ. one has to choose one of the four (ϕ. Notice that ξ does not affect the first column of Ω. p(Ω|Φ. Formally.1) (ϕ. ξ) pairs. Kehoe.t ] t = . we can deduce 11 . we normalize the monetary policy shock to be expansionary. Thus. Since the columns of Ω(ϕ. Suppose we restrict only the direction of impulse responses by assuming that monetary policy shocks move inflation and output in the same direction upon impact. Suppose that (29) equals zero for ϕ. let yt = [πt . Since by construction Σtr ≥ 0.) . we need to find a unit length vector Ω(. More recently. 3. the error bands typically reported in the literature have to be interpreted point-wise. say a monetary policy shock. For each triplet (Φ. Σ). Ω). In an effort to account for the correlation between responses at different horizons. sign restrictions are imposed not . The parameter ξ can be determined conditional on Σ and ϕ by normalizing the technology shock to be expansionary. To implement Bayesian inference. suitable generalizations of (26) and (27) can be used to convert parameter draws into draws of impulse responses or variance decompositions. in which Ω(s) is calculated directly as function of (Φ(s) . which is a unit-length vector. they delimit the credible set for the response of a particular variable at a particular horizon to a particular shock. 2010 22 from (28) and the sign restriction on the inflation response that ϕ ∈ (−π/2. π/2] and a prior for ξ|(ϕ. In this case. restrict their attention to one particular shock and parameterize only one column of the matrix Ω. researchers are interested only in the response of an n-dimensional vector yt to one particular shock. For short. the inequality restriction for the output response can be used 22 to sharpen the lower bound: Σtr cos ϕ + Σ22 sin ϕ ≥ 0 21 implies ϕ ≥ ϕ(Σ) = arctan − Σ21 /Σ22 . a researcher now has to specify a prior distribution for ϕ|Σ with support on the interval [ϕ(Σ). standard deviations. Other authors. π/2]. With these draws in hand. Σ). In practice. researchers have often chosen a uniform distribution for ϕ|Σ as we will discuss in more detail below. medians. Bayesian inference in sign-restricted structural VARs is more complicated because one has to sample from the conditional distribution of p(Ω|Φ.Del Negro.1) . 
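In the bivariate long-run scheme of Example 2.2, the first column of Omega can be computed directly rather than searched for numerically: it is the unit-length vector orthogonal to the relevant row of (I - Phi_1)^{-1} Sigma_tr. The sketch below does this; the sign normalization applied at the end (a positive impact response of the first variable) is only one possible choice and is not part of the restriction itself.

```python
import numpy as np

def long_run_column(Phi_sum, Sigma_tr, restricted_row=1):
    """First column of Omega such that the cumulative long-run response of the
    variable in `restricted_row` to the first structural shock is zero, cf. (29).
    Phi_sum = Phi_1 + ... + Phi_p; bivariate case only."""
    v = (np.linalg.inv(np.eye(2) - Phi_sum) @ Sigma_tr)[restricted_row, :]
    col = np.array([v[1], -v[0]])
    col /= np.linalg.norm(col)
    # sign normalization (a modeling choice, not an identification restriction)
    if (Sigma_tr @ col)[0] < 0:
        col = -col
    return col
```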
one can approximate features of marginal posterior distributions such as means. Σ(s) ). Some authors. that is. or credible sets.and long-run identification schemes. one can simply replace Ω in the previous expressions by its first column Ω(. construct responses for the full set of n shocks. it is straightforward to implement Bayesian inference. However. One can use a simplified version of Algorithm 2. like Uhlig (2005). In many applications. including the empirical illustration provided below. It is important to keep in mind that impulse-response functions are multidimensional objects. Σ. Sims and Zha (1999) propose a method for computing credible bands that relies on the first few principal components of the covariance matrix of the responses. Since Σtr ≥ 0 as well. Schorfheide – Bayesian Macroeconometrics: April 18. like Peersman (2005). Credible sets for impulse responses are typically plotted as error bands around mean or median responses. In practice. this uniform distribution is obtained by letting ϕ ∼ U (−π. specifying a prior distribution for (the columns of) Ω can be viewed as placing probabilities on a Grassmann manifold. The sample used for posterior inference is restricted to the period from 1965:I to 2005:I. . We then remove a linear trend from log inverse velocity and scale the deviations from trend by 100. Louis. in case of Example 2. and λ5 = 1.1 and the marginal likelihood formula (15) which do not allow for equation-specific parameter restrictions. Detailed descriptions of algorithms for Bayesian inference in sign-restricted structural VARs for n > 2 can be found. restricting it to the interval [−ϕ(Σ). We consider 3 This deterministic trend could also be incorporated into the specification of the VAR. We use the dummy-observation version of the Minnesota prior described in Section 2. we add our measure of detrended per capita real GDP to obtain real money balances. Illustration 2. and Zha (2010). Any r columns of Ω can be interpreted as an orthonormal basis for an r-dimensional subspace of Rn .3. interest rates. Schorfheide – Bayesian Macroeconometrics: April 18. π] in (28) and. for instance. λ4 = 1.3 The deviations from the linear trend are scaled by 100 to convert them into percentages. A uniform distribution can be defined as the unique distribution that is invariant to transformations induced by orthonormal transformations of Rn (James (1954)). A similar problem arises when placing prior probabilities on cointegration spaces. scaled by 400 to obtain annualized percentage rates. λ3 = 1. We take the natural log of per capita output and extract a deterministic trend by OLS regression over the period 1959:I to 2006:IV.Del Negro. Finally. and we will provide a more extensive discussion in Section 3. and real money balances.n−r . Waggoner.3. in Uhlig (2005) and Rubio-Ram´ ırez. π/2]. Most authors use a conditional prior distribution of Ω|(Φ. We divide sweep-adjusted M2 money balances by quarterly nominal GDP to obtain inverse velocity. in this illustration we wanted (i) to only remove a deterministic trend from output and not from the other variables and (ii) to use Algorithm 2. However. Thus. The data are obtained from the FRED database of the Federal Reserve Bank of St. Our measure of nominal interest rates corresponds to the average federal funds rate (FEDFUNDS) within a quarter. inflation. 2010 23 just on impact but also over longer horizons j > 0.1: We consider a VAR(4) based on output.2 with the hyperparameters λ2 = 4. 
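In practice, the conditional distribution of Omega under sign restrictions is typically simulated by acceptance sampling: candidate columns are drawn uniformly on the unit sphere and discarded whenever the implied responses violate the restrictions. The sketch below implements this for a single identified shock; `check_signs` is a user-supplied predicate, and the second function encodes the same-direction restriction of Example 2.3 as an illustration. Names and interfaces are ours.

```python
import numpy as np

def sign_restricted_draws(rf_draws, check_signs, max_tries=1000, seed=0):
    """Acceptance sampler for a single sign-identified shock.
    rf_draws   : iterable of (Phi, Sigma) reduced-form posterior draws
    check_signs: function of (Phi, Sigma_tr, q) returning True if the responses
                 to the shock with impact Sigma_tr @ q satisfy the restrictions."""
    rng = np.random.default_rng(seed)
    accepted = []
    for Phi, Sigma in rf_draws:
        Sigma_tr = np.linalg.cholesky(Sigma)
        for _ in range(max_tries):
            z = rng.standard_normal(Sigma.shape[0])
            q = z / np.linalg.norm(z)          # uniform draw on the unit sphere
            if check_signs(Phi, Sigma_tr, q):
                accepted.append((Phi, Sigma_tr, q))
                break
    return accepted

def same_direction_on_impact(Phi, Sigma_tr, q):
    """Example 2.3 (bivariate): inflation and output move in the same direction."""
    impact = Sigma_tr @ q
    return impact[0] * impact[1] > 0
```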
Per capita output is defined as real GDP (GDPC96) divided by the civilian noninstitutionalized population (CNP16OV). The set of these subspaces is called Grassmann manifold and denoted by Gr. For n = 2. Inflation is defined as the log difference of the GDP deflator (GDPDEF). Σ) that is uniform. Database identifiers are provided in parentheses. 2010 24 Table 1: Hyperparameter Choice for Minnesota Prior λ1 πi.00 0. According to the posterior mean estimates.20 -888.00 1. The subsequent analysis is conducted conditional on this hyperparameter setting.00 0. the second step of Algorithm 2. described at the beginning of Section 2. Schorfheide – Bayesian Macroeconometrics: April 18.18 0.20 -898. using the appropriate modification of S.4 percent. with a weight of approximately one on λ1 = 0. We assign equal prior probability to each of these values and use (15) to compute the marginal likelihoods pλ (Y ).1. Φ and X. Results are reported in Table 1.0 ln pλ (Y ) πi. a one-standard deviation shock raises interest rates by 40 basis points upon impact. In response. Proposal draws ˜ ˜ Ω are obtained by sampling Z ∼ N (0.50 0.32 0.35 0. Posterior means and credible sets for the impulse responses are plotted in Figure 2. The posterior mean of the output response is slightly positive. Σ). and real money balances fall by 0.20 -914.00 0. we focus on the first column of the orthogonal matrix Ω. indicating substantial uncertainty about the sign and magnitude of the real effect of unanticipated changes in monetary policy . I) and letting Ω = Z/ Z .3.2. This uniform distribution is truncated to enforce the sign restrictions given (Φ. we use the sign-restriction approach described in Example 2.71 1. we assume that a contractionary monetary policy shock raises the nominal interest rate upon impact and for one period after the impact.43 0. which controls the overall variance of the prior.20 -902.01 0.20 -868.3 is implemented with an acceptance sampler that rejects proposed draws of Ω for which the sign restrictions are not satisfied.00 five possible values for λ1 .10 0.00 2.1. In particular. We specify a prior for Ω(.00 0. but the 90% credible set ranges from -50 to about 60 basis points.1) that implies that the space spanned by this vector is uniformly distributed on the relevant Grassman manifold.T 0. During these two periods. Thus. the (annualized) inflation rate drops by 30 basis points. To identify the dynamic response to a monetary policy shock. Draws from the posterior distribution of the reduced-form parameters Φ and Σ ˆ ˆ can be generated with Algorithm 2. Since we are identifying only one shock.Del Negro. the shock also lowers inflation and real money balances. The posterior probabilites of the hyperparameter values are essentially degenerate. define A = [A1 . . For instance. then we obtain tr A0 yt = A1 yt−1 + .t would Finally. Schorfheide – Bayesian Macroeconometrics: April 18. Ac ] such that (30) can be expressed as a multivariate regression of the form Y A0 = XA + E with likelihood function 1 p(Y |A0 . yt−p . Sims and Zha (1998) propose prior distributions that share the Kronecker structure of the likelihood function and hence lead to posterior distributions that can . . the posterior of A is matricvariate Normal. . correspond to unanticipated deviations from the expected policy.Del Negro. p. A detailed discussion of the Bayesian analysis of (30) is provided in Sims and Zha (1998). . .1) . conditional on A0 . tr tr tr j = 1. 
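For completeness, a sketch of the data transformations described above, assuming the raw FRED series have already been loaded as quarterly NumPy arrays; the variable names below are ours and the handling of sample endpoints is left to the reader.

```python
import numpy as np

def linear_detrend(x):
    """100 times the OLS residual from regressing x on a constant and a linear trend."""
    t = np.arange(len(x), dtype=float)
    X = np.column_stack([np.ones_like(t), t])
    return 100 * (x - X @ np.linalg.lstsq(X, x, rcond=None)[0])

# gdp (GDPC96), pop (CNP16OV), deflator (GDPDEF), fedfunds (quarterly average of
# FEDFUNDS), m2 and nominal_gdp are assumed to be aligned quarterly arrays:
# output     = linear_detrend(np.log(gdp / pop))
# inflation  = 400 * np.diff(np.log(deflator))
# interest   = fedfunds
# velocity   = linear_detrend(np.log(m2 / nominal_gdp))
# real_money = velocity + output        # detrended log real balances
```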
(30) Much of the empirical analysis in the Bayesian SVAR literature is based on this alternative parameterization (see. Moreover. 2010 under our fairly agnostic prior for the vector Ω(. xt . As in (5). 1. . Accordingly. . . Sims and Zha (1998)). 1] and Y and X be matrices with rows yt . . Ap . . one could impose identifying restrictions on A0 such that the first equation in (30) corresponds to the monetary policy rule of the central bank. we use E to denote the T × n matrix with rows t. Notice that. . Insert Figure 2 Here 2. for instance. A) ∝ |A0 |T exp − tr[(Y A0 − XA) (Y A0 − XA)] .2 An Alternative Structural VAR Parameterization 25 We introduced structural VARs by expressing the one-step-ahead forecast errors of a reduced-form VAR as a linear function of orthogonal structural shocks. . 2 (32) (31) The term |A0 |T is the determinant of the Jacobian associated with the transformation of E into Y . . the likelihood function is quadratic in A.4. meaning that under a suitable choice of prior. The advantage of (30) is that the coefficients have direct behaviorial interpretations. respectively. Aj = Ω Σ−1 Φj . Ap yt−p + Ac + t . let xt = [yt−1 . I). . and Ac = Ω Σ−1 Φc . t ∼ iidN (0. Suppose we now premultiply both sides of (1) by Ω Σ−1 and define A0 = Ω Σ−1 . is provided next. λ−1 I ⊗ V (A0 ) . M2. that is. Each row in the table corresponds to a behavioral equation labeled on the left-hand side of the row. The first equation represents an information market. real GDP interpolated to monthly frequency (˜). without having to invert matrices of the dimension nk × nk.4: Suppose yt is composed of a price index for industrial commodities (PCOM). V (A0 ) = (X ∗ X ∗ )−1 . and the remaining three equations characterize the production sector of the economy. Specifically. for instance. An example of such restrictions. Schorfheide – Bayesian Macroeconometrics: April 18. where −1 (34) ¯ A(A0 ) = ¯ V (A0 ) = λV −1 (A0 ) + X X −1 λV −1 (A0 )A(A0 ) + X Y A0 .2.2: A(A0 ) = (X ∗ X ∗ )−1 X ∗ Y ∗ A0 . Example 2. y The exclusion restrictions on the matrix A0 used by Robertson and Tallman (2001) are summarized in Table 2. The matrices A(A0 ) and V (A0 ) can.Del Negro. be constructed from the dummy observations presented in Section 2. the second equation is the monetary policy rule. the federal funds rate (R). Combining the likelihood function (32) with the prior (33) leads to a posterior for A that is conditionally matricvariate Normal: ¯ ¯ A|A0 . based on a structural VAR analyzed by Robertson and Tallman (2001). Y ∼ M N A(A0 ). λV −1 (A0 ) + X X The specific form of the posterior for A0 depends on the form of the prior density p(A0 ). The entries in the table imply that the .4. The prior distribution typically includes normalization and identification restrictions. the consumer price index (CPI). and the unemployment rate (U). 2010 26 be evaluated with a high degree of numerical efficiency. I ⊗ V (A0 ) . the third equation describes money demand. it is convenient to factorize the joint prior density as p(A0 )p(A|A0 ) and to assume that the conditional prior distribution of A takes the form A|A0 ∼ M N A(A0 ). (33) where the matrix of means A(A0 ) and the covariance matrix V (A0 ) are potentially functions of A0 and λ is a hyperparameter that scales the prior covariance matrix. ii < 0. n by −1. . whereas the matrix A0 has only 18 free elements. In practice. 
2010 27 Table 2: Identification Restrictions for A0 Pcom Inform MP MD Prod Prod Prod X 0 0 0 0 0 M2 X X X 0 0 0 R X X X 0 0 0 Y X 0 X X X X CPI X 0 X 0 X X U X 0 0 0 0 X Notes: Each row in the table represents a behavioral equation labeled on the lefthand side of the row: information market (Inform). this normalization can be imposed by postprocessing the output of the posterior sampler: for all draws (A0 . Ac ) multiply the i’th row of each matrix by −1 if A0. . because the covariance matrix of the one-step-ahead forecast errors of a VAR with n = 6 has in principle 21 free elements.Del Negro. A1 . Waggoner and Zha (2003) developed an efficient MCMC algorithm to generate draws from a restricted A0 matrix. consumer price index (CPI). . Ap . federal funds rate (R). Schorfheide – Bayesian Macroeconometrics: April 18. . A 0 entry denotes a coefficient set to zero. with the restriction that A(A0 ) = M A0 for some matrix M and that V (A0 ) = V does not depend on A0 . only variables that enter contemporaneously into the monetary policy rule (MP) are the federal funds rate (R) and M2. assume that the prior for A|A0 takes the form (33). the system requires a further normalization. . and three equations that characterize the production sector of the economy (Prod). A common normalization scheme is to require that the diagonal elements of A0 all be nonnegative. monetary aggregate (M2). as is the case for our . This normalization works well if the posterior support of each diagonal element of A0 is well away from zero. money demand (MD). The column labels reflect the observables: commodity prices (Pcom). Otherwise. . Despite the fact that overidentifying restrictions were imposed. . this normalization may induce bimodality in distributions of other parameters. monetary policy rule (MP). and unemployment (U). without changing the distribution of the endogenous variables. real GDP (Y). One can multiply the coefficients for each equation i = 1. . For expositional purposes. The structural VAR here is overidentified. . independently across i. . 2010 28 dummy-observation prior. bi−1 . . Choose w2 . . qi is the number of unrestricted elements of A0(. . . (36) ¯ where Si = Ui (S + Ω−1 )Ui and A0 can be recovered from the bi ’s. Un bn ]|T exp − bi Si bi . bn ): T p(bi |Y. b1 . . . j = i. . . . . wqi such that w1 . . . wqi form an orthonormal basis for Rqi and we can introduce the parameters β1 . . . β1 has a Gamma . . . j = i and define w1 = Vi Ui w/ Vi Ui w . wqi by construction falls in the space spanned by Uj bj . Under the assumption that bi ∼ N (bi . . . Schorfheide – Bayesian Macroeconometrics: April 18. and Ui is an n × qi matrix. let w be an n × 1 vector perpendicular to each vector Uj bj . bi+1 . . . . . Let Vi be a qi × qi matrix such that Vi Si Vi = I.i) = Ui bi where bi is a qi × 1 vector. we can verify that the conditional posterior of the βj ’s is given by p(β1 . . . its distribution is not Normal. . . . bn ) ∝ |[U1 b1 . (37) By the orthonormal property of the wj ’s. . bi+1 . . . .Del Negro. . . . bi−1 . . .i) . . all βj ’s are independent of each other. composed of orthonormal column vectors. bi−1 . Now consider the i conditional density of bi |(b1 . . . . βqi and reparameterize the vector bi as a linear combination of the wj ’s: qi bi = V i j=1 βj wj . .   2 j=1    The last line follows because w2 . . Moreover. bi+1 . . . . . Then the marginal likelihood function for A0 is of the form p(Y |A0 ) = 1 ¯ p(Y |A0 . . . we obtain p(b1 . . . . bn )  T (38) qi 2 βj j=1  qi  T ∝  |[U1 b1 . . 
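The sign normalization described above is easily imposed by postprocessing the posterior draws. A minimal sketch, with equations arranged in the rows of A0, A1, ..., Ap and Ac as in (30) and a function name of our own choosing:

```python
import numpy as np

def normalize_signs(A0, A_list, Ac):
    """Flip equation i (row i of A0, A_1,...,A_p and Ac) whenever A0[i, i] < 0;
    this leaves the distribution of the endogenous variables unchanged."""
    flip = np.where(np.diag(A0) < 0, -1.0, 1.0)[:, np.newaxis]
    return A0 * flip, [Aj * flip for Aj in A_list], Ac * flip
```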
2 (35) ¯ where S is a function of the data as well as M and V . . 2 Since bi also appears in the determinant. . βj Vi wj . b1 . Un bn ]|T exp − T 2 n bi S i bi i=1 . . . . . Ωi ). A)p(A|A0 )dA ∝ |A0 |T exp − tr[A0 SA0 ] . βqi |Y. . Characterizing the distribution of bi requires a few additional steps. Thus. . . Un bn ]| exp −  2 j=1    T qi  2 ∝ |β1 |T exp − βj . Waggoner and Zha (2003) write the restricted columns of A0 as A0(. . bn |Y ) ∝ |[U1 b1 . . in Pelloni and Polasek (2003). The left panel of Figure 3 . . and investment exhibit clear trends and tend to be very persistent. . .Del Negro. (s) 2. define bi (s) 2. 3 VARs with Reduced-Rank Restrictions It is well documented that many economic time series such as aggregate output. Y ) from the matricvariate Normal distribution in (34).5 Further VAR Topics The literature on Bayesian analysis of VARs is by now extensive. bi−1 . n generate (s) (s) (s−1) (s−1) β1 . are normally distributed.4: Gibbs Sampler for Structural VARs For s = 1. . A complementary survey of Bayesian analysis of VARs including VARs with time-varying coefficients and factor-augmented VARs can be found in Koop and Korobilis (2010). We will discuss VAR models with stochastic volatility in Section 5. for instance. Waggoner. Our exposition was based on the assumption that the VAR innovations are homoskedastic. Draw A0 (s) conditional on (A(s−1) . possibly conditional on the future path of a subset of variables. Draws from the posterior of A0 can be obtained by Gibbs sampling. . . Algorithm 2. . Readers who are interested in using VARs for forecasting purposes can find algorithms to compute such predictions efficiently. Extensions to GARCHtype heteroskedasticity can be found. it has long been recognized that linear combinations of macroeconomic time series (potentially after a logarithmic transformation) appear to be stationary. bn according to (37). For i = 1. . . consumption. .i) = Ui bi . and βj . Y ) as follows. . 2 ≤ j ≤ qi . . . Examples are the so-called Great Ratios. such as the consumption-output or investment-output ratio (see Klein and Kosobud (1961)). 2010 29 distribution. βqi from (38) conditional on (b1 . Rubio-Ram´ ırez. Uhlig (1997) proposes a Bayesian approach to VARs with stochastic volatility. bi+1 . . . and our presentation is by no means exhaustive. nsim : 1. . At the same time. and let A0(. . and Zha (2010) provide conditions for the global identification of VARs of the form (30). in Waggoner and Zha (1999). . (s) (s) ). Draw A(s) conditional on (A0 . . Schorfheide – Bayesian Macroeconometrics: April 18. 1 Cointegration Restrictions Consider the reduced-form VAR specified in (1). (39) . . Schorfheide – Bayesian Macroeconometrics: April 18. Cointegration implies that the series have common stochastic trends that can be eliminated by taking suitable linear combinations. and the fluctuations look at first glance mean-reverting. we discuss Bayesian inference in cointegration systems under various types of prior distributions. For now. Subtracting yt−1 from both sides of the equality leads to ∆yt = (Φ1 − I)yt−1 + Φ2 yt−2 + . then yt is nonstationary. the dynamic behavior of a univariate autoregressive process φ(L)yt = ut . then these series are said to be cointegrated.Del Negro. which takes the form of a reduced-rank regression. crucially depends on the roots of the characteristic polynomial φ(z). Johansen (1991). where φ(L) = 1 − p p j=1 φj L and L is the lag operator. Unit-root processes are often called integrated of order one. 
we will show in Section 3. Such restricted VARs have become a useful and empirically successful tool in applied macroeconomics. The observation that particular linear combinations of nonstationary economic time series appear to be stationary has triggered a large literature on cointegration starting in the mid 1980’s. it exhibits no apparent trend. we will discuss how such cointegration relationships arise in a dynamic stochastic general equilibrium framework. 2010 30 depicts log nominal GDP and nominal aggregate investment for the United States over the period 1965-2006 (obtained from the FRED database of the Federal Reserve Bank of St. . This leads to the so-called vector error correction model.1 that one can impose cotrending restrictions in a VAR by restricting some of the eigenvalues of its characteristic polynomial to unity. + Φp yt−p + Φc + ut . and Phillips (1991). If the smallest root is unity and all other roots are outside the unit circle. In Section 4. Johansen (1988). Engle and Granger (1987).2. ut ∼ iidN (0. In Section 3. Louis) and the right panel shows the log of the investment-output ratio. 3. Σ). While the ratio is far from constant. I(1). Insert Figure 3 Here More formally. If a linear combination of univariate I(1) time series is stationary. because stationarity can be induced by taking first differences ∆yt = (1 − L)yt . for example. see. 2010 For j = 1. Thus. where α and β are n × r matrices of full column rank. for instance. Ψ(L)ut = ∞ Ψj ut−j is a stationary linear process. it is useful to define a matrix α⊥ and β⊥ of full column rank and dimension n × (n − r) such that α α⊥ = 0 and β β⊥ = 0. (42) p−1 j=1 Πj . yt has n − r common stochastic trends given by (α⊥ Γβ⊥ )−1 α⊥ t τ =1 (ut + Πc ). then (41) implies that yt can be expressed as (Granger’s Representation Theorem): t (41) yt = β⊥ (α⊥ Γβ⊥ ) Γ=I− and −1 α⊥ τ =1 (ut + Πc ) + Ψ(L)(ut + Πc ) + Pβ⊥ y0 . according to (41) the growth rates of output and investment should be modeled as functions of lagged growth rates as well as the log investment-output ratio.Del Negro. It follows immediately j=0 that the r linear combinations β yt are stationary. |Φ(1)| = 0 – then the matrix Π∗ is of reduced rank. Then we can rewrite (39) (40) Φj z j . . This reparameterization leads to the so-called vector error correction or vector equilibrium correction (VECM) representation: ∆yt = αβ yt−1 + Π1 ∆yt−1 + . Moreover. A detailed exposition can be found. + Πp−1 ∆yt−p+1 + Πc + ut . . studied by Engle and Granger (1987). Φ(z) is the characteristic polynomial of the VAR. we can reparameterize the matrix as Π∗ = αβ . Since in this example β⊥ is . . If no root of Φ(z) = 0 lies inside the unit circle and α⊥ β⊥ has full rank. . . – that is. . we ˜ can define α and β such that Π∗ = αAA−1 β = αβ . If yt is composed of log GDP and investment. If the rank of Π∗ equals r < n. . Schorfheide – Bayesian Macroeconometrics: April 18. The columns of β are called cointegration vectors. in the monograph by Johansen (1995). In addition to the matrices α ˜ ˜˜ and β. + Πp−1 ∆yt−p+1 + Πc + ut . −1] . Pβ⊥ is the matrix that projects onto the space spanned by β⊥ . It can be easily verified that the parameterization of Π∗ in terms of α and β is not unique: for any nonsingular r × r matrix A. a visual inspection of Figure 3 suggests that the cointegration vector β is close to [1. p−1 define Πj = − as ∆yt = Π∗ yt−1 + Π1 ∆yt−1 + . If the VAR has unit roots. A few remarks are in order. where Π∗ = −Φ(1) and Φ(z) = I − j=1 p p i=j+1 Φp 31 and Πc = Φc . . Br×(n−r) ] . 
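The mapping from the VAR coefficients in (39) to the VECM representation is a simple linear transformation, sketched below; the rank of Pi* then indicates the number of cointegration relationships.

```python
import numpy as np

def var_to_vecm(Phi_lags, Phi_c):
    """Map VAR(p) coefficients into the VECM form
    Delta y_t = Pi* y_{t-1} + Pi_1 Delta y_{t-1} + ... + Pi_{p-1} Delta y_{t-p+1} + Pi_c + u_t."""
    n = Phi_lags[0].shape[0]
    Pi_star = sum(Phi_lags) - np.eye(n)                       # Pi* = -Phi(1)
    Pi_lags = [-sum(Phi_lags[j + 1:]) for j in range(len(Phi_lags) - 1)]
    r = np.linalg.matrix_rank(Pi_star)                        # number of cointegration relations
    return Pi_star, Pi_lags, Phi_c, r
```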
nsim : 1. β) is MNIW. . we will focus on inference for Π∗ = αβ conditional on Π and Σ for the remainder of this section (Step 2 of Algorithm 3. Throughout this subsection we normalize β = [Ir×r . . Σ)|(α. and ut . Σ)|(Y. We will examine various approaches to specifying a prior distribution for Π∗ and discuss Gibbs samplers to implement posterior inference. In this section. β) is also of the MNIW form and can easily be derived following the calculations in Section 2. ut ∼ iidN (0.2 Bayesian Inference with Gaussian Prior for β Define Π = [Π1 . To simplify the subsequent exposition. In particular. . Inspection of (41) suggests that conditional on α and β. Let ∆Y . Σ|Π∗ (s) (s−1) . Equation (42) highlights the fact that output and investment have a common stochastic trend. X. Draw Π∗ from the posterior p(Π∗ |Π(s) . Σ). the VECM reduces to a multivariate linear Gaussian regression model. Schorfheide – Bayesian Macroeconometrics: April 18. . To do so. . Geweke (1996) used such priors to study inference in the reduced-rank regression model. 2. α. Πc ] and let ut ∼ N (0. yt−1 . it is convenient to write the regression in matrix form. As before. Y ). if (Π. 2010 2 × 1 and the term (α⊥ Γβ⊥ )−1 α⊥ t τ =1 (ut + Πc ) 32 is scalar. A Gibbs sampler to generate draws from the posterior distribution of the VECM typically has the following structure: Algorithm 3. Y ). Π∗ = αβ . we study the simplified model ∆yt = Π∗ yt−1 + ut . we consider independent priors p(α) and p(β) that are either flat or Gaussian. and U denote the T × n matrices with rows ∆yt . Πp−1 . Σ(s) . Σ). Σ(s) ) from the posterior p(Π. respectively. The remainder of Section 3 focuses on the formal Bayesian analysis of the vector error correction model. In practice.Del Negro. A discussion of model selection and averaging approaches is deferred to Section 7.1). the researcher faces uncertainty about the number of cointegration relationships as well as the number of lags that should be included.1: Gibbs Sampler for VECM For s = 1. such that ∆Y = XΠ∗ + U . . (43) and treat Σ as known. . 3. then we can deduce immediately that the posterior (Π. Draw (Π(s) . . β) ∝ p(α) exp 1 ˜ ˜ ˜ − tr[Σ−1 (αX Xα − 2αX ∆Y )] . β) ∝ |Σ|−T /2 exp 1 − tr[Σ−1 (∆Y − Xβα ) (∆Y − Xβα )] . In the context of our output-investment illustration. Then p(α|Y. If the prior has the same Kronecker structure as the likelihood function. For brevity. The following steps are designed to eliminate the α term. An informative prior for α could be constructed from beliefs about the speed at which the economy returns to its balanced-growth path in the absence of shocks. Partition X = [X1 . 2 (45) Thus. Schorfheide – Bayesian Macroeconometrics: April 18. as long as the prior of vec(α ) is Gaussian. 2010 33 The prior distribution for β is induced by a prior distribution for B. then the posterior is matricvariate Normal. We will discuss the consequences of this normalization later on. We will encounter a DSGE model with such a balanced-growth-path property in Section 4. We begin with the posterior of α. we will derive conditional posterior distributions for α and β based on the ˜ likelihood (44). we refer to this class of priors as balanced-growth-path priors. This normalization requires that the elements of yt be ordered such that each of these variables appears in at least one cointegration relationship. 
reflecting either presample evidence on the stability of the investment-output ratio or the belief in an economic theory that implies that industrialized economies evolve along a balanced-growth path along which consumption and output grow at the same rate. Post-multiplying (46) by the matrix . The derivation of the conditional posterior of β is more tedious. Define X = Xβ. Now define Z = ∆Y − X1 α and write Z = X2 Bα + U. X2 ] such that the partitions of X conform to the partitions of β = [I. the likelihood function is of the form p(Y |α. the posterior of vec(α ) is multivariate Normal. 2 (44) In turn. (46) The fact that B is right-multiplied by α complicates the analysis. one might find it attractive to center the prior for the cointegration coefficient B at −1.Del Negro. B ] and rewrite the reduced-rank regression as ∆Y = X1 α + X2 Bα + U. Conditional on an initial observation and the covariance matrix Σ (both subsequently omitted from our notation). respectively.S. We use an improper prior of the form p(Π. ˜ Z2 = Zα⊥ . Draws from the posterior distribution are generated through a Gibbs sampler in which Step 2 of Algorithm 3. let Σ = C ΣC and partition Σ conforming ˜ ˜ ˜ ˜ ˜ with U = [U1 . α) ∝ p(β(B)) exp 1 ˜ ˜ ˜ − tr Σ−1 (Z1|2 − X2 B) (Z1|2 − X2 B) 1|2 2 . . 0. if the prior distribution for B is either flat or Normal. (48) Thus. Then we can deduce 22 p(B|Y. 2λ where λ ∈ {0. The posterior is similar for all three choices of λ. The mean and variance of Z1 conditional on Z2 are given −1 ˜ −1 ˜ ˜ ˜ ˜ ˜ ˜ ˜ ˜ by (Σ12 Σ22 Z2 + X2 B) and Σ1|2 = Σ11 − Σ12 Σ22 Σ21 .1: We use the VECM in (41) with p = 4 and the associated movingaverage representation (42) to extract a common trend from the U. 0 + U1 . 2010 C = [α(α α)−1 . ˜ U2 = U α⊥ . . indicating that the data .1 is replaced by the two steps described in Algorithm 3.01. Σ. B) ∝ |Σ|−(n+1)/2 exp − 1 (B − (−1))2 . −1] . U2 ]. The posterior density for B is plotted in Figure 4 for the three parameterizations of the prior variance λ. Z2 = X2 B. 1}. Define Z1|2 = ˜ ˜ ˜ ˜ Z1 − Σ12 Σ−1 Z2 . which sharp˜ ˜ ens the inference for B. B] is centered at the balanced-growth-path values [1. Algorithm 3. α. Formally. Draw α(s) from p(α|β (s−1) . Y ) given in (45). Illustration 3.Del Negro. The prior distribution for the cointegration vector β = [1. then the conditional posterior of B given α is Normal. ˜ U1 = U α(α α)−1 . where ˜ Z1 = Zα(α α)−1 . investment and GDP data depicted in Figure 3. . nsim : 1. 2. . α⊥ ] yields the seemingly unrelated regression ˜ ˜ ˜ ˜ Z1 . Schorfheide – Bayesian Macroeconometrics: April 18.2: Gibbs Sampler for Simple VECM with Gaussian Priors For s = 1.1.2. Draw B (s) from p(B|α(s) . U2 . Through Z2 . 34 (47) ˜ ˜ Notice that we cannot simply drop the Z2 equations. we obtain ˜ ˜ information about U2 and hence indirectly information about U1 . Y ) given in (48) and let β (s) = [I. B (s) ] . ϕ ∈ (0. For each prior. and Villani (2006). the posterior mean of B is about −1.1. We begin by reviewing the first strand. The second strand uses prior distributions to regularize or smooth the likelihood function of a cointegration model in areas of the parameter space in which it is very nonelliptical. Using posterior draws based on λ = 0. Our discussion focuses on the output-investment example with n = 2 and r = 1. Figure 5 plots the decompositions of log nominal aggregate investment and log nominal GDP into common trends and stationary fluctuations around those trends. 
National Bureau of Economic Research (NBER) recession dates are overlayed in gray.10. which we previously encountered in the context of structural VARs in Section 2. The plots in the left column of the Figure display the common trend β⊥ (α⊥ Γβ⊥ )−1 α⊥ t τ =1 (ut + Πc ) for each series.3 Further Research on Bayesian Cointegration Models The Bayesian analysis of cointegration systems has been an active area of research. 2010 35 are quite informative about the cointegration relationship.07. indicating a slight violation of the balanced-growth-path restriction. Rather than normalizing one of the ordinates of the cointegration vector β to one. Insert Figure 4 Here Insert Figure 5 Here 3. van Dijk. we can alternatively normalize its length to one and express it in terms of polar coordinates. . we let β(ϕ) = [cos(−π/4 + π(ϕ − 1/2)).n−r ). Schorfheide – Bayesian Macroeconometrics: April 18. Subsequently.Del Negro. sin(−π/4 + π(ϕ − 1/2))] . Strachan and Inder (2004) and Villani (2005) emphasize that specifying a prior distribution for β amounts to placing a prior probability on the set of r-dimensional subspaces of Rn (Grassmann manifold Gr. while the plots in the right column show the demeaned stationary component Ψ(L)ut . In this case the Grassmann manifold consists of all the lines in R2 that pass through the origin. and a detailed survey is provided by Koop. with most of the mass of the distributions placed on values less than −1. 1]. Strachan.4. For reasons that will become apparent subsequently. The first strand points out that the columns of β in (41) should be interpreted as a characterization of a subspace of Rn and that priors for β are priors over subspaces. we consider two strands of this literature. for general n and r. B ]. B ]. we can choose a Beta distribution for ϕ and let ϕ ∼ B(γ. if γ = 1. and it turns out that the subspaces associated with β(ϕ) are uniformly distributed on the Grassmann manifold (see James (1954)). B becomes nonidentifiable. For n = 2. caused by local nonidentifiability of α and B under the ordinal normalization β = [I. As the loadings α for the cointegration relationships β yt−1 approach zero. then the prior is fairly dogmatic. we used a balanced-growth-path prior that was centered at the cointegration vector [1. Instead. 1]. If γ >> 1. γ). to generate prior distributions that are centered at the balanced-growth-path restriction. the marginal posterior density of α can be written as p(α|Y ) ∝ p(α) p(Y |α. which rotate the subspace spanned by β(ϕ) around the origin. derives the posterior distribution for α and β using the ordinal normalization β = [I. As γ approaches 1 from above it becomes more diffuse. Thus. B ] favors the cointegration spaces near the region where the linear normalization is invalid. and its density integrates to infinity. B)dB. Kleibergen and van Dijk (1994) and Kleibergen and Paap (2002) use prior distributions to correct irregularities in the likelihood function of the VECM. meaning that some of the first r variables do not appear in any cointegration vector. We now turn to the literature on regularization. In fact. Under this prior. then the conditional posterior of B given α = 0 is improper. this group is given by the set of orthogonal matrices specified in (28). 2010 36 The one-dimensional subspace associated with β(ϕ) is given by λβ(ϕ). In our empirical illustration. because a flat and apparently noninformative prior on B in β = [I. −1] . 
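A sketch of the two-block Gibbs sampler for the simplified model (43) is given below, treating Sigma as known as in the text, using a flat prior for alpha and the Gaussian prior B ~ N(-1, lambda) of the balanced-growth-path specification. The function name, interface, and default prior variance are ours; the second step implements the seemingly unrelated regression transformation described above.

```python
import numpy as np

def gibbs_vecm(dY, Ylag, Sigma, B_mean=-1.0, B_var=0.01, nsim=1000, seed=0):
    """Gibbs sampler for Delta y_t = alpha beta' y_{t-1} + u_t with beta = [1, B]'.
    dY, Ylag: T x 2 matrices of Delta y_t and y_{t-1}; Sigma treated as known."""
    rng = np.random.default_rng(seed)
    X1, X2 = Ylag[:, :1], Ylag[:, 1:]
    B = B_mean
    draws = []
    for _ in range(nsim):
        # Step 1: alpha | B  (flat prior; regression of dY on Ylag @ beta)
        beta = np.array([[1.0], [B]])
        W = Ylag @ beta
        WW = (W.T @ W).item()
        alpha_hat = (W.T @ dY / WW).ravel()
        alpha = alpha_hat + np.linalg.cholesky(Sigma / WW) @ rng.standard_normal(2)
        # Step 2: B | alpha  (transform with C = [alpha(alpha'alpha)^{-1}, alpha_perp])
        a = alpha.reshape(2, 1)
        a_perp = np.array([[-alpha[1]], [alpha[0]]])
        C = np.hstack([a / (a.T @ a).item(), a_perp])
        Z = dY - X1 @ a.T
        Zt = Z @ C                                  # [Z1~, Z2~]
        St = C.T @ Sigma @ C
        S1g2 = St[0, 0] - St[0, 1] ** 2 / St[1, 1]  # conditional variance
        Z1g2 = Zt[:, :1] - (St[0, 1] / St[1, 1]) * Zt[:, 1:]
        prec = (X2.T @ X2).item() / S1g2 + 1.0 / B_var
        mean = ((X2.T @ Z1g2).item() / S1g2 + B_mean / B_var) / prec
        B = mean + rng.standard_normal() / np.sqrt(prec)
        draws.append((alpha.copy(), B))
    return draws
```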
these authors propose to normalize β according to β β = I and develop methods of constructing informative and diffuse priors on the Grassmann manifold associated with β. . where λ ∈ R. This vector lies in the space spanned by β(1/2). Strachan and Inder (2004) are very critical of the ordinal normalization. then ϕ ∼ U (0. Villani (2005) proposes to use the uniform distribution on the Grassman manifold as a reference prior for the analysis of cointegration systems and. This uniform distribution is defined to be the unique distribution that is invariant under the group of orthonormal transformations of Rn . If the highly informative balanced-growth-path prior discussed previously were replaced by a flat prior for B – that is p(B) ∝ constant – to express diffuse prior beliefs about cointegration relationships. Schorfheide – Bayesian Macroeconometrics: April 18.Del Negro. B. (51) Here. D11 . α and B. This prior has the property that as α −→ 0 its density vanishes and counteracts the divergence of p(Y |α. W21 ]. The matrix Λ is chosen to obtain a convenient functional form for the prior density below: Λ = (V22 V22 )−1/2 V22 D22 W22 (W22 W22 )−1/2 . p(Π∗ ) ∝ constant. For Λ = 0 the rank of the unrestricted Π∗ in (50) reduces to r and we obtain the familiar expression Π∗ = βα . They proceed by deriving a conditional distribution for Π∗ given Λ = 0. and D is a diagonal n × n matrix. Regardless of the rank of Π∗ . Schorfheide – Bayesian Macroeconometrics: April 18. Kleibergen and Paap (2002) propose the following alternative. Λ). α = 0)dB determines the marginal density at α = 0. Λ)) is the Jacobian associated with the mapping between Π∗ and (α. −1 B = V21 V11 . B.Del Negro. B)dB. B) ∝ |JΛ=0 (Π∗ (α. . B. and α = V11 D11 [W11 . and W11 are of dimension r×r. 2010 Since 37 p(Y |B. where β= I B . Here Mα and Mβ are chosen such that the second equality in (50) holds. The partitions V11 . The starting point is a singular-value decomposition of a (for now) unrestricted n × n matrix Π∗ . ignoring the rank reduction generated by the r cointegration relationships. the matrices β⊥ and α⊥ take the form β⊥ = Mβ [V12 V22 ] and α⊥ = Mα [W12 W22 ]. Details of the implementation of a posterior simulator are provided in Kleibergen and Paap (2002). Finally. the posterior of α tends to favor near-zero values for which the cointegration relationships are poorly identified. it can be verified that the matrix can be decomposed as follows: Π∗ = V11 V21 D11 W11 W21 + V12 V22 D22 W12 W22 (50) = βα + β⊥ Λα⊥ . Thus. which takes the form: Π∗ = V DW = V11 V12 V21 V22 D11 0 0 D22 W11 W21 W12 W22 . The authors start from a flat prior on Π∗ : that is. JΛ=0 (Π∗ (α. Λ))| ∝ |β β|(n−r)/2 |αα |(n−r)/2 . respectively. and all other partitions conform. p(α. and finally use a change of variables to obtain a distribution for the parameters of interest. (49) V and W are orthogonal n × n matrices. This uncertainty is generated by exogenous stochastic processes that shift technology. Conditional on distributional assumptions for the exogenous shocks. . 2010 38 4 Dynamic Stochastic General Equilibrium Models The term DSGE model is typically used to refer to a broad class of dynamic macroeconomic models that spans the standard neoclassical growth model discussed in King. Schorfheide – Bayesian Macroeconometrics: April 18. 
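The polar-coordinate parameterization is convenient numerically. The sketch below maps phi into the unit-length cointegration vector beta(phi) and draws phi from a symmetric Beta distribution, which is one way to read the B(gamma, .) specification above: gamma = 1 recovers the uniform distribution on the Grassmann manifold, while larger gamma concentrates prior mass near the balanced-growth-path direction spanned by [1, -1]'.

```python
import numpy as np
from scipy.stats import beta as beta_dist

def beta_of_phi(phi):
    """beta(phi) = [cos(-pi/4 + pi(phi - 1/2)), sin(-pi/4 + pi(phi - 1/2))]';
    phi = 1/2 spans the same line as the balanced-growth vector [1, -1]'."""
    angle = -np.pi / 4 + np.pi * (np.asarray(phi) - 0.5)
    return np.array([np.cos(angle), np.sin(angle)])

# illustrative prior draws of the cointegration direction with gamma = 20
prior_draws = beta_of_phi(beta_dist.rvs(20, 20, size=500, random_state=0))
```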
agents potentially face uncertainty with respect to total factor productivity.7 discusses numerous methods of documenting the performance of DSGE models and comparing them to less restrictive models such as vector autoregressions. we provide a brief discussion of some empirical applications in Section 4. Bayesian inference on the parameters of a linearized DSGE model is discussed in Section 4.Del Negro. the DSGE model generates a joint probability distribution for the endogenous model variables such as output. and analyzing the welfare effects of economic policies.1. This posterior is the basis for substantive inference and decision making.3. and inflation. or the nominal interest rate set by a central bank. and to models solved with nonlinear techniques are discussed in Sections 4. The model solution and state-space representation are discussed in Section 4.4. In a Bayesian framework.6. generating predictive distributions for key macroeconomic variables. for instance.5. Extensions to models with indeterminacies or stochastic volatility. and 4. taking both parameter and model uncertainty into account. investment. 4. consumption. Finally. We present a prototypical DSGE model in Section 4. A detailed survey of Bayesian techniques for the estimation and evaluation of DSGE models is provided in An and Schorfheide (2007a). for example. this likelihood function can be used to transform a prior distribution for the structural parameters of the DSGE model into a posterior distribution. Section 4. such as studying the sources of business-cycle fluctuations and the propagation of shocks to the macroeconomy. and Evans (2005).2. Moreover. DSGE models can be used for numerous tasks. respectively.8. Eichenbaum. Plosser. A common feature of these models is that decision rules of economic agents are derived from assumptions about preferences and technologies by solving intertemporal optimization problems. The remainder of this section is organized as follows. or generate unanticipated deviations from a central bank’s interest-rate feedback rule. and Rebelo (1988) as well as the monetary model with numerous real and nominal frictions developed by Christiano. Finally. Both output and labor productivity are plotted in terms of percentage deviations from a linear trend. According to this model. and Bt is an exogenous preference shifter that can be interpreted as a labor supply shock. and log labor productivity for the US. where Wt is the hourly wage. If Bt increases. and Santaeulalia-Llopis (2009). The household receives the labor income Wt Ht . Fuentes-Albero. It owns the capital stock Kt and rents it to the firms at the rate Rt . an important source of the observed fluctuations in the three series is exogenous changes in total factor productivity. (53) where It is investment and δ is the depreciation rate. hours worked.1 A Prototypical DSGE Model Figure 6 depicts postwar aggregate log output. Schorfheide – Bayesian Macroeconometrics: April 18. (54) . The representative household maximizes the expected discounted lifetime utility from consumption Ct and hours worked Ht : ∞ I t E s=0 β t+s ln Ct+s − (Ht+s /Bt+s )1+1/ν 1 + 1/ν (52) subject to a sequence of budget constraints Ct + It ≤ Wt Ht + Rt Kt . 2010 39 Insert Figure 6 Here 4. We will illustrate the techniques discussed in this section with the estimation of a stochastic growth model based on observations on aggregate output and hours worked. The household uses the discount rate β. ν is the aggregate labor supply elasticity. 
The simplest DSGE model that tries to capture the dynamics of these series is the neoclassical stochastic growth model. The first-order conditions associated with the household’s optimization problem are given by a consumption Euler equation and a labor supply condition: 1 1 = βI E (Rt+1 + (1 − δ)) Ct Ct+1 and 1 1 Wt = Ct Bt Ht Bt 1/ν . then the disutility associated with hours worked falls. Capital accumulates according to Kt+1 = (1 − δ)Kt + It .Del Negro. Precise data definitions are provided in R´ ıos-Rull. Kryshko. The model consists of a representative household and perfectly competitive firms. Schorfheide. Wt . Log technology evolves according to ln At = ln A0 +(ln γ)t+ln At . 1). respectively: Wt = α Yt . Firms solve a static profit maximization problem and choose labor and capital to equate marginal products of labor and capital with the wage and rental rate of capital. The solution to the rational expectations difference equations (53) to (59) determines the law of motion for the endogenous variables Yt . Kt (56) An equilibrium is a sequence of prices and quantities such that (i) the representative household maximizes utility and firms maximize profits taking the prices as given. (59) and 0 ≤ ρb < 1. Schorfheide – Bayesian Macroeconometrics: April 18. we specify a law of motion for the two exogenous processes. Ct = . Since we will subsequently solve the model by constructing a local approximation of its dynamics near a steady state. we assume ln A−τ = 0 and ln B−τ = 0. Ht .t ∼ iidN (0. consumption. Ht Rt = (1 − α) Yt . (55) The stochastic process At represents the exogenous labor augmenting technological progress.Del Negro. To initialize the exogenous processes. and produce final goods according to the following Cobb-Douglas technology: 1−α Yt = (At Ht )α Kt . 1). Wt = . hire labor services. and Rt . At At At At At (60) . It = . 1]. b.t ∼ iidN (0. (57) To close the model. 2010 40 Firms rent capital. If 0 ≤ ρa < 1. (58) where ρa ∈ [0. If ρa = 1. Kt+1 = . Kt . ln At = ρa ln At−1 +σa a. investment. The technology process ln At induces a common trend in output. a. Ct . then ln At is a random-walk process with drift. and wages.t . it is useful to detrend the model variables as follows: Yt = Ct It Kt+1 Wt Yt . and (ii) markets clear. capital. implying that Yt = Ct + It . the technology process is trend stationary.t . It . Exogenous labor supply shifts are assumed to follow a stationary AR(1) process: ln Bt = (1 − ρb ) ln B∗ + ρb ln Bt−1 + σb b. and ln Wt . Schorfheide – Bayesian Macroeconometrics: April 18. γ. β K∗ Y∗ = (1 − α)γ . σa . (62) This log ratio is always stationary. and is a function of shocks dated t and earlier. Yt Kt eat Kt+1 = (1 − δ)Kt e−at + It . At = ln γ + (ρa − 1) ln At−1 + σa At−1 a. the model and b. ln A0 . δ. 4. we are detrending Kt+1 by At .t economy becomes deterministic and has a steady state in terms of the detrended variables. the detrended variables follow a stationary law of motion. which according to (60) are obtained by taking pairwise differences of ln Yt . ν.t (63) to zero. ln Ct .t . we stack the parameters of the DSGE model in the vector θ: θ = [α.2 Model Solution and State-Space Form The solution to the equilibrium conditions (59). even if the underlying technology shock is nonstationary. R∗ I∗ Y∗ = 1− 1−δ γ K∗ Y∗ . ρa . β. the model generates a number of cointegration relationships. if ρa = 1. and (62) leads to a probability distribution for the endogenous model variables.Del Negro. 
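The consumption Euler equation and labor supply condition quoted above are garbled by the text extraction; restated in standard notation, using the same symbols as the surrounding text, they read:

```latex
\frac{1}{C_t} \;=\; \beta\,\mathbb{E}_t\!\left[\frac{1}{C_{t+1}}\bigl(R_{t+1} + (1-\delta)\bigr)\right],
\qquad
\frac{W_t}{C_t} \;=\; \frac{1}{B_t}\left(\frac{H_t}{B_t}\right)^{1/\nu}.
```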
It is straightforward to rewrite (53) to (57) in terms of the detrended variables: 1 Ct = βI E Yt . the capital-output. Ht 1 Ct+1 e−at+1 (Rt+1 + (1 − δ)) . ln B∗ . because if ρa = 1 the ln At−1 term drops out. Finally. Moreover. For instance. Yt = Ct + It . and the investment-output ratios are given by R∗ = γ − (1 − δ). (61). the rental rate of capital. 2010 41 The detrended variables are mean reverting. indexed by the vector of structural . This bounds the probability of experiencing large deviations from the log-linearization point for which the approximate solution becomes inaccurate. (64) In a stochastic environment. ρb . 1 Ct Wt = 1 Bt Ht Bt 1/ν (61) Wt = α Rt = (1 − α) 1−α Yt = Htα Kt e−at The process at is defined as at = ln . σb ] . This steady state is a function of θ. Hence. If we set the standard deviations of the innovations a. According to our timing convention. ln It . ln Kt+1 . Kt+1 refers to capital at the end of period t/beginning of t + 1. θ). This likelihood function can be used for Bayesian inference. where st is a vector of suitably defined state variables and the innovations for the structural shocks. a few remarks about the model solution are in order. at = At − At−1 . the intertemporal optimization problems of economic agents can be written recursively. In general. For now. Yt At Kt+1 = 1−δ I∗ 1−δ Kt + It − at . ignoring the discrepancy between the nonlinear model solution and the first-order approximation. A multitude of techniques are available for solving linear rational expectations models (see. using Bellman equations. γ γ K∗ C∗ I∗ Ct + It . Before turning to the Bayesian analysis of DSGE models. t . with the loose justification that any explosive solution would violate the transversality conditions associated with the underlying dynamic optimization problems. Yt = Y∗ Y∗ = ρa At−1 + σa a. Schorfheide – Bayesian Macroeconometrics: April 18. the solution takes the form st = Φ1 (θ)st−1 + Φ (θ) t . For the neoclassical growth model. Economists focus on solutions that guarantee a nonexplosive law of motion for the endogenous variables that appear in (66). then Xt = ln Xt − ln X∗ (Xt = ln Xt − ln X∗ ).t . We adopt the convention that if a variable Xt (Xt ) has a steady state X∗ (X∗ ).t . The solution of the DSGE model can be written as st = Φ(st−1 . for instance. (66) t (65) is a vector that stacks Ht = ν Wt − ν Ct + (1 + ν)Bt . = αHt + (1 − α)Kt − (1 − α)at . 2010 42 parameters θ. The log-linearized equilibrium conditions of the neoclassical growth model (61) are given by the following system of linear expectational difference equations: Ct = I t Ct+1 + at+1 − E R∗ Rt+1 R∗ + (1 − δ) Wt = Yt − Ht . we proceed under the assumption that the DSGE model’s equilibrium law of motion is approximated by log-linearization techniques. the value and policy functions associated with the optimization problems are nonlinear in terms of both the state and the control variables. In most DSGE models. Sims (2002b)).Del Negro. (67) . and the solution of the optimization problems requires numerical techniques. Bt = ρb Bt−1 + σb b. Rt = Yt − Kt + at . and Ht are linear functions of st . In the subsequent illustration. At and Bt . or additional shocks as in Leeper and Sims (1995) and more recently Smets and Wouters (2003). The other endogenous variables. and where H∗ is the steady state of hours worked and the variables At . Ht . It . 2010 43 The system matrices Φ1 and Φ are functions of the DSGE model parameters θ. Thus. 
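The steady-state expressions quoted above are difficult to read in the extracted text; a small sketch that evaluates them may be clearer. The parameter values are illustrative and correspond to the calibration targets discussed later in the section (α = 0.66, β = 0.99, δ = 0.025); the quarterly gross growth rate γ = 1.005 is an assumption made for the example.

```python
def steady_state_ratios(alpha=0.66, beta=0.99, delta=0.025, gamma=1.005):
    """Steady state of the detrended model, cf. (64):
    R* = gamma/beta - (1 - delta), K*/Y* = (1 - alpha) gamma / R*,
    I*/Y* = (1 - (1 - delta)/gamma) K*/Y*, C*/Y* = 1 - I*/Y*."""
    R_star = gamma / beta - (1.0 - delta)
    KY = (1.0 - alpha) * gamma / R_star
    IY = (1.0 - (1.0 - delta) / gamma) * KY
    return {"R*": R_star, "K*/Y*": KY, "I*/Y*": IY, "C*/Y*": 1.0 - IY}

# with these values the investment-output ratio comes out near 25 percent,
# in line with the calibration targets mentioned later in this section
print(steady_state_ratios())
```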
the trend generated by technology (ln γ)t + At is added in the measurement equation. Kt+1 . Yt . The model predicts that certain linear combinations of variables. Like all DSGE models. If the innovations Kohn (This Volume). Sargent (1989).Del Negro. and Rt can be expressed as linear functions of st . Wt . (68) Equations (67) and (68) provide a state-space representation for the linearized DSGE model. Pitt. the linearized neoclassical growth model has some apparent counterfactual implications. Yt . Equation (68) becomes ln GDPt ln Ht = ln Y0 ln H∗ + ln γ 0 t+ Yt + At Ht . as well as the two exogenous processes At and Bt . Ct . so that it matches the number of exogenous shocks. are constant. the likelihood function for more than two variables is degenerate. In the subsequent empirical illustration. which is clearly at odds with the data. and st is composed of three elements: the capital stock at the end of period t. it is instructive to examine the measurement equations that the model yields for output . Our measurement equation takes the form yt = Ψ0 (θ) + Ψ1 (θ)t + Ψ2 (θ)st . Since fluctuations are generated by two exogenous disturbances. and Ireland (2004). t are Gaussian. which is described in detail in Giordani. In this case. Schorfheide – Bayesian Macroeconometrics: April 18. then the likelihood function can be obtained from the Kalman filter. To cope with this problem authors have added either so-called measurement errors. Notice that even though the DSGE model was solved in terms of the detrended model variable Yt . Altug (1989). we let yt consist of log GDP and log hours worked. Although we focus on the dynamics of output and hours in this section. such as the labor share lsh = Ht + Wt − Yt . we restrict the dimension of the vector of observables yt to n = 2. we are able to use nondetrended log real GDP as an observable and to learn about the technology growth rate γ and its persistence ρa from the available information about the level of output. Schorfheide – Bayesian Macroeconometrics: April 18. suppose the neoclassical growth model is estimated based on aggregate output and hours data over the period 1955 to 2006. even if ρa = 1.2 as justification of our informative prior for the cointegration vector. If the information is vague. the model implies the following cointegration relationship: −1 1 ln GDPt ln It = ln (1 − α)(γ − 1 + δ) + It − Yt . If ρa = 1 then the last line of (66) implies that At follows a random-walk process and hence induces nonstationary dynamics. the posterior estimates of the cointegration vector reported in Illustration 3. we can write ln GDPt ln It = ln Y0 ln Y0 + (ln I∗ − ln Y∗ ) + ln γ ln γ t+ At + Yt At + It . To the extent that this information is indeed precise. In contrast. such a model deficiency may lead to posterior distributions of the autoregressive coefficients associated with shocks other than technology that concentrate near unity. In this case. To the contrary. Then. the choice of prior should be properly documented. the use of a tight prior distribution is desirable.1 suggest that the balanced-growth-path implication of the DSGE model is overly restrictive. Suppose we use the GDP deflator to convert the two series depicted in Figure 3 from nominal into real terms. This representation highlights the common trend in output and investment generated by the technology process At . this should not be interpreted as “cooking up” desired results based on almost dogmatic priors. In practice. 
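Because the linearized model has the Gaussian state-space representation in (67) and (68), its likelihood can be evaluated with the Kalman filter, as noted above. The following is a generic textbook filter, not the authors' implementation: the system matrices are taken as given (mapping θ into them requires a rational-expectations solver, which is not shown), and all names are placeholders.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def kalman_loglik(y, Phi1, Phi_eps, Psi0, Psi1, Psi2, Sigma_eps):
    """Log likelihood of the linear state-space model
    s_t = Phi1 s_{t-1} + Phi_eps eps_t,  y_t = Psi0 + Psi1 t + Psi2 s_t,
    with eps_t ~ N(0, Sigma_eps); a textbook Kalman filter."""
    T, n = y.shape
    Q = Phi_eps @ Sigma_eps @ Phi_eps.T
    s = np.zeros(Phi1.shape[0])
    P = solve_discrete_lyapunov(Phi1, Q)        # unconditional variance of s_t
    loglik = 0.0
    for t in range(T):
        s, P = Phi1 @ s, Phi1 @ P @ Phi1.T + Q            # prediction
        nu = y[t] - (Psi0 + Psi1 * (t + 1) + Psi2 @ s)    # forecast error
        F = Psi2 @ P @ Psi2.T
        loglik -= 0.5 * (n * np.log(2 * np.pi) + np.linalg.slogdet(F)[1]
                         + nu @ np.linalg.solve(F, nu))
        K = P @ Psi2.T @ np.linalg.inv(F)                 # updating
        s, P = s + K @ nu, P - K @ Psi2 @ P
    return loglik
```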
the spirit behind the prior elicitation is to use other sources of information that do not directly enter the likelihood function. There are three important sources of information that are approximately independent of the data that enter the likelihood function and therefore could be used for the elicitation of prior distribution: (i) information from macroeconomic time series other than . γ/β − 1 + δ Recall that both Yt and It are stationary. 4. 2010 44 and investment.3 Bayesian Inference Although most of the literature on Bayesian estimation of DSGE models uses fairly informative prior distributions. For concreteness. it should translate into a more dispersed prior distribution.Del Negro. We used this model implication in Section 3. Most important. and (iii) macroeconomic data. to individuals moving in and out of unemployment. that is. microeconometric estimates of labor supply elasticities – an example of source (ii) – could be used to specify a prior for the Frisch elasticity ν. Since none of these variables directly enters the likelihood function. Let Σ be the inverse of the (negative) Hessian computed at the posterior mode ˜ θ. and Whiteman (2000). for instance. 0 . it is sensible to incorporate this information through the prior distribution. Schorfheide – Bayesian Macroeconometrics: April 18. ρb . and δ. which can be computed numerically. Consider source (i). (ii) micro-level observations that are. Hence. 2010 45 output and hours during the period 1955 to 2006. Because of the nonlinear relationship between the DSGE model parameters θ and the system matrices Ψ0 . Moreover. Finally. Denote the posterior mode by θ. The parameters ρa . Up to now. It is apparent from (64) that long-run averages of real interest rates. The basic RWM Algorithm takes the following form Algorithm 4. prior to 1955. the marginal and conditional distributions of the elements of θ do not fall into the well-known families of probability distributions. capital-output ratios. Ingram. Ψ2 . Φ1 and Φ in (67) and (68).Del Negro. that is. accounting for the fact that most of the variation in hours worked at the aggregate level is due to the extensive margin. prior distributions for these parameters can be chosen such that the implied dynamics of output and hours are broadly in line with presample evidence. and σb implicitly affect the persistence and volatility of output and hours worked. ˜ ˜ 3. Del Negro and Schorfheide (2008) provide an approach for automating this type of prior elicitation. ˜ 2. informative about labor-supply decisions. Ψ1 .1: Random-Walk Metropolis (RWM) Algorithm for DSGE Model 1. β. including observations on output and hours worked. c2 Σ) or directly specify a starting value. which up ˜ to a constant is given by ln p(Y |θ) + ln p(θ). Use a numerical optimization routine to maximize the log posterior. Draw θ(0) from N (θ. information from source (iii). and investment-output ratios are informative about α. the parameter α equals the labor share of income in our model. σa . the most commonly used procedures for generating draws from the posterior distribution of θ are the Random-Walk Metropolis (RWM) Algorithm described in Schorfheide (2000) and Otrok (2001) or the Importance Sampler proposed in DeJong. Del Negro. For s = 1. . Thus. and replacing Σ in Step 4 with a matrix whose diagonal elements are equal to the prior variances of the DSGE model parameters and whose off-diagonal elements are zero. 
then the maximization in Step 1 can be implemented with a gradient-based numerical optimization routine. Here. standard deviations. and credible sets. ϑ|Y ) = p(Y |ϑ)p(ϑ) . Any inaccuracy in the computation of the steady states will translate into an inaccurate evaluation of the likelihood function that makes use of gradient-based optimization methods impractical. While the computation of the steady states is trivial in our neoclassical stochastic growth model. and ˜ then set θ to the value that attains the highest posterior density across optimization runs. . An and Schorfheide (2007b) describe a hybrid MCMC algorithm with transition mixture to deal with a bimodal posterior distribution. it is advisable to start the optimization routine from multiple starting values. . for example posterior means. Chib and Ramamurthy (2010) recommend using a simulated annealing algorithm for Step 1. In this case. nsim : draw ϑ from the proposal distribution N (θ(s−1) .1 tends to work well if the posterior density is unimodal. c2 Σ). 2010 46 ˜ 4. such as the mean of the prior distribution. The scale factor c0 controls the expected distance between the mode and the starting point of the Markov chain. (ii) the solution of the linear rational expectations system.000 iterations provide very similar approximations of the objects of interest.000. Schorfheide – Bayesian Macroeconometrics: April 18. The optimization is often not straightforward as the posterior density is typically not globally concave. The jump from θ(s−1) is accepted (θ(s) = ϑ) with probability min {1. it might require the use of numerical equation solvers for more complicated DSGE models. r(θ(s−1) . . Based on practitioners’ experience. The tuning parameter c is typically chosen to obtain a rejection rate of about 50%. reasonable perturbations of the starting points lead to chains that after 100. Chib and Ramamurthy . which could be drawn from the prior distribution. The evaluation of the likelihood typically involves three steps: (i) the computation of the steady state. Most recently. In some applications we found it useful to skip Steps 1 to 3 by choosing a reasonable ˜ starting value. p(Y |θ(s−1) )p(θ(s−1) ) If the likelihood can be evaluated with a high degree of precision. Algorithm 4. ϑ|Y )} and rejected (θ(s) = θ(s−1) ) otherwise. and (iii) the evaluation of the likelihood function of a linear state-space model with the Kalman filter.000 to 1. medians. r(θ(s−1) . The use of a dogmatic prior can then be viewed as a (fairly good) approximation of a low-variance prior. Schorfheide. These choices yield values of α = 0.025 in quarterly terms. Fuentes-Albero. and δ to be consistent with a labor share of 0.Del Negro. it implies that the total factor productivity has a serial correlation between 0. We assume that α has a Beta distribution with a standard deviation of 0. an investmentto-output ratio of about 25%. we define ln Y0 = ln Y∗ + ln A0 and use fairly agnostic priors on the location parameters ln Y0 and ln H∗ . Our prior implies that the preference shock is slightly less persistent than the technology shock.2. Fixing these parameters is typically justified as follows.02.66. β = 0. A detailed discussion can be found in Chib (This Volume). and δ = 0. Based on National Income and Product Account (NIPA) data. we use such a low-variance prior for α.66. Finally. We use a Gamma distribution with parameters that imply a prior mean of 2 and a standard deviation of 1. we choose the prior means for α. 
Conditional on the adoption of a particular data definition. An important parameter for the behavior of the model is the labor supply elasticity. 2010 47 (2010) have developed a multiblock Metropolis-within-Gibbs algorithm that randomly groups parameters in blocks and thereby dramatically reduces the persistence of the resulting Markov chain and improves the efficiency of the posterior sampler compared to a single-block RWM algorithm. Kryshko. resulting in small prior variances. As is quite common in the literature. balanced-growth considerations under slightly different household preferences suggest a value of 2. Illustration 4.1: The prior distribution for our empirical illustration is summarized in the first five columns of Table 3. For illustrative purpose. a priori plausible values vary considerably. Schorfheide – Bayesian Macroeconometrics: April 18. and that the standard deviation of the shocks is about 1% each quarter. and SantaeulaliaLlopis (2009). and Rogerson (1988) model of hours’ variation along the extensive margin would lead to ν = ∞. Micro-level estimates based on middle-age white males yield a value of 0. and an annual interest rate of 4%. .99.0. the relevant long-run averages computed from NIPA data appear to deliver fairly precise measurements of steady-state relationships that can be used to extract information about parameters such as β and δ. we decided to use dogmatic priors for β and δ. As discussed in R´ ıos-Rull. β.99. Our prior for the technology shock parameters is fairly diffuse with respect to the average growth rate.91 and 0. published by the Bureau of Economic Analysis. S. let lsh∗ (θ) be the model-implied labor share as a function of θ and lsh a sample average of postwar U.Del Negro. The posterior means of the labor supply elasticity are 0. Due to the fairly tight prior. where λ reflects the strength of the belief about the labor share. We used a logarithmic transformation of γ. which leads to a rejection rate of about 50%. Kryshko. Unlike in Figure 6. Del Negro and Schorfheide (2008) propose to multiply an initial prior p(θ) constructed from marginal distributions for the individual elements of θ by a ˜ function f (θ) that reflects beliefs about steady-state relationships and autocovariances. lsh∗ . we do not remove a deterministic trend from the output series. Alternatively. the autocorrelation parameter of the technology shock is estimated subject to the restriction that it lie in the interval [0. The scale parameter in the proposal density is chosen to be c = 0. We apply the RWM Algorithm to generate 100. FuentesAlbero. 2010 48 The distributions specified in the first columns of Table 3 are marginal distributions. 1). This function is generated by interpreting long-run averages of variables that do not appear in the model and presample autocovariances of yt as noisy measures of steady states and population autocovariances.000 draws from the posterior distribution of the parameters of the stochastic growth model. which is what we will do in the empirical illustration. and the innovation standard deviations of the shocks are 1. The estimated shock autocorrelations are around 0. Schorfheide – Bayesian Macroeconometrics: April 18. are summarized in the last four columns of Table 3. whereas it is fixed at 1 in the stochastic trend version. Schorfheide.70. A joint prior is typically obtained by taking the product of the marginals for all elements of θ. one could replace a subset of the structural parameters by. respectively. 
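A compact sketch of the steps of Algorithm 4.1 is given below. It is a generic random-walk Metropolis sampler rather than the authors' code: log_posterior is a placeholder that in practice would chain the model solution, the Kalman-filter likelihood, and the prior density, and the scaling constants c0 and c as well as the toy target are illustrative.

```python
import numpy as np

def rwm(log_posterior, theta_tilde, Sigma, c=0.3, c0=1.0, nsim=50_000, seed=0):
    """Random-Walk Metropolis in the spirit of Algorithm 4.1; theta_tilde and
    Sigma stand for the posterior mode and inverse Hessian from Steps 1-2."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(Sigma)
    theta = theta_tilde + c0 * L @ rng.standard_normal(len(theta_tilde))  # Step 3
    logp = log_posterior(theta)
    draws, accept = np.empty((nsim, len(theta))), 0
    for s in range(nsim):                                                 # Step 4
        prop = theta + c * L @ rng.standard_normal(len(theta))
        logp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < logp_prop - logp:   # accept w.p. min{1, ratio}
            theta, logp, accept = prop, logp_prop, accept + 1
        draws[s] = theta
    print("acceptance rate:", accept / nsim)   # tune c toward roughly one half
    return draws

# placeholder target: a bivariate standard normal "posterior"
draws = rwm(lambda th: -0.5 * th @ th, np.zeros(2), np.eye(2), nsim=20_000)
```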
These relatively small values of ν imply that most of the fluctuations in hours worked are due to the labor supply shock.7% for the preference shock. which can be interpreted as the average quarterly growth rate of the economy and is estimated . For example. and then regard beliefs about these various steady states as independent.42 and 0. Then ln f (θ) could be defined as −(lsh∗ (θ) − lsh)2 /(2λ). R∗ . ˜ The prior distribution is updated based on quarterly data on aggregate output and hours worked ranging from 1955 to 2006. Posterior means and 90% credible intervals. I∗ /K∗ . In the deterministic trend version.5. and K∗ /Y∗ . the distribution of α is essentially not updated in view of the data. computed from the output of the posterior simulator. which is in line with the range of estimates reported in R´ ıos-Rull. The overall prior then takes the form p(θ) ∝ p(θ)f (θ). for instance.97. labor shares. and Santaeulalia-Llopis (2009). We consider two versions of the model.1% for the technology shock and 0. (71) Here. described in detail in Lubik and Schorfheide (2004). on the one hand. (70) If. because this indeterminacy might arise if a central bank does not react forcefully enough to counteract deviations of inflation from its long-run target value. which is scalar. Schorfheide – Bayesian Macroeconometrics: April 18. the unique stable equilibrium law of motion of the endogenous variable yt is given by yt = t.1) process yt = θyt−1 + (1 + M ) t − θ t−1 . Clarida. From a macroeconomist’s perspective. the law of motion of yt is not uniquely determined. θ should be interpreted as the structural parameter. It can be verified that if. M captures an indeterminacy: based on θ alone. Once draws from the posterior distribution have been generated. The estimates of ln H∗ and ln Y0 capture the level of the two series. In an influential paper.3% to 0.Del Negro. but it does affect the law of motion of yt if θ ≤ 1. θ ∈ (0. The presence of indeterminacies raises a few complications for Bayesian inference. θ > 1. . they can be converted into other objects of interest such as responses to structural shocks.4%. Gali. 2010 49 to be 0. on the other hand. one obtains a much larger class of solutions that can be characterized by the ARMA(1. M is completely unrelated to the agents’ tastes and technologies characterized by θ. and this is referred to as indeterminacy. 4. 2]. postwar data and found that the policy rule estimated for pre-1979 data would lead to indeterminate equilibrium dynamics in a DSGE model with nominal price rigidities. Consider the following simple example. the scalar parameter M ∈ R is used to characterize all stationary solutions of (69). and Gertler (2000) estimated interest rate feedback rules based on U.S. (69) Here.4 Extensions I: Indeterminacy Linear rational expectations systems can have multiple stable solutions. Suppose that yt is scalar and satisfies the expectational difference equation 1 E yt = I t [yt+1 ] + t . θ ≤ 1. DSGE models that allow for indeterminate equilibrium solutions have received a lot of attention in the literature. θ t ∼ iidN (0. 1). .16.01] [8.61.00 .007.03 8.04 0.010.93] 0.07.80 0. 0. 0.68] Mean 90% Intv.002.10 .02 1.99] [. Trend Domain [0. .011 [.22.98 . .93.003 [.77 0.08. s and ν for the Inverted Gamma distribution.65 [0. 1) I R I R I + R Beta InvGamma Beta InvGamma Normal Normal 0. [0.008] [-0.98 0. To estimate the stochastic growth version of the model we set ρa = 1. 0.004 1.01 4. 0.010. 0.012] [0. .98] I R I + R I R I R I R + + + Posterior Stoch.00 2.23] [. 
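The role of the auxiliary parameter M can be illustrated by simulating members of the ARMA(1,1) solution family (71). In the sketch below the values of θ and M are illustrative; with M = 0 the autoregressive and moving-average roots cancel and the simulated path coincides with the innovations, which is the sense in which the likelihood does not vary with θ in that region.

```python
import numpy as np

def simulate_solution(theta, M, T=200, seed=0):
    """Simulate a member of the ARMA(1,1) solution family (71):
    y_t = theta*y_{t-1} + (1+M)*eps_t - theta*eps_{t-1}.
    For theta <= 1 every M indexes a stable solution (indeterminacy)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T)
    y = np.empty(T)
    y[0] = (1 + M) * eps[0]
    for t in range(1, T):
        y[t] = theta * y[t - 1] + (1 + M) * eps[t] - theta * eps[t - 1]
    return y, eps

# with M = 0 the AR and MA roots cancel and y_t = eps_t;
# M != 0 generates observationally distinct dynamics
y0, eps = simulate_solution(theta=0.9, M=0.0)
y1, _ = simulate_solution(theta=0.9, M=0.5)
print(np.allclose(y0, eps), np.std(y0).round(2), np.std(y1).round(2))
```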
and Normal distributions.007 -0.00 .63. Schorfheide – Bayesian Macroeconometrics: April 18.10 0. Gamma. .01 4.004] 0.70 .67] 0.011 0.008] [-0. .95 0.025 50 are fixed.42 [0. 2010 Notes: Para (1) and Para (2) list the means and the standard deviations for Beta.66 Density Para (1) α ν 4 ln γ ρa σa ρb σb ln H∗ ln Y0 Del Negro. 0. .002.00 0.Table 3: Prior and Posterior Distribution for DSGE Model Parameters Prior Det.86] 90% Intv.62.00 .96.65 0.0 -0. 0.99 and δ = 0.008 0. Trend Mean 0. where pIG (σ|ν.39 [.00 10.95.69] [0. the upper and lower bound of the support for the Uniform distribution.02] [7. s) ∝ σ −ν−1 e−νs 2 /2σ 2 .96. The parameters β = 0.97 [0. Name Beta Gamma Normal 0.006.99] [.00 100 8. 0. 8.005] Para (2) 0.00 0. 1. 8.02 0.012] [0. 1) process (71) cancel. Their approach amounts to using (67) and .Del Negro. suppose ηt ∼ iidN (0. consider for instance the technology shock that a. The likelihood function has the following features. this irregular shape of the likelihood function does not pose any conceptual challenge.t ∼ N (0. Justiniano and Primiceri (2008) solved the linear rational expectational system obtained from the log-linearized equilibrium conditions of their DSGE model and then augmented the linear solution by equations that characterize the stochastic volatility of the exogenous structural shocks. 2010 51 From an econometrician’s perspective. According to (70). If θ ≤ 1 and M = 0 the likelihood function does not vary with θ because the roots of the autoregressive and the moving-average polynomial in the ARMA(1. 2] and ΘD = [0.S. 4. 1]) along the lines of the determinacyindeterminacy boundary. In a Bayesian framework. treated the subspaces as separate models. If θ ≤ 1 and M = 0. Schorfheide – Bayesian Macroeconometrics: April 18. generated posterior draws for each subspace separately. However. then the likelihood function exhibits curvature. one can combine proper priors for θ and M and obtain a posterior distribution. a. Justiniano and Primiceri (2008) allow the volatility of the structural shocks t in (67) to vary stochastically over time. Lubik and Schorfheide (2004) divided the parameter space into ΘD and ΘI (for model (69) ΘD = (1. vt ). 1). one needs to introduce this auxiliary parameter M to construct the likelihood function.t . (72) ln vt = ρv ln vt−1 + ηt . GDP data is the reduction in the volatility of output growth around 1984. To investigate the sources of this volatility reduction. An alternative approach would be to capture the Great Moderation with Markov-switching shock standard deviations (see Section 5). in more realistic applications the implementation of posterior simulation procedures require extra care. In principle. the likelihood function is completely flat (does not vary with θ and M ) for θ > 1 because all parameters drop from the equilibrium law of motion. The authors adopt a specification in which log standard deviations evolve according to an autoregressive process. Alternatively. In the context of the stochastic growth model.t 2 ∼ N (0. This phenomenon has been termed the Great Moderation and is also observable in many other industrialized countries. We previously assumed in (58) that a. ω 2 ). and used marginal likelihoods to obtain posterior probabilities for ΘD and ΘI .5 Extensions II: Stochastic Volatility One of the most striking features of postwar U. ρv . As we will see in the next subsection.Del Negro. the RWM step described in Algorithm 4. Y ). 
The empirical model of Justiniano and Primiceri (2008) ignores any higher-order dynamics generated from the nonlinearities of the DSGE model itself on grounds of computational ease.2: Metropolis-within-Gibbs Sampler for DSGE Model with Stochastic Volatility For s = 1. and Kohn (This Volume). as can be seen from the equilibrium conditions (61) associated with our stochastic growth model. Y (s) (s−1) ). Y ) from the Normal-Inverse Gamma posterior obtained from the AR(1) law of motion for ln vt in (72). and Kohn (This Volume). nsim : 1.6 Extension III: General Nonlinear DSGE Models DSGE models are inherently nonlinear. 2. .t t evolves according to (72). ω . Schorfheide – Bayesian Macroeconometrics: April 18. The following Gibbs sampler can be used to generate draws from the posterior distribu- Algorithm 4. 4. Draw θ(s) conditional on (θ|v1:T .1:T conditional on (θ(s) . Pitt. Draw (s) a. many researchers take the stand that the equilibrium dynamics are .1 can be used to generate a draw θ(s) . v1:T . where is the observable and vt is the latent state.1:T . Draw (ρv . Given the sequence v1:T (s−1) (s−1) the likeli- hood function of the state-space model can be evaluated with the Kalman filter. . 2010 assuming that the element tion. Pitt. Shephard. and Rossi (1994) and Kim. Nonetheless. . Smoothing algorithms that generate draws of the sequence of stochastic volatilities have been developed by Jacquier. Bayesian inference is more difficult to implement for DSGE models solved with nonlinear techniques. given the magnitude of the business-cycle fluctuations of a country like the United States or the Euro area. Notice that (72) can be intera. described in Giordani. ω (s) ) conditional on (v1:T .t preted as a nonlinear state-space model. 3. in the shock vector 52 a. and Chib (1998) and are discussed in Jacquier and Polson (This Volume) and Giordani. . Y ) using the simulation smoother of (s−1) Carter and Kohn (1994). Draw v1:T conditional on ( (s) (s) (s) (s) a. Consequently. Polson. 4. 2010 53 well approximated by a linear state-space system. First. (74) Fern´ndez-Villaverde and Rubio-Ram´ (2007) and Fern´ndez-Villaverde and Rubioa ırez a Ram´ ırez (2008) show how a particle filter can be used to evaluate the likelihood function associated with a DSGE model. Bayesian analysis of nonlinear DSGE models is currently an active area of research and faces a number of difficulties that have not yet been fully resolved. Without errors in the measurement equation. the linearized consumption Euler equation takes the form Ct = I t Ct+1 + at+1 − Rj. It can be easily shown that for any asset j. log-linear approximations have the undesirable feature (for asset-pricing applications) that risk premiums disappear. θ). (67) and (68) are replaced by (65) and yt = Ψ(st . a and Rubio-Ram´ ırez (2004). the researcher has to introduce measurement errors in (74). Thus. The use of nonlinear model solution techniques complicates the implementation of Bayesian estimation for two reasons. the evaluation of the likelihood function becomes more costly because both the state transition equation and the measurement equation of the state-space model are nonlinear. t . A comparison of solution methods for DSGE models can be found in Aruoba. a proposed particle st has to satisfy ˜ the following two equations: yt = Ψ(˜t . However. θ). or if the goal of the analysis is to study asset-pricing implications or consumer welfare. yielding a gross return Rj.Del Negro. Thus. Pitt. 
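To fix ideas, the sketch below implements a deliberately simplified version of the volatility block of Algorithm 4.2 for a single shock. It draws (ρv, ω²) from the conditional posterior implied by the AR(1) law of motion (72) under a flat prior, and it updates ln v_{1:T} one element at a time by random-walk Metropolis, a crude stand-in for the Kim, Shephard, and Chib (1998) smoother cited in the text; the DSGE-parameter step (the RWM draw of θ) is omitted, and the simulated data and tuning constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate the volatility process (72): eps_t ~ N(0, v_t),
# ln v_t = rho_v ln v_{t-1} + eta_t, eta_t ~ N(0, omega^2)
T, rho_v, omega = 200, 0.9, 0.2
lnv_true = np.zeros(T)
for t in range(1, T):
    lnv_true[t] = rho_v * lnv_true[t - 1] + omega * rng.standard_normal()
eps = np.exp(0.5 * lnv_true) * rng.standard_normal(T)

def gibbs_sv(eps, nsim=1000):
    """Crude sampler for (rho_v, omega^2, ln v_{1:T}) given the shocks eps."""
    T = len(eps)
    lnv = np.log(eps ** 2 + 1e-4)      # rough initialization
    rho, om2, keep = 0.9, 0.1, []
    for s in range(nsim):
        # (rho_v, omega^2) | ln v: normal / inverse-gamma under a flat prior
        x, y = lnv[:-1], lnv[1:]
        om2 = 1.0 / rng.gamma(0.5 * (T - 1), 2.0 / np.sum((y - rho * x) ** 2))
        rho = rng.normal(np.sum(x * y) / np.sum(x ** 2),
                         np.sqrt(om2 / np.sum(x ** 2)))
        # ln v_t | rest: single-move random-walk Metropolis
        for t in range(T):
            def logpost(h):
                lp = -0.5 * (h + eps[t] ** 2 * np.exp(-h))
                if t > 0:
                    lp -= 0.5 * (h - rho * lnv[t - 1]) ** 2 / om2
                if t < T - 1:
                    lp -= 0.5 * (lnv[t + 1] - rho * h) ** 2 / om2
                return lp
            prop = lnv[t] + 0.5 * rng.standard_normal()
            if np.log(rng.uniform()) < logpost(prop) - logpost(lnv[t]):
                lnv[t] = prop
        keep.append((rho, np.sqrt(om2)))
    return np.array(keep)

print(gibbs_sv(eps)[500:].mean(axis=0))   # rough posterior means of (rho_v, omega)
```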
this linear approximation becomes unreliable if economies are hit by large shocks.t+1 . as is often the case for emerging market economies. Second. The most common approach in the literature on estimated DSGE models is to use second-order perturbation methods. and Kohn (This Volume). Suppose that {st−1 }N is i=1 a collection of particles whose empirical distribution approximates p(st−1 |Y1:t−1 . it is computationally more demanding to obtain the nonlinear solution. Schorfheide – Bayesian Macroeconometrics: April 18. A detailed description of the particle filter is provided in Giordani. θ) s (i) st ˜ (i) (i) (i) (75) (76) = (i) (i) Φ(st−1 . . E (73) implying that all assets yield the same expected return.t . For the particle filter to work in the context of the stochastic growth model described above. θ). Fern´ndez-Villaverde. Based on the ˜ s. Posterior odds of a model with adjustment costs versus a model without are useful for such an assessment. An efficient ˜ s implementation of the particle filter is one for which a large fraction of the N st ’s ˜ are associated with values of ηt that are small relative to Ση . Thus. θ) + ηt .Del Negro. First. the probability that (75) is satisfied ˜ is zero. 2010 (i) 54 If st is sampled from a continuous distribution. Finally. because the nonlinear equation might have multiple solutions. st needs to be sampled from a ˜ discrete distribution. θ). and then find all ˜ real solutions ˜ of for the equation yt = Ψ(Φ(st−1 . in the context of the stochastic growth model we could examine whether the model is able to capture the correlation between output and hours worked that we observe in the data. in the absence of measurement errors. it is important to realize that one needs to bound the magnitude of the measurement error standard deviations from below to avoid a deterioration of the particle filter performance as these standard deviations approach zero. eliminating st . Schorfheide – Bayesian Macroeconometrics: April 18. In this case. Such comparisons can be used to examine whether a particular . In practice. . then (75) turns into yt = Ψ(˜t . ˜ this calculation is difficult if not infeasible to implement. For instance. One can plug (76) into (75). one could examine to what extent a DSGE model is able to capture salient features of the data.7 DSGE Model Evaluation An important aspect of empirical work with DSGE models is the evaluation of fit. ˜. This type of evaluation can be implemented with predictive checks. Second. Some authors – referring to earlier work by Sargent (1989). If errors ηt ∼ N (0. a researcher might want to compare one or more DSGE models to a more flexible reference model such as a VAR. a researcher might be interested in assessing whether the fit of a stochastic growth model improves if one allows for convex investment adjustment costs. which in the context of our stochastic growth model amounts to a modification of the DSGE model. We consider three methods of doing so. We will distinguish three approaches. θ). θ). s (i) (i) (i) (77) This equation can be solved for any st by setting ηt = yt − Ψ(˜t . θ). or Ireland (2004) – make measurement errors part of the specification of their empirical model. (i) 4. Altug (1989). Ση ) are added to the measurement equation (74). one (i) (i) (i) (i) (i) can obtain the support points for the distribution of st as Φ(st−1 . the methods proposed by Geweke (1999) and Chib and Jeliazkov (2001) can be used to obtain numerical approximations of the marginal likelihood. 
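A generic bootstrap particle filter along the lines described here might look as follows. Phi and Psi stand for the nonlinear state transition and measurement functions (for instance from a second-order approximation), measurement errors ηt ~ N(0, Ση) are included as discussed above, and the function is a sketch under these assumptions rather than the implementation used in the cited papers.

```python
import numpy as np

def particle_filter_loglik(y, Phi, Psi, Sigma_eps, Sigma_eta, s0_draws, seed=0):
    """Bootstrap particle filter log likelihood for
    s_t = Phi(s_{t-1}, eps_t), y_t = Psi(s_t) + eta_t, eta_t ~ N(0, Sigma_eta).
    s0_draws is an (N x dim(s)) array approximating the initial distribution."""
    rng = np.random.default_rng(seed)
    T, n = y.shape
    N = s0_draws.shape[0]
    particles = s0_draws.copy()
    Seps_chol = np.linalg.cholesky(Sigma_eps)
    Seta_inv = np.linalg.inv(Sigma_eta)
    logdet = np.linalg.slogdet(Sigma_eta)[1]
    loglik = 0.0
    for t in range(T):
        # propagate each particle through the state transition
        eps = rng.standard_normal((N, Seps_chol.shape[0])) @ Seps_chol.T
        particles = np.array([Phi(s, e) for s, e in zip(particles, eps)])
        # incremental weights from the measurement-error density
        nu = y[t] - np.array([Psi(s) for s in particles])
        logw = -0.5 * (n * np.log(2 * np.pi) + logdet
                       + np.einsum("ij,jk,ik->i", nu, Seta_inv, nu))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())        # p(y_t | Y_{1:t-1}) estimate
        # multinomial resampling to avoid weight degeneracy
        particles = particles[rng.choice(N, size=N, p=w / w.sum())]
    return loglik
```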
2010 55 DSGE model captures certain important features of the data.1 Posterior Odds The Bayesian framework allows researchers to assign probabilities to various competing models. and Smets and Wouters (2007) use odds to determine the importance of a variety of real and nominal frictions in a medium-scale New Keynesian DSGE model. Schorfheide – Bayesian Macroeconometrics: April 18. Section 7 provides a more detailed discussion of model selection and model averaging based on posterior probabilities. 4. Predictive checks . Alternatively. These probabilities are updated through marginal likelihood ratios according to πi. 4.2 Predictive Checks A general discussion of the role of predictive checks in Bayesian analysis can be found in Lancaster (2004). Illustration 4. πj.0 p(Y |Mi ) = × . they can be used to rank different DSGE model specifications. respectively.8 and 1395.0 (79) is the marginal likelihood function.2: We previously estimated two versions of the neoclassical stochastic growth model: a version with a trend-stationary technology process and a version with a difference-stationary exogenous productivity process. these marginal data densities imply that the posterior probability of the difference-stationary specification is approximately 90%. If posterior draws for the DSGE model parameters are generated with the RWM algorithm. The key challenge in posterior odds comparisons is the computation of the marginal likelihood that involves a high-dimensional integral.0 p(Y |Mj ) (πi. and Geweke (2007).T πj. Geweke (2005).7.7. Mi )p(θ(i) )dθ(i) (78) Here.Del Negro. If the prior probabilities for the two specifications are identical. The log-marginal data densities ln p(Y |Mi ) are 1392. Posterior odds-based model comparisons are fairly popular in the DSGE model literature. πi.T πi. For instance. Rabanal and Rubio-Ram´ ırez (2005) use posterior odds to assess the importance of price and wage stickiness in the context of a small-scale New Keynesian DSGE model.2.T ) is the prior (posterior) probability of model Mi and p(Y |Mi ) = p(Y |θ(i) . p(θ|FT ). In posterior predictive checks. The goal of prior predictive checks is to determine whether the model is able to capture salient features of the data. generate a parameter draw θ from p(θ|Ft ). Finally. The simulated trajectories can be converted into sample statistics of interest. Draws from the predictive distribu˜ tion can be obtained in two steps. to obtain an approximation for predictive distributions of sample moments. In its core. If S(Y1:T ) is located far in the tails. then the model is discredited. ∗ Second. the distribution of the parameters. Let Y1:T be a hypothetical sample of length T .Del Negro. similar to the . the prior predictive check replaces Y1:T in (80) with Y1:T and tries to measure whether the density that the Bayesian model assigns a priori to the observed data is high or low. The ∗ predictive distribution for Y1:T based on the time t information set Ft is ∗ p(Y1:T |Ft ) = ∗ p(Y1:T |θ)p(θ|Ft )dθ. 2010 56 can be implemented based on either the prior or the posterior distribution of the ∗ DSGE model parameters θ. Canova (1994) was the first author to use prior predictive checks to assess implications of a stochastic growth model driven solely by a technology shock. A comparison of (79) and (80) for t = 0 indicates that the two expressions are identical. If S(Y1:T ) falls into the tails (or lowdensity region) of the predictive distribution derived from the estimated model. Schorfheide – Bayesian Macroeconometrics: April 18. 
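Given log marginal data densities, the posterior model probabilities in (79) are a one-line computation. The snippet below reproduces the arithmetic behind Illustration 4.2 under equal prior probabilities, using the usual max subtraction for numerical stability.

```python
import numpy as np

# log marginal data densities from Illustration 4.2
log_mdd = np.array([1392.8, 1395.2])   # trend-stationary, difference-stationary

# posterior model probabilities (79) under equal prior model probabilities
w = np.exp(log_mdd - log_mdd.max())
print(w / w.sum())   # roughly [0.08, 0.92]: the difference-stationary
                     # specification receives about 90 percent probability
```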
prior predictive checks can be very useful at an early stage of model development. Because the prior predictive distribution conveys the implications of models without having to develop methods for formal posterior inference. the posterior predictive check works like a frequentist specification test. is conditioned on the observed data Y1:T . First. Prior predictive distributions are closely related to marginal likelihoods. simulate a trajectory of observations Y1:T from the DSGE model conditional ˜ on θ. and Schorfheide (2007) use posterior predictive checks to determine whether a stochastic growth model. One can make the procedure more easily interpretable by replacing the high-dimensional data matrix Y with a low-dimensional statistic S(Y ). ∗ S(Y1:T ). Chang. such as the sample correlation between output and hours worked. one can compute the value of the statistic S(Y1:T ) based on the actual data and assess how far it lies in the tails of its predictive distribution. one concludes that the model has difficulties explaining the observed patterns in the data. (80) We can then use F0 to denote the prior information and FT to denote the posterior information set that includes the sample Y1:T . ∗ In its implementation. Doh. which in turn is a function of the DSGE model parameters θ. which are transformations of model . Thus. if there is a strong overlap between the predictive densities for ϕ between DSGE model M1 and VAR M0 . 2010 57 one analyzed in this section.Del Negro. to examine asset-pricing implications of DSGE models. hours worked. the densities p(ϕ|Mi ) and p(ϕ|Y.7. In practice.0 π2. Let p(ϕ|Y. At the same time. Draws of ϕ can be obtained by transforming draws of the DSGE model and VAR parameters. M0 ) denote the posterior distribution of population characteristics as obtained from the VAR. The ratio formalizes the confidence interval overlap criterion proposed by DeJong. M0 )dϕ (81) can be interpreted as odds ratio of M1 versus M2 conditional on the reference model M0 . Loss-Function-Based Evaluation: Schorfheide (2000) proposes a Bayesian framework for a loss function-based evaluation of DSGE models. these models are designed to capture certain underlying population moments. is able to capture the observed persistence of hours worked.3 VARs as Reference Models Vector autoregressions play an important role in the assessment of DSGE models. since they provide a more richly parameterized benchmark. such as the volatilities of output growth. As in Geweke (2010)’s framework. for instance. the researcher considers a VAR as reference model M0 that is meant to describe the data and at the same time delivers predictions about ϕ. The numerator in (81) is large. respectively. the researcher is interested in the relative ability of two DSGE models to capture a certain set of population moments ϕ. Models of Moments: Geweke (2010) points out that many DSGE models are too stylized to deliver a realistic distribution for the data Y that is usable for likelihoodbased inference. and the correlation between these two variables. M0 ) can be approximated by Kernel density estimates based on draws of ϕ. We consider three approaches to using VARs for the assessment of DSGE models. Geweke (2010) shows that π1. and Whiteman (1996) and has been used. M0 )dϕ p(ϕ|M2 )p(ϕ|Y. Instead. denoted by p(ϕ|Mi ). a prior distribution for θ induces a model-specific distribution for the population characteristics. 4.0 p(ϕ|M1 )p(ϕ|Y. Ingram. Schorfheide – Bayesian Macroeconometrics: April 18. 
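A predictive check of the kind described here can be organized as a small simulation loop. In the sketch below, simulate_model and statistic are placeholders for a DSGE model simulator and a sample statistic such as the correlation between output and hours worked; using posterior draws of θ yields a posterior predictive check, using prior draws a prior predictive check.

```python
import numpy as np

def predictive_check(theta_draws, simulate_model, statistic, y_obs,
                     n_rep=1000, seed=0):
    """For each replication, draw theta from the supplied draws, simulate a
    sample of the same size as the data, and record the statistic S; then
    locate the observed S(Y) in the simulated predictive distribution.
    simulate_model(theta, T, rng) and statistic(Y) are user-supplied."""
    rng = np.random.default_rng(seed)
    T = len(y_obs)
    sims = np.empty(n_rep)
    for i in range(n_rep):
        theta = theta_draws[rng.integers(len(theta_draws))]
        sims[i] = statistic(simulate_model(theta, T, rng))
    s_obs = statistic(y_obs)
    tail_prob = np.mean(sims <= s_obs)   # location of S(Y) in the predictive dist.
    return s_obs, tail_prob, sims
```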
Suppose we collect these population moments in the vector ϕ. We will refer to such a model as DSGE-VAR. one specifies a loss function L(ϕ. Mi ) and posterior model probabilities πi. the evaluation is loss-function dependent. say. ϕ) = ϕ − ϕ 2 . Mi ). then the predictive distribution is dominated by M1 . Third. 2.Del Negro. under which a ˆ ˆ ˆ point prediction ϕ of ϕ is to be evaluated. one can form a predictive density for ϕ by averaging across the three models p(ϕ|Y ) = i=0. Second. none of the DSGE models fits well. let I D [·] Eθ be the expectation under the DSGE model conditional on parameterization θ and define the autocovariance matrices ΓXX (θ) = I D [xt xt ]. Assuming that the data have been transformed such that yt is stationary. 1. the DSGE models are assumed to deliver a probability distribution for the data Y . Unlike in Geweke (2010). and a VAR that serves as a reference model M0 . Del Negro and Schorfheide (2004) link DSGE models and VARs by constructing families of prior distributions that are more or less tightly concentrated in the vicinity of the restrictions that a DSGE model implies for the coefficients of a VAR. DSGE-VARs: Building on work by Ingram and Whiteman (1994).T p(ϕ|Y. For each DSGE model. Mi )dϕ. Eθ ΓXY (θ) = I D [xt yt ]. Suppose there are two DSGE models. the prediction ˆ ϕ(i) is computed by minimizing the expected loss under the DSGE model-specific ˆ posterior: ϕ(i) = argminϕ ˆ ˜ L(ϕ. whereas the model ranking becomes effectively loss-function independent if one of the DSGE models has a posterior probability that is close to one. DSGE model M1 is well specified and attains a high posterior probability. XX Σ∗ (θ) = ΓY Y (θ) − ΓY X (θ)Γ−1 (θ)ΓXY (θ). Eθ A VAR approximation of the DSGE model can be obtained from the following restriction functions that relate the DSGE model parameters to the VAR parameters: Φ∗ (θ) = Γ−1 (θ)ΓXY (θ). ϕ)p(ϕ|Y )dϕ. ϕ). Finally one can compare DSGE models M1 and M2 based on the posterior expected loss L(ϕ(i) . (82) If. for example L(ϕ. 2. In this procedure. The first step of the analysis consists of computing model-specific posterior predictive distributions p(ϕ|Y. computed under the overall posterior distribution (82) ˆ that averages the predictions of the reference model and all DSGE models. i = 0. however. if the DSGE models are poorly specified. 2010 58 parameters θ. then the predictive density is dominated by the VAR. M1 and M2 . The starting point is the VAR specified in Equation (1). Schorfheide – Bayesian Macroeconometrics: April 18. ˜ i = 1.2 πi. XX (83) .T . ϕ)p(ϕ|Y. If.1. With a QR factorization. λT Σ∗ (θ). Σ)pλ (Φ. see (21). 2010 59 To account for potential misspecification of the DSGE model. tr DSGE (85) where Σ∗ (θ) is lower-triangular and Ω∗ (θ) is an orthogonal matrix. is given by ∂yt ∂ t = Σtr Ω. allows for deviations of Φ and Σ from the restriction functions: Φ. Since Φ and Σ can be conveniently integrated out. the posterior short-run responses of the VAR with those from the DSGE model. and T denotes the actual sample size. Let A0 (θ) be the contemporaneous on yt according to the DSGE model. the DSGE’s and the DSGE-VAR’s impulse responses to all shocks approximately coincide. the initial response of yt to the structural shocks can be uniquely decomposed into ∂yt ∂ t = A0 (θ) = Σ∗ (θ)Ω∗ (θ). The initial tr impact of t on yt in the VAR. (87) with the understanding that the distribution of Ω|θ is a point mass at Ω∗ (θ). as opposed to the covariance matrix of innovations. that is ut = Σtr Ω t . θ) = p(Y |Φ. 
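One way to operationalize the ratio in (81) is by Monte Carlo: replace the model-implied densities p(ϕ|Mi) with kernel density estimates based on draws of ϕ from each DSGE model and average them over posterior draws of ϕ from the reference VAR. The sketch below assumes equal prior odds and a Gaussian kernel; it is one possible implementation, not the one used in the cited work.

```python
import numpy as np
from scipy.stats import gaussian_kde

def moment_based_odds(phi_draws_m1, phi_draws_m2, phi_post_var):
    """Monte Carlo approximation of (81): phi_draws_mi are (N x d) draws of the
    population moments phi implied by DSGE model Mi, phi_post_var are (M x d)
    posterior draws of phi from the reference VAR M0; equal prior odds assumed."""
    p1 = gaussian_kde(phi_draws_m1.T)
    p2 = gaussian_kde(phi_draws_m2.T)
    return p1(phi_post_var.T).mean() / p2(phi_post_var.T).mean()
```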
we now use a prior distribution that. the one-step-ahead forecast errors ut are functions of the structural shocks impact of t t. λT − k . Σ|θ ∼ M N IW Φ∗ (θ). the identification procedure can be interpreted as matching. The next step is to turn the reduced-form VAR into a structural VAR. we can first draw from the marginal . (84) This prior distribution can be interpreted as a posterior calculated from a sample of T ∗ = λT artificial observations generated from the DSGE model with parameters θ. in absence of misspecification. The final step is to specify a prior distribution for the DSGE model parameters θ. which can follow the same elicitation procedure that was used when the DSGE model was estimated directly. we obtain the hierarchical model pλ (Y. Schorfheide – Bayesian Macroeconometrics: April 18. [λT ΓXX (θ)]−1 . λ is a hyperparameter. Φ. Σ. The rotation matrix is chosen such that.Del Negro. while centered at Φ∗ (θ) and Σ∗ (θ). we maintain the triangularization of its covariance matrix Σ and replace the rotation Ω in (86) with the function Ω∗ (θ) that appears in (85). in contrast. V AR (86) To identify the DSGE-VAR. To the extent that misspecification is mainly in the dynamics. According to the DSGE model. Here. Σ|θ)p(Ω|θ)p(θ). at least qualitatively. Thus. . then the normalized pλ (Y )’s can be interpreted as posterior probabilities for λ. then a comˆ parison between DSGE-VAR(λ) and DSGE model impulse responses can potentially yield important insights about the misspecification of the DSGE model. Del Negro. Define ˆ λ = argmaxλ∈Λ pλ (Y ). . . say. . compute Ω(s) = Ω∗ (θ(s) ). 2010 60 posterior of θ and then from the conditional distribution of (Φ. Σ) given θ. The framework has also been used as a tool for model evaluation and comparison in Del Negro. . Smets.5 and 2. The marginal likelihood pλ (Y |θ) is obtained by straightforward modification of (15). Moreover. . Schorfheide.Del Negro. Schorfheide – Bayesian Macroeconometrics: April 18. a natural criterion for the choice of λ is the marginal data density pλ (Y ) = pλ (Y |θ)p(θ)dθ. it is convenient to restrict the hyperparameter to a finite grid Λ. given by pλ (θ|Y ) ∝ pλ (Y |θ)p(θ). nsim . This leads to the following algorithm. Since the empirical performance of the DSGE-VAR procedure crucially depends on the weight placed on the DSGE model restrictions. nsim : draw a pair (Φ(s) . If one assigns equal prior probability to each grid point.2.1 to generate a sequence of draws θ(s) . For s = 1. (89) If pλ (Y ) peaks at an intermediate value of λ. . between 0.3: Posterior Draws for DSGE-VAR 1. Σ(s) ) from its conditional MNIW posterior distribution given θ(s) . The DSGEVAR approach was designed to improve forecasting and monetary policy analysis with VARs. Algorithm 4. . Schorfheide. s = 1. 2. and Wouters (2007) and for policy analysis with potentially misspecified DSGE models in Del Negro and Schorfheide (2009). it is useful to consider a datadriven procedure to select λ. The MNIW distribution can be obtained by the modification of (8) described in Section 2. (88) For computational reasons. Use Algorithm 4. As in the context of the Minnesota prior. from the posterior distribution of θ. and Wouters (2007) emphasize that the posterior of λ provides a measure of fit for the DSGE model: high posterior probabilities for large values of λ indicate that the model is well specified and that a lot of weight should be placed on its implied restrictions. . Smets. 
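Once ln pλ(Y) has been evaluated on the finite grid Λ, the data-driven choice of λ and the implied posterior probabilities over grid points are immediate, as sketched below. The grid values in the example are made-up placeholders, included only to show the mechanics.

```python
import numpy as np

def select_lambda(log_p_lambda):
    """Given ln p_lambda(Y) on a finite grid (a dict {lambda: value}), return
    lambda_hat = argmax together with the normalized posterior probabilities
    over grid points under an equal-probability prior on the grid."""
    lams, lp = zip(*sorted(log_p_lambda.items()))
    lp = np.array(lp)
    w = np.exp(lp - lp.max())
    return lams[int(np.argmax(lp))], dict(zip(lams, w / w.sum()))

# made-up grid values, purely illustrative
lam_hat, probs = select_lambda({0.5: -101.3, 0.75: -99.8, 1.0: -99.1, 2.0: -100.4})
print(lam_hat, probs)
```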
4.8 DSGE Models in Applied Work

Much of the empirical analysis with DSGE models is conducted with Bayesian methods. Since the literature is fairly extensive and rapidly growing, we do not attempt to provide a survey of the empirical work. Instead, we will highlight a few important contributions and discuss how Bayesian analysis has contributed to the proliferation of estimated DSGE models. The first published papers that conduct Bayesian inference in DSGE models are DeJong, Ingram, and Whiteman (2000), Schorfheide (2000), and Otrok (2001). Smets and Wouters (2003) document that a DSGE model that is built around the neoclassical growth model presented previously and enriched by habit formation in consumption, capital adjustment costs, variable factor utilization, nominal price and wage stickiness, behavioral rules for government spending and monetary policy, and numerous exogenous shocks could deliver a time-series fit and forecasting performance for a vector of key macroeconomic variables that is comparable to a VAR. Even though posterior odds comparisons, literally taken, often favor VARs, the theoretical coherence and the ease with which model implications can be interpreted make DSGE models an attractive competitor.

One reason for the rapid adoption of Bayesian methods is the ability to incorporate nonsample information, meaning data that do not enter the likelihood function, through the use of prior distributions. Many of the priors used by Smets and Wouters (2003) as well as in subsequent work are fairly informative, and over the past five years the literature has become more careful about systematically documenting the specification of prior distributions in view of the available nonsample information. From a purely computational perspective, this kind of prior information often tends to smooth out the shape of the posterior density, which improves the performance of posterior simulators. Once parameter draws have been obtained, they can be easily converted into objects of interest. For instance, Justiniano, Primiceri, and Tambalotti (2009) study the relative importance of investment-specific technology shocks and thereby provide posterior distributions of the fraction of the business-cycle variation of key macroeconomic variables explained by these shocks. A large part of the literature tries to assess the importance of various propagation mechanisms that are useful for explaining observed business-cycle fluctuations. Bayesian posterior model probabilities are widely employed to compare competing model specifications. For instance, Rabanal and Rubio-Ramírez (2005) compare the relative importance of wage and price rigidities. Unlike standard frequentist likelihood ratio tests, posterior odds remain applicable even if the model specifications under consideration are nonnested, for example, a DSGE model with sticky wages versus a DSGE model with sticky prices.

DSGE models with nominal rigidities are widely used to analyze monetary policy.
This analysis might consist of determining the range of policy rule coefficients that guarantees a unique stable rational expectations solution and suppresses selffulfilling expectations, of choosing interest-rate feedback rule parameters that maximize the welfare of a representative agent or minimizes a convex combination of inflation and output-gap volatility, or in finding a welfare-maximizing mapping between the underlying state variables of the economy and the policy instruments. The solution of these optimal policy problems always depends on the unknown taste and technology parameters. The Bayesian framework enables researchers and policy makers to take this parameter uncertainty into account by maximizing posterior expected welfare. A good example of this line of work is the paper by Levin, Onatski, Williams, and Williams (2006). Several central banks have adopted DSGE models as tools for macroeconomic forecasting, for example, Adolfson, Lind´, and e Villani (2007) and Edge, Kiley, and Laforte (2009). An important advantage of the Bayesian methods described in this section is that they deliver predictive distributions for the future path of macroeconomic variables that reflect both parameter uncertainty and uncertainty about the realization of future exogenous shocks. 5 Time-Varying Parameters Models The parameters of the models presented in the preceding sections were assumed to be time-invariant, implying that economic relationships are stable. In Figure 7, we plot quarterly U.S. GDP-deflator inflation from 1960 to 2006. Suppose one adopts the view that the inflation rate can be decomposed into a target inflation, set by the central bank, and some stochastic fluctuations around this target. The figure offers three views of U.S. monetary history. First, it is conceivable that the target rate was essentially constant between 1960 and 2006, but there were times, for instance, the 1970s, when the central bank let the actual inflation deviate substantially from the target. An alternative interpretation is that throughout the 1970s the Fed tried to exploit an apparent trade-off between unemployment and inflation and gradually revised its target upward. In the early 1980s, however, it realized that the long-run Del Negro, Schorfheide – Bayesian Macroeconometrics: April 18, 2010 63 Phillips curve is essentially vertical and that the high inflation had led to a significant distortion of the economy. Under the chairmanship of Paul Volcker, the Fed decided to disinflate, that is, to reduce the target inflation rate. This time-variation in the target rate could be captured either by a slowly-varying autoregressive process or through a regime-switching process that shifts from a 2.5% target to a 7% target and back. This section considers models that can capture structural changes in the economy. Model parameters either vary gradually over time according to a multivariate autoregressive process (section 5.1), or they change abruptly as in Markov-switching or structural-break models (section 5.2). The models discussed subsequently can be written in state-space form, and much of the technical apparatus needed for Bayesian inference can be found in Giordani, Pitt, and Kohn (This Volume). We focus on placing the TVP models in the context of the empirical macroeconomics literature and discuss specific applications in Section 5.3. 
There are other important classes of nonlinear time-series models, such as the threshold vector autoregressive models of Geweke and Terui (1993) and Koop and Potter (1999), in which the parameter change is linked directly to observables rather than to latent state variables. Due to space constraints, we are unable to discuss these models in this chapter.

5.1 Models with Autoregressive Coefficients

Most of the subsequent discussion is devoted to VARs with parameters that follow an autoregressive law of motion (Section 5.1.1). Whenever time-varying parameters are introduced into a DSGE model, an additional complication arises. For the model to be theoretically coherent, one should assume that the agents in the model are aware of the time-variation, say, in the coefficients of a monetary policy rule, and form their expectations and decision rules accordingly. Hence, the presence of time-varying parameters significantly complicates the solution of the DSGE model's equilibrium law of motion and requires the estimation of a nonlinear state-space model (Section 5.1.2).

5.1.1 Vector Autoregressions

While VARs with time-varying coefficients were estimated with Bayesian methods almost two decades ago, see, for instance, Sims (1993), their current popularity in empirical macroeconomics is largely due to Cogley and Sargent (2002), who took advantage of the MCMC innovations in the 1990s. They estimated a VAR in which the coefficients follow unit-root autoregressive processes. The motivation for their work, as well as for the competing Markov-switching approach of Sims and Zha (2006) discussed in Section 5.2, arises from the interest in documenting time-varying features of business cycles in the United States and other countries. For instance, the central bank might adjust its target inflation rate in view of changing beliefs about the effectiveness of monetary policy, and the agents might slowly learn about the policy change. To the extent that this adjustment occurs gradually in every period, it can be captured by models in which the coefficients are allowed to vary in each period. The rationale for their reduced-form specification is provided by models in which the policy maker and/or agents in the private sector gradually learn about the dynamics of the economy and consequently adapt their behavior (see Sargent (1999)).

Cogley and Sargent (2002) set out to investigate time-variation in US inflation persistence using a three-variable VAR with inflation, unemployment, and interest rates. Their work was criticized by Sims (2002a), who pointed out that the lack of time-varying volatility in their VAR may well bias the results in favor of finding changes in the dynamics. Cogley and Sargent (2005b) address this criticism of their earlier work by adding time-varying volatility to their model. Our subsequent exposition of a TVP VAR allows for drifts in both the conditional mean and the variance parameters.

Consider the reduced-form VAR in Equation (1), which we are reproducing here for convenience:

yt = Φ1 yt−1 + . . . + Φp yt−p + Φc + ut.

We defined xt = [yt−1′, . . . , yt−p′, 1]′ and Φ = [Φ1, . . . , Φp, Φc]′. Now let Xt = In ⊗ xt and φ = vec(Φ). Then we can write the VAR as

yt = Xt′ φt + ut,   (90)

where we replaced the vector of constant coefficients φ with a vector of time-varying coefficients φt. We let the parameters evolve according to the random-walk process:

φt = φt−1 + νt,   νt ∼ iidN(0, Q).   (91)

We restrict the covariance matrix Q to be diagonal and the parameter innovations νt to be uncorrelated with the VAR innovations ut. The ut innovations are also normally distributed, but unlike in Section 2, their variance now evolves over time:

ut ∼ N(0, Σt),   Σt = B−1 Ht (B−1)′.   (92)

In the decomposition of Σt, the matrix B is a lower-triangular matrix with ones on the diagonal, and Ht is a diagonal matrix with elements h²i,t following a geometric random walk:

ln hi,t = ln hi,t−1 + ηi,t,   ηi,t ∼ iidN(0, σ²i).   (93)

Notice that this form of stochastic volatility was also used in Section 4.5 to make the innovation variances for shocks in DSGE models time varying. According to (92), B ut is normally distributed with variance Ht:

B ut = Ht^(1/2) εt,   (94)

where εt is a vector of standard normals. The prior distributions for Q and the σi's can be used to express beliefs about the magnitude of the period-to-period drift in the VAR coefficients and the changes in the volatility of the VAR innovations. In practice these priors are chosen to ensure that the shocks to (91) and (93) are small enough that the short- and medium-run dynamics of yt are not swamped by the random-walk behavior of φt and Ht. If the prior distributions for φ0, Q, B, and the σi's are conjugate, then one can use the following Gibbs sampler for posterior inference.

Algorithm 5.1: Gibbs Sampler for TVP VAR

For s = 1, . . . , nsim:

1. Draw φ1:T conditional on (B(s−1), H1:T(s−1), Q(s−1), σ1(s−1), . . . , σn(s−1), Y). Conditional on the remaining parameters, (90) and (91) provide a linear Gaussian state-space representation for yt. Thus, φ1:T can be sampled using the algorithm developed by Carter and Kohn (1994), described in Giordani, Pitt, and Kohn (This Volume).

2. Draw B(s) conditional on (φ1:T(s), H1:T(s−1), Q(s−1), σ1(s−1), . . . , σn(s−1), Y). Conditional on the VAR parameters φt, the innovations to equation (90) are known. According to (92), the problem of sampling from the posterior distribution of B under a conjugate prior is identical to the problem of sampling from the posterior distribution of A0 in the structural VAR specification (30) described in detail in Section 2.

3. Draw H1:T(s) conditional on (φ1:T(s), B(s), Q(s−1), σ1(s−1), . . . , σn(s−1), Y), using, for instance, the algorithms of Jacquier, Polson, and Rossi (1994) or Kim, Shephard, and Chib (1998) to draw the sequences hi,1:T.

4. Draw Q(s) conditional on (φ1:T(s), B(s), H1:T(s), σ1(s−1), . . . , σn(s−1), Y) from the appropriate Inverted Wishart distribution derived from (91).

5. Draw σ1(s), . . . , σn(s) conditional on (φ1:T(s), B(s), H1:T(s), Q(s), Y) from the appropriate Inverted Gamma distributions derived from (93).
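To fix ideas, the following sketch (ours, not part of the original exposition) simulates artificial data from the state-space model defined by (90)-(93). The dimensions, the matrix B, and all variance parameters are illustrative assumptions, and the code does not implement Algorithm 5.1 itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: n variables, p lags, T periods
n, p, T = 2, 1, 200
k = n * p + 1                        # number of regressors per equation

# Drift and volatility innovation variances, chosen small so that the random-walk
# behavior of phi_t and H_t does not swamp the short-run dynamics of y_t
Q = np.diag(np.full(n * k, 1e-4))    # eq. (91), Q diagonal
sigma_eta = np.full(n, 0.05)         # std. dev. of eta_{i,t} in eq. (93)

# B lower triangular with ones on the diagonal, as in eq. (92)
B = np.array([[1.0, 0.0],
              [0.5, 1.0]])
B_inv = np.linalg.inv(B)

phi = np.zeros((T + 1, n * k))       # time-varying coefficients phi_t
ln_h = np.zeros((T + 1, n))          # log volatilities ln h_{i,t}
y = np.zeros((T + p, n))             # first p rows serve as pre-sample values

for t in range(1, T + 1):
    # State equations: random-walk drift (91) and log-volatility random walk (93)
    phi[t] = phi[t - 1] + rng.multivariate_normal(np.zeros(n * k), Q)
    ln_h[t] = ln_h[t - 1] + sigma_eta * rng.standard_normal(n)

    # Measurement equation (90) with Sigma_t = B^{-1} H_t (B^{-1})' from (92)
    x_t = np.concatenate([y[t + p - 1 - l] for l in range(1, p + 1)] + [np.ones(1)])
    X_t = np.kron(np.eye(n), x_t)                 # the n x nk matrix I_n (kron) x_t'
    H_t = np.diag(np.exp(ln_h[t]) ** 2)           # diagonal elements h_{i,t}^2
    Sigma_t = B_inv @ H_t @ B_inv.T
    y[t + p - 1] = X_t @ phi[t] + rng.multivariate_normal(np.zeros(n), Sigma_t)
```

Artificial data of this kind are convenient for checking that an implementation of Algorithm 5.1 recovers the simulated paths of φt and Ht.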
Primiceri (2005) uses a structural TVP VAR for interest rates. B (s) . . . Polson. B (s) . 4. Q(s) . For the initial vector of VAR coefficients. B (s) . Y ). σ1 (s) (s) (s−1) . and Strachan (2008).Del Negro. Shephard. and Rossi (1994) or Kim. Cogley and Sargent (2002) and Cogley and Sargent (2005b) use a prior of the form φ0 ∼ N (φ0 . Leon-Gonzalez. inflation. we can write the i’th equation of (94) as zi. σn (s−1) . . σn (s−1) .t = B(i. σ1 (s) (s) (s−1) 66 . as in Section 4. Del Negro (2003) suggests an alternative approach where time-variation is directly imposed on the parameters of the structural model – that is. . one can use the algorithms of Jacquier. Draw Q(s) conditional on (φ1:T . Imposing the restriction that for each t all roots of the characteristic polynomial associated with the VAR coefficients φt lie outside the unit circle introduces a complication that we do not explore here.4 with Ω = I to a TVP environment. If one is willing to assume that the lower-triangular Bt ’s identify structural shocks. Y ) from the appropriate (s) (s) (s) Inverted Gamma distributions derived from (93). for additional shocks or time-varying parameters to be identifiable. Thus. as in (66). (96) Now imagine replacing the constant Frisch elasticity ν in (52) and (95) by a timevarying process νt .6. the preference shock appears in the labor supply function Ht = ν Wt − ν Ct + (1 + ν)Bt . in which we have replaced the constant parameter B. Thus. by a time-varying parameter Bt . 2010 5. In a log-linear approximation of the equilibrium conditions. we never mentioned time-varying parameters. then νt has no effects on the first-order dynamics. it is important that the log-linear approximation be replaced by a nonlinear solution technique. we simply referred to Bt as a labor supply or preference shock. the time-varying elasticity will appear as an additional additive shock in (96) and therefore be indistinguishable in its dynamic effects from Bt . the authors use a second-order perturbation method to solve the model and the particle filter to approximate its likelihood function. (95) We can interpret our original objective function (52) as a generalization of (95). Thus.1. the topic of DSGE models with time-varying autoregressive parameters has essentially been covered in Section 4. If H∗ /B∗ = 1.2 DSGE Models with Drifting Parameters 67 Recall the stochastic growth model introduced in Section 4. Suppose that one changes the objective function of the household to ∞ I t E s=0 β t+s ln Ct+s − (Ht+s /B)1+1/ν 1 + 1/ν . provided that the steady-state ratio H∗ /B∗ = 1. Fern´ndez-Villaverde and Rubio-Ram´ a ırez (2008) take a version of the constant-coefficient DSGE model estimated by Smets and Wouters (2003) and allow for time variation in the coefficients that determine the interest-rate policy of the central bank and the degree of price and wage stickiness in the economy.1.which affects the disutility associated with working. a time-varying parameter is essentially just another shock. For instance. . But in our discussion of the DSGE model in Section 4. then all structural shocks (or timevarying coefficients) appear additively in the equilibrium conditions. for instance. If the DSGE model is log-linearized.Del Negro. To capture the different effects of a typical monetary policy shock and a shock that changes the central bank’s reaction to deviations from the inflation target.1. Schorfheide – Bayesian Macroeconometrics: April 18. respectively.2. 
We will begin by adding regime-switching to the coefficients of the reduced-form VAR specified in (1). . l. without any restrictions. For simplicity. 2010 68 5. . suppose that M = 2 and all elements of Φ(Kt ) and Σ(Kt ) switch simultaneously. Σ(Kt )) (97) using the same definitions of Φ and xt as in Section 2.2. Σ(l)) are MNIW and the priors for the regime-switching probabilities π11 and π22 are independent Beta distributions. MS models are able to capture sudden changes in time-series dynamics.1. . m ∈ {1.2 Models with Markov-Switching Parameters Markov-switching (MS) models represent an alternative to drifting autoregressive coefficients in time-series models with time-varying parameters. who used them to allow for different GDP-growth-rate dynamics in recession and expansion states. . Here. nsim : . M }.2). Recall the two different representations of a time-varying target inflation rate in Figure 7. . . ut ∼ iidN (0. 2.1 Markov-Switching VARs MS models have been popularized in economics by the work of Hamilton (1989). Unlike before. We denote the values of the VAR parameter matrices in state Kt = l by Φ(l) and Σ(l).2. Schorfheide – Bayesian Macroeconometrics: April 18.1) and then consider the estimation of DSGE models with MS parameters (section 5. the coefficient vector Φ is now a function of Kt . l = 1. We will begin with a discussion of MS coefficients in the context of a VAR (section 5.2: Gibbs Sampler for Unrestricted MS VARs For s = 1. Kt is a discrete M -state Markov process with time-invariant transition probabilities πlm = P [Kt = l | Kt−1 = m]. If the prior distributions of (Φ(l). which we write in terms of a multivariate linear regression model as yt = xt Φ(Kt ) + ut . 5. then posterior inference in this simple MS VAR model can be implemented with the following Gibbs sampler Algorithm 5. . The piecewise constant path of the target can be generated by a MS model but not by the driftingparameter model of the previous subsection. .Del Negro. S. Y ) using a variant of the Carter and Kohn (1994) approach. Draw π11 and π22 conditional on (Φ(s) (s). described in detail in Giordani. If one ignores the relationship between the transition probabilities and the distribution of K1 . . Σ(s) (s). Draw (Φ(s) (l). Draw K1:T conditional on (Φ(s) (l). then model (97) becomes a change-point model in which state 2 is the final state. the unrestricted MS VAR in (97) with coefficient matrices that are a priori independent across states may involve a large number of 4 (s) (i−1) . π11 and Kohn (This Volume). If K1 is distributed according to the stationary distribution of the Markov chain. Pitt. l = 1. but the time of the break is unknown. Chopin and Pelgrin (2004) consider a setup that allows the joint estimation of the parameters and the number of regimes that have actually occurred in the sample period. Leon-Gonzalez. Let Tl be a set that contains the time periods when Kt = l. 2010 1. ut ∼ N (0. π11 (i−1) (i−1) 69 . Σ(s) (l). 2. (s) (s) (s) (s) (s) More generally. Koop and Potter (2007) and Koop and Potter (2009) explore posterior inference in change-point models under various types of prior distributions. Y ). By increasing the number of states and imposing the appropriate restrictions on the transition probabilities. and Strachan (2009) consider a modification of Primiceri (2005)’s framework where parameters evolve according to a change-point model and study the evolution over time of the monetary policy transmission mechanism in the United States. the posterior of Φ(l) and Σ(l) is MNIW. 
Σ(s) (l)) conditional on (K1:T . Koop. π22 (i−1) . t ∈ Tl . Under a conjugate prior. Schorfheide – Bayesian Macroeconometrics: April 18. 3. If one imposes the condition that π22 = 1 and π12 = 0.4 Alternatively.j + πjj = 1. one can generalize the change-point model to allow for several breaks. then the posteriors of π11 and π22 take the form of Beta distributions. 2. Σ(l)). GDP growth toward stabilization. obtained from the regression yt = xt Φ(l) + ut . for a process with M states one would impose the restrictions πM M = 1 and πj+1. In a multivariate setting. Y ). Kim and Nelson (1999a) use a changepoint model to study whether there has been a structural break in postwar U. such a model can be viewed as a structural-break model in which at most one break can occur. π22 (i−1) . then the Beta distributions can be used as proposal distributions in a Metropolis step. K1:T .Del Negro. The Gibbs sampler for the parameters of (100) is obtained .S. (ii) only the coefficients of the private-sector equations switch.l correspond to the coefficient associated with lag l of variable i in equation j.l λi. For instance. ut ∼ iidN (0. if the prior for D(Kt ) is centered at zero.1. Sims and Zha (2006) impose constraints on the evolution of D(Kt ) across states. Thus far. This model captures growth-rate differentials between recessions and expansions and is used to capture the joint dynamics of U. To avoid a proliferation of parameters. (100) If D(Kt ) = 0. The authors impose the restriction that only the trend is affected by the MS process: ∗ yt = yt + Γ0 (Kt ) + yt . ¯ The authors reparameterize the k × n matrix A(Kt ) as D(Kt ) + GA0 (Kt ). the prior for the reduced-form VAR is centered at a random-walk representation.Del Negro. .j. and (iii) only coefficients that implicitly control innovation variances (heteroskedasticity) change.j. + Φp yt−p + ut . Sims and Zha (2006) extend the structural VAR given in (30) to a MS setting: yt A0 (Kt ) = xt A(Kt ) + t . and parameter restrictions can compensate for lack of sample information. as implied by the mean of the Minnesota prior (see Section 2. Thus. yt = Φ1 yt−1 + . I) (99) is a vector of orthogonal structural shocks and xt is defined as in Section 2. (98) where ∗ ∗ yt = yt−1 + Γ1 (Kt ). where t t ∼ iidN (0.3 that expresses yt as a deterministic trend and autoregressive deviations from this trend. Paap and van Dijk (2003) start from the VAR specification used in Section 2. The authors impose that di. aggregate output and consumption. . Schorfheide – Bayesian Macroeconometrics: April 18. yt A0 (Kt ) = xt (D(Kt ) + GA0 (Kt )) + t . Let di. we have focused on reduced-form VARs with MS parameters. then the reduced-form VAR coefficients are given by Φ = A(Kt )[A0 (Kt )]−1 = G and the elements of yt follow random-walk processes.j. Loosely speaking.j (Kt ). where S is a k × n with the n × n identity matrix in the first n rows and zeros elsewhere.l (Kt ) = δi. The authors use their setup to estimate MS VAR specifications in which (i) only the coefficients of the monetary policy rule change across Markov states. This specification allows for shifts in D(Kt ) to be equation or variable dependent but rules out lag dependency.2). 2010 70 coefficients. Σ). In most applications. Consider the nonlinear equilibrium conditions of our stochastic growth model in (61).2 DSGE Models with Markov-Switching Coefficients A growing number of papers incorporates Markov-switching effects in DSGE models. Kt+1 . 
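The draw of the regime path K1:T in the Gibbs sampler for the MS VAR relies on a forward-filtering, backward-sampling step for a discrete Markov chain. The sketch below is ours and is not taken from the chapter; it illustrates that step for a generic M-state model. In the MS VAR application, log_dens would contain the log of the Gaussian densities of yt under the regime-specific parameters (Φ(l), Σ(l)).

```python
import numpy as np

def draw_regimes(log_dens, P, pi0, rng):
    """Forward-filter, backward-sample a draw of a discrete Markov regime path.

    log_dens : (T, M) array of log p(y_t | K_t = m, parameters)
    P        : (M, M) transition matrix, P[l, m] = Prob(K_t = m | K_{t-1} = l)
    pi0      : (M,) initial regime probabilities
    """
    T, M = log_dens.shape
    filt = np.zeros((T, M))

    # Forward filtering of regime probabilities (Hamilton filter)
    pred = np.asarray(pi0, dtype=float)
    for t in range(T):
        post = pred * np.exp(log_dens[t] - log_dens[t].max())  # rescale for stability
        filt[t] = post / post.sum()
        pred = filt[t] @ P                                     # one-step-ahead probabilities

    # Backward sampling of the regime path
    K = np.zeros(T, dtype=int)
    K[T - 1] = rng.choice(M, p=filt[T - 1])
    for t in range(T - 2, -1, -1):
        probs = filt[t] * P[:, K[t + 1]]
        probs /= probs.sum()
        K[t] = rng.choice(M, p=probs)
    return K
```

The change-point specifications discussed above correspond to restricting P so that, once a later regime is reached, earlier regimes cannot recur.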
and solving the nonlinear model while accounting for the time variation in θ. it is straightforward. E E E The vector ηt comprises the following one-step-ahead rational expectations forecast errors: ηt = (Ct − I t−1 [Ct ]). and Zha (2008).2. Details are provided in Sims. albeit slightly tedious. 5. The most rigorous and general treatment of Markov-switching coefficients would involve replacing the vector θ with a function of the latent state Kt . Ht . I t [at+1 ]. Rt .t . one can define the vector xt such that the observables yt can. Waggoner. we write the linearized equilibrium conditions of the DSGE model in the following canonical form: Γ0 (θ)xt = C(θ) + Γ1 (θ)xt−1 + Ψ(θ) t + Π(θ)ηt . Schorfheide – Bayesian Macroeconometrics: April 18. Yt . θ(Kt ). which we denote by θ(Kt ). and the vector xt can be defined as follows: xt = Ct . the literature has focused on various short-cuts. Wt . .2. At . that is: yt = Ψ0 (θ) + Ψ1 (θ)t + Ψ2 (θ)xt .4 and 5.t ] . I t [Ct+1 ]. Bt .2. b. Following Sims (2002b). as in Section 4. which introduce Markov-switching in the coefficients of the linearized model given by (66). 2010 71 by merging and generalizing Algorithms 2. It .Del Negro. With these definitions. (at − I t−1 [at ]). including our stochastic growth model. to rewrite (66) in terms of the canonical form (101). be expressed simply as a linear function of xt . Since the implementation of the solution and the subsequent computation of the likelihood function are very challenging. at . (102) Markov-switching can be introduced into the linearized DSGE model by expressing the DSGE model parameters θ as a function of a hidden Markov process Kt . I t [Rt+1 ] . (Rt − I t−1 [Rt ]) E E E and t stacks the innovations of the exogenous shocks: t =[ a. θ is defined in (63). (101) For the stochastic growth model presented in Section 4. For instance. but not the matrices Ψ0 . Schorfheide (2005) constructs an approximate likelihood that depends only on θ1 .1. Ψ2 . (104) where only Φ0 and µ depend on the Markov process Kt (indirectly through θ2 (Kt )). Ψ1 . discussed in Kim and Nelson (1999b). Φ1 . and Φ . Waggoner.1 to implement posterior inference. 2. Equation (104) defines a (linear) Markov-switching state-space model. Schorfheide – Bayesian Macroeconometrics: April 18. the resulting rational expectations system can be written as Γ0 (θ1 )xt = C(θ1 . and the state transition probabilities are denoted by πlm . which can be low or high. θ2 (1). To capture this explanation in a Markovswitching rational expectations model. Using the same notation as in Section 5. 2010 72 Schorfheide (2005) considers a special case of this Markov-switching linear rational expectations framework. and Zha (2009) is more ambitious in that it allows for switches in all the matrices of the canonical rational expectations model: Γ0 (θ(Kt ))xt = C(θ(Kt )) + Γ1 (θ(Kt ))xt−1 + Ψ(θ(Kt )) t + Π(θ(Kt ))ηt . A candidate explanation for the reduction of macroeconomic volatility in the 1980s is a more forceful reaction of central banks to inflation deviations. θ2 (Kt )) + Γ1 (θ1 )xt−1 + Ψ(θ1 ) t + Π(θ1 )ηt (103) and is solvable with the algorithm provided in Sims (2002b). . The analysis in Schorfheide (2005) is clearly restrictive. If we partition the parameter vector θ(Kt ) into a component θ1 that is unaffected by the hidden Markov process Kt and a component θ2 (Kt ) that varies with Kt and takes the values θ2 (l). This likelihood function is then used in Algorithm 4.2. 
because in his analysis the process Kt affects only the target inflation rate of the central bank. with the understanding that the system matrices are functions of the DSGE model parameters θ1 and θ2 (Kt ). it is necessary that not just the intercept in (101) but also the slope coefficients be affected by the regime shifts. xt = Φ1 xt−1 + Φ [µ(Kt ) + t ] + Φ0 (Kt ). there is a large debate in the literature about whether the central bank’s reaction to inflation and output deviations from target changed around 1980. Thus. Following a filtering approach that simultaneously integrates over xt and Kt .Del Negro. subsequent work by Davig and Leeper (2007) and Farmer. l = 1. θ2 (2) and the transition probabilities π11 and π22 . The solution takes the special form yt = Ψ0 + Ψ1 t + Ψ2 xt . the number of states is M = 2. Sims (1993) and Cogley. Sims and Zha (2006) conduct inference with a MS VAR and find no support for the hypothesis that the parameters of the monetary policy rule differed pre. Cogley and Sargent (2005b) find that their earlier empirical results are robust to time-variation in the volatility of shocks and argue that changes in the monetary policy rule are partly responsible for the changes in inflation dynamics. Bayesian inference in a TVP VAR yields posterior estimates of the reduced-form coefficients φt in (90). whether monetary policy played a major role in affecting inflation dynamics. for example. they provide evidence that it was the behavior of the private sector that changed and that shock heteroskedasticity is important. that is. Conditioning on estimates of φt for various periods between 1960 and 2000. the debate over whether the dynamics of U. or other structural changes – it is likely that these same causes affected the dynamics of inflation. Cogley and Sargent (2002) compute the spectrum of inflation based on their VAR and use it as evidence that both inflation volatility and persistence have changed dramatically in the United States.and post-1980. To the contrary. to the extent that they have. Whatever the causes of the changes in output dynamics were – shocks. He claims that variation in the volatility of the shocks is the main cause for the lower volatility of both inflation and business cycles in the post-Volcker period. monetary policy. this debate evolved in parallel to the debate over the magnitude and causes of the Great Moderation. 2010 73 Characterizing the full set of solutions for this general MS linear rational expectations model and conditions under which a unique stable solution exists is the subject of ongoing research. Here. 5. including macroeconomic forecasting. Similarly.Del Negro.3 Applications of Bayesian TVP Models Bayesian TVP models have been applied to several issues of interest.S. Primiceri (2005) argues that monetary policy has indeed changed since the 1980s but that the impact of these changes on the rest of the economy has been small. using an AR time-varying coefficients VAR identified with sign restrictions Canova and Gambetti (2009) find little evidence that monetary policy has become more aggressive in . Schorfheide – Bayesian Macroeconometrics: April 18. Morozov. Naturally. we shall focus on one specific issue. inflation changed over the last quarter of the 20th century and. and Sargent (2005). namely. Based on an estimated structural TVP VAR. the decline in the volatility of business cycles around 1984 initially documented by Kim and Nelson (1999a) and McConnell and Perez-Quiros (2000). 
responding to inflation since the early 1980s. Cogley and Sbordone (2008) use a TVP VAR to assess the stability of the New Keynesian Phillips curve during the past four decades.

Given the numerical difficulties of estimating nonlinear DSGE models, there currently exists less published empirical work based on DSGE models with time-varying coefficients. Two notable exceptions are the papers by Justiniano and Primiceri (2008) discussed in Section 4.5 and Fernández-Villaverde and Rubio-Ramírez (2008). The latter paper provides evidence that after 1980 the U.S. central bank has changed interest rates more aggressively in response to deviations of inflation from the target rate. The authors also find that the estimated frequency of price changes has decreased over time. This frequency is taken as exogenous within the Calvo framework they adopt.

6 Models for Data-Rich Environments

We now turn to inference with models for data sets that have a large cross-sectional and time-series dimension. Consider the VAR(p) from Section 2:

yt = Φ1 yt−1 + . . . + Φp yt−p + Φc + ut,   ut ∼ iidN(0, Σ),   t = 1, . . . , T,

where yt is an n × 1 vector. Without mentioning it explicitly, our previous analysis was tailored to situations in which the time-series dimension T of the data set is much larger than the cross-sectional dimension n. For instance, in Illustration 2.1 the time-series dimension was approximately T = 160 and the cross-sectional dimension was n = 4. This section focuses on applications in which the ratio T/n is relatively small, possibly less than 5. High-dimensional VARs are useful for applications that involve large cross sections of macroeconomic indicators for a particular country – for example, GDP and its components, industrial production, measures of employment and compensation, housing starts and new orders of capital goods, price indices, interest rates, consumer confidence measures, et cetera. Examples of such data sets can be found in Stock and Watson (1999) and Stock and Watson (2002). Large-scale VARs are also frequently employed in the context of multicountry econometric modeling. For instance, to study international business cycles among OECD countries, yt might be composed of aggregate output, consumption, investment, and employment for a group of 20 to 30 countries, which leads to n > 80.

In general, for the models considered in this section there will be a shortage of sample information to determine parameters, leading to imprecise inference and diffuse predictive distributions. Priors can be used to impose either hard or soft parameter restrictions and thereby to sharpen inference. Hard restrictions involve setting combinations of VAR coefficients equal to zero. For instance, Stock and Watson (2005), who study international business cycles using output data for the G7 countries, impose the restriction that only the trade-weighted averages of the other countries' GDP growth rates enter the equation for GDP growth in a given country. Alternatively, one could use very informative, yet nondegenerate, prior distributions for the many VAR coefficients, which is what is meant by soft restrictions. Both types of restrictions are discussed in Section 6.1.
Finally, one could express yt as a function of a lower-dimensional vector of variables called factors, possibly latent, that drive all the comovement among the elements of yt, plus a vector ζt of so-called idiosyncratic components, which evolve independently from one another. In such a setting, one needs only to parameterize the evolution of the factors, the impact of these on the observables yt, and the evolution of the univariate idiosyncratic components, rather than the dynamic interrelationships among all the elements of the yt vector. Factor models are explored in Section 6.2.

6.1 Restricted High-Dimensional VARs

We begin by directly imposing hard restrictions on the coefficients of the VAR. As before, define the k × 1 vector xt = [yt−1′, . . . , yt−p′, 1]′ and the k × n matrix Φ = [Φ1, . . . , Φp, Φc]′, where k = np + 1. Moreover, let Xt = In ⊗ xt and φ = vec(Φ) with dimensions kn × n and kn × 1, respectively. Then we can write the VAR as

yt = Xt′ φ + ut,   ut ∼ iidN(0, Σ).   (105)

To incorporate the restrictions on φ, we reparameterize the VAR as follows:

φ = M θ.   (106)

θ is a vector of size κ << nk, and the nk × κ matrix M induces the restrictions by linking the VAR coefficients φ to the lower-dimensional parameter vector θ. The elements of M are known. For instance, M could be specified such that the coefficient in Equation i, i = 1, . . . , n, associated with the l'th lag of variable j is the sum of an equation-specific, a variable-specific, and a lag-specific parameter. Here, θ would comprise the set of all n + n + p equation/variable/lag-specific parameters, and M would be an indicator matrix of zeros and ones that selects the elements of θ associated with each element of φ. The matrix M could also be specified to set certain elements of φ equal to zero and thereby exclude regressors from each of the n equations of the VAR. Since the relationship between φ and θ is linear, Bayesian inference in this restricted VAR under a Gaussian prior for θ and an Inverted Wishart prior for Σ is straightforward.

To turn the hard restrictions (106) into soft restrictions, one can construct a hierarchical model in which the prior distribution for φ conditional on θ has a nonzero variance:

φ = M θ + ν,   ν ∼ N(0, V),   (107)

where ν is an nk × 1 vector with nk × nk covariance matrix V. The joint distribution of parameters and data can be factorized as

p(Y, φ, θ) = p(Y |φ) p(φ|θ) p(θ).   (108)

A few remarks are in order. First, (108) has the same form as the DSGE-VAR discussed in Section 4.7.3, except that the conditional distribution of φ given θ is centered at the simple linear restriction M θ rather than the rather complicated VAR approximation of a DSGE model. Second, (108) also nests the Minnesota prior discussed in Section 2.2, which can be obtained by using a degenerate distribution for θ concentrated at some value θ̄ with a suitable choice of M, θ̄, and V. Third, in practice the choice of the prior covariance matrix V is crucial for inference. In the context of the Minnesota prior and the DSGE-VAR, we expressed this covariance matrix in terms of a low-dimensional vector λ of hyperparameters such that V(λ) → 0 (V(λ) → ∞) as λ → ∞ (λ → 0) and recommended conditioning on a value of λ that maximizes the marginal likelihood function pλ(Y) over a suitably chosen grid.
Finally, since the discrepancy between the posterior mean estimate of φ and the restriction M θ can be reduced by increasing the hyperparameter λ, the resulting Bayes estimator of φ is often called a shrinkage estimator. De Mol, Giannone, and Reichlin (2008) consider a covariance matrix V that in our notation takes the form V = Σ ⊗ (Ik /λ2 ) and show that there is a tight connection between these shrinkage estimators and estimators of conditional mean functions obtained from factor which we will discuss below.Del Negro. they allow for timevariation in φ and let φt = M θ + νt . The authors interpret the time-varying θt as a vector of latent factors. Inserting (109) into (105). (110) The n × κ matrix of regressors Xt M essentially contains weighted averages of the regressors. If one chooses a prior covariance matrix of the form V = Σ ⊗ (Ik /λ2 ). Schorfheide – Bayesian Macroeconometrics: April 18. then the covariance matrix of ζt reduces to (1 + (xt xt )/λ2 )Σ. we obtain the system yt = (Xt M )θ + ζt . They discuss in detail how to implement Bayesian inference in this more general environment. V ). Their setting is therefore related to that of the factor models described in the next subsection. Canova and Ciccarelli (2009) allow the deviations of φ from the restricted subspace characterized by M θ to differ in each period t. where the weights are given by the columns of M . which simplifies inference. The random vector ζt is given by ζt = Xt νt +ut and. Canova and Ciccarelli (2009) further generalize expression (109) by assuming that the vector θ is time-varying and follows a simple autoregressive law of motion. since xt contains lagged values of yt . νt ∼ iidN (0. λ) ∝ (1 + (xt xt )/λ2 )Σ T −1/2 (111) × t=1 exp − 1 (yt − Xt M θ) Σ−1 (yt − Xt M θ) . resulting in a model for which Bayesian inference is fairly straightforward to implement. 2010 77 models. (109) The deviations νt from the restriction M θ are assumed to be independent over time. forms a Martingale difference sequence with conditional covariance matrix Xt V Xt + Σ. 2(1 + (xt xt )/λ2 ) and Bayesian inference under a conjugate prior for θ and Σ is straightforward. M could be chosen such that yt is a . In fact. as is often done in the factor model literature. In multicountry VAR applications. the random deviations νt can be merged with the VAR innovations ut . They document empirically that with a suitably chosen shrinkage parameter the forecast performance of their Bayes predictor constructed from a large number of regressors is similar to the performance of a predictor obtained by regressing yt on the first few principal components of the regressors xt . The likelihood function (conditional on the initial observations Y−p+1:0 ) takes the convenient form p(Y1:T |θ. Formally. T. n.t ∼ iidN (0. (112) Here.2 Dynamic Factor Models Factor models describe the dynamic behavior of a possibly large cross section of observations as the sum of a few common components.2. and of series-specific components. say. .4 surveys various extensions of the basic DFM. The factors follow a vector autoregressive processes of order q: ft = Φ0.Del Negro. (113) . Schorfheide – Bayesian Macroeconometrics: April 18.2. . ft is a κ × 1 vector of factors that are common to all observables.1. . then the business cycles in the various countries are highly synchronized. which capture idiosyncratic dynamics of each series.2. . ai is a constant. . Our baseline version of the DFM is introduced in Section 6.2. .t .1 ft−1 + . t = 1. 
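As a concrete sketch of the restricted VAR in (105)-(108), the following code constructs the regressor matrix Xt′M and the conditional posterior of θ given Σ under a Gaussian prior. The function and variable names are ours, and the sketch conditions on a known Σ; hard exclusion restrictions correspond to an M built from columns of the identity matrix, for example M = np.eye(n * k)[:, kept_indices].

```python
import numpy as np

def restricted_var_posterior(Y, X, M, Sigma, theta_bar, V_theta):
    """Conditional posterior of theta in y_t = (X_t' M) theta + u_t, u_t ~ N(0, Sigma),
    under the Gaussian prior theta ~ N(theta_bar, V_theta).

    Y : (T, n) observations
    X : (T, k) matrix whose rows are x_t' = [y_{t-1}', ..., y_{t-p}', 1]
    M : (n*k, kappa) restriction matrix linking phi to theta (eq. 106)
    """
    T, n = Y.shape
    V_prior_inv = np.linalg.inv(V_theta)
    A = np.zeros((M.shape[1], M.shape[1]))
    b = np.zeros(M.shape[1])
    for t in range(T):
        Zt = np.kron(np.eye(n), X[t]) @ M       # the n x kappa regressor matrix X_t' M
        A += Zt.T @ np.linalg.solve(Sigma, Zt)
        b += Zt.T @ np.linalg.solve(Sigma, Y[t])
    V_post = np.linalg.inv(V_prior_inv + A)
    theta_post = V_post @ (V_prior_inv @ theta_bar + b)
    return theta_post, V_post
```

The soft-restriction case (107) can be handled in the same spirit by treating φ as an additional latent block with prior N(Mθ, V) and sampling φ and θ in turn.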
.t = ai + λi ft + ξi.t to the factor ft . for example – the contribution of Stock and Watson (1989) generated renewed interest in this class of models among macroeconomists. and posterior inference is described in Section 6. Moreover. 6. While Stock and Watson (1989) employ maximum likelihood methods. average lagged output growth and unemployment across countries.t . Finally. Canova and Ciccarelli (2009) use their framework to study the convergence in business cycles among G7 countries. . If most of the variation in the elements of yt is due to the cross-sectional averages.t is an idiosyncratic process that is specific to each i. i = 1. Geweke and Zhou (1996) and Otrok and Whiteman (1998) conduct Bayesian inference with dynamic factor models. which explain comovements.q ft−q + u0. These authors use a factor model to exploit information from a large cross section of macroeconomic time series for forecasting. Section 6. 6. and λi is a 1 × κ vector of loadings that links yi. . Σ0 ). into the sum of two unobservable components: yi. u0. While factor models have been part of the econometricians’ toolbox for a long time – the unobservable index models by Sargent and Sims (1977) and Geweke (1977)). Some applications are discussed in Section 6. + Φ0. 2010 78 function of lagged country-specific variables and.2. .2.t . and ξi.1 Baseline Specification A DFM decomposes the dynamics of n observables yi.3. One can premultiply ft and its lags in (112) and (113) as well as u0.Del Negro. and so forth.  . We used 0-subscripts to denote parameter matrices that describe the law of motion of the factors.  .t−1 + .t ∼ iidN (0.t = φi. and 0 denotes a zero restriction. Without further restrictions.pi ξi. X denotes an unrestricted element. σi ). However. without changing the distribution of the observables. These orthogonality assumptions are important to identifying the factor model.t and y2.t innovations are independent across i and independent of the innovations to the law of motion of the factors u0.κ =  . . these zero restrictions alone are not sufficient for identification because the factors and hence the matrices Φ0. 2010 79 where Σ0 and the Φ0.t is a κ × 1 vector of innovations. . .t does not affect y1.t . Under this transformation. 2 ui.t by a κ × κ invertible matrix H and postmultiply the vectors λi and the matrices Φ0.t . The restrictions can be interpreted as follows. .  λκ The loadings λi for i > κ are always left unrestricted.j matrices are of dimension κ × κ and u0.t does not affect y1. Schorfheide – Bayesian Macroeconometrics: April 18. . Example 6.and postmultiplication of an arbitrary invertible lower-triangular κ × κ matrix Htr without changing the distribution of the observables.  Λ1. There are several approaches to restricting the parameters of the DFM to normalize the factors and achieve identification.   X X ···X X  Λ1.t . factor f3. the latent factors and the coefficient matrices of the DFM are not identifiable.κ to be lower-triangular:  X 0···0 0  .t−pi + ui.j by H −1 . . According to (115). the factor innovations . .κ = Λtr 1. The idiosyncratic components follow autoregressive processes of order pi : ξi. the ui. (114) At all leads and lags. = .κ (115) Here. We will provide three specific examples in which we impose restrictions on Σ0 and the first κ loading vectors stacked in the matrix   λ1  .t ..  . factor f2.1: Geweke and Zhou (1996) restrict Λ1. as they imply that all comovements in the data arise from the factors. .j and Σ0 could still be transformed by pre.1 ξi. 
+ φi. i = 1.t . factor fi. Moreover. be the diagonal elements of Λ1.κ .κ is restricted to be the identity matrix and Σ0 is an unrestricted covariance matrix.t is uncorrelated with all other observables.t are factors that affect the Eastern and Western regions. i = 1.i ≥ 0. the one-entries on the diagonal of Λ1. (117) Thus. 3. For instance. the signs of the factors need to be normalized. In this case. imagine that the factor model is used to study comovements in output across U.κ take care of the sign normalization.Del Negro.t . . As in Example 6. .t correspond to output in state i in period t. . (116) Finally. where f1. . one can choose Htr = Σ−1 such 0. The one-entries on the diagonal of Λ1. .κ is restricted to be lower-triangular with ones on the diagonal and Σ0 is a diagonal matrix with nonnegative elements. Since under the normalization λi. .t is interpreted as a national business cycle and f2. Example 6. respectively.i . 2010 80 become Htr u0. . κ. The sign normalization can be achieved with a set of restrictions of the form λi. we simply let Σ0 = Iκ .S.tr and its transpose. Example 6.j = 0 if state i does not belong to region j = 2. Let λi. κ. This transformation leads to a normalization in which Λ1.2. To implement this normalization. This transformation leads to a normalization in which Λ1. Schorfheide – Bayesian Macroeconometrics: April 18. Imposing λ1. there exists a potential pitfall.1 = 1 may result in a misleading inference for the factor as well as for the other loadings. . For concreteness. suppose that the number of factors is κ = 3. .3: Suppose we start from the normalization in Example 6.i = 1. . . and let yi. .κ the loadings by H −1 . (115).tr that the factor innovations reduce to a vector of independent standard Normals.t and f3. one might find it attractive to impose overidentifying restrictions.κ by H −1 . . and (117) provide a set of identifying restrictions.1 and proceed with premultiplying the factors by the matrix H = Λtr in (115) and postmultiplying 1. states. (116). one could impose the condition that λi. i = 1. Since Σ0 can be expressed as the product of the unique lowertriangular Choleski factor Σ0. Finally. imagine that there is only one factor and that y1. κ.2: Suppose we start from the normalization in the previous example and proceed with premultiplying the factors by the diagonal matrix H that is composed of the diagonal elements of Λtr in (115) and postmultiplying the loadings 1.t is forced to have a unit impact on yi.κ also take care of the sign normalization. which can be derived from the autoregressive law of motion (114) by assuming that ξi.  .  . the distribution of ft conditional on (Y1:t−1 . yi. T. . φi. we adopt the convention that Yt0 :t1 and Ft0 :t1 denote the sequences {yt0 . ft1 }.t−p − ai − λi ft−p ) + ui. . . θ0 ) n × i=1 p(Yi. . ... and latent factors can be written as p(Y1:T .t = ai + λi ft + φi. σi .  (F0:p . As we did previously in this chapter. we exploited the fact that the conditional distribution of yi. To obtain the factorization on the right-hand side of (119). θi ) and p(ft |Ft−q:t−1 .p ] .1:p (θi ) . . F0:t−1 . .1  .t−p:t−1 . . . respectively. θi ) in (119) represents the distribution of the first p observations conditional on  yi. . The joint distribution of data. θ0 ) can easily be derived from expressions (118) and (113).1:p |F0:p . .  .t |Yi. which is given by     ai + f1     .p (yi. .p ] be the parameters entering (118) and θ0 be the parameters pertaining to the law of motion of the factors (113). 
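The role of the normalizations discussed in Examples 6.1-6.3 can be illustrated numerically: rotating the factors by an invertible matrix H and the loadings by H−1 leaves the common component, and hence the distribution of the observables, unchanged. The sketch below is ours, uses arbitrary simulated loadings and factors, and applies the rotation that makes the top κ × κ block of the loadings equal to the identity, as in Example 6.3; it assumes that this top block is invertible.

```python
import numpy as np

rng = np.random.default_rng(3)
n, kappa, T = 10, 3, 200

Lambda = rng.standard_normal((n, kappa))   # unrestricted loadings
F = rng.standard_normal((T, kappa))        # factors, one row per period
common = F @ Lambda.T                      # common components Lambda * f_t, stacked over t

# Rotate: premultiply factors by H, postmultiply loadings by H^{-1}
H = Lambda[:kappa, :]                      # top kappa x kappa loading block (assumed invertible)
Lambda_tilde = Lambda @ np.linalg.inv(H)
F_tilde = F @ H.T                          # rows are (H f_t)'

# The observables' common component is unchanged by the rotation
assert np.allclose(F_tilde @ Lambda_tilde.T, common)

# The top block of the transformed loadings is the identity matrix, a special case of a
# lower-triangular block with ones on the diagonal; the factor covariance is now unrestricted
assert np.allclose(Lambda_tilde[:kappa, :], np.eye(kappa))
```

Analogous transformations deliver the normalizations of Examples 6.1 and 6.2, using the Cholesky factor or the diagonal of the top loading block instead of the full block.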
where L here denotes the lag operator. . To simplify the notation.t .2. Schorfheide – Bayesian Macroeconometrics: April 18. Σi.1:p (θi ) is the covariance matrix of [ξi.t |Yi.t is stationary for all θi in the . ξi. .1 (yi.−(τ +p) = 0 for some τ > 0.1 . Ft−p:t . Moreover. F0:T . The quasi-differenced measurement equation takes the form yi.p Lp . θi ) p(F0:p |θ0 ) i=1 p(θi ) p(θ0 ). parameters.. we will discuss the case in which the lag length in (114) is the same for all i (pi = p) and q ≤ p + 1.2 Priors and Posteriors 81 We now describe Bayesian inference for the DFM.t given (Y1:t−1 . . The distributions p(yi. (118) Let θi = [ai .. yt1 } and {ft0 . . .1 L · · · − φi. for t = p+1. θi ) p(ft |Ft−q:t−1 . +φi.p the factors. θi ) ∼ N  . θ0 ) is a function only of Ft−q:t−1 .1 . . If the law of motion of ξi. Premultiply (112) by 1 − φi.t−1 − ai − λi ft−1 ) + .t−p:t−1 and on the factors only through Ft−p:t . Ft−p:t .−(τ +1) = .1:p |F0:p . θ0 ) i=1  T n (119)  =  t=p+1 n i=1 p(yi. λi . F0:t . θi ) depends on lagged observables only through Yi.t−p:t−1 . {θi }n . The term p(Yi. . 2010 6.     ai + fp (120) The matrix Σi.Del Negro. φi. = ξi. respectively. V φi ). Draws from the distribution associated with (121) can be obtained with the procedure of Chib and Greenberg (1994).. The autoregressive coefficients for the factors and the idiosyncratic shocks have a Normal prior. N (ai . represent the priors for θi and θ0 . p(θi ) and p(θ0 ). . in Otrok and Whiteman (1998). . Specifically..1 . If the prior for λi. Y1:T ) ∝ p(θi )  t=p+1 p(yi. θ0 . 2010 82 support of the prior. one can set τ = ∞. . and its log is not a quadratic function of θi . In some applications. vec(Φ0. namely... A Gibbs sampler can be used to generate draws from the posterior distribution. the prior for the idiosyncratic volatilities σi can be chosen to be of the Inverted Gamma form. then the density associated with the prior for λi needs to be multiplied by the indicator function I{λi. If . Detailed derivations can be found in Otrok and Whiteman (1998).i ≥ 0} to impose the constraint (117). . Otrok and Whiteman (1998)). Conditional on the factors. κ elements are restricted to be nonnegative to resolve the sign-indeterminacy of the factors as in Example 6. the first two terms on the right-hand side correspond to the density of a Normal-Inverted Gamma distribution. The last term reflects the effect of the initialization of the AR(p) error process.i < 0. V λi ).t lie outside the unit circle. the priors on the constant term ai and the loadings λi are normal. which are typically chosen to be conjugate (see.i . Ft−p:t . θi ). θi ) p(Yi. .q ) ] and assume that Σ0 is normalized to be equal to the identity matrix.i .1 ) . The posterior density takes the form  T  p(θi |F0:T . The prior for φ0 is N (φ0 . .1:p becomes the covariance matrix associated with the unconditional distribution of the idiosyncratic shocks. it may be desirable to truncate the prior for φ0 (φi ) to rule out parameters for which not all of the roots of the characteristic polynomial associated with the autoregressive laws of motion of ft and ξi. and Σi. Equation (112) is a linear Gaussian regression with AR(p) errors. (121) Under a conjugate prior. φi. If the λi. The initial distribution of the factors p(F0:p |θ0 ) can be obtained in a similar manner using (113). . .t |Yi.Del Negro. for instance. one can use an acceptance sampler that discards all draws of θi for which λi.i ≥ 0}.t−p:t−1 . Schorfheide – Bayesian Macroeconometrics: April 18. the prior for φi = [φi. 
Define φ0 = [vec(Φ0.1. i = 1.1:p |F0:p . i = 1. The remaining terms. Finally. V ai ) and N (λi . Likewise. . V φ0 ). The basic structure of the sampler is fairly straightforward though some of the details are tedious and can be found. .p ] is N (φi . κ includes the indicator function I{λi. for example. fp . θ0 . {θi }n . the first two terms are proportional to the density of a MNIW distribution if Σ0 is unrestricted and corresponds to a multivariate normal density if the DFM is normalized such that Σ0 = I. Waggoner. which implies that computational cost is linear in the size of the cross section.i ≥ 0. As in the case of θi . . . . say. Otrok i=1 and Whiteman (1998) explicitly write out the joint Normal distribution of the observations Y1:T and the factors F0:T .n . F0:T ) i=1 such that λi. and 5 If X = [X1 .Del Negro. θ0 . then one can resolve the sign indeterminacy by postprocessing the output of the (unrestricted) Gibbs sampler: for each set of draws ({θi }n . Two approaches exist in the Bayesian DFM literature. . θ0 cannot be directly sampled from. the sampling can be implemented one i at a time. the posterior for the coefficients θ0 in (113) is obtained from a multivariate generalization of the preceding steps. Conditional on the factors. Σ11 − 22 ¡   Σ12 Σ−1 Σ21 . The last terms capture the probability density function of the initial factors f0 . κ. θ0 . one draws the factors F0:T conditional on ({θi }n . X2 ] is distributed N (µ. Since the errors ξi. In the third block of the Gibbs sampler. Schorfheide – Bayesian Macroeconometrics: April 18. Thus.t in equation (112) are independent across i. Σ) then X1 |X2 is distributed N µ1 +Σ12 Σ−1 (X2 −µ2 ). . F0:T |{θi }i=1. Its density can be written as  p(θ0 |F0:T . An alternative is to cast the DFM into a linear state-space form and apply the algorithm of Carter and Kohn (1994) for sampling from the distribution of the latent states. described in Giordani. flip the sign of the i’th factor and the sign of the loadings of all n observables on the ith factor. Pitt. one can use a variant of the procedure proposed by Chib and Greenberg (1994). . Hamilton. If the prior for θ0 is conjugate.i < 0. p(Y1:T . . θ0 ) p(θ0 )p(F0:p |θ0 ). Y1:T ) using the formula for conditional means and covariance matrices of a multivariate normal distribution. 2010 83 the prior of the loadings does not restrict λi. Y1:T ) ∝  i=1 t=p+1 T  p(ft |Ft−p:t−1 . 22 . θ0 ) and derive the posterior distribution p(F0:T |{θi }i=1.n . .5 Their approach involves inverting matrices of size T and hence becomes computationally expensive for data sets with a large time-series dimension. Y1:T ). but is symmetric around zero. and Zha (2007) discuss the sign normalization and related normalization issues in other models at length. i = 1. (122) The first term on the right-hand side corresponds to the conditional likelihood function of a VAR(q) and has been extensively analyzed in Section 2. where the partitions of µ and Σ conform with the partitions of X. a MNIW distribution. t ] . The Gibbs sampler can be summarized as follows . To avoid the increase in the dimension of the state vector with the cross-sectional dimension n. We will now provide some more details on how to cast the DFM into state-space form with iid measurement errors and a VAR(1) state-transition equation. . The (p + 1) × 1 vector ft collects the latent states and is defined ˜ as ft = [ft . this conditional distrii=1 bution can be obtained from the joint distribution p(F0:p .t . As mentioned above. 
Stacking (118) for all i. .. ft−p ] . the Φj ’s are diagonal n × n matrices with elements φ1. we shall subsequently assume that the factor ft is scalar (κ = 1). φn. . Del Negro and Otrok (2008) provide formulas for the initialization. . .t . . Schorfheide – Bayesian Macroeconometrics: April 18.1 . (123) where L is the temporal lag operator. Φ0. .. 01×(p+1−q) ] Ip 0p×1 . .j . . 2010 84 Kohn (This Volume). . . . θ0 ). . λn −λn φn. (125) Since (123) starts from t = p + 1 as opposed to t = 1. .. θ0 ) by using i=1 the formula for conditional means and covariance matrices of a multivariate normal distribution. T. . 0. . un.t = [u0. . Y1:p |{θi }n . .1 .   −λ1 φ1. . .j . . an ] . . 0] is an iid (p + 1) × 1 random vector and Φ0 is the (p + 1) × ˜ (p + 1) companion form matrix ˜ Φ0 = [Φ0. t = p + 1. ∗  . . ut = ˜ ˜ ˜ ˜ [u1. . . yn. . . .t ] .t . .. it is convenient to exclude the AR(p) processes ξi. For ease of notation. one needs to initialize the filtering step in the Carter and Kohn (1994) algorithm with the conditional distribution of p(F0:p |Y1:p . a = [a1 .t from the state vector and to use the quasi-differenced measurement equation (118) instead of (112).q .Del Negro.p  . and λ1 −λ1 φ1..t . Λ = . . the random variables ut in the measurement equa˜ ˜ tion (123) are iid. ˜ (124) ˜ where u0.1 . . one obtains the measurement equation p p (In − j=1 ˜ Φj Lj )˜t = (In − y j=1 ˜ ˜ ˜ a Φj )˜ + Λ∗ ft + ut .p Due to the quasi-differencing.  . The state-transition equation is obtained by expressing the law of motion of the factor (113) in companion form ˜ ˜ ˜ ft = Φ0 ft−1 + u0. . . .  −λn φn. yt = [y1. . {θi }n . Schorfheide – Bayesian Macroeconometrics: April 18.3 Applications of Dynamic Factor Models How integrated are international business cycles? Are countries more integrated in terms of business-cycle synchronization within a region (say. Otrok. Latin America). . {θi }n . . regional factors that capture region-specific cycles (say. and country-specific cycles. Y1:T ). which will be discussed in more detail in Section 7. Draw F0:T . i=1 (s) (s) 3. The model includes a world factor that captures the world business cycle. . . is numerically challenging. θ0 . .1: Sampling from the Posterior of the DFM For s = 1. The authors also consider a MCMC approach where the number of factors is treated as an unknown parameter and is drawn jointly with all the other parameters. investment. The authors estimate a DFM on a panel of annual data on output.2. 2. Lopes and West (2004) discuss the computation of marginal likelihoods for a static factor model in which the factors are iid. and consumption for 60 countries and about 30 years. which is precisely what Kose. 2010 Algorithm 6.Del Negro. In principle. for instance. Y1:T ) from (121). This can be done independently for each i = 1. in the . world cycles are on average as important as country-specific cycles. 6. and Whiteman (2003) do. Draw θ0 conditional on (F0:T (s) (s) (s−1) (s) . we have not discussed the issue of determining the number of factors κ. The exact distributions can be found in the references given in this section. . Draw θi (s) 85 conditional on (F0:T (s−1) . In practice. i=1 We have omitted the details of the conditional posterior distributions. Last. . within Europe) than across regions (say. θ0 (s−1) . conditional on ({θi }n . These factors are assumed to evolve independently from one another. the computation of marginal likelihoods for DFMs. In terms of the variance decomposition of output in the G7 countries. 
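A minimal sketch of how the quasi-differenced DFM can be cast into the state-space form (123)-(125) for a scalar factor (κ = 1) is given below. The function name and its inputs are ours, the intercept terms of the measurement equation are omitted for brevity, and the output matrices would still need to be passed to a Kalman filter or to the Carter and Kohn (1994) algorithm.

```python
import numpy as np

def dfm_state_space(lam, phi_idio, phi0, sigma_u, sigma0):
    """State-space matrices for a DFM with a scalar factor (kappa = 1).

    lam      : (n,) loadings lambda_i
    phi_idio : (n, p) AR coefficients of the idiosyncratic processes xi_i
    phi0     : (q,) AR coefficients of the factor, q <= p + 1
    sigma_u  : (n,) std. deviations of the quasi-differenced measurement errors
    sigma0   : std. deviation of the factor innovation
    """
    n, p = phi_idio.shape
    q = len(phi0)

    # Measurement: quasi-differenced eq. (118) loads on (f_t, ..., f_{t-p});
    # row i of Lambda* is [lambda_i, -lambda_i*phi_{i,1}, ..., -lambda_i*phi_{i,p}]
    Lam_star = np.column_stack([lam] + [-lam * phi_idio[:, j] for j in range(p)])

    # Transition: companion form (125) for the (p+1)-dimensional state (f_t, ..., f_{t-p})
    Phi0_tilde = np.zeros((p + 1, p + 1))
    Phi0_tilde[0, :q] = phi0
    Phi0_tilde[1:, :-1] = np.eye(p)

    # Innovation covariances: iid measurement errors, shock only to the first state element
    R = np.diag(np.asarray(sigma_u) ** 2)
    Q = np.zeros((p + 1, p + 1))
    Q[0, 0] = sigma0 ** 2
    return Lam_star, Phi0_tilde, R, Q
```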
The authors find that international business-cycle comovement is significant. Y1:T ) from (122). one can regard DFMs with different κ’s as individual models and treat the determination of the number of factors as a model selection or a model averaging problem. n. . nsim : 1. which are needed for the evaluation of posterior model probabilities. France and the United States)? Has the degree of comovement changed significantly over time as trade and financial links have increased? These are all natural questions to address using a dynamic factor model. . not surprisingly. Factor Augmented VARs: Bernanke.4 Extensions and Alternative Approaches We briefly discuss four extensions of the basic DFM presented above. stance of monetary policy and the national business cycle). . .t + λi ft + ξi. suggesting that integration is no higher within regions than across regions. we can conduct inference on the country factors even if the number of series per country is small. .2. t = 1. while the latter is associated with regional business cycles and other region-specific conditions (for example. In a Bayesian framework estimating models where regional or country-specific factors are identified by imposing the restriction that the respective factors have zero loadings on series that do not belong to that region or country is quite straightforward. for example. These extensions include Factor Augmented VARs. Models with such restrictions are harder to estimate using nonparametric methods such as principal components. First. House prices have both an important national and regional component.Del Negro. and Eliasz (2005) introduce Factor augmented VARs (or FAVARs). where the former is associated with nationwide conditions (for example. the FAVAR allows for additional observables y0. . DFMs with time-varying parameters. using Bayesian methods. Boivin. Moreover. The FAVAR approach introduces two changes to the standard factor model. Otrok. 6. which becomes yi. to enter the measurement equation. and Whiteman (2003). much more important than world cycles. while nonparametric methods have a harder time characterizing the uncertainty that results from having a small cross section. T. . Schorfheide – Bayesian Macroeconometrics: April 18. (126) . 2010 86 sense that world and country-specific cycles explain a similar share of the variance of output growth.t = ai + γi y0. as is the case in Kose. n. country-specific cycles are. migration and demographics). The study of house prices is another interesting application of factor models. i = 1. . Del Negro and Otrok (2007) apply dynamic factor models to study regional house prices in the US. For the entire world. Regional cycles are not particularly important at all. hierarchical DFMs.t . . and hybrid models that combine a DSGE model and a DFM.t . the federal funds rate. The idiosyncratic components ξi. More- over. Second. Bernanke.t−q + u0.t .t evolve according to (114 ). and Ω0 is an arbitrary orthogonal matrix.3) and (ii) the κ × m matrix obtained by stacking the first κ γi ’s is composed of zeros. respectively. and Eliasz (2005) assume that (i) the κ × κ matrix obtained by stacking the first κ λi ’s equals the identity Iκ (as in Example 6. and the innovations to their law 2 of motion ui. Boivin.t = Φ0.t = Σ0.t−1 + . In contrast.4. with the difference that the variance-covariance matrix Σ0 is no longer restricted to be diagonal.t at all leads and lags. the observable vector y0. .t ∼ iidN (0.2. + Φ0. In particular. 
Bernanke.j matrices are now of size (κ + m) × (κ + m).t and γi are m × 1 and 1 × m vectors.t are subject to the distributional assumptions ui.t as in (21): u0. and Eliasz (2005) apply their model to study the effects of monetary policy shocks in the United States. 2010 87 where y0. Likewise.t is still assumed to be normally distributed with mean 0 and variance Σ0 . For given factors. u0. The appeal of the FAVAR is that it affords a combination of factor analysis with the structural VAR analysis described in Section 2. obtaining the posterior distribution for the parameters of (126) and (127) is straightforward.1 ft−1 y0. The Φ0.t ∼ N (0. Boivin. unanticipated changes in monetary policy only affect the factors with a one-period lag.tr Ω0 0. (127) which is the reason for the term factor augmented VAR. This identification implies that the central bank responds contemporaneously to the information contained in the factors.t relates to a vector of structural shocks 0. The innovation vector u0. At least in principle. conducting inference in a FAVAR is a straightforward application of the tools described in Section 6.t .2. one can assume that the vector of reduced-form shocks u0. In order to achieve identification.q ft−q y0.1. Schorfheide – Bayesian Macroeconometrics: April 18. They identify monetary policy shocks by imposing a short-run identification scheme where Ω0 is diagonal as in Example 2. Σ0 ).t and the unobservable factor ft are assumed to jointly follow a vector autoregressive process of order q: ft y0. the factors can be drawn using expressions (126) and the first κ equations of the VAR . σi ). we maintain the assumption that the innovations ui.t are independent across i and independent of u0.Del Negro. (128) where Σtr is the unique lower-triangular Cholesky factor of Σ0 with nonnegative 0 diagonal elements. . Mumtaz and Surico (2008) introduce time-variation in the law of motion of the factors (but not in any of the other parameters) and use their model to study cross-country inflation data. and so forth). 2010 88 in (127). The second innovation amounts to introducing stochastic volatility in the law of motion of the factors and the idiosyncratic shocks. Otrok.2. Schorfheide – Bayesian Macroeconometrics: April 18. comovements across countries may have changed as a result of increased financial or trade integration. the regional factors evolve according to a factor model in which the common components are the the world factors. Del Negro and Otrok (2008) accomplish that by modifying the standard factor model in two ways. Del Negro and Otrok (2008) apply this model to study the time-varying nature of international business cycles. in the attempt to determine whether the Great Moderation has country-specific or international roots. switches from fixed to flexible exchange rates. or because of monetary arrangements (monetary unions. This approach is more parsimonious than the one used by Kose. in the study of international business cycles – the application discussed in the previous section – the three levels of aggregation are country. and Whiteman (2003). we may want to allow for timevariation in the parameters of a factor model. This feature allows for changes in the sensitivity of individual series to common factors. Their approach entails building a hierarchical set of factor models. For concreteness. This feature accounts for changes in the relative importance of common factors and of idiosyncratic shocks.Del Negro. 
Only the most disaggregated factors – the countrylevel factors – would appear in the measurement equation (112). For instance. and world. Moench. Time-Varying Parameters: For the same reasons that it may be useful to allow parameter variation in a VAR as we saw in Section 5. as the measurement and transition equations. the country factors evolve according to a factor model in which the common components are the factors at the next level of aggregation (the regional factors). Both loadings and volatilities evolve according to a random walk without drift as in Cogley and Sargent (2005b). Hierarchical factors: Ng. and Potter (2008) pursue a modeling strategy different from the one outlined in Section 6. where the hierarchy is determined by the level of aggregation. In turn. First. they make the loadings vary over time. regional. respectively.1. in a state-space representation. Combining DSGE Models and Factor Models: Boivin and Giannoni (2006a) estimate a DSGE-DFM that equates the latent factors with the state variables . Similarly. φi. For instance. Y1:T ). Kryshko (2010) documents that the space spanned by the factors of a DSGE-DFM is very similar to the space spanned by factors extracted from an unrestricted DFM. . . Accordingly. that is. i = 1. . 2010 89 of a DSGE model. As before. λi . Details are provided in Boivin and Giannoni (2006a) and Kryshko (2010).p ] . investment. Using multiple (noisy) measures implicitly allows a researcher to obtain a more precise measure of DSGE model variables – provided the measurement errors are approximately independent – and thus sharpens inference about the DSGE model parameters and the economic state variables. whereas Step (iii) can be implemented with a modified version of the Random-Walk-Metropolis step described in Algorithm 4. Schorfheide – Bayesian Macroeconometrics: April 18. Since in the DSGE-DFM the latent factors have a clear economic interpretation. This relationship can be used to center a prior distribution for λi . Y1:T ). θDSGE . . i=1 (ii) the conditional distribution of F1:T given ({θi }n . hours worked. the factor dynamics are therefore subject to the restrictions implied by the DSGE model and take the form ft = Φ1 (θDSGE )ft−1 + Φ (θDSGE ) t . price inflation.t corresponds to log GDP. wages. wage rates. Equation (129) is then combined with measurement equations of the form (112). (129) where the vector ft now comprises the minimal set of state variables associated with the DSGE model and θDSGE is the vector of structural DSGE model parameters. . The solution of the stochastic growth model delivers a functional relationship between log GDP and the state variables of the DSGE model. and (iii) the i=1 distribution of θDSGE given ({θi }n . . inflation. Y1:T ). He then uses the DSGE-DFM to study the effect of unanticipated changes in technology . define θi = [ai . Steps (i) and (ii) resemble Steps 1 and 3 i=1 in Algorithm 6. Details of how to specify such a prior can be found in Kryshko (2010). .1. θDSGE . and so forth.Del Negro. consumption. σi . φi. In the context of the simple stochastic growth model analyzed in Section 4.1 . multiple measures of employment and labor usage.1. and interest rates to multiple observables. as well as the shocks that drive the economy. it is in principle much easier to elicit prior distributions for the loadings λi . this vector would contain the capital stock as well as the two exogenous processes. n. 
Inference in a DSGEDFM can be implemented with a Metropolis-within-Gibbs sampler that iterates over (i) the conditional posterior distributions of {θi }n given (F1:T . . suppose yi. Boivin and Giannoni (2006a) use their DSGE-DFM to relate DSGE model variables such as aggregate output. More specifically. there is uncertainty about the importance of such features in empirical models. which are elements of the vector section of macroeconomic variables.1: Consider the two (nested) models: M1 : M2 : yt = u t . Thus. ut ∼ iidN (0. yt = θ(2) xt + ut . θ(0) ∼ 0 with prob. denoted by πi. informational frictions. In the context of a DSGE model. Example 7. wage stickiness. λ . Researchers working with dynamic factor models are typically uncertain about the number of factors necessary to capture the comovements in a cross section of macroeconomic or financial variables. combined with great variation in the implications for policy across models. which is illustrated in the following example. Bayesian analysis allows us to place probabilities on the two models. 1). Mi ) and the prior density p(θ(i) |Mi ) are part of the specification of a model Mi . 1) with prob. N (0. both the likelihood function p(Y |θ(i) . ut ∼ iidN (0. in the context of VARs there is uncertainty about the number of lags and cointegration relationships as well as appropriate restrictions for identifying policy rules or structural shocks. a researcher might be uncertain whether price stickiness. 1 − λ . In a Bayesian framework.Del Negro. Then the mixture of M1 and M2 is equivalent to a model M0 M0 : yt = θ(0) xt +ut . θ(2) ∼ N (0. Here M1 restricts the regression coefficient θ(2) in M2 to be equal to zero. Suppose we assign prior probability π1. on a large cross 7 Model Uncertainty The large number of vector autoregressive and dynamic stochastic general equilibrium models encountered thus far. Schorfheide – Bayesian Macroeconometrics: April 18. or monetary frictions are quantitatively important for the understanding of business-cycle fluctuations and should be accounted for when designing monetary and fiscal policies. 1). makes the problem of model uncertainty a compelling one in macroeconometrics. ut ∼ iidN (0.0 . Model uncertainty is conceptually not different from parameter uncertainty. In view of the proliferation of hard-to-measure coefficients in time-varying parameter models. a model is formally defined as a joint distribution of data and parameters.0 = λ to M1 . 90 t in (129). 1). 2010 and monetary policy. 1). Mi )p(θ(i) |Mi )dθ(i) . which can all be nested in an unrestricted state-space model. . posterior model probabilities are often overly decisive. The remainder of this section is organized as follows.0 . (130) . in that one specification essentially attains posterior probability one and all other specifications receive probability zero. in particular those that are based on DSGE models. Section 7. which complicates the computation of the posterior distribution. M πj.0 p(Y1:T |Mi ) . Schorfheide – Bayesian Macroeconometrics: April 18.2. 2010 91 In principle.T = πi. The posterior model probabilities are given by πi. .3. a decision maker might be inclined to robustify her decisions. These issues are discussed in Section 7.1 Posterior Model Probabilities and Model Selection Suppose we have a collection of M models denoted by M1 through MM . as evident from the example. In view of potentially implausible posterior model probabilities. . for example VARs of lag length p = 1. However. . 
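The block structure of such samplers – alternate between drawing the loadings given the factors and the factors given the loadings – can be illustrated on a deliberately stripped-down static one-factor model with simulated data. The sketch below is only a toy stand-in for the DFM and DSGE-DFM samplers discussed above: the priors, sample sizes, and the assumption of known idiosyncratic variances are choices made purely so that both conditional posteriors are available as normal updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy static one-factor panel: y_{i,t} = lambda_i f_t + xi_{i,t}
n, T = 10, 200
lam_true = rng.normal(1.0, 0.5, size=n)
sig2 = np.full(n, 0.5)                 # idiosyncratic variances, treated as known
f_true = rng.normal(size=T)            # factor normalized to have unit variance
Y = np.outer(lam_true, f_true) + rng.normal(scale=np.sqrt(sig2)[:, None], size=(n, T))

# Gibbs sampler: iterate between (lambda | f, Y) and (f | lambda, Y).
# Priors: lambda_i ~ N(0, 10); f_t ~ N(0, 1) as a scale normalization.
nsim = 2000
lam, f = np.ones(n), np.zeros(T)
lam_draws = np.empty((nsim, n))
for s in range(nsim):
    # (i) loadings: independent normal regressions of y_i on the factor
    prec = f @ f / sig2 + 1.0 / 10.0
    mean = (Y @ f) / sig2 / prec
    lam = mean + rng.normal(size=n) / np.sqrt(prec)
    # (ii) factors: for each t, a normal posterior pooling the n observations
    prec_f = np.sum(lam**2 / sig2) + 1.0
    mean_f = (lam / sig2) @ Y / prec_f
    f = mean_f + rng.normal(size=T) / np.sqrt(prec_f)
    lam_draws[s] = lam

# Posterior means (second half of the chain) should track the true loadings,
# up to the sign indeterminacy that is implicit in factor models.
print("posterior mean loadings:", np.round(lam_draws[nsim // 2:].mean(axis=0), 2))
print("true loadings:          ", np.round(lam_true, 2))
```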
and it is useful to regard restricted versions of a large encompassing model as models themselves. . . and prior probability πi. 7. pmax and cointegration rank r = 1. Thus. this prior distribution would have to assign nonzero probability to certain lower-dimensional subspaces.1 discusses the computation of posterior model probabilities and their use in selecting among a collection of models. . We use a stylized optimal monetary policy example to highlight this point in Section 7. Rather than first selecting a model and then conditioning on the selected model in the subsequent analysis. Each model has a parameter vector θ(i) .0 p(Y1:T |Mj ) j=1 p(Y1:T |Mi ) = p(Y1:T |θ(i) . In many macroeconomic applications. n or a collection of linearized DSGE models. . it may be more desirable to average across models and to take model uncertainty explicitly into account when making decisions.Del Negro. in most of the applications considered in this chapter such an approach is impractical. a proper prior distribution p(θ(i) |Mi ) for the model parameters. one could try to construct a prior distribution on a sufficiently large parameter space such that model uncertainty can be represented as parameter uncertainty. These decisive probabilities found in individual studies are difficult to reconcile with the variation in results and model rankings found across different studies and therefore are in some sense implausible. t−1 . a proper prior could be obtained by replacing the dummy observations Y ∗ and X ∗ with presample observations. We briefly mentioned in Sections 2. Mi ). In the context of a VAR. we could regard observations Y1:T ∗ as presample and p(θ|Y1:T ∗ ) as a prior for θ that incorporates this presample information. it is important that p(θ|Y1:T ∗ ) be a proper density. It is beyond the scope of this chapter to provide a general discussion of the use of posterior model probabilities or odds ratios for model comparison. As long as the likelihood functions p(Y1:T |θ(i) . provided the prior model probabilities are also adjusted to reflect the presample information Y1:T ∗ . In turn.2 (hyperparameter choice for Minnesota prior) and 4. θ)p(θ|Y1:T ∗ )dθ.t−1 . who uses them to evaluate lag length . (131) log marginal likelihoods can be interpreted as the sum of one-step-ahead predictive scores. Y1. A survey is provided by Kass and Raftery (1995). The predictive score is small whenever the predictive distribution assigns a low density to the observed yt . Mi ) and prior densities p(θ(i) |Mi ) are properly normalized for all models. we shall highlight a few issues that are important in the context of macroeconometric applications. As before. and Villani (2001). Conditional on Y1:T ∗ .t−1 . who computes posterior odds for a collection of VARs and DSGE models. when making the prediction for yt . (132) The density p(YT ∗ +1:T |Y1:T ∗ ) is often called predictive (marginal) likelihood and can replace the marginal likelihood in (130) in the construction of posterior model probabilities.Del Negro. Schorfheide – Bayesian Macroeconometrics: April 18. Mi )p(θ(i) |Y1. 2010 92 where p(Y1:T |Mi ) is the marginal likelihood or data density associated with model Mi . Since in time-series models observations have a natural ordering. the posterior model probabilities are well defined. Two examples of papers that use predictive marginal likelihoods to construct posterior model probabilities are Schorfheide (2000). 
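For the stylized nested models of Example 7.1, the marginal likelihoods are available in closed form: under M1 the data are iidN(0, 1), and under M2, once θ(2) is integrated out, Y1:T is jointly normal with covariance matrix I + XX'. The sketch below computes the implied posterior model probabilities for simulated data, assuming equal prior probabilities and an arbitrary regressor chosen purely for illustration; the log-scale normalization guards against numerical underflow.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

rng = np.random.default_rng(1)
T = 50
x = rng.normal(size=T)              # illustrative regressor
y = rng.normal(size=T)              # data generated from M1 (theta = 0)

# Log marginal likelihoods:
# M1: y_t ~ iidN(0,1), no free parameters.
logml_1 = norm.logpdf(y).sum()
# M2: y = theta*x + u with theta ~ N(0,1) integrated out, so y ~ N(0, I + x x').
logml_2 = multivariate_normal(mean=np.zeros(T),
                              cov=np.eye(T) + np.outer(x, x)).logpdf(y)

# Posterior model probabilities with equal priors, computed stably on the
# log scale (log-sum-exp).
prior = np.array([0.5, 0.5])
w = np.log(prior) + np.array([logml_1, logml_2])
w -= w.max()
post = np.exp(w) / np.exp(w).sum()
print("posterior probabilities (M1, M2):", np.round(post, 3))
```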
The terms on the right-hand side of (131) provide a decomposition of the one-step-ahead predictive densities p(yt |Y1. Since for any model Mi T ln p(Y1:T |Mi ) = t=1 ln p(yt |θ(i) .3 (prior elicitation for DSGE models) that in practice priors are often based on presample (or training sample) information. Mi )dθ(i) . the marginal likelihood function for subsequent observations YT ∗ +1:T is given by p(YT ∗ +1:T |Y1:T ∗ ) = p(Y1:T ) = p(Y1:T ∗ ) p(YT ∗ +1:T |Y1:T ∗ . This decomposition highlights the fact that inference about the parameter θ(i) is based on time t − 1 information. the use of numerical procedures to approximate marginal likelihood functions is generally preferable for two reasons. First. A more detailed discussion of numerical approximation techniques for marginal likelihoods is provided in Chib (This Volume). While the exact marginal likelihood was not available for the DSGE models. There are only a few instances.1 that for a DSGE model. posterior inference is typically based on simulation-based methods. it can be computationally challenging. and the marginal likelihood approximation can often be constructed from the output of the posterior simulator with very little additional effort. We also mentioned in Section 4. which approximates ln p(Y |θ) + ln p(θ) by a quadratic function centered at the posterior mode or the maximum of the likelihood function. such as the VAR model in (1) with conjugate MNIW prior. Schorfheide (2000) compares Laplace approximations of marginal likelihoods for two small-scale DSGE models and bivariate VARs with 2-4 lags to numerical approximations based on a modified harmonic mean estimator. the discrepancy between the modified harmonic mean estimator and the Laplace approximation was around 0. While the results reported in Schorfheide (2000) are model and data specific. A more detailed discussion of predictive likelihoods can be found in Geweke (2005).02 for log densities. In fact. Finally. for priors represented through dummy observations the formula is given in (15).1 on a log scale. marginal likelihoods can be approximated analytically using a so-called Laplace approximation. The VARs were specified such that the marginal likelihood could be computed exactly. Schorfheide – Bayesian Macroeconometrics: April 18. in which the marginal likelihood p(Y ) = p(Y |θ)p(θ)dθ can be computed analytically. The approximation error of the numerical procedure was at most 0. the approximation error can be reduced to a desired level by increasing . The most widely used Laplace approximation is the one due to Schwarz (1978). Second. numerical approximations to marginal likelihoods can be obtained using Geweke (1999)’s modified harmonic mean estimator or the method proposed by Chib and Jeliazkov (2001). 2010 93 and cointegration rank restrictions in vector autoregressive models. whereas the error of the Laplace approximation was around 0.7.Del Negro. While the calculation of posterior probabilities is conceptually straightforward. which is known as Schwarz Criterion or Bayesian Information Criterion (BIC).5. or other models for which posterior draws have been obtained using the RWM Algorithm. An application of predictive likelihoods to forecast combination and model averaging is provided by Eklund and Karlsson (2007). Phillips (1996) and Chao and Phillips (1999) provide extensions to nonstationary time-series models and reduced-rank VARs. then under fairly general conditions the posterior probability assigned to that model will converge to one as T −→ ∞. 
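The decomposition of the log marginal likelihood into one-step-ahead predictive scores is easy to verify numerically in a conjugate setting. The sketch below uses a normal-mean model with known unit variance and a normal prior, chosen only because every density involved is then available in closed form, and checks that the sum of log predictive scores equals the log marginal likelihood computed in one step.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

rng = np.random.default_rng(2)
T = 40
y = 0.5 + rng.normal(size=T)        # illustrative data

m0, v0 = 0.0, 4.0                   # prior: mu ~ N(m0, v0); y_t | mu ~ N(mu, 1)

# Sequential one-step-ahead predictive scores:
# p(y_t | Y_{1:t-1}) = N(m_{t-1}, v_{t-1} + 1), with conjugate updating.
m, v = m0, v0
log_pred = 0.0
for t in range(T):
    log_pred += norm.logpdf(y[t], loc=m, scale=np.sqrt(v + 1.0))
    v_new = 1.0 / (1.0 / v + 1.0)   # posterior variance after observing y_t
    m = v_new * (m / v + y[t])      # posterior mean after observing y_t
    v = v_new

# Direct marginal likelihood: Y ~ N(m0 * 1, I + v0 * 1 1')
cov = np.eye(T) + v0 * np.ones((T, T))
log_ml = multivariate_normal(mean=np.full(T, m0), cov=cov).logpdf(y)

print("sum of log predictive scores:", round(log_pred, 6))
print("log marginal likelihood:     ", round(log_ml, 6))
```

Running the same loop only over the observations indexed T* + 1, . . . , T, with the prior replaced by the posterior based on the presample, delivers the predictive likelihood described above.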
The approximation error of these simulation-based estimates can be reduced to a desired level by increasing the number of parameter draws upon which the approximation is based.

Posterior model probabilities are often used to select a model specification upon which any subsequent inference is conditioned. While it is generally preferable to average across all model specifications with nonzero posterior probability, a model selection approach might provide a good approximation if the posterior probability of one model is very close to one, the probabilities associated with all other specifications are very small, and the loss of making inference or decisions based on the highest posterior probability model is not too large if one of the low-probability models is in fact correct. A rule for selecting one out of M models can be formally derived from the following decision problem. Suppose that a researcher faces a loss of zero if she chooses the "correct" model and a loss of αij > 0 if she chooses model Mi although Mj is correct. If the loss function is symmetric in the sense that αij = α for all i ≠ j, then it is straightforward to verify that the posterior expected loss is minimized by selecting the model with the highest posterior probability. A treatment of model selection problems under more general loss functions can be found, for instance, in Bernardo and Smith (1994).

If one among the M models M1, . . . , MM is randomly selected to generate a sequence of observations Y1:T, then under fairly general conditions the posterior probability assigned to that model converges to one as T −→ ∞. An early version of this result for general linear regression models was proved by Halpern (1974). Moreover, the consistency is preserved in nonstationary time-series models. Chao and Phillips (1999), for example, prove that the use of posterior probabilities leads to a consistent selection of cointegration rank and lag length in vector autoregressive models. In this sense, Bayesian model selection procedures are consistent from a frequentist perspective. The consistency result remains valid if the marginal likelihoods that are used to compute posterior model probabilities are replaced by Laplace approximations (see, for example, Schwarz (1978) and Phillips and Ploberger (1996)). These Laplace approximations highlight the fact that log marginal likelihoods can be decomposed into a goodness-of-fit term, comprising the maximized log likelihood function max_{θ(i)∈Θ(i)} ln p(Y1:T | θ(i), Mi), and a term that penalizes the dimensionality, which in the case of Schwarz's approximation takes the form −(ki/2) ln T, where ki is the dimension of the parameter vector θ(i). We shall elaborate on this point in Example 7.2 in Section 7.2.

7.2 Decision Making and Inference with Multiple Models

Economic policy makers are often confronted with choosing policies under model uncertainty. Moreover, policy decisions are often made under a fairly specific loss function that is based on some measure of welfare. This welfare loss function might either be fairly ad hoc – for example, the variability of aggregate output and inflation – or micro-founded albeit model-specific – for instance, the utility of a representative agent in a DSGE model. The optimal decision from a Bayesian perspective is obtained by minimizing the expected loss under a mixture of models. Conditioning on the highest posterior probability model can lead to suboptimal decisions. At a minimum, the decision maker should account for the loss of a decision that is optimal under Mi if in fact one of the other models Mj, j ≠ i, is correct. The following example provides an illustration. (Chamberlain (This Volume) studies the related decision problem of an individual who chooses between two treatments from a Bayesian perspective.)

Example 7.2: Suppose that output yt and inflation πt are related to each other according to one of the two Phillips curve relationships

    Mi :  yt = θ(Mi) πt + εs,t,   εs,t ∼ iidN(0, 1),          (133)

where εs,t is a cost (supply) shock. Assume that the demand side of the economy leads to the following relationship between inflation and money mt:

    πt = mt + εd,t,   εd,t ∼ iidN(0, 1),                      (134)

where εd,t is a demand shock. All variables in this model are meant to be in log deviations from some steady state. Finally, assume that up until period T monetary policy was mt = 0. In period T, the central bank is considering a class of new monetary policies, indexed by δ:

    mt = −εd,t + δ εs,t.                                      (135)

δ controls the strength of the central bank's reaction to supply shocks. This class of policies is evaluated under the loss function

    Lt = πt^2 + yt^2.                                         (136)

If one averages with respect to the distribution of the supply shocks, the expected period loss associated with a particular policy δ under model Mi is

    L(Mi, δ) = (δ θ(Mi) + 1)^2 + δ^2.                         (137)

From a Bayesian perspective it is optimal to minimize the posterior risk (expected loss), taking into account the posterior model probabilities πi,T, which in this example is given by

    R(δ) = π1,T L(M1, δ) + π2,T L(M2, δ).                     (138)

Here, πi,T denotes the posterior probability of model Mi at the end of period T. We will derive the optimal decision and compare it with two suboptimal procedures that are based on a selection step. To provide a numerical illustration, we let θ(M1) = 1/10, θ(M2) = 1, π1,T = 0.61, and π2,T = 0.39. A straightforward calculation leads to δ* = argmin_δ R(δ) = −0.32, and the posterior risk associated with this decision is R(δ*) = 0.85.

First, suppose that the policy maker had proceeded in two steps: (i) select the highest posterior probability model; and (ii) conditional on this model, determine the optimal choice of δ. The highest posterior probability model is M1, and, conditional on M1, it is optimal to set δ*(M1) = −0.10. The risk associated with this decision is R(δ*(M1)) = 0.92, which is larger than R(δ*) and shows that it is suboptimal to condition the decision on the highest posterior probability model.

Second, suppose that the policy maker relies on two advisors A1 and A2. Advisor Ai recommends that the policy maker implement the decision δ*(Mi), which minimizes the posterior risk if only model Mi is considered. If the policy maker implements the recommendation of advisor Ai, then Table 4 provides the matrix of relevant expected losses. Notice that there is a large loss associated with δ*(M2) if in fact M1 is the correct model. Nonetheless, it is preferable to implement the recommendation of advisor A2 because R(δ*(M2)) < R(δ*(M1)), even though the posterior odds favor the model entertained by A1.

Third, while choosing between δ*(M1) and δ*(M2) is preferable to conditioning on the highest posterior probability model, the best among the two decisions, δ*(M2), is inferior to the optimal decision δ*, which is obtained by minimizing the overall posterior expected loss. In fact, in this numerical illustration the gain from averaging over models is larger than the difference between R(δ*(M1)) and R(δ*(M2)).

Table 4: Expected Losses

                     δ* = −0.32    δ*(M1) = −0.10    δ*(M2) = −0.50
    Loss under M1       1.04            0.99              1.15
    Loss under M2       0.56            0.82              0.50
    Risk R(δ)           0.85            0.92              0.90

Cogley and Sargent (2005a) provide a nice macroeconomic illustration of the notion that one should not implement the decision of the highest posterior probability model if it has disastrous consequences in case one of the other models is correct. The authors consider a traditional Keynesian model with a strong output and inflation trade-off versus a model in which the Phillips curve is vertical in the long run. According to Cogley and Sargent's analysis, the posterior probability of the Keynesian model was already very small by the mid-1970s, and the natural rate model suggested implementing a disinflation policy. However, the costs associated with this disinflation were initially very high if, in fact, the Keynesian model provided a better description of the U.S. economy. The authors conjecture that this consideration may have delayed the disinflation until about 1980.

In more realistic applications, the two simple models would be replaced by more sophisticated DSGE models. These models would themselves involve unknown parameters. Often, loss depends on future realizations of yt. Consider, for instance, a prediction problem. In this case, predictive distributions are important. The h-step-ahead predictive density is given by the mixture

    p(yT+h | Y1:T) = Σ_{i=1}^{M} πi,T p(yT+h | Y1:T, Mi).     (139)

Thus, p(yT+h | Y1:T) is the result of the Bayesian averaging of model-specific predictive densities p(yT+h | Y1:T, Mi). Notice that only if the posterior probability of one of the models is essentially equal to one does conditioning on the highest posterior probability model lead to approximately the same predictive density as model averaging. There exists an extensive literature on applications of Bayesian model averaging.
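Before turning to that literature, note that the numbers reported in Example 7.2 and in Table 4 follow from (137) and (138) by elementary calculus and can be reproduced with a few lines of code; the values θ(M1) = 0.1, θ(M2) = 1, and π1,T = 0.61 are those of the illustration above, and the numerical optimizer is used only to keep the sketch short (the minimizers of these quadratic objectives are also available analytically).

```python
import numpy as np
from scipy.optimize import minimize_scalar

theta = {"M1": 0.1, "M2": 1.0}
pi_T = {"M1": 0.61, "M2": 0.39}

def loss(model, delta):
    # Expected period loss (137): L(M_i, delta) = (delta*theta_i + 1)^2 + delta^2
    return (delta * theta[model] + 1.0) ** 2 + delta ** 2

def risk(delta):
    # Posterior risk (138)
    return sum(pi_T[m] * loss(m, delta) for m in theta)

d_bayes = minimize_scalar(risk).x
d_m1 = minimize_scalar(lambda d: loss("M1", d)).x
d_m2 = minimize_scalar(lambda d: loss("M2", d)).x

for label, d in [("delta*", d_bayes), ("delta*(M1)", d_m1), ("delta*(M2)", d_m2)]:
    print(f"{label:11s} = {d:6.2f}   L(M1)={loss('M1', d):.2f}   "
          f"L(M2)={loss('M2', d):.2f}   R={risk(d):.2f}")
```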
2010 98 Min and Zellner (1993) use posterior model probabilities to combine forecasts. and Miller (2004) uses a simplified version of Bayesian model averag- . George. leading to a coefficient matrix Φ with 68 elements. Doppelhofer. In a nutshell. George. The authors also provide detailed references to the large literature on Bayesian variable selection in problems with large sets of potential regressors. However. Schorfheide – Bayesian Macroeconometrics: April 18.Del Negro. Doppelhofer. An MCMC algorithm then iterates over the conditional posterior distribution of model parameters and variable selection indicators. and Sun (2008) develop a stochastic search variable selection algorithm for a VAR that automatically averages over high posterior probability submodels. If the goal is to generate point predictions under a quadratic loss function. and Sun (2008) introduce binary indicators that determine whether a coefficient is restricted to be zero.1. Sala-i Martin. which involved a 4-variable VAR with 4 lags. Consider the empirical Illustration 2. as is typical of stochastic search applications. Even if one restricts the set of submodels by requiring that a subset of the VAR coefficients are never restricted to be zero and one specifies a conjugate prior that leads to an analytical formula for the marginal likelihoods of the submodels. The paper by Sala-i Martin. and Miller (2004). Ni. As an alternative. using the posterior model probabilities as weights. Bayesian model averaging is an appealing procedure. Del Negro, Schorfheide – Bayesian Macroeconometrics: April 18, 2010 99 ing, in which marginal likelihoods are approximated by Schwarz (1978)’s Laplace approximation and posterior means and covariances are replaced by maxima and inverse Hessian matrices obtained from a Gaussian likelihood function. 7.3 Difficulties in Decision-Making with Multiple Models While Bayesian model averaging is conceptually very attractive, it very much relies on the notion that the posterior model probabilities provide a plausible characterization of model uncertainty. Consider a central bank deciding on its monetary policy. Suppose that a priori the policy makers entertain the possibility that either wages or prices of intermediate goods producers are subject to nominal rigidities. Moreover, suppose that – as is the case in New Keynesian DSGE models – these rigidities have the effect that wage (or price) setters are not able to adjust their nominal wages (prices) optimally, which distorts relative wages (prices) and ultimately leads to the use of an inefficient mix of labor (intermediate goods). The central bank could use its monetary policy instrument to avoid the necessity of wage (price) adjustments and thereby nullify the effect of the nominal rigidity. Based on the tools and techniques in the preceding sections, one could now proceed by estimating two models, one in which prices are sticky and wages are flexible and one in which prices are flexible and wages are sticky. Results for such an estimation, based on a variant of the Smets and Wouters (2007) models, have been reported, for instance, in Table 5 of Del Negro and Schorfheide (2008). According to their estimation, conducted under various prior distributions, U.S. data favor the sticky price version of the DSGE model with odds that are greater than e40 . Such odds are not uncommon in the DSGE model literature. If these odds are taken literally, then under relevant loss functions we should completely disregard the possibility that wages are sticky. 
In a related study, Del Negro, Schorfheide, Smets, and Wouters (2007) compare versions of DSGE models with nominal rigidities in which those households (firms) that are unable to reoptimize their wages (prices) are indexing their past price either by the long-run inflation rate or by last period’s inflation rate (dynamic indexation). According to their Figure 4, the odds in favor of the dynamic indexation are greater than e20 , which again seems very decisive. Schorfheide (2008) surveys a large number of DSGE model-based estimates of price and wage stickiness and the degree of dynamic indexation. While the papers included in this survey build on the same theoretical framework, variations in some Del Negro, Schorfheide – Bayesian Macroeconometrics: April 18, 2010 100 details of the model specification as well as in the choice of observables lead to a significant variation in parameter estimates and model rankings. Thus, posterior model odds from any individual study, even though formally correct, appear to be overly decisive and in this sense implausible from a meta perspective. The problem of implausible odds has essentially two dimensions. First, each DSGE model corresponds to a stylized representation of a particular economic mechanism, such as wage or price stickiness, augmented by auxiliary mechanisms that are designed to capture the salient features of the data. By looking across studies, one encounters several representations of essentially the same basic economic mechanism, but each representation attains a different time-series fit and makes posterior probabilities appear fragile across studies. Second, in practice macroeconometricians often work with incomplete model spaces. That is, in addition to the models that are being formally analyzed, researchers have in mind a more sophisticated structural model, which may be too complicated to formalize or too costly (in terms of intellectual and computational resources) to estimate. In some instances, a richly parameterized vector autoregression that is only loosely connected to economic theory serves as a stand-in. In view of these reference models, the simpler specifications are potentially misspecified. For illustrative purpose, we provide two stylized examples in which we explicitly specify the sophisticated reference model that in practice is often not spelled out. Example 7.3: Suppose that a macroeconomist assigns equal prior probabilities to 2 2 two stylized models Mi : yt ∼ iidN (µi , σi ), i = 1, 2, where µi and σi are fixed. In addition, there is a third model M0 in the background, given by yt ∼ iidN (0, 1). For the sake of argument, suppose it is too costly to analyze M0 formally. If a sequence of T observations were generated from M0 , the expected log posterior odds of M1 versus M2 would be I 0 ln E π1,T π2,T = I 0 − E − − = − T 1 2 ln σ1 − 2 2 2σ1 T (yt − µ1 )2 t=1 T T 1 2 ln σ2 − 2 2 2σ2 (yt − µ2 )2 t=1 1 T 1 T 2 2 ln σ1 + 2 (1 + µ2 ) + ln σ2 + 2 (1 + µ2 ) , 1 2 2 2 σ1 σ2 where the expectation is taken with respect to y1 , . . . , yT under M0 . Suppose that the location parameters µ1 and µ2 capture the key economic concept, such as wage Del Negro, Schorfheide – Bayesian Macroeconometrics: April 18, 2010 101 or price stickiness, and the scale parameters are generated through the various auxiliary assumptions that are made to obtain a fully specified DSGE model. 
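Before continuing the argument, note that the expectation above, which (once the 2π terms cancel) reduces to −(T/2)[ln σ1^2 + (1 + µ1^2)/σ1^2 − ln σ2^2 − (1 + µ2^2)/σ2^2], can be checked by simulating data from the reference model M0. The parameter values in the sketch below are arbitrary and serve only to illustrate how differences in the auxiliary scale parameters can dominate the comparison.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100
mu1, s1 = 0.5, 1.0      # model M1: N(mu1, s1^2)
mu2, s2 = -0.5, 2.0     # model M2: N(mu2, s2^2), different auxiliary assumptions

def log_odds(y):
    # log p(Y|M1) - log p(Y|M2) for iid normal models with fixed parameters
    ll1 = -0.5 * len(y) * np.log(2 * np.pi * s1**2) - np.sum((y - mu1)**2) / (2 * s1**2)
    ll2 = -0.5 * len(y) * np.log(2 * np.pi * s2**2) - np.sum((y - mu2)**2) / (2 * s2**2)
    return ll1 - ll2

# Monte Carlo average over data generated from the reference model M0 = N(0,1)
sims = np.array([log_odds(rng.normal(size=T)) for _ in range(2000)])

# Analytic expectation from Example 7.3
analytic = -0.5 * T * (np.log(s1**2) + (1 + mu1**2) / s1**2
                       - np.log(s2**2) - (1 + mu2**2) / s2**2)
print("Monte Carlo mean of log odds:", round(sims.mean(), 1))
print("analytic expectation:        ", round(analytic, 1))
```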
If the 2 2 two models are based on similar auxiliary assumptions, that is, σ1 ≈ σ2 , then the posterior odds are clearly driven by the key economic contents of the two models. If, however, the auxiliary assumptions made in the two models are very different, it is possible that the posterior odds and hence the ranking of models M1 and M2 are 2 2 dominated by the auxiliary assumptions, σ1 and σ2 , rather than by the economic contents, µ1 and µ2 , of the models. Example 7.4: This example is adapted from Sims (2003). Suppose that a researcher considers the following two models. M1 implies yt ∼ iidN (−0.5, 0.01) and model M2 implies yt ∼ iidN (0.5, 0.01). There is a third model, M0 , given by yt ∼ iidN (0, 1), that is too costly to be analyzed formally. The sample size is T = 1. Based on equal prior probabilities, the posterior odds in favor of model M1 are π1,T 1 = exp − [(y1 + 1/2)2 − (y1 − 1/2)2 ] π2,T 2 · 0.01 = exp {−100y1 } . Thus, for values of y1 less than -0.05 or greater than 0.05 the posterior odds are greater than e5 ≈ 150 in favor of one of the models, which we shall term decisive. The models M1 (M2 ) assign a probability of less than 10−6 outside the range [−0.55, −0.45] ([0.45, 0.55]). Using the terminology of the prior predictive checks described in Section 4.7.2, for observations outside these ranges one would conclude that the models have severe difficulties explaining the data. For any observation falling into the intervals (−∞, −0.55], [−0.45, −0.05], [0.05, 0.45], and [0.55, ∞), one would obtain decisive posterior odds and at the same time have to conclude that the empirical observation is difficult to reconcile with the models M1 and M2 . At the same time, the reference model M0 assigns a probability of almost 0.9 to these intervals. As illustrated through these two stylized examples, the problems in the use of posterior probabilities in the context of DSGE models are essentially twofold. First, DSGE models tend to capture one of many possible representations of a particular economic mechanism. Thus, one might be able to find versions of these models that preserve the basic mechanisms but deliver very different odds. Second, the models often suffer from misspecification, which manifests itself through low posterior probabilities in view of more richly parameterized vector autoregressive models that are less tightly linked to economic theory. Posterior odds exceeding e50 in a sample of T ].T ) ln(1 − qπ1.T )L(M2 . Sims (2003) recommends introducing continuous parameters such that different sub-model specifications can be nested in a larger encompassing model. To ensure that the distorted probability of M1 lies in the unit interval. The second term in (140) penalizes the distortion as a function of the Kullback-Leibler divergence between the undistorted and distorted probabilities. a policy maker might find it attractive to robustify her decision. Hence. τ (140) Here. Schorfheide – Bayesian Macroeconometrics: April 18.T L(M1 . This concern can be represented through the following game between the policy maker and a fictitious adversary. If. then the penalty is infinite and nature will not distort π1. a proper characterization of posterior uncertainty about the strength of various competing decision-relevant economic mechanisms remains a challenge. This pooling amounts essentially to creating a convex combination of onestep-ahead predictive distributions. Underlying this robustness is often a static or dynamic two-person zero-sum game. Example 7. 
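Example 7.4 is equally easy to verify numerically: with equal priors and T = 1 the posterior odds reduce to exp{−100 y1}, and the reference model M0 = N(0, 1) places roughly 0.9 probability on the region in which the odds are decisive even though both M1 and M2 assign essentially zero density to the observation. A short check using only the normal cdf:

```python
import numpy as np
from scipy.stats import norm

# Posterior odds of M1 versus M2 in Example 7.4 (equal priors, T = 1):
# pi_1/pi_2 = exp{-[(y1 + 0.5)^2 - (y1 - 0.5)^2] / (2 * 0.01)} = exp{-100 * y1}
for y1 in [-0.1, -0.05, 0.0, 0.05, 0.1]:
    print(f"y1 = {y1:5.2f}   posterior odds M1:M2 = {np.exp(-100 * y1):10.3g}")

# Probability that M0 = N(0,1) assigns to the union of intervals
# (-inf,-0.55], [-0.45,-0.05], [0.05,0.45], [0.55,inf), on which the odds are
# decisive while both M1 and M2 fit the observation very poorly.
F = norm.cdf
prob = (F(-0.55)
        + (F(-0.05) - F(-0.45))
        + (F(0.45) - F(0.05))
        + (1 - F(0.55)))
print("M0 probability of the 'decisive but poorly fitting' region:", round(prob, 3))
```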
then conditional on a particular δ nature will set .2.Del Negro. δ) + (1 − qπ1. In view of these practical limitations associated with posterior model probabilities.T ] qπ1. Continued: Recall the monetary policy problem described at the beginning of this section. δ) + 1 π1.T . The time-invariant weights of this mixture of models is then estimated by maximizing the log predictive score for this mixture (see Expression (131)). the domain of q is restricted to [0. In fact. τ = ∞. 2010 102 120 observations are suspicious (to us) and often indicate that we should compare different models or consider a larger model space. Suppose scepticism about the posterior probabilities π1.T ) . The downside of creating these encompassing models is that it is potentially difficult to properly characterize multimodal posterior distributions in high-dimensional parameter spaces.T ln(qπ1.2. however.1/π1. If τ is equal to zero.T ) + (1 − π1. which we illustrate in the context of Example 7. called nature: min δ max q∈[0.T generates some concern about the robustness of the policy decision to perturbations of these model probabilities. which are derived from individual models. Geweke (2010) proposes to deal with incomplete model spaces by pooling models. nature uses q to distort the posterior model probability of model M1 . 1/π1. there is a growing literature in economics that studies the robustness of decision rules to model misspecification (see Hansen and Sargent (2008)).T and π2. the Nash equilibrium is summarized in Table 5. Schorfheide – Bayesian Macroeconometrics: April 18.T if L(M1 . δ) in the relevant region for δ.30 10. These adjustments may reflect some scepticism about the correct formalization of the relevant economic mechanisms as well as the availability of information that is difficult to process in macroeconometric models such as VARs and DSGE models.00 -0.32 1.60 -0.19 100 1.00 1. δ) > L(M2 . L(M1 . and in response the policy maker reduces (in absolute terms) her response δ to a supply shock. Thus.0 1.10 -0. . For selected values of τ . 2010 103 Table 5: Nash Equilibrium as a Function of Risk Sensitivity τ τ q ∗ (τ ) δ ∗ (τ ) 0. The particular implementation of robust decision making in Example 7. In our numerical illustration. While it is our impression that in actual decision making a central bank is taking the output of formal Bayesian analysis more and more seriously.00 1. δ) and q = 0 otherwise. δ) > L(M2 .43 -0.Del Negro. the final decision about economic policies is influenced by concerns about robustness and involves adjustments of model outputs in several dimensions.12 q = 1/π1.2 is very stylized. nature has an incentive to increase the probability of M1 . S. and F.” Econometric Reviews. 445–462. Hoboken.” American Economic Review.” Econometric Reviews.. An.” Econometric Reviews. B. 655–673. J. (2007b): “Bayesian Analysis of DSGE Models–Rejoinder. 26(2-4). Rubio-Ram´ ırez (2004): “Comparing Solution Methods for Dynamic Equilibrium Economies. Canova. 2477–2508. and M. . Schorfheide (2007a): “Bayesian Analysis of DSGE Models. F. De Nicolo (2002): “Monetary Disturbances Matter for Business Fluctuations in the G-7. Villani (2007): “Forecasting Performance of an Open Economy Dynamic Stochastic General Equilibrium Model.Del Negro. 12772. J.” Journal of Economic Dynamics and Control.” International Economic Review. 2010 104 References ´ Adolfson. 30(4). 9. 113–172. and P. (2006b): “Has Monetary Policy Become More Effective. ´ Aruoba. Bernardo.” Journal of Monetary Economics. 88(3). Blanchard. 
J.” Review of Economics and Statistics. Boivin. 49(4). Altug. M. Giannoni (2006a): “DSGE Models in a Data Rich Enviroment. S.” Quarterly Journal of Economics. (1989): “Time-to-Build and Aggregate Fluctuations: Some New Evidence.” International Economic Review. S. Eliasz (2005): “Measuring the Effects of Monetary Policy. 79(4). ´ Canova. 929–959.. and M. E.. Boivin. Schorfheide – Bayesian Macroeconometrics: April 18.. 26(2-4). F. and M. P.. 50(3). Bernanke. 387–422. J. S. John Wiley & Sons. J. 26(2-4). Canova. B. J. 889–920. Quah (1989): “The Dynamic Effects of Aggregate Demand and Supply Disturbances.. 120(1). and G. 1131– 1159. S123–144. Fernandez-Villaverde..” NBER Working Paper. F. Ciccarelli (2009): “Estimating Multi-country VAR Models. and J. 30(12). F. 289–328. 211–219. F.” Journal of Applied Econometrics. and D. Linde.. O. (1994): “Statistical Inference in Calibrated Models.. and A. Smith (1994): Bayesian Theory. Greenberg (1994): “Bayes Inference in Regression Models with ARMA(p. N. 39(6). van Dijk. 1357–1373. van Dijk.” in Handbook of Bayesian Econometrics. and E. Koop.” Biometrika. 541–553. G. and H. Phillips (1999): “Model Selection in Partially-Nonstationary Vector Autoregressive Processes with Reduced Rank Structure. P. Chib. 2010 105 Canova. Y. J. and R. 33(2). and F. C. 270– 281...” Journal of Econometrics. Chang. Kohn (1994): “On Gibbs Sampling for State Space Models. Jeliazkov (2001): “Marginal Likelihoods from the Metropolis Hastings Output. Gambetti (2009): “Structural Changes in the US Economy: Is There a Role for Monetary Policy?.” Journal of Econometrics. V. S. . Chib. Chopin. Doh. 123(2). 155(1). Chao.. C. Oxford University Press. Carter. Chib. Oxford University Press. Ramamurthy (2010): “Tailored Randomized Block MCMC Methods with Application to DSGE Models. K. (This Volume): “Introduction to Simulation and MCMC Methods. G.Del Negro. Chib. Kehoe.” Journal of Money.. T. R. and I. 91(2). Schorfheide – Bayesian Macroeconometrics: April 18. F.” Journal of Econometrics. by J.” in Handbook of Bayesian Econometrics. S. 81(3). Chamberlain. G. (This Volume): “Bayesian Aspects of Treatment Choice. and E. 55(8). and S. Schorfheide (2007): “Non-stationary Hours in a DSGE Model. McGrattan (2008): “Are Structural VARs with Long-Run Restrictions Useful in Developing Business Cycle Theory?... and H.” Journal of Econometrics. ed. by J. Credit. Geweke. S. 19–38.” Journal of Economic Dynamics and Control. and Banking. 96(453).. Geweke. ed. 327–244. J. Koop. Chari. 1337–1352. K. 477–490. S. Pelgrin (2004): “Bayesian Inference and State Number Determination for Hidden Markov Models: An Application to the Information Content of the Yield Curve about Inflation. 183–206.q) Errors. 64(1-2)..” Journal of the American Statistical Association. and F..” Journal of Monetary Economics. and P. V. 227–271. and L. ed. 528–563. S. pp. and T. J. (2005): “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy. by D. J. and R. T. Cogley. Cambridge. 65–148. 1–72. 113(1).” Journal of Political Economy. B. and K.” in Handbook of Macroeconomics. (1994): “Shocks. H.. . Cambridge. Cogley. Indexation. 147–180. vol. Woodford. Cogley. Clarida.. M. 21. Rogoff. 295–364. J. Inflation Dynamics. T. Eichenbaum. Sargent (2002): “Evolving Post-World War II U. J. 98(5)... Leeper (2007): “Generalizing the Taylor Principle. J. M. T. Vigfusson (2007): “Assessing Structural VARs. 1a.Del Negro. 16. S. M. M. Christiano. 262–302. and A. chap. 2101–2126. 8(2). 2010 106 Christiano. 
(2005b): “Drifts and Volatilities: Monetary Policies and Outcomes in the Post-WWII US. Morozov. Inflation: Forecasting Sources of Uncertainty in an Evolving Monetary System. MIT Press. 29(11). Rogoff. pp. Cochrane. 2. 1–45. Davig. 8(2). Evans (1999): “Monetary Policy Shocks: What Have We Learned and to What End. Eichenbaum..” in NBER Macroeconomics Annual 2006. North Holland.” in NBER Macroeconomics Annual 2001.S. 331–88.K. Woodford.. (2005a): “The Conquest of US Inflation: Learning and Robustness to Model Uncertainty. Schorfheide – Bayesian Macroeconometrics: April 18. and C. and M. L. ed. by J. T. J. Sbordone (2008): “Trend Inflation. 97(3).” Journal of Economic Dynamics and Control.” American Economic Review. and M. L. and E. 41(4). Acemoglu. and M. 1893–1925.. and Inflation Persistence in the New Keynesian Phillips Curve. Sargent (2005): “Bayesian Fan Charts for U. Bernanke. ed. 115(1).” Review of Economic Dynamics. Amsterdam. MIT Press. Gertler (2000): “Monetary Policy Rules and Macroeconomic Stability: Evidence and Some Theory. Gali. vol. by B. L.” Quarterly Journal of Economics.” Carnegie Rochester Conference Series on Public Policy. pp. R.” Review of Economic Dynamics. vol. Taylor.” American Economic Review. K. and T. 607–635. N. 123–162.. . F. 1191–1208. 2003-06.” Journal of Econometrics. Reichlin (2008): “Forecasting Using a Large Number of Predictors: Is Bayesian Shrinkage a Valid Alternative to Principal Components?.. 14(4). Litterman. and L. M. Whiteman (1996): “A Bayesian Approach to Calibration. 325.” Journal of Monetary Economics. DeJong. Giannone. D. and R.” Federal Reserve Bank of New York Staff Report.” Journal of Business Economics and Statistics. Ingram. (2008): “Dynamic Factor Models with Time-Varying Parameters. M. Doan. and C. Del Negro. Smets.. C. 45(2). Otrok (2007): “99 Luftballoons: Monetary Policy and the House Price Boom Across the United States. 3(4). M. T. Schorfheide (2004): “Priors from General Equilibrium Models for VARs.” Journal of Monetary Economics. Del Negro. 98(2). H. Sims (1984): “Forecasting and Conditional Projections Using Realistic Prior Distributions. 1– 100. (2009): “Monetary Policy with Potentially Misspecified Models. D.” Journal of Business and Economic Statistics. 1415–1450.. (2000): “A Bayesian Approach to Dynamic Macroeconomics. 54(7). M. A. and C. Wouters (2007): “On the Fit of New Keynesian Models.. 2010 107 De Mol. 643 – 673. and F.. Measuring Changes in International Business Cycles. 99(4). 146(2).. B. F.” American Economic Review. 25(2). 1962–1985. Schorfheide – Bayesian Macroeconometrics: April 18.” Econometric Reviews.” Journal of Econometrics.Del Negro. 318–328. F.” International Economic Review. Del Negro. (2008): “Forming Priors for DSGE Models (and How it Affects the Assessment of Nominal Rigidities). 55(7). R. 203 – 223. and C. 1–9. Del Negro. Schorfheide. (2003): “Discussion of Cogley and Sargent’s ‘Drifts and Volatilities: Monetary Policy and Outcomes in the Post WWII US’.” Federal Reserve Bank of Atlanta Working Paper. Woodford.. North Holland. and a DSGE Model.” Federal Reserve Board of Governors Finance and Economics Discussion Paper Series. Waggoner. Steel (2001): “Model uncertainty in cross-country growth regressions. (1996): “Bayesian Reduced Rank Regression in Econometrics.” Journal of Economic Theory. 26(2-4). R. and S. J. 144(5). F. 329–363.” Econometrica. J. ed. Chicago. J. E. and A. (1998): “The Robustness of Identified VAR Conclusions about Money. 55(2). 2009-10. Faust. 553–580. Simple Reduced-Form Models.. 
D.-P. Rogoff. Ley. and M.” Review of Economic Studies. Aigner. R. and C. Engle. and J. 251–276..” Journal of Applied Econometrics. C. W. I. F. Zha (2009): “Understanding Markov Switching Rational Expectations Models. 16(5). Fernandez.. Laforte (2009): “A Comparison of Forecast Performance Between Federal Reserve Staff Forecasts. Amsterdam. 49(4). ´ Fernandez-Villaverde.” Carnegie Rochester Conference Series on Public Policy. Granger (1987): “Co-Integration and Error Correction: Representation. vol.. 75(1). K. by D. Rubio-Ram´ ırez (2007): “Estimating Macroeconomic Models: A Likelihood Approach. F. R. and J. University of Chicago Press. J. chap. S. George. 121–146. by D. and Testing. Kiley. 1849– 1867.. 22. Ni. (2008): “How Structural are Structural Parameters?.Del Negro.” in NBER Macroeconomics Annual 2007. J. Eklund. Karlsson (2007): “Forecast Combination and Model Averaging Using Predictive Measures. 2010 108 Edge. and T. J. 142(1). . Schorfheide – Bayesian Macroeconometrics: April 18. E. S.” Journal of Econometrics. 207–244. Sun (2008): “Bayesian Stochastic Search for VAR Model Restrictions. Goldberger. 74(4). 563– 576. and D. ed. 1059–1087. Farmer. and M. (1977): “The Dynamic Factor Analysis of Economic Time Series.” Econometric Reviews.. Geweke. Acemoglu. Estimation. 19.” in Latent Variables in Socio-Economic Models. University of Chicago Press. M.” Journal of Econometrics. and T. Waggoner. E. Halpern. 60–64. Princeton. J. D. Elliott. 2(4). L. . W. Pitt..” Review of Financial Studies. 441–454. (2010): Complete and Incomplete Econometric Models. Schorfheide – Bayesian Macroeconometrics: April 18.” American Economic Review Papers and Proceedings.Del Negro. 3–80. Princeton University Press. and H. 18(1). and Communication. Geweke. H. D. F. and C. Giordani.” Econometric Reviews. Development. Geweke. D. J. (1974): “Posterior Consistency for Coefficient Estimation and Model Selection in the General Linear Hypothesis. Amsterdam. vol.” Econometric Reviews. Geweke. 14(5). 2010 109 (1999): “Using Simulation Methods for Bayesian Econometric Models: Inference.” Journal of Time Series Analysis. Hamilton. 26(2-4)... G. 9(2). by G.. 1– 126. K. Zha (2007): “Normalization in Econometrics. J. P. 1. and R. ed.” in Handbook of Economic Forecasting. Oxford University Press.” in Handbook of Bayesian Econometrics. and G. P. Zhou (1996): “Measuring the Pricing Error of the Arbitrage Pricing Theory. Hoboken.” Annals of Statistics. Terui (1993): “Bayesian Threshold Autoregressive Models for Nonlinear Time Series. Sargent (2008): Robustness. van Dijk. and T. (2005): Contemporary Bayesian Econometrics and Statistics. pp. (1989): “A New Approach to the Economic Analysis of Nonstationary Time Series and the Business Cycle. 57(2). J. Granger. and A.. Hamilton.. Kohn (This Volume): “Bayesian Inference for Time Series State Space Models. J. North Holland. 557–587. (2007): “Bayesian Model Comparison and Validation. by J. Hansen. 703–712. 221–252. K. 357–384. Koop. C. Princeton.” Econemetrica. John Wiley & Sons. Timmermann. Princeton University Press. 97. Whiteman (2006): “Bayesian Forecasting. M. ed. J. Geweke. and N. and C. E.. T. North Holland. Oxford University Press. Justiniano. (2004): “A Method for Taking Models to the Data. B. J.. van Dijk. 12(2). by S. A. 2010 110 Ingram. and A. Geweke.” in Handbook of Bayesian Econometrics. Tambalotti (2009): “Investment Shocks and Business Cycles. (1991): “Estimation and Hypothesis Testing of Cointegration Vectors in Gaussian Vector Autoregressive Models. and P. 
(1974): “The Role of Identification in Bayesian Theory.. Johansen. 1205–1226. Whiteman (1994): “Supplanting the Minnesota Prior. Justiniano. ed.” Econometrica.. . G. 59(6).” NBER Working Paper. 1551–1580. 1131–1159. pp. by J.. 98(3).” Journal of Economic Dynamics and Control. R. Oxford University Press. and N. G. Jacquier. Primiceri. (1954): “Normal Multivariate Analysis and the Orthogonal Group. Schorfheide – Bayesian Macroeconometrics: April 18. K. James. New York. (1995): Likelihood-Based Inference in Cointegrated Vector Autoregressive Models.” Journal of Monetary Economics. 231–254.. Polson (This Volume): “Bayesian Econometrics in Finance.” Journal of Business & Economic Statistics. 371–389. Ireland. 40–75. ed. Zellner.Del Negro.” Journal of Applied Econometrics. (1988): “Statistical Analysis of Cointegration Vectors. Koop.” American Economic Review. P. G. A. Amsterdam. and G. 12(4). and A. 15570. Fienberg. Karlsson (1997): “Numerical Methods for Estimation and Inference in Bayesian VAR-Models. G. 175–191. Jacquier. 49(4). S. 12(2-3). E.” Annals of Mathematical Statistics. B.Forecasting Macroeconomic Time Series Using Real Business Cycle Model Priors. Kadane. 604–641. and H. Kadiyala. E.” Journal of Economic Dynamics and Control. Primiceri (2008): “The Time-Varying Volatility of Macroeconomic Fluctuations. E. E. N. 25(1). K. E. Polson. 99–132. and S. A. Rossi (1994): “Bayesian Analysis of Stochastic Volatility Models. N. 28(6).” in Studies in Bayesian Econometrics and Statistics. ” Journal of Monetary Economics. .” Econometric Theory... and R. 23-08.” Journal of Econometrics.. Koop. forthcoming. R. Strachan (2009): “On the Evolution of the Monetary Policy Transmission Mechanism. 2010 111 Kass. 361–393.. and C.” Journal of Economic Dynamics and Control. 90(430). 65(3). R. Paap (2002): “Priors. C. Kim. Schorfheide – Bayesian Macroeconometrics: April 18. Cambridge. Leon-Gonzalez.. Now Publisher. 75(2).” Journal of the American Statistical Association. Posteriors and Bayes Factors for a Bayesian Analysis of Cointegration. and Business Cycles: I The Basic Neoclassical Model. Raftery (1995): “Bayes Factors. Plosser. R.. Kim.. S. 81(4). R.Del Negro. I.” Quarterly Journal of Economics.” in Foundations and Trends in Econometrics..” Journal of Econometrics. 10(3-4). 173–198. G.” Review of Economic Studies. L. G. and A. G. 21(2-3).. 997–1017. Koop. Koop.. E. 195–232.” Review of Economics and Statistics. Leon-Gonzalez. Chib (1998): “Stochastic Volatility: Likelihood Inference and Comparison with ARCH Models. MIT Press. Kosobud (1961): “Some Econometrics of Growth: Great Ratios of Economics. N.. R. Shephard. 608–618. Economy Become More Stable? A Bayesian Approach Based on a Markov-Switching Model of the Business Cycle. Potter (1999): “Bayes Factors and Nonlinearity: Evidence from Economic Time Series. and H. and C.. and D. K. and S.S. 33(4). 88(2). G. E. F. and R. Rebelo (1988): “Production. van Dijk (1994): “On the Shape of the Likelihood / Posterior in Cointegration Models. 251–281. R. and R. Koop. and S. 223–249. Klein. Kim. and S. M. 111(2). 514–551. Strachan (2008): “Bayesian Inference in the Time Varying Cointegration Model. W. C. Nelson (1999a): “Has the U. and R. Growth. F. King. Kleibergen. F. C. Nelson (1999b): State-Space Models with Regime Switching.” Rimini Center for Economic Analysis Working Paper. Kleibergen. 773–795.-J. Korobilis (2010): “Bayesian Multivariate Time Series Methods for Empirical Macroeconomics. G. University of Minnesota. Onatski. and J. T. 
Figure 1: Output, Inflation, and Interest Rates. Notes: The figure depicts U.S. data from 1964:Q1 to 2006:Q4; output is shown in percentage deviations from a linear deterministic trend, and inflation and interest rates are annualized (A%).
Figure 2: Response to a Monetary Policy Shock. Notes: The figure depicts 90% credible bands and posterior mean responses for a VAR(4) to a one-standard-deviation monetary policy shock.
Figure 3: Nominal Output and Investment. Notes: The figure depicts U.S. data from 1964:Q1 to 2006:Q4: nominal GDP and nominal investment in logs, together with the log nominal investment-output ratio.
Figure 4: Posterior Density of Cointegration Parameter. Notes: The figure depicts kernel density approximations of the posterior density for B in β = [1, B] based on three different priors: B ∼ N(−1, 0.01), B ∼ N(−1, 0.1), and B ∼ N(−1, 1).
Figure 5: Trends and Fluctuations. Notes: The figure depicts posterior medians and 90% credible intervals for the common trends in log investment and output, as well as deviations around these trends; the gray shaded bands indicate NBER recessions.
Figure 6: Aggregate Output, Hours, and Labor Productivity. Notes: Output and labor productivity are depicted in percentage deviations from a deterministic trend, and hours in deviations from their mean; the sample period is 1955:Q1 to 2006:Q4.
Figure 7: Inflation and Measures of Trend Inflation (HP trend, constant mean, mean with breaks). Notes: Inflation is measured as quarter-to-quarter changes in the log GDP deflator, scaled by 400 to convert it into annualized percentages (A%); the sample ranges from 1960:Q1 to 2005:Q4.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8826776742935181, "perplexity": 3264.471695062893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109670.98/warc/CC-MAIN-20170821211752-20170821231752-00326.warc.gz"}
http://umapublications.com/u9hhabf/91b2c1-thermal-stability-of-alkali-metals
# Thermal stability of alkali metals

The alkali metals form salt-like hydrides by direct synthesis at elevated temperature. A small anion forms stable compounds with a small cation, so the thermal stability of the alkali metal hydrides decreases down the group, LiH > NaH > KH > RbH > CsH, as the size of the metal cation increases. A frequently asked question is why the thermal stability of the alkali metal hydrides decreases down the group while that of the carbonates increases; the reason is the matching of sizes, since a small anion such as hydride is stabilized by a small cation, whereas a large anion such as carbonate is stabilized by a large cation.

According to the Chem-Guide article on alkali metals, the heavier alkali metals react so rapidly with oxygen that they form superoxides, in which the alkali metal reacts with O2 in a 1:1 mole ratio. On burning in air or oxygen, Li forms the monoxide Li2O, Na forms the peroxide Na2O2, and K, Rb and Cs form the superoxides KO2, RbO2 and CsO2. As we move down the group the stability of the peroxide (and superoxide) increases; the fact that a small cation can stabilize a small anion and a large cation can stabilize a large anion explains the formation and stability of these oxides.

The carbonates of the alkali metals, except lithium carbonate, are stable towards heat; lithium carbonate decomposes as Li2CO3 → Li2O + CO2. The carbonates of the alkaline earth metals, like lithium carbonate, decompose on heating to the oxide with evolution of CO2: MCO3 → MO + CO2 (M = Be, Mg, Ca, Sr, Ba). The decomposition temperature, i.e. the thermal stability, increases with the size of the cation present in the carbonate, or equivalently with the increasing basic strength of the metal hydroxide on moving down the group, so for the alkaline earth carbonates the order is MgCO3 < CaCO3 < SrCO3 < BaCO3. The bicarbonates of all the alkali metals are known (that of lithium exists only in solution), and the stability of both carbonates and bicarbonates increases down the group.

The alkaline earth metals are the elements of group 2 of the modern periodic table: beryllium, magnesium, calcium, strontium, barium and radium, which are quite similar in their physical and chemical properties. The increasing order of cationic size is Mg < Ca < Sr < Ba. Their oxides MO are obtained either by heating the metal in oxygen, 2M + O2 → 2MO (M = Be, Mg, Ca), or by thermal decomposition of the carbonates; except BeO, all of these oxides are extremely stable ionic solids because of their high lattice energies. The thermal stability of the carbonates increases down the group as the electropositive character of the metal, or the basicity of the metal hydroxide, increases from Be(OH)2 onwards.

Nitrates of both the alkali and the alkaline earth metals are soluble in water. The nitrates of the alkali metals, except LiNO3, decompose on strong heating to nitrites and oxygen, for example 2KNO3 → 2KNO2 + O2. The nitrates of the alkaline earth metals and LiNO3 decompose on heating to the oxide, nitrogen dioxide and oxygen, so lithium nitrate gives lithium oxide. Down the group the thermal stability of the nitrates increases.

Alkali metal sulphates are more soluble in water than those of the alkaline earth metals; among the latter, the sulphates of Be and Mg are readily soluble. From Li to Cs the hydration enthalpy decreases because of the larger ion size, so solubility should decrease from Li to Cs. The solubility of the carbonates increases down the group in the alkali metals (except Li2CO3), whereas the carbonates of the alkaline earth metals are insoluble in water. Typical exercises ask one to arrange the sulphates BeSO4, MgSO4, CaSO4 and SrSO4 in order of decreasing thermal stability, or to compare the solubility and thermal stability of the (a) nitrates, (b) carbonates and (c) sulphates of the alkali metals with those of the alkaline earth metals.

Oxo salts of the alkali metals are more stable than those of the alkaline earth metals, whose cations are smaller. For both families the thermal stability of oxo salts increases down the group, because the size of the cation increases and its polarizing power decreases. The stability of the alkali metal fluorides decreases down the group, whereas it increases for the alkali metal chlorides, bromides and iodides.

Differences between lithium and the other alkali metals: (i) lithium is harder than the other alkali metals because of its stronger metallic bonding; (ii) its melting and boiling points are higher than those of the rest of the alkali metals; (iii) on burning in air or oxygen, Li forms the monoxide while the other alkali metals form peroxides or superoxides. The alkali metals have low melting points, ranging from a high of 179 °C (354 °F) for lithium to a low of 28.5 °C (83.3 °F) for cesium, and alloys of alkali metals exist that melt as low as −78 °C (−109 °F). Alkali metal alkoxides can be formed by the direct reaction of the alkali metals with the corresponding alcohol.

Different alkali metal-promoted supports were prepared by impregnating SiO2, Al2O3, Cr2O3 and MgO with solutions of lithium, sodium or potassium nitrate or carbonate. They were calcined under an air flow at 773, 1023 and 1173 K in order to measure the thermal stability of the alkali metal and the modifications of the surface area of the support. It was found that the deposited alkali metals are fairly stable on Cr2O3 or Al2O3 and slightly less so on SiO2; with MgO, an important loss of alkali metal was observed at calcination temperatures above 773 K, an effect that increases from lithium to potassium and was attributed to a reaction with the walls of the reactor. The results are interpreted in terms of superficial solid-state reactions with formation of oxy salts; such formation does not occur with MgO, which explains the departure of the alkali metal from the support. The alkali metal impregnation decreases the specific surface area of the support, whatever the temperature, and for the lithium-magnesia system a total collapse of the texture was observed after treatment at 1023 K (0.4 m2 g-1 instead of 54 m2 g-1 for pure MgO). Copyright © 1988 Published by Elsevier B.V. https://doi.org/10.1016/0166-9834(88)80003-4.

Co-Mn-Al mixed oxide and Co3O4 catalysts with alkali metal promoters (K, Cs) were tested for direct NO decomposition with the aim of determining their activity and stability. The catalysts were prepared by coprecipitation with subsequent impregnation by alkali metal salts and characterized by AAS, MP-AES, XRD, N2 physisorption and species-resolved thermal alkali desorption (SR-TAD).

One of the reasons for the instability build-up in the reaction mixture is related to the electrochemical behaviour of the heterogeneous medium; under certain conditions these reactions become dangerous. Results of the thermal decomposition of bisperoxovanadates in the solid state, with on-line detection of the gaseous products and analysis of the solid residue using IR and e.s.r. methods, are presented.

For solid and liquid solutions in alkali metal hydroxide-nitrate systems, TG-DTA measurements were carried out under cooling to investigate the thermal stability of the solid solutions. No weight losses in the TG curves and no peaks in the DTA curves were observed over the temperature range from just above the melting temperature to 100 °C at any composition in the NaOH-NaNO3 and KOH… systems. None of these salts are stable in the molten state when oxygen is present, but in an inert atmosphere the melt appears to be stable over a quite large temperature interval, on average from the melting point (TF) to about 1.3 TF.

Lithium n-alkanoates from pentanoate (LiC5) to dodecanoate (LiC12) have been investigated with regard to their thermal stabilities; the effect of the alkali metal ion (M) and of the hydrogen bonding network on their thermal stability is discussed.

In the bachelor's thesis "Thermal stability of alkali metal borates" by Irina Smuļko, ten crystalline alkali metal borates were synthesized. The correlation between the enthalpies of formation of alkali metal and alkaline earth metal borates and the composition of the vapor in equilibrium with their melts is considered; a method is suggested for determining the relative composition of the vapor over borate melts on the basis of their enthalpies of formation, and the thermal stabilities of the studied borates have been estimated.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5980327129364014, "perplexity": 5510.283174213474}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178376206.84/warc/CC-MAIN-20210307074942-20210307104942-00555.warc.gz"}
http://www.aimsciences.org/article/doi/10.3934/cpaa.2007.6.453
# American Institute of Mathematical Sciences

2007, 6(2): 453-464. doi: 10.3934/cpaa.2007.6.453

## The singularity analysis of solutions to some integral equations

1 Department of Applied Mathematics, University of Colorado at Boulder, Boulder, CO 80309-0524
2 Department of Applied Mathematics, University of Colorado at Boulder, Campus Box 526, Boulder, CO 80309-0526

Received January 2006; Revised October 2006; Published March 2007

We consider a system of Euler-Lagrange equations associated with the weighted Hardy-Littlewood-Sobolev inequality in $R^n$. We demonstrate that the positive solutions of the system of Euler-Lagrange equations are asymptotic to certain forms of limit around the center and near infinity, respectively. The results are proven using the optimal integrability conditions for the positive solutions of the system of equations.

Citation: Congming Li, Jisun Lim. The singularity analysis of solutions to some integral equations. Communications on Pure & Applied Analysis, 2007, 6 (2): 453-464. doi: 10.3934/cpaa.2007.6.453
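For context on what "weighted Hardy-Littlewood-Sobolev" refers to, the classical doubly weighted (Stein-Weiss) form of the inequality can be sketched as follows; this is only a reference form, the exponent conditions actually used in the paper may be stated differently, and further standard restrictions on $\alpha$ and $\beta$ are omitted here:

$$\left|\int_{\mathbb{R}^n}\int_{\mathbb{R}^n} \frac{f(x)\,g(y)}{|x|^{\alpha}\,|x-y|^{\lambda}\,|y|^{\beta}}\,dx\,dy\right| \le C_{\alpha,\beta,\lambda,r,s,n}\,\|f\|_{L^r(\mathbb{R}^n)}\,\|g\|_{L^s(\mathbb{R}^n)}, \qquad \frac{1}{r}+\frac{1}{s}+\frac{\lambda+\alpha+\beta}{n}=2,\quad 0<\lambda<n,\quad \alpha+\beta\ge 0.$$

As the abstract states, the system studied in the paper is the Euler-Lagrange (optimality) system associated with an inequality of this type.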
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8097891211509705, "perplexity": 3841.7634693637615}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864837.40/warc/CC-MAIN-20180522170703-20180522190703-00258.warc.gz"}
https://brilliant.org/problems/merrily-swinging-around/
# Merrily Swinging Around

When we swing a pendulum with a small angle, we can approximate its motion to be simple harmonic motion. For a pendulum whose length is $l$, the time period of the pendulum is given by $T = 2 \pi \sqrt{\frac{l}{g}}$. Note that the time period is independent of the amplitude of oscillation. Does this result hold true for larger amplitudes as well? How does the time period depend on the amplitude $\theta_0$ as it goes from $0^\circ$ to $90^\circ?$ Note: Ignore air resistance
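One way to explore the question numerically (a minimal sketch, not part of the original problem): the exact period of an ideal pendulum released from rest at amplitude $\theta_0$ is $T(\theta_0) = 4\sqrt{l/g}\,K(m)$ with $m=\sin^2(\theta_0/2)$, where $K$ is the complete elliptic integral of the first kind in SciPy's parameter convention. The values of $l$ and $g$ below are assumptions made only for the illustration.

```python
# Sketch: compare the exact pendulum period with the small-angle value T0 = 2*pi*sqrt(l/g),
# using T(theta0) = 4*sqrt(l/g)*K(m) with m = sin(theta0/2)**2 (scipy.special.ellipk takes m = k**2).
import numpy as np
from scipy.special import ellipk

g = 9.81   # m/s^2 (assumed value)
l = 1.0    # pendulum length in metres (assumed value)

T0 = 2 * np.pi * np.sqrt(l / g)   # small-angle period

for theta0_deg in (1, 15, 30, 45, 60, 75, 90):
    theta0 = np.radians(theta0_deg)
    T = 4 * np.sqrt(l / g) * ellipk(np.sin(theta0 / 2) ** 2)
    print(f"theta0 = {theta0_deg:2d} deg:  T/T0 = {T / T0:.4f}")
```

With these assumed values, the ratio $T/T_0$ stays close to 1 for small amplitudes and rises to roughly 1.18 at $\theta_0 = 90^\circ$, which shows how the small-angle approximation degrades as the amplitude grows.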
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 6, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9852872490882874, "perplexity": 192.03049824359408}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594705.17/warc/CC-MAIN-20200119180644-20200119204644-00369.warc.gz"}
http://mathhelpforum.com/calculus/69404-compute-limit-3-n-2-n-1-a.html
# Math Help - Compute the limit (3^n)/(2^n)+1

1. ## Compute the limit (3^n)/(2^n)+1

Compute the limit of the sequence (3^n)/((2^n)+1). I have simplified a much harder problem to this limit and I am having a mind blank... help, please? Cheers

2. The $n$th term tends to $(3/2)^n$.

3. Originally Posted by sebjory
Compute the limit of the sequence (3^n)/((2^n)+1). I have simplified a much harder problem to this limit and I am having a mind blank... help, please? Cheers

$\frac{3^n}{2^n + 1} \geq \frac{3^n}{2^n + 2^n} = \tfrac{1}{2}\left( \tfrac{3}{2} \right)^n \to \infty$

4. Originally Posted by sebjory
Compute the limit of the sequence (3^n)/((2^n)+1). I have simplified a much harder problem to this limit and I am having a mind blank... help, please? Cheers

Could also use L'Hopital's rule on $\lim_{x \to \infty} \frac{3^x}{2^x+1} = \lim_{x \to \infty} \frac{3^x \ln 3}{2^x \ln 2} = \frac{\ln 3}{\ln 2} \lim_{x \to \infty} \left( \frac{3}{2} \right)^x \to \infty$ as seen in ThePerfectHacker's post.

5. Hello, sebjory! Yet another approach . . .

$\lim_{n\to\infty} \frac{3^n}{2^n + 1}$

Divide top and bottom by $3^n\!:\;\;\frac{\frac{1}{3^n}\cdot3^n}{\frac{1}{3^n}(2^n+1)} \;=\;\frac{1}{\frac{2^n}{3^n} + \frac{1}{3^n}}$

Therefore: $\lim_{n\to\infty}\left[\frac{1}{\left(\frac{2}{3}\right)^n + \frac{1}{3^n}}\right] \;=\; \frac{1}{0+0} \;=\;\infty$

6. Thanks a hell of a lot guys, that was exactly what I was looking for. I think I might use Soroban's solution: nice and elegant. Can't believe I didn't see it.

7. Originally Posted by Soroban
Hello, sebjory! Yet another approach . . . $\frac{1}{0+0} \;=\;\infty$

Soroban! Nooo!
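Editorial footnote (not part of the original thread): the final objection is to treating $\frac{1}{0+0}$ as a number; the conclusion itself is sound and can be stated rigorously by noting that the denominator tends to $0$ through positive values:

$\left(\tfrac{2}{3}\right)^n + \frac{1}{3^n} \to 0^{+} \text{ as } n \to \infty, \qquad \text{hence} \qquad \frac{1}{\left(\tfrac{2}{3}\right)^n + \frac{1}{3^n}} \to +\infty.$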
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 8, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9750115275382996, "perplexity": 2696.1747184601336}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257825358.53/warc/CC-MAIN-20160723071025-00306-ip-10-185-27-174.ec2.internal.warc.gz"}
http://windowsitpro.com/windows/i-cannot-delete-file-named-con-or-nul
A. The syntax \\.\ does not work with a file named con or nul. In fact, no Win32 tool can delete these files. One solution is to use non-Win32 tools. For instance, the Resource Kit contains POSIX utilities such as rm.exe, which can delete these files:

c:\NTReskit\posix\rm con

Note: the files may have been created with a tool such as notepad or more, using the streams notation. For instance, type "notepad con:foo" on an NTFS partition, or "more < any_file > nul:bar".

There is an easier way, without using the Resource Kit and non-native subsystems, which is to rename the file and then just delete it:

C:\> ren \\.\c:\nul. file
C:\> del file
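An additional option worth knowing about (an editorial addition, not part of the original FAQ; test it on a non-critical file first): the \\?\ prefix tells Windows to pass the path to the file system without the usual Win32 parsing, which includes the reserved-device-name check, so on NT-based systems a command of this form can often delete such a file directly. Here C:\some\folder is a placeholder for the real location:

C:\> del "\\?\C:\some\folder\con"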
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8849568963050842, "perplexity": 8859.248428459801}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443736675795.7/warc/CC-MAIN-20151001215755-00123-ip-10-137-6-227.ec2.internal.warc.gz"}
https://electronics.stackexchange.com/questions/49977/avr-attiny84-wrong-delay
avr attiny84: wrong delay

I'm pretty new to AVR programming. I'm facing a strange problem that I can't solve so far. I've written a simple program:

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRA = 0xFF;            /* all of port A as outputs */
    for (;;) {
        PORTA = 0xFF;       /* LEDs on */
        _delay_ms(1000);
        PORTA = 0x00;       /* LEDs off */
        _delay_ms(1000);
    }
    return 0x00;
}

I'm setting F_CPU (the value used by _delay_ms()) through the Makefile I'm using to compile and upload the code:

DEVICE     = attiny84
CLOCK      = 20000000
PROGRAMMER = -c usbasp -P /dev/tty.usb* -b 19200
OBJECTS    = main.o dallas_one_wire.o
FUSES      = -U lfuse:w:0x62:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m

######################################################################
######################################################################

# Tune the lines below only if you know what you are doing:

AVRDUDE = avrdude $(PROGRAMMER) -p $(DEVICE)
COMPILE = avr-g++ -Wall -Os -DF_CPU=$(CLOCK) -mmcu=$(DEVICE)

# symbolic targets:
all: main.hex

.c.o:
	$(COMPILE) -c $< -o $@

.S.o:
	$(COMPILE) -x assembler-with-cpp -c $< -o $@
# "-x assembler-with-cpp" should not be necessary since this is the default
# file type for the .S (with capital S) extension. However, upper case
# characters are not always preserved on Windows. To ensure WinAVR
# compatibility define the file type manually.

.c.s:
	$(COMPILE) -S $< -o $@

flash: all
	$(AVRDUDE) -U flash:w:main.hex:i

fuse:
	$(AVRDUDE) $(FUSES)

install: flash fuse

# if you use a bootloader, change the command below appropriately:
clean:
	rm -f main.hex main.elf $(OBJECTS)

# file targets:
main.elf: $(OBJECTS)
	$(COMPILE) -o main.elf $(OBJECTS)

main.hex: main.elf
	rm -f main.hex
	avr-objcopy -j .text -j .data -O ihex main.elf main.hex
# If you have an EEPROM section, you must also create a hex file for the
# EEPROM and add it to the "flash" target.

# Targets for code debugging and analysis:
disasm: main.elf
	avr-objdump -d main.elf

cpp:
	$(COMPILE) -E main.c

According to the attiny84's data sheet, it should run at 20 MHz under 5 volts. Unfortunately the LED isn't blinking once per second but much more slowly, at roughly a 10-second rate. By tuning the F_CPU value, I've reached the 1-second blinking rate by using F_CPU = 1000000 (1 MHz). Does that mean that the attiny84 is running at 1 MHz, or am I wrong somewhere else?

• By default the attiny is running off its internal oscillator, which is 8 MHz, and also by default there is a clock divide-by-8 fuse set. So, as you figured out the hard way, it is running at 1 MHz. You can disable the divide-by-8 fuse to get 8 MHz, but for anything higher you will need to add an external crystal. Also, FYI, the internal oscillators are not very accurate; it doesn't matter in this case but it's good to know. Nov 27, 2012 at 22:59

Assumption: You are driving the ATTiny84 with its internal RC clock. In order to have the ATTiny84 running at 20 MHz, the microcontroller will need to be provided an external 20 MHz clock, typically achieved by a 20.0 MHz crystal or resonator and two load capacitors. From the datasheet:

Also, you will have to set the fuses appropriately for the microcontroller to use an external oscillator instead of the internal one. You can calculate the fuse setting bits you need for an external crystal by selecting the specific AVR here. Additional useful information in this answer to a related question on this site.

The ATtiny is indeed running at (approximately) 1 MHz. From the datasheet:

6.2.6 Default Clock Source
The device is shipped with CKSEL = “0010”, SUT = “10”, and CKDIV8 programmed.
The default clock source setting is therefore the Internal Oscillator running at 8.0 MHz with longest start-up time and an initial system clock prescaling of 8, resulting in 1.0 MHz system clock. This default setting ensures that all users can make their desired clock source setting using an in-system or high-voltage programmer.

Chapter 6.2 explains how clock selection for this ATtiny works, but be careful: selecting too slow a clock frequency (e.g. 128 kHz) can prevent you from reprogramming the device unless you use a "high voltage" programmer. The fuses use negative logic, so read the chapter carefully before programming them. Table 19-5 explains that the 'Fuse Low Byte' has a default value of 0x62, where bit 7 is 0 yet indicates that the 8 MHz clock is divided by 8 (hence the negative logic).

Many applications run perfectly well at the lower clock, which has the advantage of lower power use. It totally depends on your application whether you really need a higher clock or not. Just set F_CPU to the applicable value. F_CPU informs the compiler about how fast the controller clock is; it does not set the controller's clock.

• Thanks, your explanation is really worthwhile and goes beyond my question! Appreciated – Kami Nov 28, 2012 at 1:30
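If the goal is simply to get the LED blinking at the intended rate, one illustrative follow-up (an editorial addition, not part of the original answers; the fuse value is quoted from memory, so double-check it against the datasheet or a fuse calculator before burning anything): stay on the internal oscillator, disable the divide-by-8 prescaler by writing the low fuse as 0xE2, and change CLOCK to 8000000 in the Makefile so that F_CPU matches the real clock. Using the same avrdude syntax the Makefile already uses, that would look something like:

avrdude -c usbasp -p attiny84 -U lfuse:w:0xe2:m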
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33932381868362427, "perplexity": 6134.57050930994}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103334753.21/warc/CC-MAIN-20220627134424-20220627164424-00554.warc.gz"}
https://forum.azimuthproject.org/discussion/comment/18189/
# Lecture 24 - Chapter 2: Pricing Resources

Today's lecture will be very short, consisting solely of some puzzles about prices. We often compare resources by comparing their prices. So, we have some set of things $$X$$ and a function $$f: X \to \mathbb{R}$$ that assigns to each thing a price. Given two things in the set $$X$$ we can then say which costs more... and this puts a preorder on the set $$X$$. Here's the math behind this:

Puzzle 75. Suppose $$(Y, \le_Y)$$ is a preorder, $$X$$ is a set and $$f : X \to Y$$ is any function. Define a relation $$\le_X$$ on $$X$$ by $$x \le_X x' \textrm{ if and only if } f(x) \le_Y f(x') .$$ Show that $$(X, \le_X )$$ is a preorder.

Sometimes this trick gives a poset, sometimes not:

Puzzle 76. Now suppose $$(Y, \le_Y)$$ is a poset. Under what conditions on $$f$$ can we conclude that $$(X, \le_X )$$ defined as above is a poset?

We often have a way of combining things: for example, at a store, if you can buy milk and you can buy eggs, you can buy milk and eggs. Sometimes this makes our set of things into a monoidal preorder:

Puzzle 77. Now suppose that $$(Y, \le_Y, \otimes_Y, 1_Y)$$ is a monoidal preorder, and $$(X,\otimes_X,1_X )$$ is a monoid. Define $$\le_X$$ as above. Under what conditions on $$f$$ can we conclude that $$(X,\le_X,\otimes_X,1_X)$$ is a monoidal preorder?

We will come back to these issues in a bit more depth when we discuss Section 2.2.5 of the book. To read other lectures go here.

1. Maybe with the lack of a lecture today, people will post in the discussion groups. Maybe I'm being too optimistic. Anyway, Puzzle 75 feels very weird, since $$X$$ could be a completely disjoint set.

2. Puzzle 75 is about things like this: "a dozen eggs costs more than a stick of butter". We have a set $$\mathbb{R}$$ whose elements are amounts of money, ordered in the usual way. We have a set $$X$$ whose elements are things you can buy in the grocery store. And we have a function $$f: X \to \mathbb{R}$$ mapping each thing you can buy in the grocery store to its price. Say $$f(\text{a dozen eggs}) = 3.50$$ and $$f(\text{a stick of butter}) = 0.75 .$$ Then we say $$\text{a stick of butter} \le_X \text{a dozen eggs}$$ because $$0.75 \le_{\mathbb{R}} 3.50 .$$ This is just a way of saying that a stick of butter is cheaper than a dozen eggs. It makes perfect sense. Please, someone do these puzzles! By the way, Brandon passed his thesis defense, and all my students are happy. =D> =D> =D> =D> =D> =D>
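A small illustration of the price-induced preorder of Puzzle 75 (an editorial sketch, not one of the original forum comments; the item names and prices are invented): pulling the order on prices back along a price function automatically gives a reflexive, transitive relation on the items, and equal prices show why it is generally not a poset.

```python
# Sketch: the preorder on items induced by a price function f: X -> R,
# defined by x <= x' iff f(x) <= f(x').  Prices below are invented examples.
price = {"stick of butter": 0.75, "dozen eggs": 3.50, "milk": 2.00, "juice": 2.00}

def leq(x, y):
    """x <= y in the induced preorder iff price[x] <= price[y]."""
    return price[x] <= price[y]

items = list(price)
# Reflexivity and transitivity are inherited from <= on the real numbers:
assert all(leq(x, x) for x in items)
assert all(leq(x, z) for x in items for y in items for z in items
           if leq(x, y) and leq(y, z))
# It is generally NOT a poset: milk and juice cost the same, so each is
# <= the other even though they are different items (cf. Puzzle 76).
print(leq("milk", "juice") and leq("juice", "milk"), "milk" == "juice")
```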
3. Your commodity pricing example gives me a hunch: can I prove Puzzle 75 by pullback? That is to say, pull back the $$\le_Y$$ along $$f$$ to induce a relation $$\le_X$$ on the set $$X$$.

4. Marius worked on these puzzles in Lecture 23. Here is his comment; I thought I would copy it here so we can all talk in one place!

Puzzle 75. Suppose $$(Y, \le_Y)$$ is a preorder, $$X$$ is a set and $$f : X \to Y$$ is any function. Define a relation $$\le_X$$ on $$X$$ by $$x \le_X x' \textrm{ if and only if } f(x) \le_Y f(x') .$$ Show that $$(X, \le_X )$$ is a preorder.

Our relation turns $$f$$ into a monotone map. For all $$x \in X$$ we have $$f(x) \le_Y f(x)$$ and thus $$x \le_X x$$, satisfying reflexivity. Similarly, for all $$x,y,z \in X$$, $$f(x) \le_Y f(y)$$ and $$f(y) \le_Y f(z)$$ implies $$f(x) \le_Y f(z)$$, and thus $$x \le_X y$$ and $$y \le_X z$$ implies $$x \le_X z$$, satisfying transitivity. This gives us a preorder on $$X$$.

Puzzle 76. Now suppose $$(Y, \le_Y)$$ is a poset. Under what conditions on $$f$$ can we conclude that $$(X, \le_X )$$ defined as above is a poset?

Since we don't want to induce any equivalent elements in $$X$$, $$f$$ must be injective.

Puzzle 77. Now suppose that $$(Y, \le_Y, \otimes_Y, 1_Y)$$ is a monoidal preorder, and $$(X,\otimes_X,1_X )$$ is a monoid. Define $$\le_X$$ as above. Under what conditions on $$f$$ can we conclude that $$(X,\le_X,\otimes_X,1_X)$$ is a monoidal preorder?

We need to ensure that our induced preorder structure is compatible with our monoidal structure. To this end we require our monotone map $$f$$ to be a monoidal monotone, for which $$1_Y \le_Y f(1_X)$$ and $$f(x) \otimes_Y f(y) \le_Y f(x \otimes_X y)$$.

Regarding Puzzle 71, does this mean we simply need to find injective monoidal monotones to other commutative monoidal posets (e.g. $$(\mathbb{R}, \le, +, 0 )$$), or do we need stricter requirements to preserve the commutative structure (e.g. $$1_Y = f(1_X)$$ and $$f(x) \otimes_Y f(y) = f(x \otimes_X y)$$)? I'm off to bed, so maybe someone else can continue my thought process...
5. Puzzle 76. I agree with Marius that $$f$$ must be injective. In fact I think this is a necessary and sufficient condition! But before the proof, a small example.

In John's example, where $$X$$ is a set of groceries and $$f$$ maps groceries to their cost, $$\text{grocery 1} \leq \text{grocery 2}$$ iff the cost of grocery 1 is less than or equal to the cost of grocery 2. If $$f$$ is not injective then there exist two different groceries (let's say apples and oranges) with the same cost (let's say $1). Since in the world of cost $1 $$\leq$$ $1 by reflexivity, in the world of groceries we have apples $$\leq$$ oranges and oranges $$\leq$$ apples. This means that apples and oranges are equivalent (which makes sense because they are equivalent in terms of cost). But of course apples $$\neq$$ oranges. So the groceries with the relation induced by $$f$$ do not form a poset!

This argument works in general to show that if $$f$$ induces a poset relation $$(X, \leq_X)$$, then $$f$$ is injective. Proof by contradiction: Suppose that $$f$$ is not injective. Then there exist $$x,x' \in X$$ such that $$f(x) = f(x')$$ where $$x \neq x'$$. By reflexivity $f(x) \leq_Y f(x') \text{ and } f(x') \leq_Y f(x).$ By definition of $$\leq_X$$, this means that $$x \leq_X x' \text{ and } x' \leq_X x.$$ Since $$x \neq x'$$, this means that $$(X, \leq_X)$$ is not a poset.

The converse is also true: if $$f$$ is injective, then it induces a poset relation $$(X, \leq_X)$$. Proof: Suppose that $$x \leq_X x' \text{ and } x' \leq_X x.$$ Then by the definition of $$\leq_X$$, $f(x) \leq_Y f(x') \text{ and } f(x') \leq_Y f(x).$ Since $$(Y, \leq_Y)$$ is a poset, this implies that $$f(x) = f(x')$$. And since $$f$$ is injective, this means that $$x = x'$$.
6. Puzzle 77. Claim: If $$f(x) \otimes_Y f(x') = f( x \otimes_X x')$$ then the relation $$\leq_X$$ induced by $$f$$ makes $$(X, \otimes_X, 1_X, \leq_X)$$ a monoidal preorder. (In the second example below I show that this is actually too strong a condition on $$f$$, but it's a starting place!)

Proof: Suppose that $$x \leq_X x'$$ and $$y \leq_X y'$$. Then $$f(x) \leq_Y f(x')$$ and $$f(y) \leq_Y f(y')$$. Since $$(Y, \otimes_Y, 1_Y, \leq_Y)$$ is a monoidal preorder this means that $f(x) \otimes_Y f(y) \leq_Y f(x') \otimes_Y f(y').$ $$f$$ exactly preserves the tensor structure, so $f(x \otimes_X y) \leq_Y f(x' \otimes_X y')$ which implies that $$x \otimes_X y \leq_X x' \otimes_X y'$$ by the definition of $$\leq_X$$.

Example. I started thinking about some examples inspired by John's grocery example and the H2O example from Lecture 22. Let $$X$$ represent collections of groceries that can be bought at the "Eggs and Milk" store. Since the "Eggs and Milk" store only sells eggs and milk, every element of $$X$$ can be represented by a pair $$(a,b) \in \mathbb N^2$$ where $$a$$ is the number of eggs bought and $$b$$ is the number of milks bought. $$X$$ can be turned into a monoid by defining $(a,b) \otimes_X (c,d) = (a + c, b + d)$ and where $$1_X = (0,0)$$.

Suppose that eggs cost $1 and milk costs $2. This means we should define a cost map $$f: X \to \mathbb R$$ by $$f((a,b)) = a + 2b$$. $$f$$ preserves the $$\otimes$$ structure because $f((a,b)) \otimes_{\mathbb R} f((c,d)) = (a + 2b) + (c + 2d) = (a + c) + 2(b + d) = f((a+c, b + d)) = f((a,b) \otimes_X (c,d)).$ Another way of saying this is "the cost of buying two sets of groceries separately is the same as the cost of buying them together". This means that we have turned the groceries into a monoidal preorder!

Example. Suppose that the "Eggs and Milk" store now charges $0.10 for a bag with each purchase. This means that the cost function now looks like $g((a,b)) = a + 2b + 0.10 .$ $$g$$ doesn't exactly preserve the $$\otimes$$ structure, because now the bag charge means that "the cost of buying two sets of groceries separately is more than the cost of buying them together". In math words, $g((a,b)) \otimes g((c,d)) \geq g((a \otimes c, b \otimes d)) .$ I was interested that this is the opposite condition from what Marius proposed.
My next question was whether $$g$$ induced a monoidal preorder on the groceries anyway. It does, essentially because the bag charges cancel out. Here is the math. Suppose that $(a,b) \leq (c,d) \text{ and }(a',b') \leq (c',d').$ Therefore $g(a,b) \leq g(c,d) \text{ and }g(a',b') \leq g(c',d')$ $\implies a+ 2b + 0.10 \leq c + 2d + 0.10 \text{ and }a' + 2b' + 0.10 \leq c' + 2d' + 0.10$ $\implies a+ 2b \leq c + 2d \text{ and }a' + 2b' \leq c' + 2d'$ $\implies (a + a') + 2(b + b') \leq (c + c') + 2(d + d')$ $\implies (a + a') + 2(b + b') + 0.10 \leq (c + c') + 2(d + d') + 0.10$ $\implies g((a,b) \otimes (a', b')) \leq g((c,d) \otimes (c',d'))$ $\implies (a,b) \otimes (a', b') \leq (c,d) \otimes (c',d').$ This led me to a new claim...

New Claim: If $$f(x) \otimes_Y f(x') \geq f( x \otimes_X x')$$ then the relation $$\leq_X$$ induced by $$f$$ makes $$(X, \otimes_X, 1_X, \leq_X)$$ a monoidal preorder. But I have yet to prove it!

I'm also wondering about Marius's suggestion that $$f$$ should satisfy $$1_Y\leq_Y f(1_X)$$. This is true in both my examples, since buying zero items costs more than or equal to $0.

Phew, that was a lot! Interested to hear what others think!
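To complement the example above, here is a small numerical sketch (an editorial addition, not a forum comment) of the "Eggs and Milk" store with the $0.10 bag charge: it spot-checks, over a small grid of baskets, that whenever two baskets are ordered under the induced relation, their tensor products are also ordered, even though the cost map only satisfies the inequality rather than exact preservation.

```python
# Sketch of the "Eggs and Milk" store with a $0.10 bag charge.
# x = (eggs, milks); tensor is componentwise addition; g is the cost map.
from itertools import product

def tensor(x, y):
    return (x[0] + y[0], x[1] + y[1])

def g(x):                       # cost: $1 per egg, $2 per milk, $0.10 bag fee
    return x[0] + 2 * x[1] + 0.10

def leq(x, y):                  # induced relation: x <= y iff g(x) <= g(y)
    return g(x) <= g(y)

grid = list(product(range(3), repeat=2))        # small sample of baskets
# g is superadditive relative to the monoid: separate trips cost more.
assert all(g(x) + g(y) >= g(tensor(x, y)) for x in grid for y in grid)
# Spot-check the monoidal-preorder condition on the sample.
assert all(leq(tensor(x1, x2), tensor(y1, y2))
           for x1 in grid for y1 in grid if leq(x1, y1)
           for x2 in grid for y2 in grid if leq(x2, y2))
print("monoidal preorder condition holds on the sample")
```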
7. Thanks Sophie for re-posting my comment in the right place and for your nice proof that $$f$$ must be injective!

Regarding "I was interested that this is the opposite condition from what Marius proposed": I just took the definition for a monoidal monotone from Seven Sketches p. 46 without thinking it through all that much. Given your example in comment 6, I see that in our interpretation of grocery shopping and resource theories your condition seems to make more sense. Batching processes usually results in lower costs and/or more products. My condition could also be the case in grocery shopping, however.

Example. Consider that you have a bunch of coupons for the grocery store giving you a flat $0.50 discount. However, you may only use one coupon per visit to the store. This means that "the cost of buying two sets of groceries separately is less than the cost of buying them together". I think we need to also reverse the inequality in the second condition if we stick to your condition.

It might be instructive to consider monoidal monotone maps between monoidal preorders with different units to think about this. For example $$f: (\mathbb{N},\le, +, 0) \hookrightarrow (\mathbb{N},\le, *, 1)$$ or $$g: (\mathbb{N},\le, *, 1) \hookrightarrow (\mathbb{N},\le, +, 0)$$, where $$f$$ and $$g$$ are the inclusions. For $$f$$ it is the case that $$f(x) \otimes_Y f(y) \ge_Y f(x \otimes_X y)$$ and $$1_Y \ge_Y f(1_X)$$. For $$g$$ it is the case that $$g(x) \otimes_Y g(y) \le_Y g(x \otimes_X y)$$ and $$1_Y \le_Y g(1_X)$$. So based on this one example it seems that either condition works as long as one is consistent. This means that for $$1_Y = f(1_X)$$ we could use either inequality.

Edit: Just realized we probably only need one version of the conditions, since we can formally take the function to the opposite preorder to get the other.
8. @Marius, Sophie: that's very interesting! It may be worth pointing out that your result (Claim: If $$f(x) \otimes_Y f(x') = f( x \otimes_X x')$$ then the relation $$\leq_X$$ induced by $$f$$ makes $$(X, \otimes_X, 1_X, \leq_X)$$ a monoidal preorder) generalizes Matthew's proposed solution to Puzzle 71, which turns the complex numbers into a commutative monoidal preorder. Do you see how?

9. If you mean $$x \preceq_p y \iff p \cdot x \leq p \cdot y ,$$ then $$f(x) = p \cdot x$$ and we see that $$f(x) \otimes_Y f(x') = p \cdot x + p \cdot x' = p \cdot (x+x') = f( x \otimes_X x')$$, since multiplication distributes over addition.

10. Regarding this puzzle:

Puzzle 76. Now suppose $$(Y, \le_Y)$$ is a poset. Under what conditions on $$f$$ can we conclude that $$(X, \le_X )$$ defined as above is a poset?

Sophie wrote: "Puzzle 76. I agree with Marius that $$f$$ must be injective. In fact I think this is a necessary and sufficient condition!"

That's right! But you don't need me to tell you this, since you proved it, so you know it's right.
(Of course sometimes we screw up when proving things, but writing down a proof and carefully checking the logic can reduce the chance of error quite dramatically.)

11. And regarding this one:

Puzzle 77. Now suppose that $$(Y, \le_Y, \otimes_Y, 1_Y)$$ is a monoidal preorder, and $$(X,\otimes_X,1_X )$$ is a monoid. Define $$\le_X$$ as above. Under what conditions on $$f$$ can we conclude that $$(X,\le_X,\otimes_X,1_X)$$ is a monoidal preorder?

Sophie wrote: "Claim: If $$f(x) \otimes_Y f(x') = f( x \otimes_X x')$$ then the relation $$\leq_X$$ induced by $$f$$ makes $$(X, \otimes_X, 1_X, \leq_X)$$ a monoidal preorder."

Yes, that's right! So this is a sufficient condition, and this is the answer I had in mind.
You suggest that a weaker condition may be enough: $$f( x \otimes_X x') \le f(x) \otimes_Y f(x') \qquad \star$$ for all $$x,x' \in X$$. Let's see. To prove $$\leq_X$$ is a monoidal preorder, we need to prove $$x_1 \le_X x_1' \textrm{ and } x_2 \le_X x_2' \textrm{ imply } x_1 \otimes_X x_2 \le_X x_1' \otimes_X x_2'$$ for all $$x_1,x_1',x_2,x_2' \in X$$. In other words, we need $$f(x_1) \le_Y f(x_1') \textrm{ and } f(x_2) \le_Y f(x_2') \textrm{ imply } f(x_1 \otimes_X x_2) \le_Y f(x_1' \otimes_X x_2') .$$ On the other hand, since $$(Y, \le_Y, \otimes_Y, 1_Y)$$ is a monoidal preorder, we know that $$f(x_1) \le_Y f(x_1') \textrm{ and } f(x_2) \le_Y f(x_2') \textrm{ imply } f(x_1)\otimes_Y f(x_2) \le_Y f(x_1') \otimes_Y f(x_2') .$$ If we assume condition $$\star$$, we also know $$f(x_1 \otimes_X x_2) \le_Y f(x_1)\otimes_Y f(x_2) .$$ Combining this with what we know, we get $$f(x_1) \le_Y f(x_1') \textrm{ and } f(x_2) \le_Y f(x_2') \textrm{ imply } f(x_1 \otimes_X x_2) \le_Y f(x_1') \otimes_Y f(x_2') .$$ But this does not yet get us what we need! Remember, we need $$f(x_1) \le_Y f(x_1') \textrm{ and } f(x_2) \le_Y f(x_2') \textrm{ imply } f(x_1 \otimes_X x_2) \le_Y f(x_1' \otimes_X x_2') .$$ The obvious way to get this is to also assume $$f(x) \otimes_Y f(x') \le f(x \otimes_X x') \qquad \star\star$$ for all $$x,x' \in X$$. But $$\star$$ together with $$\star\star$$ is just your earlier condition $$f(x) \otimes_Y f(x') = f(x \otimes_X x') .$$ I don't see how either $$\star$$ or $$\star\star$$ is enough for this problem. I don't think either one by itself will do the job.

12. Sophie wrote: "I'm also wondering about Marius's suggestion that $$f$$ should satisfy $$1_Y\leq_Y f(1_X)$$."

I don't think this condition plays any role in Puzzle 77. There's an interesting asymmetry in the definition of "monoidal preorder": the operation $$\otimes$$ needs to get along with the relation $$\le$$, but the unit $$1$$ does not. Later we will meet various kinds of maps between monoidal preorders: see Section 2.2.5. These should remind you of Puzzle 77, and they involve conditions on the unit. They are definitely relevant to your "pricing of groceries" examples... but nonetheless, I don't think any conditions on the unit are relevant to Puzzle 77. I could be wrong.

13. I've decided to make these puzzles into a mini-lecture, just because they fit pretty well into the overall flow of what we're doing: learning about monoidal preorders and their role in economics.

14. Thanks Marius, Tobias, and John for the responses! I had a lot of fun working on these problems.
Marius, I really like the example of getting a discount instead of a bag charge! Your comment about opposite categories also made me think that, given a function $$f: X \to Y$$, we can define a relation on $$X$$ in an opposite way by $x \leq_X x' \iff f(x) \geq_Y f(x').$

I also wanted to check my thinking about Puzzle 77 again. I showed that the property $$f(x) \otimes_Y f(x') = f(x \otimes_X x')$$ is sufficient for making $$(X, \leq_X, \otimes_X, 1_X)$$ a monoidal preorder. But the examples of a bag cost and coupon discount that Marius and I suggested show that this is not a necessary condition, since in both of those cases we only have $$f(x) \otimes_Y f(x') \leq f(x \otimes_X x')$$ and $$f(x) \otimes_Y f(x') \geq f(x \otimes_X x')$$ respectively. So as of yet, we don't have a nice necessary and sufficient condition on $$f$$ for making $$(X, \leq_X, \otimes_X, 1_X)$$ a monoidal preorder. Is that correct?

15. Sophie: I haven't carefully checked those bag cost and coupon discount examples, so I can't promise that $$f(x) \otimes_Y f(x') = f(x \otimes_X x')$$ is not necessary. But I'm willing to believe you. Re-examining what I wrote, it seems that a necessary and sufficient condition is $$f(x_1)\otimes_Y f(x_2) \le_Y f(x_1') \otimes_Y f(x_2') \textrm{ implies } f(x_1 \otimes_X x_2) \le_Y f(x_1' \otimes_X x_2') .$$ It's late, so I'll have to check this when I'm more awake. Does this condition hold in your examples?

16. John: Yes, your condition holds for both the bag cost and coupon examples. Also I can see how it slides right into the proof I gave in Comment 6. I wrote, $f(x) \otimes_Y f(y) \leq_Y f(x') \otimes_Y f(y').$ $$f$$ exactly preserves the tensor structure so, $f(x \otimes_X y) \leq_Y f(x' \otimes_X y') .$ Just replace "$$f$$ exactly preserves the tensor structure" with "by hypothesis"!
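For readers who want to see the condition from comment 15 in action, here is a brute-force spot check (an editorial sketch, not a forum comment) of it on the bag-charge cost map from comment 6, over a small grid of baskets.

```python
# Spot-check of the condition
#   g(x1) + g(x2) <= g(y1) + g(y2)  implies  g(x1 tensor x2) <= g(y1 tensor y2)
# for the bag-charge cost map g from the "Eggs and Milk" example.
from itertools import product

def tensor(x, y):
    return (x[0] + y[0], x[1] + y[1])

def g(x):
    return x[0] + 2 * x[1] + 0.10

grid = list(product(range(3), repeat=2))
ok = all(g(tensor(x1, x2)) <= g(tensor(y1, y2))
         for x1 in grid for x2 in grid for y1 in grid for y2 in grid
         if g(x1) + g(x2) <= g(y1) + g(y2))
print("condition holds on the sample:", ok)
```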
17. Great, so this rather complicated condition is exactly the necessary and sufficient one! By the way, there's more about grocery store prices in Lecture 27. I hadn't realized until teaching this course how much category theory, or at least poset theory, is lurking in the humble corner grocery store.

18. You can also solve these sorts of problems with the calculus of variations, so you have a budget, some choices about how to allocate it if you are shopping, and a budget constraint, usually written as a Lagrangian. The more complex cases basically involve tensor products. Category theory, I think, includes the calculus of variations (or multiobjective optimization) as a special case. But it's a different, more general dialect.

19. Is the condition $$f(x_1)\otimes_Y f(x_2) \le_Y f(x_1') \otimes_Y f(x_2') \textrm{ implies } f(x_1 \otimes_X x_2) \le_Y f(x_1' \otimes_X x_2')$$ really necessary? I'm having a hard time trying to prove it. What I mean in detail is: assuming that $$(X,\le_X,\otimes_X,1_X)$$ is a monoidal preorder, prove that the condition must hold.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9107794165611267, "perplexity": 1318.8055607422239}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027318011.89/warc/CC-MAIN-20190823062005-20190823084005-00092.warc.gz"}
https://www.physicsforums.com/threads/electric-field-help.85860/
# Electric field help

1. Aug 21, 2005

### arutha

I have these questions on electric fields that I'm a bit confused about.

A flat circle of radius 8 cm is placed in a uniform electric field of 8.5 × 10^2 N/C. What is the electric flux (in Nm^2/C) through the circle when its face is at 51° to the field lines?

I just use EAcos(theta), don't I? Where A is 2*pi*r. But which angle do I use, 51 or 90-51, since the angle is meant to be between the field and the normal, not between the field and the surface, right?

A metallic sphere of radius 22 cm is negatively charged. The magnitude of the resulting electric field, close to the outside surface of the sphere, is 1.8 × 10^2 N/C. Calculate the net electric flux (in Nm^2/C) outward through a spherical surface surrounding, and just beyond, the metallic sphere's surface.

I'm thinking just E*A*cos(theta) again. Would the answer be negative because it is negatively charged?

Two concentric spherical shells of radii R1=1 m and R2=2 m contain charge Q1=0.005 C and Q2=0.0065 C respectively. Calculate the electric field at a distance r=1.79 m from the centerpoint of the spheres.

I have absolutely no idea on this one. How does it work with the two charges? And what if I was calculating the field outside the two spheres, would that be any different?

A very long solid nonconducting cylinder of radius 18.3 cm possesses a uniform volume charge density of 1.68 μC/m^3. Determine the magnitude of the electric field (in N/C) inside the cylinder at a radial distance of 8.8 cm from the cylinder's central axis.

Here's what I've thought of: multiply the volume charge density by the volume of the cylinder to get the charge in μC, then use E=kQ/r^2 to get the magnitude of the electric field. Is that right? Edit: That won't work because I don't have a length of the cylinder to get the volume... Whoops.

Thanks for any help. By the way, I don't want numbers or any answers; I'd rather hear the process and then get the numbers myself, so I can figure out other problems of a similar nature.

Last edited: Aug 21, 2005

2. Aug 21, 2005

### mukundpa

Oooo, what is the area of a circle? 2*pi*r ???? Check it.

3. Aug 21, 2005

### arutha

Oh yeah, I forgot the square after the r... I wrote it down on the sheet, just missed typing it.

4. Aug 21, 2005

### mukundpa

A = (Pi)*r^2

5. Aug 21, 2005

### mukundpa

For the rest of the problems, go through Gauss's theorem.

6. Aug 22, 2005

### arutha

Well, I got them all except the last one now. I still have absolutely no idea how to do it; I've gone through my textbook, lecture notes and everything but can't find anything on it.

7. Aug 22, 2005

### mukundpa

The distance of the point at which the field magnitude is required is 8.8 cm, which is less than the radius of the cylinder, 18.3 cm. Consider a coaxial cylindrical Gaussian surface of radius 8.8 cm and apply Gauss's theorem. Remember that the charge to be taken is the charge within the Gaussian surface.
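Editorial footnote (not part of the original thread, and deliberately leaving the numbers to the reader, as the original poster requested): for the flat circle the flux uses the angle between the field and the surface normal, and for the uniformly charged cylinder Gauss's law with a coaxial cylindrical surface of radius r < R gives a field that grows linearly with r:

$\Phi_E = E A \cos\theta_{\mathrm{normal}} = E \, \pi r^2 \cos(90^\circ - 51^\circ), \qquad E(r) \cdot 2\pi r L = \frac{\rho \, \pi r^2 L}{\varepsilon_0} \;\Rightarrow\; E(r) = \frac{\rho \, r}{2\varepsilon_0} \quad (r < R).$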
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9424264430999756, "perplexity": 867.6273014003327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719155.26/warc/CC-MAIN-20161020183839-00560-ip-10-171-6-4.ec2.internal.warc.gz"}
https://www.europeanpharmaceuticalreview.com/news/63066/heart-failure-market-worth-16-billion-2026-novartis-drug-triggers-growth/
Heart failure market worth $16 billion by 2026 as Novartis’ drug triggers growth

The heart failure space across the seven key markets of the US, France, Germany, Italy, Spain, the UK and Japan is set to grow from $3.7 billion in 2016 to around $16.1 billion by 2026, representing an impressive compound annual growth rate of 15.7%, according to research and consulting firm GlobalData.

The company’s latest report states that the strongest driver of this rise in market value will be the growing uptake of Novartis’ Entresto over the forecast period, despite initial modest sales. Other drivers will include the launch of several chronic heart failure therapies, including Amgen and Cytokinetics’ omecamtiv mecarbil, and an increase in the global prevalence of chronic heart failure and incidence of acute heart failure.

Elizabeth Hamson, PhD, Healthcare Analyst for GlobalData, explains: “Over the past two decades, chronic heart failure therapies have demonstrated success in slowing the progression of the disease and in reducing both mortality and morbidity in large-scale clinical trials. However, these successes have been limited to heart failure with reduced ejection fraction (HF-REF), showing only moderate benefits in heart failure with preserved ejection fraction (HF-PEF). Despite the lack of strong clinical evidence, guideline-recommended HF-REF therapies are widely used to treat HF-PEF.

“Based on this, drug developers have historically only targeted HF-REF in their drug strategies. With HF-PEF on the rise, however, this patient cohort represents a lucrative opportunity for pharmaceutical companies such as Novartis, which recently launched its first-in-class angiotensin receptor-neprilysin inhibitor Entresto in the US for HF-REF, and is currently conducting late-stage trials of the drug in HF-PEF patients.”

GlobalData anticipates Entresto’s label expansions to HF-PEF to be approved in 2020, which will boost the drug’s uptake dramatically. Due to the lack of evidence-based therapies for HF-PEF, if Entresto proves to be efficacious in this patient population, it will help Novartis further penetrate the heart failure market, undoubtedly benefiting the company immensely.

Hamson concludes: “Although it is thought that Entresto will fulfill a major unmet need over the forecast period, it is important to acknowledge that others will remain. For example, effective treatment of patients with multiple comorbidities, particularly those with renal impairment, will remain elusive. GlobalData does not expect this unmet need to be fulfilled during the forecast period, although the recent FDA approval of several potassium-binding agents to treat hyperkalemia may relieve the burden of this unmet need to a slight extent.”
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28188765048980713, "perplexity": 8178.2121090267565}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104669950.91/warc/CC-MAIN-20220706090857-20220706120857-00062.warc.gz"}
http://www.mathworks.com/help/physmod/elec/ref/shuntmotor.html?requestedDomain=www.mathworks.com&nocookie=true
# Shunt Motor

Model electrical and torque characteristics of shunt motor

## Library

Rotational Actuators

## Description

The Shunt Motor block represents the electrical and torque characteristics of a shunt motor using the following equivalent circuit model.

When you set the Model parameterization parameter to `By equivalent circuit parameters`, you specify the equivalent circuit parameters for this model:

• Ra — Armature resistance
• La — Armature inductance
• Rf — Field winding resistance
• Lf — Field winding inductance

The Shunt Motor block computes the motor torque as follows:

1. The magnetic field in the motor induces the following back emf vb in the armature:

$$v_b = L_{af}\, i_f\, \omega$$

where Laf is a constant of proportionality and ω is the angular velocity.

2. The mechanical power is equal to the power reacted by the back emf:

$$P = v_b\, i_a = L_{af}\, i_f\, i_a\, \omega$$

3. The motor torque is:

$$T = P/\omega = L_{af}\, i_f\, i_a$$

The torque-speed characteristic for the Shunt Motor block model is related to the parameters in the preceding figure. When you set the Model parameterization parameter to `By rated power, rated speed & no-load speed`, the block solves for the equivalent circuit parameters as follows:

1. For the steady-state torque-speed relationship, L has no effect.

2. Sum the voltages around the loop:

$$V = i_a R_a + L_{af}\, i_f\, \omega, \qquad V = i_f R_f$$

3. Solve the preceding equations for ia and if:

$$i_f = \frac{V}{R_f}, \qquad i_a = \frac{V}{R_a}\left(1 - \frac{L_{af}\,\omega}{R_f}\right)$$

4. Substitute these values of ia and if into the equation for torque:

$$T = \frac{L_{af}}{R_a R_f}\left(1 - \frac{L_{af}\,\omega}{R_f}\right)V^2$$

The block uses the rated speed and power to calculate the rated torque. The block uses the rated torque and no-load speed values to get one equation that relates Ra and Laf/Rf. It uses the no-load speed at zero torque to get a second equation that relates these two quantities. Then, it solves for Ra and Laf/Rf.

The block models motor inertia J and damping B for all values of the Model parameterization parameter. The output torque is:

$$T_{load} = \frac{L_{af}}{R_a R_f}\left(1 - \frac{L_{af}\,\omega}{R_f}\right)V^2 - J\dot{\omega} - B\omega$$

The block produces a positive torque acting from the mechanical C to R ports.

### Thermal Ports

The block has two optional thermal ports, one per winding, hidden by default. To expose the thermal ports, right-click the block in your model, and then from the context menu select Simscape > Block choices > Show thermal port. This action displays the thermal ports on the block icon, and adds the Temperature Dependence and Thermal Port tabs to the block dialog box. These tabs are described further on this reference page.

Use the thermal ports to simulate the effects of copper resistance losses that convert electrical power to heat. For more information on using thermal ports in actuator blocks, see Simulating Thermal Effects in Rotational and Translational Actuators.

## Parameters

### Electrical Torque Tab

Model parameterization

Select one of the following methods for block parameterization:

• `By equivalent circuit parameters` — Provide electrical parameters for an equivalent circuit model of the motor. This is the default method.

• `By rated power, rated speed & no-load speed` — Provide power and speed parameters that the block converts to an equivalent circuit model of the motor.
Armature resistance Resistance of the armature. This parameter is only visible when you select `By equivalent circuit parameters` for the Model parameterization parameter. The default value is `110` Ω. Field winding resistance Resistance of the field winding. This parameter is only visible when you select `By equivalent circuit parameters` for the Model parameterization parameter. The default value is `2.5e+03` Ω. Back-emf constant The ratio of the voltage generated by the motor to the motor speed. The default value is `5.11` s*V/rad/A. Armature inductance Inductance of the armature. If you do not have information about this inductance, set the value of this parameter to a small, nonzero number. The default value is `0.1` H. The value can be zero. Field winding inductance Inductance of the field winding. If you do not have information about this inductance, set the value of this parameter to a small, nonzero number. The default value is `0.1` H. The value can be zero. Speed of the motor when no load is applied. This parameter is only visible when you select ```By rated power, rated speed & no-load speed``` for the Model parameterization parameter. The default value is `4.6e+03` rpm. Rated speed (at rated load) Motor speed at the rated load. This parameter is only visible when you select ```By rated power, rated speed & no-load speed``` for the Model parameterization parameter. The default value is `4e+03` rpm. Rated load (mechanical power) The mechanical load for which the motor is rated to operate. This parameter is only visible when you select ```By rated power, rated speed & no-load speed``` for the Model parameterization parameter. The default value is `50` W. Rated DC supply voltage The voltage at which the motor is rated to operate. This parameter is only visible when you select ```By rated power, rated speed & no-load speed``` for the Model parameterization parameter. The default value is `220` V. Starting current at rated DC supply voltage The initial current when starting the motor with the rated DC supply voltage. This parameter is only visible when you select ```By rated power, rated speed & no-load speed``` for the Model parameterization parameter. The default value is `2.09` A. ### Mechanical Tab Rotor inertia Rotor inertia. The default value is `2e-04` kg*m2. The value can be zero. Rotor damping Rotor damping. The default value is `1e-06` N*m/(rad/s). The value can be zero. Initial rotor speed Speed of the rotor at the start of the simulation. The default value is `0` rpm. ### Temperature Dependence Tab This tab appears only for blocks with exposed thermal ports. For more information, see Thermal Ports. Resistance temperature coefficients, [alpha_f alpha_a] A 1 by 2 row vector defining the coefficient α in the equation relating resistance to temperature, as described in Thermal Model for Actuator Blocks. The first element corresponds to the field winding, and the second to the armature. The default value is for copper, and is ```[ 0.00393 0.00393 ]``` 1/K. Measurement temperature The temperature for which motor parameters are defined. The default value is `25` °C. ### Thermal Port Tab This tab appears only for blocks with exposed thermal ports. For more information, see Thermal Ports. Thermal masses, [Mf Ma] A 1 by 2 row vector defining the thermal mass for the field and armature windings. The thermal mass is the energy required to raise the temperature by one degree. The default value is ```[ 100 100 ]``` J/K. 
Initial temperatures, [Tf Ta] A 1 by 2 row vector defining the temperature of the field and armature thermal ports at the start of simulation. The default value is `[ 25 25 ]` °C. ## Ports The block has the following ports: `+` Positive electrical input. `-` Negative electrical input. `C` Mechanical rotational conserving port. `R` Mechanical rotational conserving port. `Hf` Field winding thermal port. For more information, see Thermal Ports. `Ha` Armature winding thermal port. For more information, see Thermal Ports. ## References [1] Bolton, W. Mechatronics: Electronic Control Systems in Mechanical and Electrical Engineering, 3rd edition Pearson Education, 2004.
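The steady-state torque-speed relation derived in the Description section is easy to evaluate numerically. The sketch below is our own illustration, not part of the MathWorks documentation and not the Simscape implementation; it simply plugs the default values quoted in the parameter list above (V = 220 V, Ra = 110 Ω, Rf = 2.5e3 Ω, back-emf constant Laf = 5.11) into the formula and plots torque against speed up to the no-load speed, which for these numbers comes out close to the quoted 4.6e3 rpm default.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative values taken from the defaults quoted in the parameter list above.
V = 220.0       # rated DC supply voltage, V
Ra = 110.0      # armature resistance, ohm
Rf = 2.5e3      # field winding resistance, ohm
Laf = 5.11      # back-emf constant, V*s/rad per field ampere

def steady_state_torque(omega):
    """T = Laf/(Ra*Rf) * (1 - Laf*omega/Rf) * V^2 (steady state; inertia and damping ignored)."""
    return Laf / (Ra * Rf) * (1.0 - Laf * omega / Rf) * V**2

omega_no_load = Rf / Laf                   # rad/s, from setting T = 0
print(omega_no_load * 60 / (2 * np.pi))    # ~4672 rpm, close to the 4.6e3 rpm default

omega = np.linspace(0.0, omega_no_load, 200)
plt.plot(omega, steady_state_torque(omega))
plt.xlabel("speed (rad/s)")
plt.ylabel("torque (N*m)")
plt.title("Shunt motor steady-state torque vs. speed (illustrative)")
plt.show()
```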
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 7, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7542601227760315, "perplexity": 2538.4733276754896}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187820927.48/warc/CC-MAIN-20171017052945-20171017072945-00115.warc.gz"}
https://mathstrek.blog/2013/11/28/elementary-module-theory-iv-linear-algebra/
## Elementary Module Theory (IV): Linear Algebra

Throughout this article, a general ring is denoted R while a division ring is denoted D.

## Dimension of a Vector Space

First, let's consider the dimension of a vector space V over D, denoted dim(V). If W is a subspace of V, we proved earlier that any basis of W can be extended to give a basis of V, thus dim(W) ≤ dim(V). Furthermore, we claim that if $\{v_i + W\}$ is a basis of the quotient space V/W, then the vi's, together with a basis $\{w_j\}$ of W, form a basis of V:

• If $\sum_i r_i v_i + \sum_j r_j' w_j = 0$ for some $r_i, r_j' \in D$, its image in V/W gives $\sum_i r_i (v_i + W) = 0$ and thus each $r_i$ is zero. This gives $\sum_j r_j' w_j = 0$; since $\{w_j\}$ forms a basis of W, each $r_j' = 0.$ This proves that $\{v_i\} \cup \{w_j\}$ is linearly independent.
• Let $v\in V$. Its image v+W in V/W can be written as a linear combination $\sum_i r_i (v_i + W) = v+W$ for some $r_i \in D.$ Hence $v - \sum_i r_i v_i \in W$ and can be written as a linear combination of $\{w_j\}.$ So v can be written as a linear combination of $\{v_i\} \cup \{w_j\}.$

Conclusion: dim(W) + dim(V/W) = dim(V). Now if f : V → W is any homomorphism of vector spaces, the first isomorphism theorem tells us that V/ker(f) is isomorphic to im(f). Hence, dim(V) = dim(ker(f)) + dim(im(f)). If V is finite-dimensional and dim(V) = dim(W), then:

• (f is injective) iff (ker(f) = 0) iff (dim(ker(f)) = 0) iff (dim(im(f)) = dim(V)) iff (dim(im(f)) = dim(W)) iff (im(f) = W) iff (f is surjective).

Thus, (f is injective) iff (f is surjective) iff (f is an isomorphism). For infinite-dimensional V and W, take the free vector spaces $V = W = D^{(\mathbf{N})}$ and let f : V → W take the tuple $(r_1, r_2, \ldots) \mapsto (0, r_1, r_2, \ldots).$ Then f is injective but not surjective. Over a general ring, even if M and N are free modules, the kernel and image of f : M → N may not be free. This follows from the fact that a submodule of a free module is not free in general, as we saw earlier. Hence it doesn't make sense to talk about dim(ker(f)) and dim(im(f)) for such cases.

In a Nutshell. The main results are:

• for a D-linear map f : V → W, dim(V) = dim(ker(f)) + dim(im(f));
• if dim(V) = dim(W), then f is injective iff it is surjective.

## Matrix Algebra

Recall that an R-module M is free if and only if it has a basis $\{m_i\}_{i\in I}$, in which case we can identify $R^{(I)} \cong M$ via $(r_i)_{i\in I}\mapsto \sum_{i\in I} r_i m_i.$ Let's restrict ourselves to the case of finite free modules, i.e. modules with finite bases. If $M\cong R^a$ and $N\cong R^b,$ the group of homomorphisms is identified with $\text{Hom}(M, N)\cong R^{ab}$ in terms of b × a matrices in R. Let's make this identification a bit more explicit. Pick a basis $\{m_1, \ldots, m_a\}$ of M and $\{n_1, \ldots, n_b\}$ of N. We have:

$R^a \cong M, \ (r_1, \ldots, r_a) \mapsto \sum_{i=1}^a r_i m_i\$ and $\ R^b \cong N, (r_1, \ldots, r_b)\mapsto \sum_{j=1}^b r_j n_j.$

A module homomorphism f : M → N is expressed as a matrix as follows:

Example 1

Take R = $\mathbf{R}$, the field of real numbers, and $M = \{a + bx + cx^2 : a, b, c\in\mathbf{R}\}$ and $N = \{a + bx : a, b\in \mathbf{R}\}$ where x is an indeterminate here. The map f : M → N given by f(p(x)) = dp/dx is easily checked to be R-linear. Pick basis {1, x, x²} of M and {1, x} of N.
Since f(1) = 0, f(x) = 1 and f(x²) = 2x, the resulting f takes $m_1 \mapsto 0, m_2\mapsto n_1, m_3 \mapsto 2n_2.$ Hence, the matrix corresponding to these bases is $\begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 2\end{pmatrix}.$

On the other hand, if we pick basis {1+x, −x, 1+x²} of M and basis {1+x, 1+2x} of N, then

• $f(m_1) = f(1+x) = 1 = 2n_1 - n_2$;
• $f(m_2) = -1 = -2n_1 + n_2$;
• $f(m_3) = 2x = -2n_1 + 2n_2$

which gives the matrix representation $\begin{pmatrix} 2 & -2 & -2 \\ -1 & 1 & 2\end{pmatrix}.$

Example 2

Let M = {a + b√2 : a, b integers}, which is a Z-module. Take f : M → M which takes z to (3-√2)z. It's clear that f is a homomorphism of additive groups and hence Z-linear. Since the domain and codomain modules are identical (M), let's pick a single basis. If we pick {1, √2}, then

• $f(m_1) = f(1) = 3-\sqrt 2 = 3m_1 - m_2$;
• $f(m_2) = f(\sqrt 2) = -2 + 3\sqrt 2 = -2m_1 + 3m_2$

thus giving the matrix representation $\begin{pmatrix} 3 & -2 \\ -1 & 3\end{pmatrix}.$ Replacing the basis by {-1, 1+√2}, we now have

• $f(m_1) = f(-1) = -3 + \sqrt 2 = 4m_1 + m_2$;
• $f(m_2) = f(1+\sqrt 2) = 1 + 2\sqrt 2 = m_1 + 2m_2$,

which would give us: $\begin{pmatrix} 4 & 1 \\ 1 & 2\end{pmatrix}.$

Thus, the matrix representation for f : V → W depends on our choice of bases for V and W. If V = W, then it's often convenient to pick the same basis.

## Dual Module

We saw earlier that $\text{Hom}(R, M) \cong M$ as an R-module isomorphism. What about Hom(M, R) then?

Definition. The dual module of left-module M is defined to be $M^* := \text{Hom}(M, R).$ This is a right R-module, via the following right action:

• if $r\in R$ and $f:M\to R$, then the resulting $f\cdot r$ takes $m\mapsto f(m)r$.

From the universal property of direct sums and products, we see that:

$(\oplus_{i\in I} M_i)^* \cong \prod_{i\in I} M_i^*.$

Let's check that we get a right-module structure on M*: indeed, $(f\cdot r_1)\cdot r_2$ takes m to $(f\cdot r_1)(m)r_2 = (f(m)r_1)r_2$ which is the image of $f\cdot (r_1 r_2)$ acting on m. The module $M^*$ is called the dual because it's a right module instead of a left one. Note that if N were a right-module, the resulting space Hom(N, R) of all right-module homomorphisms would give us a left module $N^*.$

It's not true in general that $M^{**} \cong M$ but it holds for finite-dimensional vector spaces over a division ring.

Theorem. If V is a finite-dimensional vector space over division ring D, then $V^{**} \cong V.$

Proof. Consider the map $V^* \times V\to D$ which takes (f, v) to f(v). Fixing f, we get a map $v\mapsto f(v)$ which is a left-module homomorphism. Fixing v, we get a right-module homomorphism $f\mapsto f(v)$ since (f·r) corresponds to the map $v\mapsto f(v)r$ by definition. This gives a left-module homomorphism $\phi:V\to V^{**}.$ Since V is finite dimensional, it suffices to show $\text{ker}\phi = 0$. But if $v\in V-\{0\}$, we can extend {v} to a basis of V. Define a linear map f : V → D which takes v to 1 and all other basis elements to 0. Then $(\phi(v))(f) = f(v) \ne 0$ so $\phi(v) \ne 0.$ This shows that $\phi$ is injective and thus an isomorphism. ♦

One way to visualise the duality is via this diagram:

Exercise

It's tempting to define a module structure on Hom(M, R) via $(f\cdot r)(m) = f(rm).$ What's wrong with this definition? [ Answer: the resulting f·r : M → R is not a left-module homomorphism. ]

## Dual Basis

Suppose $\{ v_1, v_2, \ldots, v_n\}$ is a basis of V. Let $f_i : V\to D$ (i = 1, …, n) be linear maps defined as follows:

$f_i(v_j) = \begin{cases} 1, \quad &\text{ if } j = i, \\ 0, \quad &\text{ if } j\ne i.\end{cases}$

Each $f_i$ is well-defined by the universal property of the free module V.
Using the Kronecker delta function, we can just write $f_i(v_j) = \delta_{ij}.$ This is called the dual basis for $\{v_1, \ldots, v_n\}.$

[ Why is this a basis, you might ask? We know that dim(V*) = dim(V) = n, so it suffices to check that $f_1, \ldots, f_n$ is linearly independent. For that, we write $\sum_i f_i\cdot r_i=0$ for some $r_1, \ldots, r_n \in D$ (recall that V* is a right module). Then for each j = 1, …, n, we have $0 = \sum_i (f_i\cdot r_i)(v_j) = \sum_i f_i(v_j)r_i = \sum_i \delta_{ij}r_i = r_j$ and we're done. ]

Now if $f\in V^*$ and $v\in V,$ we can write $f = \sum_{i=1}^n f_i c_i$ and $v = \sum_{j=1}^n d_j v_j$ for some $c_i, d_j \in D.$ Then

\begin{aligned}f(v) &= \left(\sum_{i=1}^n f_i c_i\right)\left(\sum_{j=1}^n d_j v_j\right) = \sum_{i=1}^n f_i\left(\sum_{j=1}^n d_j v_j\right)c_i \\ &= \sum_{i=1}^n \sum_{j=1}^n d_j\delta_{ij}c_i = \sum_{i=1}^n d_i c_i\end{aligned}

which is the product between a row vector & a column vector. One thus gets a natural inner product between a vector space and its dual. Recall that in a Euclidean vector space $V = \mathbf{R}^3$, there's a natural inner product given by the usual dot product which is inherent in the geometry of the space. However, for generic vector spaces, it's hard to find a natural inner product. E.g. what would one be for the space of all polynomials of degree at most 2? Thus, the dual space provides a "cheap" and natural way to get an inner product.

## Example

Consider the space $V = \{a + bx + cx^2 : a, b, c\in \mathbf{R}\}$ over the reals R = $\mathbf{R}$. Examples of elements of V* are:

• $f\mapsto f(1)$ which takes $(a+bx+cx^2)\mapsto a+b+c$;
• $f\mapsto \left.\frac {df}{dx}\right|_{x=-1}$ which takes $(a+bx+cx^2) \mapsto b-2c$;
• $f\mapsto \int_0^1 (a+bx+cx^2) dx$ which takes $(a+bx+cx^2) \mapsto a + \frac b 2 + \frac c 3$.

It's easy to check that these three elements of V* are linearly independent and hence form a basis. Note: in this case, the base ring is a field so right modules are also left, i.e. V* and V are isomorphic as abstract vector spaces! However, there's no "natural" isomorphism between them since in order to establish an isomorphism, one needs to pick a basis of V, a basis of V* and map the corresponding elements to each other. On the other hand, the isomorphism between V** and V is completely natural.

Exercise. [ All vector spaces in this exercise are of finite dimension. ] Let $\{v_1, \ldots, v_n\}$ be a basis of V and $\{f_1, \ldots, f_n\}$ be its dual basis for V*. Denote the dual basis of $\{f_1, \ldots, f_n\}$ by $\{\alpha_1, \ldots, \alpha_n\}$ in V**. Prove that under the isomorphism $V\cong V^{**}$, we have $v_i = \alpha_i.$

Let $\{v_i\}$ be a basis of V and $\{w_j\}$ be a basis of W. If T : V → W is a linear map, then the matrix representation of T with respect to bases $\{v_i\}, \{w_j\}$ is denoted M.

• Prove that the map T* : W* → V*, which sends each linear map W → D to its composite with T (a linear map V → D), is a linear map of right modules.
• Let $\{f_i\}$ be the dual basis of $\{v_i\}$ for V* and $\{g_j\}$ be the dual basis of $\{w_j\}$ for W*. Prove that the matrix representation of T* with respect to bases $\{f_i\}, \{g_j\}$ is the transpose of M.

## More on Duality

Let V be a finite-dimensional vector space over D and V* be its dual. We claim that there's a 1-1 correspondence between subspaces of V and those of V*, which is inclusion-reversing.
Let's describe this:

• if $W\subseteq V$ is a subspace, define $W^\perp := \{ f\in V^* : f(w) = 0 \text{ for all } w\in W\};$
• if $X\subseteq V^*$ is a subspace, define $X^\perp := \{v\in V : f(v) = 0 \text{ for all } f\in X\}.$

The following preliminary results are easy to prove.

Proposition.

• $W^\perp$ is a subspace of V*;
• $X^\perp$ is a subspace of V;
• if $W_1\subseteq W_2\subseteq V$, then $W_1^\perp \supseteq W_2^\perp$;
• if $X_1\subseteq X_2 \subseteq V^*$, then $X_1^\perp \supseteq X_2^\perp$;
• $W\subseteq W^{\perp\perp}$ and $X\subseteq X^{\perp\perp}$.

We'll skip the proof, though we'll note that the above result in fact holds for any subsets $W\subseteq V$ and $X\subseteq V^*$. This observation also helps us to remember the direction of inclusion for $W\subseteq W^{\perp\perp}$ since in this general case, $W^{\perp\perp}$ is the subspace of V generated by W.

The main thing we want to prove is the following:

Theorem. If $W\subseteq V$ is a subspace, then $W^{\perp\perp} = W$. Likewise if $X\subseteq V^*$ is a subspace, then $X^{\perp\perp} = X.$

Proof. Pick a basis $\{v_1, \ldots, v_k\}$ of W and extend it to a basis $\{v_1, \ldots, v_n\}$ of V, where dim(W) = k and dim(V) = n. Let $\{f_1, \ldots, f_n\} \subset V^*$ be the dual basis. If $v\in V-W,$ write $v = \sum_{i=1}^n r_i v_i$ where each $r_i\in D.$ Since v is outside W, $r_j\ne 0$ for some j>k. This gives $f_j(v) = f_j(\sum_i r_i v_i) = r_j \ne 0$ and $f_j\in W^\perp$ since j>k. Hence $v\not\in W^{\perp\perp}$ and we have $W^{\perp\perp} \subseteq W.$ The case for X is obtained by replacing V with V* and identifying $V^{**} \cong V$.  ♦

Thus we get the following correspondence:

Furthermore, the dimensions "match". E.g. suppose dim(V) = n, so dim(V*) = n. Then we claim that for any subspace W of V of dimension k,

• $\dim(W^{\perp}) = n-k$;
• $V^* / W^\perp \cong W^*$ naturally.

Since dim(W*) = dim(W) = k, the first statement follows from the second. From results above, the inclusion map W → V induces a map of the dual spaces V* → W*. The kernel of this map is precisely the set of all $f\in V^*$ such that f(w) = 0 for all w in W, which is exactly $W^\perp.$ This proves our claim. ♦
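As a quick concrete illustration of this correspondence (our own example, taking D to be the field of real numbers): let $V = \mathbf{R}^3$ with standard basis $\{e_1, e_2, e_3\}$, dual basis $\{f_1, f_2, f_3\}$, and $W = \text{span}\{e_1, e_2\}$, so n = 3 and k = 2. Then

$W^\perp = \{f \in V^* : f(e_1) = f(e_2) = 0\} = \text{span}\{f_3\},$ which has dimension $n - k = 1$, and

$W^{\perp\perp} = \{v \in V : f_3(v) = 0\} = \text{span}\{e_1, e_2\} = W,$

exactly as the theorem and the dimension count predict.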
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 139, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9887526035308838, "perplexity": 476.496093092914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038468066.58/warc/CC-MAIN-20210418043500-20210418073500-00140.warc.gz"}
https://essayzilla.org/montgomery-college-relative-frequency-distribution-statistics-question/
## Montgomery College Relative Frequency Distribution Statistics Question

### Question Description

[Minimum length of responses is given in brackets as number of complete sentences required]

Critical Thinking Questions

1. What is the sum of all frequencies in a frequency distribution? (Hint: it is not 1.) [1]
2. Why is the sum of all relative frequencies equal to 1? [1]
3. What is the relationship between percentiles and quartiles? Interpret Q1, Q2, & Q3 as percentiles. (Hint: A number $x$ is at the $k$th percentile if $k\%$ of the data in the set is less than or equal to $x$.) [3]
4. What can you say about a data set when the "box" in the box plot is very wide but the "whiskers" do not go out very far from the box? [2]
5. Why is it important to identify outliers? [2]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.1944085955619812, "perplexity": 2436.7138351500166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362952.24/warc/CC-MAIN-20211204063651-20211204093651-00470.warc.gz"}
https://stat.mit.edu/events/ankur-moitra-mit/
# Stochastics and Statistics Seminar

## Robust Statistics, Revisited

Speaker Name: Ankur Moitra (MIT)
Date: March 10, 2017
Time: 11:00am
Location: E18-304

Abstract: Starting from the seminal works of Tukey (1960) and Huber (1964), the field of robust statistics asks: Are there estimators that provably work in the presence of noise? The trouble is that all known provably robust estimators are also hard to compute in high dimensions. Here, we study a basic problem in robust statistics, posed in various forms in the above works. Given corrupted samples from a high-dimensional Gaussian, are there efficient algorithms to accurately estimate its parameters? We give the first algorithms that are able to tolerate a constant fraction of corruptions that is independent of the dimension. Additionally, we give several more applications of our techniques to product distributions and various mixture models. This is based on joint work with Ilias Diakonikolas, Jerry Li, Gautam Kamath, Daniel Kane and Alistair Stewart.

Speaker Bio: Ankur Moitra is the Rockwell International Assistant Professor in the Department of Mathematics at MIT. The aim of his work is to bridge the gap between theoretical computer science and machine learning by developing algorithms with provable guarantees and foundations for reasoning about their behavior. He is a recipient of a Packard Fellowship, a Sloan Fellowship, an NSF CAREER Award, an NSF Computing and Innovation Fellowship and a Hertz Fellowship.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8212506771087646, "perplexity": 746.7197575058822}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122739.53/warc/CC-MAIN-20170423031202-00507-ip-10-145-167-34.ec2.internal.warc.gz"}
http://mathematica.stackexchange.com/questions/108553/what-options-and-settings-can-be-used-to-create-print-quality-typeset-documents
# What options and settings can be used to create print quality typeset documents with Mathematica? If I am proficient on dedicated typesetting software (e.g. LaTeX) it makes sense to use that for typesetting but if I am not how can I use Mathematica to create good quality documents. The question has been stimulated by an extended Q&A with David Stork. - If you are skilled in particular code or packages then the switching costs (in time, possibly also money) to adopt a new software/package are usually prohibitive. In the context of typesetting, if you already are well versed in a typesetting software, e.g. LaTeX then I don't see any reason to attempt publication quality documents with Mathematica. So the information below is really aimed at Mathematica users who do not have typesetting alternatives, or users who want to do development and publication from the one interface. What I have written below was stimulated by my discussion with David Stork and is based on bits and pieces and scraps of code and notes I had lying around. It is definitely not exhaustive so I welcome other answers and any criticism of this answer. I've typeset a lot of pages over the years as an author of a book and then as publisher of Mathematica in Education and Research for 12 issues so I know from first hand experience that print ready documents can be created in Mathematica (but again, to be clear, I am not advocating Mathematica instead of e.g. LaTeX). Stylesheets Generally speaking you need to have a good (special) reason to style cells locally at the cell level. Doing this almost always leads to workflow problems if you need to make changes. It is much better, and more efficient, to use a custom stylesheet for you document creation. At the time of writing there were 539 questions tagged "stylesheet" on Mathematica StackExchange so information about configuring stylesheets should be sufficient, notwithstanding that it is not organised into a coherent tutorial for those starting from scratch. There is an introductory tutorial in the Mathematica documentation. For what it is worth the way I generally create styles is firstly to create some content with several cell types, e.g. "Input", "Output", "Text" and so on. For "Text" I do not split cells at each paragraph. I keep entire blocks of text in one cell and use the options ParagraphSpacing, ParagraphIndent, LineSpacing, LineIndent to control the layout. So for text I might have something like this and I add a cell tag to the cell. Note that the code below is the underlying expression that you see from Cell > Show Expression. Cell["\<Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. 
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.\>", "Text", CellTags->"Text"] With Format > ScreenEnvironment > Working I then use a combination of Format > Option Inspector and the text formatting options under the Format menu item to style the cell. The ruler toolbar can be used to set left and right cell margins. A bit more on that in another section. This leaves me with a "text" cell with lots of local options. These are the options that I want to use in my stylesheet. So I scrape them from the cell using code like this: NotebookLocate["Text"]; cell = Cells[NotebookSelection[EvaluationNotebook[]]]; tmp = DeleteCases[NotebookRead[cell], CellTags -> _, {2}]; tmp /. Cell[_, x_String, y___] :> CellPrint@Cell[StyleData[x], y]; When using this method you should remove the highlighted options. If I want to give this a new style name but base it on the existing style then I do this: NotebookLocate["Text"]; cell = Cells[NotebookSelection[EvaluationNotebook[]]]; tmp = DeleteCases[NotebookRead[cell], CellTags -> _, {2}]; tmp /. Cell[_, x_String, y___] :> CellPrint@ Cell[StyleData["MyNewStyle", StyleDefinitions -> StyleData[x]], y]; So now we have a new working style that can be pasted into a stylesheet. Next step is to either remove all the local options or alternatively create a new cell with no options, then switch to With Format > ScreenEnvironment > Printout and repeat the above this time configuring the appearance with printing in mind. (Note that the default magnification might need to be changed in your Printout style) NotebookLocate["Text"]; cell = Cells[NotebookSelection[EvaluationNotebook[]]]; tmp = DeleteCases[NotebookRead[cell], CellTags -> _, {2}]; tmp /. Cell[_, x_String, y___] :> CellPrint@Cell[StyleData[x, "Printout"], y]; For setting colour schemes something like this might be useful. Equations If your objective is to display equations, possibly numbered, to convey some principle then I would recommend typesetting the equations directly into a text based cell such as "DisplayFormulaNumbered" or a custom cell derived from one of those cells. However equations can be rendered from Mathematica input. For example: CellPrint[TextCell[Defer[y[x] == Integrate[x^2 Exp[-x], {x, 0, 1}]],"DisplayFormula"]] This rendering is probably not what you expected given the options for my "DisplayFormula" style. The reason is that the evaluation produces BoxData rather than TextData: What is needed here is a FormBox and TextData. We can try appending TraditionalForm: CellPrint[TraditionalForm@ TextCell[Defer[y[x] == Integrate[x^2 Exp[-x], {x, 0, 1}]], "DisplayFormula", StripOnInput -> True]] Close, but no cigar. The code did create an inline cell however, so the typeset equation can be cut and pasted. Normally I type equations directly into a text cell, where I mean text in a generic sense, i.e. "Text", "DisplayFormula" and so on. The three assistant palettes are often useful for this. There is a catch however. Firstly try typing an equation in a Text cell. 
Compare the box code of the last example with the code from the Text cell: The problem is that "DisplayFormula" has an option DefaultFormatType->DefaultInputFormatType which seems to lead to the creation of BoxData cells rather than TextData cells. So a modified style is required: Cell[StyleData["DisplayFormulaNumbered"], CellFrameMargins->False, CellFrameLabels->{{None, Cell[ TextData[{"(", CounterBox["DisplayFormulaNumbered"], ") "}]]}, {None, None}}, DefaultFormatType->DefaultTextFormatType, TextAlignment->Center, ScriptSizeMultipliers->0.71, ScriptMinSize->7, ScriptLevel->0, SingleLetterItalics->True, SpanMaxSize->Infinity, CounterIncrements->"DisplayFormulaNumbered", FormatTypeAutoConvert->False, FontFamily->"Times", UnderoverscriptBoxOptions->{LimitsPositioning->False}, FractionBoxOptions->{AllowScriptLevelChange->False}] This style includes other useful options. Now the equation is typeset as expected. The underlying box code now looks much like you got in the Text cell: Use SingleLetterItalics->True to ensure that your variables are italicised. A problem sometimes arise when you have several variables together. Suppose you had variables x, y, and z multiplied together. There may be a style guide that you need to conform to, to represent multiplication, e.g. centre dot, but assuming you simply want to make it clear that you have 3 variables and not one variable called xyz then one way to ensure each variable is treated as a single character is to insert an invisible character between them. In this GIF an invisible space was inserted. Lists of these special characters are available here and here. MakeBoxes, Format, Notations package More on this section later. The Notations package is rather maligned on this site but it can be quite useful, when used within its limitations. More on this section later Bulk changes It is often convenient to hide code from documents. Common ways to do this are to close cell groups, set cell sizes to zero or delete the cell. To hide all input by selecting output cells and closing the cell group evaluate this: NotebookFind[EvaluationNotebook[], "Output", All, CellStyle]; FrontEndTokenExecute["OpenCloseGroup"] To only close selected cell groups, tag the cells you want to hide and then evaluate this: NotebookLocate["HideMe"]; FrontEndTokenExecute["OpenCloseGroup"] To close selected cells, tag the cells you want to close and then evaluate this: NotebookLocate["CloseMe"]; SetOptions[NotebookSelection[EvaluationNotebook[]], CellOpen -> False] To delete cells use the same methods to find and select the cells and then use NotebookDelete. NotebookLocate["DeleteMe"]; NotebookDelete[EvaluationNotebook[]] In addition to modification done within a notebook it is also possible to "bulk modify" several notebooks. For example if all notebooks you want to modify are in the same directory then obtain a list of the file names: files = FileNames["*.nb", "../PathTo/YourDocuments/", 2] Next you may want to modify a copyright notice in each of the notebooks and change some notebook options: Clear[modifyNotebook]; modifyNotebook[$file_] := Module[{nb = NotebookOpen[$file, StyleDefinitions -> "MyStylesheet.nb"], content, new}, SelectionMove[nb, All, Notebook]; new = content /. 
{Cell[_, "Copyright", ___] :> NotebookDelete[nb]; SetOptions[nb, WindowSize -> {600, 700}, ShowCellTags -> False, Magnification -> 1.0, StyleDefinitions -> "MyStylesheet.nb"]; Scan[NotebookWrite[nb, #] &, Flatten[{new}]]; NotebookSave[nb]; NotebookClose[nb]] modifyNotebook /@ files; Tweeking Use tools such as these manipulates to figure out the best options for things like FractionBox Manipulate[ Column[{ Dynamic@ Style[Row[{"some text ", FractionBox["10", "2", Beveled -> b, DenominatorAlignment -> d, NumeratorAlignment -> n], " more text "}] // DisplayForm, 48], Button["Paste", Print@(FractionBoxOptions -> {Beveled -> b, DenominatorAlignment -> d, NumeratorAlignment -> n}), ImageSize -> 100] }], {{b, True, "Beveled"}, {True, False}}, {{n, 0, "Numerator Alignment"}, -1, 1, 0.1}, {{d, 0, "Denominator Alignment"}, -1, 1, 0.1} ] You may need to have tildas or hats as overscripts to symbols: Manipulate[ Column[{ BoxBaselineShift -> a, BoxMargins -> {{l, 0}, {0, 0}}], DiacriticalPositioning -> f], " more text "}] // DisplayForm, 48], Button["Paste", BoxBaselineShift -> a, BoxMargins -> {{l, 0}, {0, 0}}], DiacriticalPositioning -> f], "Input"]], ImageSize -> 100] }], {{f, True, "Diacritical Positioning"}, {True, False}}, {{a, 0, "Up/Down"}, -2, 2}, {{l, 0, "Left/Right"}, -1, 1} ] Manipulate[ Column[{ Style[Row[{Subscript["H", 3], Superscript["O", "+"]}], ScriptBaselineShifts -> {b, t}, 36], Button["Paste", Print@ ToBoxes@Style[Row[{Subscript["H", 3], Superscript["O", "+"]}], ScriptBaselineShifts -> {b, t}], ImageSize -> 100] }], {{b, 1}, -2, 2}, {{t, 1}, -2, 2} ] Once you have customised the appearance for these special typeset items you can use the settings in a paste button. (I'll add an example later on). Use AlignmentMarker to align equations. (I'll add an example later on) Fixing unmatched brackets: The option SpanMaxSize->Infinity expands the { to encapsulate all terms Layout tweeking Left and right hand cell margins can be adjusted using the Ruler toolbar. For typesetting a print ready document it is often useful to adjust cell margins. This code allows you to adjust cells margins to create or reduce space so as to getter a better fit of content within a page. Module[{margins, nb = InputNotebook[]}, Grid[{{Button["Top +1", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; margins += {{0, 0}, {0, 1}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]], Button["Top -1", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; margins += {{0, 0}, {0, -1}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]]}, {Button["Bottom +1", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; margins += {{0, 0}, {1, 0}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]], Button["Bottom -1", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; margins += {{0, 0}, {-1, 0}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]]}, {Button["Left +1", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; margins += {{1, 0}, {0, 0}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]], Button["Left -1", SelectionMove[nb, All, Cell]; margins = CellMargins /. 
Options[NotebookSelection[nb], CellMargins]; margins += {{-1, 0}, {0, 0}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]]}, {Button["Right +1", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; margins += {{0, 1}, {0, 0}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]], Button["Right -1", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; margins += {{0, -1}, {0, 0}}; SetOptions[NotebookSelection[nb], CellMargins -> margins]]}, {Button["Top 0", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; SetOptions[NotebookSelection[nb], CellMargins -> {margins[[1]], {margins[[2, 1]], 0}}]], Button["Bottom 0", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; SetOptions[NotebookSelection[nb], CellMargins -> {margins[[1]], {0, margins[[2, 2]]}}]]}, {Button[ "Left 0", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; SetOptions[NotebookSelection[nb], CellMargins -> {{0, margins[[1, 2]]}, margins[[2]]}]], Button["Right 0", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; SetOptions[NotebookSelection[nb], CellMargins -> {{margins[[1, 1]], 0}, margins[[2]]}]]}, {Button[ "Default", SelectionMove[nb, All, Cell]; margins = CellMargins /. Options[NotebookSelection[nb], CellMargins]; SetOptions[NotebookSelection[nb], CellMargins -> {{Inherited, Inherited}, {Inherited, Inherited}}]], Null}}, ItemSize -> {Automatic, Automatic}] ] Nudging elements within equations is also very useful (Insert > Typesetting). More information can be found in the documentation. Print Settings Print settings can be stored in a customised stylesheet or set locally for each notebook: SetOptions[EvaluationNotebook[], PrintingStartingPageNumber -> 100, {Cell[TextData[{CounterBox["Page"]}], "PageNumber"], {None, None, Cell[TextData[{CounterBox["Page"]}], "PageNumber"]}}, PageFooters -> {{None, None, None}, {None, None, None}}, PrintingOptions -> { "PrintingMargins" -> {{90, 90}, {60, 90}}, "PaperSize" -> {596, 794}, "PageSize" -> {596, 794}, "PageFooterMargins" -> {30, 30}, "FirstPageFace" -> Right, "FirstPageFooter" -> False, "PrintRegistrationMarks" -> False}]; Headers and/or footers can include sections names in the running header (footer). For example suppose you had 6 named sections in your notebook. The options below will display a printed page that has the document title on the left hand page and section name on the right hand page. To use this code just enter the list of your section names, and your desired document title. PageHeaders -> { {Cell[TextData[{CounterBox["Page"]}], "PageNumber"], I am having trouble at the start. I copy the paragraph Lorem ipsum, .. and paste into a fresh notebook. I change the PargraphSpacing with Option Inpsector. But when I evaluate NotebookLocate["Text"]; cell = Cells[NotebookSelection[EvaluationNotebook[]]]; tmp = DeleteCases[NotebookRead[cell], CellTags -> _, {2}]; tmp /. Cell[_, x_String, y___] :> CellPrint@Cell[StyleData[x], y] I get a *Local definition for style "Text": Text but the output is Null. I don't see the style information. – Jack LaVigne Feb 28 at 17:14 I am slow witted. ShowExpression` on the Local definition for style "Text" is what I was missing. Thank you for your patience. – Jack LaVigne Mar 3 at 23:20
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17786890268325806, "perplexity": 3031.2885612040895}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049288709.66/warc/CC-MAIN-20160524002128-00058-ip-10-185-217-139.ec2.internal.warc.gz"}
https://webwork.maa.org/moodle/mod/forum/discuss.php?d=2835
## WeBWorK Main Forum

### Why scientific notation requires "X" instead of "*" for multiplication? Aren't students conditioned to use "*" everywhere?

by Christian Seberino - Number of replies: 1

I noticed you can't create a problem that requires scientific notation, nor enter an answer in scientific notation, without using "X" instead of "*" for multiplication. Why? Aren't students expecting to use "*" since that is what they use everywhere else for multiplication?

cs

DOCUMENT();

loadMacros(
"PGstandard.pl",
"PGML.pl",
"MathObjects.pl",
"PGcourse.pl",
"parserNumberWithUnits.pl",
"contextArbitraryString.pl",
"parserPopUp.pl",
"contextInequalities.pl",
"contextScientificNotation.pl",
);

TEXT(beginproblem());

######################################################################

Context("ScientificNotation");

BEGIN_PGML
See if you can enter 1230 in scientific notation.

[____________]{Compute("1.23 x 10**3")}
END_PGML

######################################################################

ENDDOCUMENT();

### Re: Why scientific notation requires "X" instead of "*" for multiplication? Aren't students conditioned to use "*" everywhere?

by Davide Cervone -

It's because that's what the person who requested this context required. It was designed for use in a high-school course where the notational requirements were very strict. If you want to use * for the multiplication, you can add it into the context as follows:

Context("ScientificNotation");
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9359574913978577, "perplexity": 4217.866891444035}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710712.51/warc/CC-MAIN-20221129232448-20221130022448-00411.warc.gz"}
http://mathematica.stackexchange.com/users/368/sebhofer?tab=activity&sort=all&page=3
# sebhofer less info reputation 514 bio website quantum.at location Vienna, Austria age member for 2 years, 7 months seen 3 hours ago profile views 144 PhD student in Theoretical Physics Research interests include Quantum Optomechanics and Quantum Optics # 439 Actions Jul30 revised Minimum and maximum values under conditions added 111 characters in body Jul30 comment Minimum and maximum values under conditions Well that was just a matter of time, wasn't it... :) Jul29 answered Minimum and maximum values under conditions Jul28 comment Using ReplaceAll to replace a head I find the last statement that "Replace [...] acts from the inside out" rather missleading. What about Replace[a[b[c, d]], head_[arg__] :> newHead[arg]]? Jul27 comment Best practice of passing a large number of parameters to functions @Mr.Wizard I was referring to the latter case. Thx for the explanation! Jul27 comment Best practice of passing a large number of parameters to functions @Mr.Wizard Can you explain to me why p:_Association:par does not work in this case? Is it because par does not match _Association in its unevaluated form? Jul26 comment How can I mend this broken heart? @YvesKlett To be fair, it's one of the best titles on this site! Made me smile :) Jul25 awarded Nice Question Jul23 revised Is there a list of Octave functions mapped to the related Mathematica one? Corrected ones,zeros Jul23 comment Is there a list of Octave functions mapped to the related Mathematica one? @Mr.Wizard Will do, I just wasn't sure what you'd want to do with solution 3 :) Anyway, I agree that it is probably really hard to maintain this list in a sensible way. MATLAB is heavily overloaded and Mathematica is too. Jul23 comment Is there a list of Octave functions mapped to the related Mathematica one? @Mr.Wizard I just realized that your first example is not correct (so is the zeros one): ones(n) creates a n-by-n matrix of 1s! The equivalent to your commands is ones(1,n) or ones(n,1) depending on the interpretation. Jul23 comment Automatically growing lists as in MATLAB? Sorry if the following is obvious to you (also, it's a bit off-topic): This is bad practice in MATLAB and in a direct translation also in Mathematica, as in this way new memory needs to be allocated every time you add an element and this takes a lot of time. Better preallocate by x=zeros(1,n). (Of course this doesn't matter much if you do it only once in the whole program.) Jul22 comment Is there a list of Octave functions mapped to the related Mathematica one? @Mr.Wizard Oh, I just found this on Wikipedia: "In fact, Octave treats incompatibility with MATLAB as a bug [...]". I think we are safe :) Jul22 revised Is there a list of Octave functions mapped to the related Mathematica one? edited body Jul22 revised Is there a list of Octave functions mapped to the related Mathematica one? added eye Jul22 comment Is there a list of Octave functions mapped to the related Mathematica one? @Mr.Wizard Maybe we should make it Octave/Matlab. I think the function names are mostly the same anyway but I'm not 100% sure and couldn't find anything relevant right now. So I guess, if the functions have different names, just name both. What do you think? Jul21 comment Why do quote marks appear in output when saved with Save Selection As? Here is my summary: First of all, I do not get any quotes when exporting to pdf. I do get quotes if I select the cell contents and use Save Selection As.. to save to png. 
I do not get quotes if I either select the whole cell to Save Selection As png or use Export["quotes.png", CharacterRange["a", "z"]]. Jul21 comment What's the difference between Inactive and HoldForm? @YiWang Oops, good point :) I didn't think of HoldForm because I never use it. The real point is probably in a more elaborate example (see edit). Now I'm crossing my fingers that I didn't overlook something else... Jul21 revised What's the difference between Inactive and HoldForm? added 37 characters in body Jul21 answered What's the difference between Inactive and HoldForm?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22717580199241638, "perplexity": 1923.0853636543673}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535921957.9/warc/CC-MAIN-20140901014521-00360-ip-10-180-136-8.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/1997523/why-can-the-row-reduced-echelon-matrix-r-only-be-identity-matrix
# Why can the row-reduced echelon matrix $R$ only be the identity matrix?

I'm reading Linear Algebra by Hoffman, Kunze, where the authors explained that an $n\times n$ matrix $A$ being invertible is equivalent to the fact that $A$ is row-equivalent to an $n\times n$ matrix $R$ which is an identity matrix. In the proof of the theorem, they wrote: $$R= E_k\ldots E_2E_1 A$$ where $E_1,\ldots,E_k$ are elementary matrices. Each $E_j$ is invertible, and so $$A = E_1^{-1}\ldots E_k^{-1}~R\,.$$ ... Since, $R$ is a (square) row-reduced echelon matrix, $R$ is invertible if and only if $R=I\,.$ [...] I couldn't get the conclusion that, since no row of $R$ can be zero, it has to be the identity matrix $I\,.$ Why is it so? Isn't there any other row-reduced echelon matrix, other than the identity matrix, that has no zero row and is invertible? Why is it so? Suppose that $R$ is a matrix in row-reduced echelon form, and that $R$ has no zero rows. That means that $R$ has a pivot (leading $1$) in every row. This means that we have $n$ pivots in an $n \times n$ matrix. However, since no column of a row-reduced matrix can have two pivots, it must be that every single column has a pivot. In other words, every column has a leading $1$ in some entry, and the other entries of that column are zero. In other words, the columns of $R$ must be the columns of the identity. The only order we can put those columns in and have $R$ in row-echelon form is the order in which $R = I$. • I do not understand your answer. You say that "every column has a leading $1$ in some entry, and the other entries of that column are zero." But why can't the row reduced echelon form have a zero row? – ab123 Sep 3 '18 at 11:08
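To make the pivot argument concrete (a small added illustration, not part of the original answer): in the $3\times 3$ case, a row-reduced echelon matrix with no zero row has three pivots, those pivots must sit in three different columns, and each pivot column is zero away from its pivot, which forces

$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} = I_3\,.$$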
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9325418472290039, "perplexity": 178.75382093602246}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232259452.84/warc/CC-MAIN-20190526185417-20190526211417-00485.warc.gz"}
https://mathematica.stackexchange.com/questions/146798/nestlist-with-lower-and-upper-limit
# NestList with lower and upper limit

I'm creating a series of lines centered at {0,0} and tilted by a set of angles. The first angle is $30°$ and the last one is $110°$. I want to control the number of lines between these two angles, but I cannot use Subdivide because the angles are not evenly spaced: the idea is that each angle is a fixed multiple of the previous one. I did a test with $quant=20$:

ClearAll["Global`*"]
quant = 20;
ang = NestList[#*1.067121 &, 30, quant]
p = FromPolarCoordinates[{5, # Degree}] & /@ ang // N;
Graphics[{Line[{{0, 0}, #}] & /@ p}]

And another test with $quant=10$:

ClearAll["Global`*"]
quant = 10;
ang = NestList[#*1.138747 &, 30, quant]
p = FromPolarCoordinates[{5, # Degree}] & /@ ang // N;
Graphics[{Line[{{0, 0}, #}] & /@ p}]

In both codes I had to find the values $1.067121$ (first code) and $1.138747$ (second code) by trial and error to reach the last angle of $110°$. Is there something in NestList that I can adjust so the final angle is hit exactly? If you have another idea outside of that, it is also an option.

EDIT I tried this and almost got it: Solve[Last[NestList[#*coeff &, 30, 20]] == 110, {coeff}] // N

• with Solve you should give the Reals domain argument, then select the positive result: Select[v /. Solve[Nest[#*v &, 30, 20] == 110, {v}, Reals], # > 0 &][[1]] // N – george2079 May 24 '17 at 20:17
• you can readily do this by hand though, with v = Exp[Log[110/30]/20] or even use ang = #1 Exp[Subdivide[#3] Log[#2/#1]] &[30, 110, 20] – george2079 May 24 '17 at 20:31

ClearAll[fun];
(* the scraped page lost the head of this definition and the line computing poi; the Module below is a plausible reconstruction in which the second argument is the line length *)
fun = Function[{num, rad}, Module[{ang, poi, lin},
  ang = N[Map[Function[Rescale[Slot[1], {1, num}, {30, 110}]]][Range[num]]];
  poi = Map[Function[rad {Cos[Slot[1] Degree], Sin[Slot[1] Degree]}], ang];
  lin = Map[Function[Line[{{0., 0.}, Slot[1]}]], poi]]];
Graphics[{Red, fun[2, 1], Blue, fun[5, 0.8], Green, fun[7, 0.6]}]

• Image of output? – Michael E2 May 25 '17 at 11:00
• @MichaelE2, Done. Just noticed, this doesn't answer the original question, sorry. Guess this answer can be deleted. – I.M. May 25 '17 at 11:12
• Yeah, I was wondering if you got that. It's what I got, but sometimes there's an accidental mistake in the posted code. – Michael E2 May 25 '17 at 11:24

ClearAll["Global`*"]
quant = 20;
ang = NestList[#*Exp[Log[110/30]/quant] &, 30, quant]
p = FromPolarCoordinates[{5, # Degree}] & /@ ang // N;
Graphics[{Line[{{0, 0}, #}] & /@ p}]

• @MichaelE2 I already did that, but I posted it wrong at the moment – LCarvalho May 25 '17 at 11:11

Another way:

Block[{quant = 20, a = 30., b = 110.},
  ang = Exp[Log[a] + Log[b/a] Range[0., quant]/quant];
  p = 5 Transpose@Through[{Cos, Sin}[ang Degree]];
  lines = Transpose@ArrayReshape[p, Prepend[Dimensions@p, 2], 0.];
  Graphics@Line@lines
]

Alternatives for ang and lines:

ang = Exp[Log[a] + Log[b/a] Range[0., quant]/quant]
ang = a (b/a)^(Range[0., quant]/quant)
ang = Array[Exp, quant + 1, Log@{a, b}]
lines = Transpose@ArrayReshape[p, Prepend[Dimensions@p, 2], 0.]
lines = Transpose[{ConstantArray[0., Dimensions@p], p}]
lines = Transpose[{0. p, p}]
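As a quick check that the closed-form ratio really ends on $110°$ (an added verification sketch, not part of the original answers):

quant = 20;
ratio = Exp[Log[110/30]/quant]  (* 1.06712..., the value found by trial above; quant = 10 gives 1.13875... *)
Last[NestList[# ratio &, 30., quant]]  (* 110., up to machine precision *)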
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3843272626399994, "perplexity": 8430.83846378047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668910.63/warc/CC-MAIN-20191117091944-20191117115944-00479.warc.gz"}
http://physics.stackexchange.com/questions/68852/autocorrelation-and-power-density-spectrum-continuous-markov-process
# Autocorrelation and Power density spectrum : Continuous Markov Process I've been reading through the paper from Gillespie on Brownian motion and Johnson Noise (DOI, PDF). He considers $X_s(t)$, a zero-mean stochastic variable, that is stationary in the sense that all of its moments $\langle X_s(t)^k\rangle$ are time independent. He further defines the autocorrelation function (2.39) $$\langle X_s(t)X_s(t+t')\rangle \equiv C_X(t')$$ which is independent of time since all the moments are time independent. The mean is over all possible values of X. I cannot interpret this autocorrelation. I have been told that it measures how much the random variable fluctuates, but I cannot convince myself of that. I think this is the relevant question in order to answer my dilemma: on the same page as the definition of the autocorrelation, he goes on to show that its frequency Fourier transform $$C_X(t) = \int \limits_{0}^\infty S_X(\nu) \cos(2\pi \nu t)\,d\nu$$ can be related to the variance of $X_s(t)$ as $$\langle X_s(t)^2 \rangle = \int \limits_0^\infty S_X(\nu)\,d\nu$$ Now in the Ornstein-Uhlenbeck process, $X_s(t)$ is to be interpreted as the speed of a particle, subject to a drag force and a white-noise random force ($\Gamma$): $$\frac{dX_s(t)}{dt} = -\gamma X_s(t) + \sqrt{c}\,\Gamma(t)$$ One can then compute the dissipated power spectrum which originates from the drag force. Since power is force times speed, one gets $$\langle P_{diss} \rangle = \gamma\langle X_s(t)^2\rangle \quad \to \quad P_{diss}(\nu) = \gamma S_X(\nu) = \gamma \frac{2c}{\gamma^2+(2\pi\nu)^2}$$ where the last equality holds for the process at study. Gillespie simply says that this means that only the low-frequency regime contributes to the dissipated power, and the high-frequency regime doesn't. But frequency of what? My interpretation would be this: the frequency argument amounts to saying that if the variable is autocorrelated also for long times (low frequencies in Fourier space), then you will have significant dissipated power. Coming back to my original question: do long autocorrelation times mean that the particle fluctuates a lot, and if so, why?

The $C_X(t')= \langle X_s(t)X_s(t+t')\rangle$ should be compared with the statistical correlation (Wikipedia: http://en.wikipedia.org/wiki/Correlation_and_dependence). With the statistical correlation one measures the dependency between two variables. If two variables are independent they will have a correlation equal to zero. If two variables tend to have the same value, the correlation is positive, and if two variables tend to have opposite values the correlation is negative. For the case of Brownian motion, I guess that $X_s(t)$ is the velocity, considering that the VACF (velocity autocorrelation function) is mostly used in the theory of Brownian motion. Then $C_X(t')$ tells you what the correlation is between the velocity at time $t$ and at time $t+t'$. For continuous Markov chains this should drop exponentially, which is also the case for Brownian motion.
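For what it's worth, the two displayed relations can be checked explicitly for the Ornstein-Uhlenbeck process. The sketch below is added for illustration; it assumes Mathematica and the standard stationary OU autocorrelation $C_X(t') = \frac{c}{2\gamma}e^{-\gamma |t'|}$, which is consistent with the formulas quoted above (g plays the role of $\gamma$):

acf[t_] := c/(2 g) Exp[-g t]  (* stationary OU autocorrelation for t >= 0 *)
spec = 4 Integrate[acf[t] Cos[2 Pi nu t], {t, 0, Infinity}, Assumptions -> {g > 0, nu > 0}]
(* 2 c/(g^2 + 4 Pi^2 nu^2), i.e. the Lorentzian S_X(nu) quoted above; the factor 4 comes from the one-sided cosine-transform convention used in the question *)
Integrate[spec, {nu, 0, Infinity}, Assumptions -> {c > 0, g > 0}]
(* c/(2 g), which is indeed the stationary variance <X_s(t)^2> *)

So a slowly decaying autocorrelation (small $\gamma$) concentrates $S_X(\nu)$ at low frequencies and gives a large variance $c/2\gamma$, which is one way to read "long correlation times mean large fluctuations" here.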
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9275797009468079, "perplexity": 347.169055842708}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663372.35/warc/CC-MAIN-20140930004103-00001-ip-10-234-18-248.ec2.internal.warc.gz"}
https://aas.org/archives/BAAS/v27n2/aas186/abs/S5006.html
Deep $JHK$ Photometry and the Infrared Luminosity Function of the Galactic Bulge Session 50 -- The Milky Way Display presentation, Thursday, June 15, 1995, 9:20am - 4:00pm ## [50.06] Deep $JHK$ Photometry and the Infrared Luminosity Function of the Galactic Bulge Glenn P. Tiede, Jay A. Frogel, and D.M. Terndrup (OSU) We derive the deepest, most complete near-IR luminosity function for Galactic bulge stars yet obtained based on new $JHK$ photometry for stars in two fields of Baade's Window. When combined with previously published data, we are able to construct a luminosity function over the range $5.5 \leq K_0 \leq 16.5$. The slope of the luminosity function as well as the top of the first ascent giant branch are consistent with expectations based on the Revised Yale Isochrones. Unfortunately, this consistency only sets weak constraints on the range in age and [Fe/H] for the Baade's Window stars. A blue sequence of foreground stars is clearly visible on the $J-K, K$ color-magnitude diagrams we have derived. We use the relationship between [Fe/H] and the giant branch slope derived from near-IR observations of metal rich globular clusters by (Kuchinski, L.E., Frogel, J.A., Terndrup, D.M., \& Persson, S.E. 1995, AJ, 109, 1131) to calculate the metallicity for several bulge fields along the minor axis. For Baade's Window we calculate that [Fe/H] $= -0.28 \pm 0.16$, consistent with the recent estimate of (McWilliam, A., \& Rich, R.M. 1994, ApJS, 91, 749), but somewhat lower than previous estimates based on CO and TiO absorption bands and the $JHK$ colors of the M giants by (Frogel, J. A., Terndrup, D.M., Blanco, V.M., \& Whitford, A.E. 1990, ApJ, 353, 494). Between b $= -3$ and -12 we find a gradient in [Fe/H] of $-0.06 \pm 0.03$ dex/degree, consistent with other, independent derivations. We derive a helium abundance for Baade's Window with the $R$ and $R^\prime$ methods and find that Y $= 0.27 \pm 0.03$. Finally, we find that the bolometric corrections for bulge K giants ($V - K \geq 2$) are in excellent agreement with empirical derivations based on observations of globular cluster and local field stars. However, for the redder M giants we find, as did Frogel and Whitford 1987, that the bolometric corrections differ by several tenths of a magnitude from those derived for field giants and adopted in the Revised Yale Isochrones. This difference most likely arises from the excess molecular blanketing in the V and I bands of the bulge giants relative to that seen in field stars.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872423768043518, "perplexity": 4009.5294963090932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257827080.38/warc/CC-MAIN-20160723071027-00325-ip-10-185-27-174.ec2.internal.warc.gz"}
https://reason.com/2017/01/09/obama-clean-energy-future-is-irreversibl/
Clean Energy Obama: Clean Energy Future Is Irreversible and Will Outlast Trump Still, it is always good to have some idea of what tradeoffs proposed policies would impose. | President Barack Obama declared that "one of the reasons I ran for this office was to make America a leader in this mission" to address the problem of man-made climate change. He made this claim to a legacy last October when the Paris Agreement on climate change achieved enough signatories to come into effect. Also in his statement hailing the Paris Agreement, Obama noted that "the skeptics said these actions would kill jobs." Yet, he noted that even as U.S. carbon dioxide levels fell to their lowest levels in two decades, more jobs were created. Now, as a parting shot, President Obama writes an article today, "The irreversible momentum of clean energy," in the journal Science. In his article, President Obama apparently believes that the irreversible momentum of clean energy is all gain and no pain. First, he correctly notes the decoupling over the past 8 years of energy and carbon emissions from economic growth in the U.S. economy. He writes: Since 2008, the United States has experienced the first sustained period of rapid GHG emissions reductions and simultaneous economic growth on record. Specifically, CO2 emissions from the energy sector fell by 9.5% from 2008 to 2015, while the economy grew by more than 10%. In this same period, the amount of energy consumed per dollar of real gross domestic product (GDP) fell by almost 11%, the amount of CO2 emitted per unit of energy consumed declined by 8%, and CO2 emitted per dollar of GDP declined by 18%. These figures are from the Economic Report of the President 2017, but comparing them with the preceding 8 years (2000 to 2007) shows a somewhat less rosy picture. For example, according to St. Louis Federal Reserve Bank U.S. real GDP grew by 15 percent between 2000 and 2007 and by 13.5 percent between 2008 and 2015. According to the Energy Information Administration (EIA) energy use per dollar of real GDP declined by around 15 percent between 2000 and 2007 while falling by only 13 percent between 2008 and 2015. Also according the EIA, CO2 emitted per dollar did fall slightly faster (18 percent) than it did in the preceding period (14 percent between 2000 and 2007); most likely as the result of the recent switch from coal to cheap fracked natural gas and more wind power production to generate electricity. Interestingly, the president's article notes that lower CO2 emissions occurred as power plants switched from coal to natural gas which was "brought about primarily by increased availability of lower-cost gas due to new production techniques." Just couldn't bring himself to mention the f-word, fracking. Given the economic chaos generated by the financial crisis, it would be hard to draw any firm conclusions from comparing U.S. job creation between 2000-2007 period and the 2008-2015 period. Nevertheless, just as background, the Bureau of Labor Statistics reports that employment rose by 6.5 million in the first period and by 2.2 million in the second period. To be fair, U.S. employment rose from its 2010 nadir by 10.8 million by 2015. In his Science article, the president cites various studies that suggest in an increase of 4 degrees Celsius by 2100 would lower global GDP by as much as 5 percent below what it would otherwise have been without any man-made warming. To get some idea of what that would mean consider what would happen if current U.S. 
GDP of $16 trillion were to grow at the 2 percent per year rate experienced during the Obama administration from 2015 until 2100 in the absence of warming. By then U.S. GDP would exceed$86 trillion dollars. If global warming were to lower GDP by 5 percent that would mean that GDP in 2100 would be a little more than $4 trillon dollars lower at$82 trillion. Now if the U.S. economy were to grow at the historical average of 3 percent per year, GDP in 2100 would stand at $197 trillion and a 5 percent climate change penalty would reduce that to only$187 trillion. That $10 trillion reduction in 2100 is equivalent to lowering the economic growth rate between now and then from 3 percent to 2.93 percent. For comparison, the U.N. has estimated that the additional investment and financial flows needed in 2030 to address climate would have to rise to between 0.3 and 0.5 percent of global GDP. It is always good to have some idea of what tradeoffs proposed policies would impose. Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Report abuses. 1. “Obama: Clean Energy Future Is Irreversible and Will Outlast Trump” He’s correct in that we all act in our own interest and as energy with less pollution becomes cheap enough to use, we will. He can stuff his Gaia-worship up his butt. 1. If it’s irreversible, then quit subsidizing it. 1. Indeed. And the idea that Trump (or anyone) is against “clean energy” is just rhetoric. Everybody wants clean, it’s just that there are trade-offs. 1. Nuke plants are closing because gas is cheeper. They must not have gotten the memo. 2. Talking about trade-offs is Sullum’s way of pretending he’s not a statist. 2. Obama: Clean Energy Future Is Irreversible and Will Outlast Trump If that’s the case then i doesn’t matter what the president does/doesn’t do. If obama things “clean energy” is the future, why all govt needs to do is get out of the way. 1. What he actually meant to say is that us warmists are gonna keep the scam going and we’re gonna make trillions, regardless of Trump and the rest of you deniers. He can never just be honest. 2. true and why do they need to keep repeating it is the future? 1. Because it sure as hell ain’t the present, at least not without 90% subsidies. 3. If obama things “clean energy” is the future, why all govt needs to do is get out of the way. An idea as alien to him as the “the government is us” is to libertarians. The government is the voice of the people, the will of the people. As long as it’s properly stewarded by the righteous, it not only should lead the way, it’s the only institution that even can lead the way. They don’t even begin to understand the concept that individual people will act of their own accord and don’t necessarily have to get a legislative permission slip first. 1. They don’t even begin to understand the concept that individual people will act of their own accord … except insofar as they exploit the masses. The individual is the villain, the government is the savior. 1. “the government is us” is to libertarians.” So long as we limit ourselves to that which is bounded by individual liberty, and what is truly necessary and proper. Exceed those bound and we likewise become the tyranny of a mob. 3. Yet, he noted that even as U.S. 
carbon dioxide levels fell to their lowest levels in two decades, more jobs were created. If only coal country had voted Hillary, he might have something of a legacy. 1. Yet, he noted that even as U.S. carbon dioxide levels fell to their lowest levels in two decades, more jobs were created. We need to parse 6th grader who think’s he’s a lawyer speak here. More jobs were created than what? Show me in the U6 Labor Force Participation numbers where any more jobs were “created” by the end of your stay than when you started. 1. Didn’t Alan Krueger conlcude that 90% of the jobs created were low paying restaurant and retail (temporary) or was that fake news? 1. McJobs. If even 2 jobs happened (not created) then that’s more jobs. 4. Fuck Science for both this politically motivated BS, and their requirement that all submissions be in Microsoft Word. That is all. 1. LOL. What journal doesn’t want articles submitted in MS Word using their particular template? 1. LaTeX or bust. I’m learning Microsoft Equation Editor when hell freezes over. 1. I have no idea why that link surprised me. 2. Squeaky! 1. I’m learning Microsoft Equation Editor when hell freezes over. Well that’s good because it hasn’t been relevant for almost a decade now. The current equation editor in Office is a lot better and even “speaks” (La)TeX to an extent (e.g. type \sum and you will get the summation operator, type _1^n after that and you will get bounds on the summation, etc.). It’s far from perfect but is about as good as you can get with WYSIWYG. There are also ways to convert (La)TeX to MathML which can be pasted directly into a Word document. All that having been said, LaTeX is the superior tool. 5. Ostradamus….who fucking knew. 6. Gotta give Obama props for the reference though. 1. It sounds like if I ever saw that movie, I’d want to go back in reverse chronological order to a time before I saw the movie. 1. Reversed rape. So out-in not in-out? 7. “Obama: Clean Energy Future Is Irreversible and Will Outlast Trump” So, what if it will, Obama? What is your point? Is anyone making the argument that it shouldn’t? Perhaps Obama is incapable of discerning skepticism about when these ‘clean energy’ sources will be viable on a large scale (if ever) and people being against this clean energy, just because, or because we’re all shills for big oil. Obama is a petulant child who very frequently makes meaningless statements. 1. No one wants to make money selling clean energy. That would would be stupid and shit. If solar and wind were as fucking awesome as he says they are, the technologies wouldn’t require massive wealth transfers to be cost competitive. 1. This. If this is what the market wants, why all the subsidies? 8. This, from the same man who bragged about a$100,000,000 solar facility at a military base, that would pay for itself, via energy cost reductions, in a mere 100 years. I’m sure all the sciencefication he has here is just as sound. 9. Tremendous alt text, sir. 10. Instead of going around giving speeches about how wonderful he is (he can spend his whole retirement doing that), he could be signing more pardons, the one thing where he’s actually doing some good. 1. OBAMA: “OK, got my coffee, got my pen and phone, I can finally get around to these pardon applications. Maybe I shouldn’t have put this off until the last minute. 
Let’s see…Aaron Aaronson, convicted of cocaine trafficking in 1986, seeking a commutation of his life sentence…” MICHELLE: “Come on, honey, it’s 11:58 PM and Trump’s inauguration is tomorrow, come to bed.” OBAMA: “Wait a minute…Christopher Albert, convicted of marijuana trafficking in 1997…dang, it’s midnight already.” 1. OBAMA: “And wait, I almost forgot to de-schedule weed and close Gitmo! I’ll get right on that!” 2. (PS – I made up the name Christopher Albert at random, but I see from Google that there’s a couple real people with that name, so let me just say I was just trying to create a random name beginning with “A,” and I didn’t know the name was attached to you. (Likewise if there are any actual Aaron Aaronsons out there) 11. So much of the economic analysis around this crap has a certain element of the broken window fallacy, though. The notion that clean energy is going to create jobs, for example. Well, yeah, tearing down old dirty power plants and replacing them with new clean ones creates job in the exact same way paying somebody to heave bricks through windows and then paying somebody else to replace the windows does. But what’s the net payoff? How much would the economy have grown if we took the money we were spending on cleaner smokestacks for our existing factories and spent it on building new factories with dirty smokestacks? 1. The problem is, “we” shouldn’t be building anything. If something were economically viable, the people willing to invest in the venture would be building it. Price inputs are signals as to the viability of a thing. 12. Now, as a parting shot, President Obama writes an article today, “The irreversible momentum of clean energy,” in the journal Science. I sincerely hope this was ghostwritten. Obama is not a damn expert on climate change. Even if it was ghostwritten, thats authorship misattribution which journals like Science tend to… frown upon. I realize it’s just an editorial. Still. In his Science article, the president cites various studies that suggest in an increase of 4 degrees Celsius by 2100 I’m not up on the models, but isn’t that on the high side? We’ve gained 0.8C in the last 140 years and we expect to pick up 4C in the next 85? Now if the U.S. economy were to grow at the historical average of 3 percent per year, GDP in 2100 would stand at $197 trillion and a 5 percent climate change penalty would reduce that to only$187 trillion. That $10 trillion reduction in 2100 is equivalent to lowering the economic growth rate between now and then from 3 percent to 2.93 percent. For comparison, the U.N. has estimated that the additional investment and financial flows needed in 2030 to address climate would have to rise to between 0.3 and 0.5 percent of global GDP. That UN estimate is almost certainly intentionally lowballed by at least an order of magnitude from what they really want. Combine that with the probably-too-high temperature increase estimate and “fighting global warming” becomes an economic loser. 1. Obama is not a damn expert on climate change. Even if it was ghostwritten, thats authorship misattribution which journals like Science tend to… frown upon. I realize it’s just an editorial. Still. Science aside, it’s a bit of the lamest lame duck. Editing Wired, editorial in Science… it’s like he just quit being President. Go play golf somewhere remote so at least the press has to do some work to catch up to you. Jeez. Also… politicians ghostwriting climate science pieces? Fake news! 13. 
First of all who trusts projections by the UN on costs and benefits? Second of all…the reduction is due to natural gas which isnt considered “clean energy”. Is nuclear included in “clean”? Third of all…the goals in the paris agreement are arbitrary and wouldn’t actually do much of anything imo. The governments around the world are twiddling around the edges, do something to do something…i suspect they wont even come close to their goals unless there is a new method for energy and a bigger nuclear push. Fourth…wind and solar won’t cut it…they are nice supplemental forms on small scale but aren’t big players. It is how can we get the congregation to tithe so the clergy can dine on gold plating. Only pushing wind and solar raises big red flags Will this be the year solar finally makes it over the 1% of the grid line per eia.gov?! Nat gas: 2014 at 29%, 2015 at 33%. Solar 2014: 0.4%, 2015: 0.6% 1. Solar 2014: 0.4%, 2015: 0.6% See? It increased by 50% in only one year! At this rate, but 2025 we’ll get 180% of our energy from solar! Woot! 1. What’s funny is coal was 39% in 14 and has dropped to 34%……almost the same as the change in natural gas…hmm 14. Also if “clean energy” is irreversible (like climate change is apparently but yet they insist we can do something!), why do they keep having to say it has all this momentum? Why were they shitting their pants at trump wanting to pull out of agreement? Afterall they claim it is being market driven. 15. Clean energy will certainly outlast Trump if it can stand on its own two feet from a cost perspective and the best way to facilitate that is to get government out of the way. Until that day I’ll continue to gas up my car and use electricity generated by fossil fuel at fifty percent of the green energy cost per kWh, thank you very much. 16. Deficits Matter Again Not long ago prominent Republicans like Paul Ryan, the speaker of the House, liked to warn in apocalyptic terms about the dangers of budget deficits, declaring that a Greek-style crisis was just around the corner. But now, suddenly, those very same politicians are perfectly happy with the prospect of deficits swollen by tax cuts; the budget resolution they’re considering would, according to their own estimates, add$9 trillion in debt over the next decade. Hey, no problem. This sudden turnaround comes as a huge shock to absolutely nobody ? at least nobody with any sense. All that posturing about the deficit was obvious flimflam, whose purpose was to hobble a Democratic president, and it was completely predictable that the pretense of being fiscally responsible would be dropped as soon as the G.O.P. regained the White House. What wasn’t quite so predictable, however, was that Republicans would stop pretending to care about deficits at almost precisely the moment that deficits were starting to matter again. Guess which Nobel Prize winning economist has suddenly decided that up is down and un-self-awarely slammed the GOP for suddenly deciding that down is up? The answer will come as a huge shock to absolutely nobody. 1. His self awareness or lack thereof is amazing to me. Did i see him concerned about the deficits when Hillary looked to be president and was calling for all this massive spending? 2. The same one whose columns will never recover? 1. Lol. Krugman has to know he is a partisan hack and is in it for the dollars. Wonder what he thinks regarding his followers? 1. 
It’s funny how a man who got a Nobel writing about comparative advantage spends his days writing about things he neither is qualified to talk about nor has any real expertise in! 17. Of course, one of the great advantages of our system of government is that each president is able to chart his or her own policy course. And President-elect Donald Trump will have the opportunity to do so. The latest science and economics provide a helpful guide for what the future may bring, in many cases independent of near-term policy choices, when it comes to combatting climate change and transitioning to a clean-energy economy. From the article. This is almost subtle. But not quite. 1. Why do we have to transition to a clean energy economy? 1. *”Well, it’s a dirty job but someone’s got to do it.”* *** gets coffee *** 1. Giving Bailey the benefit of the doubt, it’s inevitable. Unless someone’s got a hypothesis about oil production, we will run out of fuel eventually and, presumably, any energy we migrate to will be clean(er in some aspect). 1. Well sure but what is considered clean energy? i can’t take seriously those who want wind and solar. 2. The amount of oil that’s left when a field is abandoned is substantial, often near 50%, as it ceases to be economical to go after the rest. But I anticipate innovative recovery techniques will outpace clean energy, and we’ll still have plenty of oil for generations. No, it’s not infinite, but I doubt even our great grandchildren will face oil scarcity 1. Point being Bailey, who’s advocated nuclear, selective geothermal, and generally recognizes the lack of viability offered by wind/solar probably means “as we produce energy more efficiently and transition to new energy sources into the future” as opposed to what he actually typed. 1. Yea this makes sense to me 18. I like how an irreversible legacy, especially one that cuts CO2 emissions, is automatically a good thing. That Torquemada was the man. 1. A quick Google search will obtain atmospheric CO2 in parts per million from 1850-2010. Another Google search will obtain the percentage of humanity living in abject poverty (less than $2/day in 1985 constant$) during that period. A quick regression of poverty on the y-axis versus CO2 concentration on the x-axis will obtain a negative slope and an r2 of about 0.96. Very few statistical analyses in the social sciences are this strong. None of the climate models are this strong. They are not even close. I know that correlation isn’t causality, but one needs a very good argument to do the opposite of something that is so strongly correlated with a good thing. The CAGW alarmist Gaia worshippers never address this. Sources: NASA for CO2 and Bourguignon and Morrison, World Bank, 1999 and subsequent World Bank reports for abject poverty data. I wish I could post the chart here, but you can replicate it with data from these sources. 1. But lower co2 makes people feel good. Even poor people! 1. Sacrifice the body to save the soul. 19. What the hell is a clean energy economy? Solar and wind take up massive amounts of land, need resources to make and maintain…..and need to be backed up by primary sources 1. Clean energy is what we tell you is clean energy, you right wing nutjob. 1. It’s only “clean” if your primary interest is in reducing carbon emissions. But nobody calls nuclear “clean” even though by that same metric it’s one of, if not, the cleanest. 2. Don’t forget they also require huge amounts of rare earth metals, which require massive amounts of toxic chemicals to refine. 
20. In his Science article, the president cites various studies that suggest in an increase of 4 degrees Celsius by 2100 would lower global GDP by as much as 5 percent below what it would otherwise have been without any man-made warming. Wow, a whole 5% lower in a mere 83 years? I don’t think this stat proves what he seems to think it proves. A 5% reduction in US GDP has happened no less than 7 times since 1950. It ain’t the end of the world. 1. Seems like a win. I thought it was only supposed to be 2 degrees? 2. Given that the current trend is on the order of 0.1 degrees Celsius per decade, why are they talking about a 4-degree rise by 2100? Do they think it’s exponential? 1. That’s weird, given the IPCC numbers generally have ranges centered around 2.5/3 Celsius. 4 Celsius is outside the range for most models, or at the very edge of predictions. It’s like people are saying the science is settled, and skewing what the scientists say for political expediency. 1. Haha, sounds like somebody doesn’t understand what “SCIENCE” is. 2. Eh, according to your link, +4?C is actually below the midpoint of their A1FI scenario which “predicts” warming between 2.4?C and 6.4?C by 2099. But the warming that has actually been observed by the satellite record in the last 16 years is already below the lower bound of that scenario and is only barely within the bounds of the model that predicts the least warming (as far as I can tell from their shitty graph, anyway, which is heavily condensed to emphasize the prophesied warming). I find it a little hilarious that none of the models they selected cover the ground between the “constant assumption” model and the B1 scenario (the one that predicts the least warming). That hubris on their part means that, of the selected models, the “constant assumption” model has had the best predictive power so far despite them presenting it only as baseline to “prove” how dire the problem is. 3. It’s especially cute because those IPCC numbers are themselves pretty much made up out of whole cloth (well, squinting at estimates based on estimates of Ice Age changes). The actual observed industrial-era instrumental record shows an increase of about 1.1 degree per doubling of carbon dioxide, which is in close accordance with the pure-physics models where a doubling of carbon dioxide concentration increases temperatures about 1 degree. Accordingly, three doublings — an eight-fold increase over current levels — would increase temperatures by 3 to 3.3 degrees. It would take some truly heroic effort to increase carbon dioxide levels from the current 400-or-so ppm to 3,200 ppm, equal to fourteen times the total ice age-to-today increase so far . . . and that would still cap out at less than a 5% reduction in GWP, apparently. 3. “In his Science article, the president cites various studies that suggest in an increase of 4 degrees Celsius by 2100 would lower global GDP by as much as 5 percent below what it would otherwise have been without any man-made warming.” And his proposed ‘solutions’ would affect the GDP in what manner and by what amount? Without that information, the first comment is worthless. 21. Obama is spinning the propeller on his beanie hard. 1. ^This^ It’s time for this moron to fuck off. What I have heard from at least a dozen people: “I hope they get it on video so I can watch it over and over again” 22. This man (?) is so full of himself and so arrogant and self-righteous I have to check myself from time to time. 
As an old psychotherapist I’ve seen my share of arrogant, self-righteous people with personality disorders. This boob is in a class by himself. You have to give him this, though: he has rationally and carefully constructed a past that while investigated and cracked hasn’t been reported adequately. Despite the evidence that he was born in Kenya, that his Hawaiian certificate of live birth is counterfeit and was photo-shopped, that other records show he enrolled in college as a foreign student, that the Obama “daughters” have no records for their birth, that pictures of the Obama parents and baby daughters also have been photo-shopped, that their real parents have been identified, that his college transcripts have all been locked away, and that the homosexual communities in Chicago and elsewhere know him well and laugh about him, that he is clearly a Muslim, and more—despite all this he has been treated as though he wore an earned halo and the wings of an angel. How can this happen in the 21st Century in America in a time of instant information and communication? 23. I can’t believe no one else has spotted it. “His article”… “he writes”… Yeah, right. He didn’t write that. His speech writer/s wrote that. He just skimmed it afterwards to make sure it sounded good. The guy’s a poser, taking credit for things for which he did no work. “Interception”. Indeed. 24. According to the last temperature update we were given, the trend since 1978 was .12 degrees C/decade. That would mean the temperature has risen roughly .5 degrees C in that time. This would mean the temperature would have to rise closer to .41 degrees C per decade to hit the target of 4 degrees C by the end of the century. Are we still applying the hockey stick theory to climate change? It would seem as though his projections are a little on the alarmist side. Or am I missing something? 1. I think they argue that we are close to reaching a cliff, after which we will see rapidly accelerating temperature increases. I feel like this was argued to me once at least. For a while I tried reading the literature, but realized that it was not a worthwhile usage of my life. 25. With uncharacteristically due respect (seriously!) to the office of President of the United States of America, I am surprised to see this essay in Science. Having some experience writing for scientific journals myself, and many, many years of providing technical support for (physical, as opposed to social) scientific endeavors, I find it rather unseemly that Deese, Holdren, Murray, and Hornung (who almost certainly wrote the article) are not listed as co-authors as opposed to simply being acknowledged. Further, and with rather less respect for the current holder of the title of POTUS, and the editors at Science, it is distressing to see such a blatantly political essay being presented as anything other than an editorial opinion (citations alone do not make for a research paper or review article). I’d be interested to see what, if any, review process this went through; a FOIA inquiry for the correspondence with the editors at Science would be most interesting. If there was any doubt before, it is now completely obvious that AAAS has lost all claim to being a dispassionate messenger of science. 1. Um..you called Science a scientific journal. Hahaha!! 1. Most of the time it is, and one of the most prestigious at that (the other leader of the pack would almost certainly be Nature). 
Getting a paper into Science is a very nice feather in the cap of any researcher, and looks very good on one’s cv. … which makes it all just a bit more odd that BO’s comments found a home there. 1. It is VERY political, even in my field (chemistry) 26. “Since 2008, the United States has experienced the first sustained period of rapid GHG emissions reductions and simultaneous economic growth on record” in spite of my policies trying to limit fracking that is completely responsible for this! 27. Ron: can you change a tire? 28. Clean energy also outlasted Obama because it never needed or benefited from government action. So don’t let Obama take credit for it. And let’s hope Trump won’t spend on it. 29. I do not believe Trump or many, many others are opposed to REAL progress in developing cleaner energy. Soon to be Mr. & Mrs. Obama may curtail their use of ‘dirtier’ energy anytime they choose. Just do it, turn off the A/C and Heating, just do it.. Grounding Air Force 1 and its accompanying massive pollution by curtailing $70 or$80 million in vacation travel would have been great. Millions to Solyndra (sp?) failed but got how much publicity and how many votes? 30. “The mounting economic and scientific evidence leave me confident that trends toward a clean-energy economy that have emerged during my presidency will continue and that the economic opportunity for our country to harness that trend will only grow.” – President Obama . I share the President’s confidence. Thank you for your leadership Mr President. One small step for a renewable energy geek. One Giant Leap for Mankind. Scottish Scientist Independent Scientific Adviser for Scotland https://scottishscientist.wordpress.com/ 1. What progress might that be? Solar and wind make up 5 percent combined 2. Ine small step for a renewable energy geek. One Giant Leap for fleecing of Mankind. 31. “the amount of energy consumed per dollar of real gross domestic product (GDP) fell by almost 11%, the amount of CO2 emitted per unit of energy consumed declined by 8%, and CO2 emitted per dollar of GDP declined by 18%.” Isn’t off-shored manufacturing responsible for this? At least partially. If China now emits the gases that Americans once emitted, we can pat ourselves on the back, but it’s a global problem and as far as the atmosphere is concerned, Chinese and American emissions are interchangeable. 1. I do not know what portion of Chinese production is driven by American consumption, but it’s a lot less than 100%. At least some of it is for domestic consumption, and there’s consumption by Europe, the rest of Asia, and Africa as well. 2. You’re partially right. High us corp taxes sends mfg to higher polluting places. Another reason to cut Corp tax rates. 1. Cut taxes so that US increases its emissions? 32. By then [2100] U.S. GDP would exceed \$86 trillion dollars. But how much will a bowl of rice cost?
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.37047111988067627, "perplexity": 2419.8564899243797}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585302.91/warc/CC-MAIN-20211020055136-20211020085136-00604.warc.gz"}
http://library.kiwix.org/cs.stackexchange.com_eng_all_2018-08/A/tag/p-vs-np/1.html
## Tag: p-vs-np 212 What is the definition of $P$, $NP$, $NP$-complete and $NP$-hard? 2013-02-06T20:38:08.297 89 How not to solve P=NP? 2012-05-17T01:24:29.327 54 What would be the real-world implications of a constructive $P=NP$ proof? 2014-12-29T18:59:34.997 53 If everyone believes P ≠ NP, why is everyone sceptical of proof attempts for P ≠ NP? 2017-10-19T00:17:15.187 27 Are there NP problems, not in P and not NP Complete? 2012-05-12T19:54:22.680 25 Why is Relativization a barrier? 2012-04-14T13:39:56.047 18 Would proving P≠NP be harder than proving P=NP? 2016-01-28T17:09:04.370 15 Does $\mathsf{P} \ne \mathsf{NP}$ imply that $|\mathsf{NP}| > |\mathsf{P}|$? 2012-12-31T17:09:57.310 13 How can P =? NP enhance integer factorization 2012-11-13T19:13:56.870 12 Proving P = NP without mathematical statements / computer program 2013-01-03T10:32:51.870 12 Runtime bounds on algorithms of NP complete problems assuming P≠NP 2014-03-11T20:05:43.997 12 Why do Shaefer's and Mahaney's Theorems not imply P = NP? 2015-06-22T13:06:22.663 11 Why is this argument for $P\neq NP$ wrong? 2015-01-26T18:12:00.907 11 How to prove P$\neq$NP? 2015-12-05T18:50:10.907 10 If one shows that UNIQUE k-SAT is in P, does it imply P=NP? 2014-11-17T14:06:40.830 9 Flaw in my NP = CoNP Proof? 2013-03-12T09:52:11.097 9 P vs NP and the Time Hierarchy 2015-06-23T12:59:47.080 8 Can exactly one of NP and co-NP be equal to P? 2012-06-12T16:45:58.787 8 Why does Schaefer's theorem not prove that P=NP? 2015-05-14T10:05:58.173 8 Evolving artificial neural networks for solving NP problems 2015-09-23T13:50:50.170 8 Is detecting easy instances of NP-hard problems easy? 2016-12-16T00:03:53.280 7 How to use an old SAT solver to discover a new one, as is done in The Golden Ticket? 2016-06-01T14:34:40.937 7 Why the need for TSP solvers when there are SAT solvers? 2016-10-03T22:41:15.517 7 What is wrong with this conditional proof of P=NP? 2017-04-08T08:37:02.870 6 Is the open question NP=co-NP the same as P=NP? 2013-02-15T01:38:55.790 6 $1+\epsilon$ approximation for inapproximable problems 2013-03-05T19:27:39.220 6 Does $P \neq NP$ imply $NP \neq PSPACE$? 2015-05-03T21:44:30.680 6 Is there a philosophical counterpart question to P != NP? 2016-07-23T19:35:59.060 6 Why can't we exploite finiteness to prove incompleteness in NP? 2016-09-21T15:11:29.763 6 Polytime algorithm for SUBSET-SUM assuming P=NP 2016-12-18T01:50:18.510 5 What would an exponential reduction from an NP-complete problem to P signify? 2012-11-20T15:03:43.400 5 research on OR and AND compression in SAT formulas 2014-05-20T17:47:25.373 5 How is it valid to use oracles in mathematical arguments? 2014-12-09T20:29:51.640 5 Stronger versions of P != NP which better express actual convictions 2016-01-26T10:15:34.740 5 Is anything known about the structure of sets of valuations representable by 3CNF formulas? 2016-01-29T16:00:50.840 5 Why does this not prove $P\neq NP$? 2017-08-18T08:21:58.280 4 Existence of NP problems with complexity intermediate between P and NP-hard 2014-01-06T20:23:25.287 4 Is this language depending on P = NP recursive? 2014-02-25T08:55:19.393 4 Does this mean $P = NP$ 2014-07-08T19:49:22.387 4 What happens to quantum algorithms such as BB84 if P=NP 2014-07-14T14:10:00.903 4 Could an NP-hard problem have a mechanical or physical solution method? 
2015-06-07T19:07:54.317 4 Constructing languages in NPI other than through Ladner's Theorem 2015-06-24T16:42:25.697 4 Logarithmic Randomness is Necessary for PCP Theorem 2015-12-10T15:06:41.943 4 Why does a reduction from a P-problem to an NP-complete problem not show that P=NP? 2016-11-14T02:12:48.863 4 "P may collapse" vs. Time hierarchy theorem 2016-11-27T11:03:04.053 4 Having problem understanding the formal definition of NP 2017-04-11T10:29:07.157 3 If NP $\neq$ Co-NP then is P $\neq$ NP 2013-01-10T01:36:06.260 3 Is it necessary for NP problems to be decision problems? 2013-02-10T16:30:58.997 3 Reduction from Vertex Cover to an Independent Set problem 2013-05-08T23:39:30.063 3 Provability of NP /= P? 2014-01-30T13:57:04.927 3 Is my theorem about $P \neq NP$ correct? 2015-08-24T16:26:54.847 3 co-NP but not NP problems 2017-03-06T07:28:49.483 3 Could a modification of Krom's proof system be used to solve 3-SAT in polynomial time? 2017-11-25T08:00:39.530 2 Implications of polynomial time reductions 2012-12-16T15:28:34.083 2 Proving that if coNP $\neq$ NP then P $\neq$ NP 2013-05-21T09:05:11.783 2 Homomorphism erasing information 2014-09-06T00:31:03.977 2 Subset sum algorithm in O(n³ log n)? 2014-11-25T19:36:31.113 2 Can oracle arguments separate P and NP? 2014-12-05T17:54:15.127 2 If P = NP, why does P = NP = NP-Complete? 2014-12-11T06:12:43.440 2 Could an NP-Hard problem be in P in after a basis transform? 2015-03-15T23:05:29.330 2 $P \neq NP$ and determinism 2015-05-04T14:39:59.307 2 Schaefer's dichotomy theorem and reformulating 3-literal clauses 2015-10-30T16:57:11.730 2 What is the simplest known NP-Complete problem for testing P=NP solutions? 2016-06-10T12:54:39.800 2 Is P = NP when solutions length is polynomially bounded by instance length? 2017-03-18T12:06:43.877 2 Could be solved a NP-complete problem in constant time? 2017-04-16T17:29:29.123 2 proving that $P\ne NP$ under an assumption 2017-09-24T13:23:20.340 2 Why doesn't descriptive complexity theory solve P = NP? 2017-11-30T01:02:37.057 2 Solving diophantine equations -- does having a bound on the size of the solution help? 2018-01-21T16:28:17.197 1 How to prove polynomial time equivalence? 2013-03-22T04:50:15.547 1 Complexity class of Determining Hamiltonian cycle 2013-11-26T04:17:18.607 1 If P != NP, then 3-SAT is not in P 2014-06-10T11:21:38.757 1 A detail on variant of Mahaney's theorem about reductions of sparse languages vs P/NP 2014-07-11T01:28:44.540 1 What are the current known implications of the complexity of Integer Factorization? 2014-07-17T15:42:43.670 1 If P is equal to NP, then what happens to the problems those can be solved in polynomial time? 2015-04-15T19:38:39.587 1 A paper argumenting that P might be equal to NP 2015-04-19T18:06:13.117 1 NP-Hard vs NP-Complete Why NP-complete so important? 2015-06-21T22:16:28.480 1 What is the evidence that P could equal NP? 2015-09-18T10:11:41.870 1 If NP is easy on average then does it mean P=NP? 2016-04-26T09:57:05.027 1 Is valid the notion of infinity for the NP-complete problems? 2017-03-22T19:55:16.897 1 If an NP problem is shown to have an exponential lower bound, would that prove that P != NP? 2017-10-28T19:41:20.117 1 Naive argument that $P \neq NP$ 2018-01-04T16:33:13.380 1 P=NP, isn't it? 2018-01-17T20:25:50.050 0 Recusively Enumerable or Recursive dependent on whether P=NP 2013-01-10T01:41:26.263 0 Why is SAT not in P? 2013-11-07T23:50:44.720 0 What makes it so difficult to prove P =/≠ NP? 
— The subset sum issue 2014-02-09T23:35:04.740 0 Can P vs NP be independent of accepted axioms? 2014-05-22T19:03:18.710 0 Problem with my proof that NP = coNP? 2014-08-16T17:07:25.497 0 Place 4 notorious problems into 2 diagrams (one assuming P=NP, and the other one assuming P!=NP) 2015-05-31T13:34:59.777 0 Does classification of a problem also require the algorithm used? 2015-11-01T16:13:10.303 0 If EXP = NEXP, can we say anything about P and NP? 2015-12-17T01:36:01.027 0 3SAT with an oracle for expanding the clauses 2015-12-22T10:13:03.020
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.530828595161438, "perplexity": 3349.230183136097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376829812.88/warc/CC-MAIN-20181218204638-20181218230638-00557.warc.gz"}
http://www.maa.org/press/maa-reviews/global-solution-curves-for-semilinear-elliptic-equations
# Global Solution Curves for Semilinear Elliptic Equations ###### Philip Korman Publisher: World Scientific Publication Date: 2012 Number of Pages: 241 Format: Hardcover Price: 90.00 ISBN: 9789814374347 Category: Monograph We do not plan to review this book. • Curves of Solutions on General Domains: • Continuation of Solutions • Symmetric Domains in R2 • Turning Points and the Morse Index • Convex Domains in R2 • Pohozaev's Identity and Non-Existence of Solutions for Elliptic Systems • Problems at Resonance • Curves of Solutions on Balls: • Preliminary Results • Positivity of Solution to the Linearized Problem • Uniqueness of the Solution Curve • Direction of a Turn and Exact Multiplicity • On a Class of Concave-Convex Equations • Monotone Separation of Graphs • The Case of Polynomial ƒ(u) in Two Dimensions • The Case When ƒ(0) < 0 • Symmetry Breaking • Special Equations • Oscillations of the Solution Curve • Uniqueness for Non-Autonomous Problems • Exact Multiplicity for Non-Autonomous Problems • Numerical Computation of Solutions • Radial Solutions of Neumann Problem • Global Solution Curves for a Class of Elliptic Systems • The Case of a “Thin” Annulus • A Class of p-Laplace Problems • Two Point Boundary Value Problems: • Positive Solutions of Autonomous Problems • Direction of the Turn • Stability and Instability of Solutions • S-Shaped Solution Curves • Computing the Location and the Direction of Bifurcation • A Class of Symmetric Nonlinearities • General Nonlinearities • Infinitely Many Curves with Pitchfork Bifurcation • An Oscillatory Bifurcation from Zero: A Model Example • Exact Multiplicity for Hamiltonian Systems • Clamped Elastic Beam Equation • Steady States of Convective Equations • Quasilinear Boundary Value Problems • The Time Map for Quasilinear Equations • Uniqueness for a p-Laplace Case
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8384206891059875, "perplexity": 8921.058786512278}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982924605.44/warc/CC-MAIN-20160823200844-00103-ip-10-153-172-175.ec2.internal.warc.gz"}
http://gmatclub.com/forum/beginners-for-gmat-42028.html?fl=similar
# Beginners for GMAT

Beginners for GMAT [#permalink] 09 Feb 2007, 16:45 (Intern, joined 09 Feb 2007, 2 posts)

Hi all, really appreciate the help. My situation is that I'm a 2.2 GPA holder trying to get into an MBA school in Toronto. I had to work full-time/part-time during my studies and just hated my program. I never expected to write any post-grad exams; I literally just wanted to graduate. I did a finance and accounting degree, which I hated. Can anyone let me know if I am insane for applying? The deadline for the schools is May 1st, yet these schools all say to apply as early as possible. Is that because of the scholarships? Anyway, I just wanted to know if two months of study time is enough, and whether I am nuts trying to do this with my qualifications. I do have decent work experience, though. Thanks all! I have a couple of books already, the official GMAT 11th edition and Princeton Review. Many thanks all!

Re: Beginners for GMAT [#permalink] 09 Feb 2007, 19:23 (SVP, joined 03 Jan 2005, 2246 posts)

Rookie123 wrote: Hi all, really appreciate the help. [...]

Let's think from a different perspective. What do you really want to do? You said you hated finance and accounting. Are you planning to go toward a different area? If you are sure you really want to do this, that you really want to go to an MBA program, then there will be no choice for you but to take the GMAT, no matter what your GPA is. As for how much time is sufficient, it differs for everybody. You first need to know what your target score is (how much you need to get into the school), and then you need to know where you stand (do a simulated test to see what score you can get). From there you'll know what kind of challenge you are facing. If this is something you really want to do, then I'm sure you will feel that all this investment of your time is worthwhile.
_________________
Keep on asking, and it will be given you; keep on seeking, and you will find; keep on knocking, and it will be opened to you.

Intern
Joined: 09 Feb 2007
Posts: 2

Hi HongHu [#permalink] 12 Feb 2007, 09:35

Hi HongHu, thanks for your reply. I would need a 700 on the GMAT to feel comfortable about getting into a school. I want to do law eventually, as that is my passion, but it seems similar doors close with a low GPA. Hence I'm using my work experience in finance as leverage to apply to MBA programs. I know what you mean about doing what you have passion for, but right now it seems an MBA is my only option. I'm currently an account manager for Bloomberg, doing seminars and trainings, and I feel this is not what I want to do; $$ is important, but not at this stage of my life. The GMAT, I hope, will open some doors for me. Thanks again. From LOST in Singapore.

SVP
Joined: 24 Aug 2006
Posts: 2133

Re: Hi HongHu [#permalink] 12 Feb 2007, 12:30

Rookie123 wrote: Did a finance and accounting degree, which I hated. Can anyone let me know if I am insane for applying?

#1 The later you apply, the less likely you will gain admission.
#2 Finance and accounting are core classes for bschool. Doesn't that say something about business? If you hate those classes, you will not like business school.
#3 Yes, you are insane for applying. It's tantamount to saying "I hate dealing with people, but I want to be a salesman." Furthermore, a 2.2 GPA is at the very low end for bschools.
Question: how well do you do on standardized tests?

Rookie123 wrote: I want to do law eventually as that is my passion ... The GMAT, I hope, will open some doors for me.

#4 The GMAT does not open any doors.
#5 It is unheard of to get an MBA in order to go to law school. It really seems to me like you don't know what you want to do with your career. How involved have you been with law that you know you have a passion for it? I have a lot of friends with law degrees and 100k+ debt who are not lawyers. Going to grad school because you hate working and don't know what to do is the worst reason, and it will show in your essays. My advice would be to do a bit of soul searching before you make your next move. Sorry for the bad news; it is probably not what you want to hear. But do not make the mistake of going to grad school thinking it will somehow fill a void.

Manager
Joined: 15 Dec 2006
Posts: 57

If you hated finance and accounting as a major in college, you should NOT pursue an MBA. It's heavy on quant. If you want to study law, then pursue that. Not an MBA. You're totally aiming in the wrong direction.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2358359545469284, "perplexity": 3090.222564612869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375099755.63/warc/CC-MAIN-20150627031819-00120-ip-10-179-60-89.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/64856/defining-computability-for-functionals-of-partial-oracles
Defining computability for functionals of partial oracles I believe a recursive (partial) functional $F:\mathbb{N}^\mathbb{N}\to\mathbb{N}$ is ordinarily defined as one for which the "graph" relation $F(\alpha)=n$ is recursively enumerable, which means it can be expressed in the form $$F(\alpha)=n \iff \exists x.Q(\overline{\alpha}(x), n, x)$$ for some (primitive?) recursive total predicate $Q$. Here $\overline{\alpha}(x) = \langle \alpha(0), \ldots, \alpha(x-1)\rangle$, i. e., the $x$-tuple of all the values of $\alpha(t)$ for $t\lt x$ encoded as a single natural number (it's not important how). This definition of recursiveness intuitively coincides with computability if we think of $\alpha$ as being given by an oracle (exercise, or see Shoenfield, Mathematical Logic.) Unfortunately, I'm actually interested in the generalization where $\alpha$ may be a partial function itself and we don't know for which $t$ $\alpha(t)$ is undefined. If we try to evaluate $\alpha(t)$ and it happens to be undefined, then we simply wait forever for the answer which never comes. We can't cancel the request, either: once we ask the oracle for $\alpha(t)$, we're committed. Also, the oracle can only entertain one query at a time. (This is very important! For example, the functional that returns 0 if the domain of $\alpha$ is not empty and is undefined otherwise is not computable here, but it is computable if the oracle can entertain an arbitrary number of simultaneous queries. Similarly, allowing $n$ simultaneous queries yields a different class of computable functionals for each $n$.) The definition of recursiveness for partial functionals in the first paragraph fails in this generalization, since it could happen that our computation of $F(\alpha)$ queries a finite set of values of $\alpha(t)$ all with $t\lt x$, but $\alpha$ is undefined for some other $t\lt x$, so $\overline\alpha(x)$ is already undefined but our computation is fine. In summary, I'm asking for a generalization of "recursive partial functional" for this situation. - I like your concept, but do you have a specific question about it? –  Joel David Hamkins May 13 '11 at 0:37 A thorough answer to this question will be quite long. In the meantime, may suggest section 4 of John R. Longley's survey, Notions of Computability at Higher Types I, [homepages.inf.ed.ac.uk/jrl/Research/notions1.pdf] ? –  Ulrik Buchholtz May 13 '11 at 4:00 @Ulrik: It is long, but it seems completely relevant to my question. Besides, I've been curious about this subject for a while, so I don't mind a bit of reading. Thanks! –  Darsh Ranjan May 14 '11 at 1:59 This sounds like Jaap van Oosten's partial combinatory algebra $\mathcal{B}$, or its effective version, to be precise. You can read about it in John Longley's survey paper, as mentioned by Ulrik in the comments, or specifically in John's "Sequentially realizable functionals".
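To make the oracle discipline described above concrete, here is a small illustrative sketch in Python. It is my own toy model, not a definition from the question or from Longley's survey: the oracle is an ordinary callable that may fail to return (an infinite loop plays the role of "undefined"), and the functional queries it strictly one value at a time, so a single undefined query makes the whole computation diverge.

```python
def F(alpha):
    """Return the least t with alpha(t) == 0, querying alpha one value at a time.

    If alpha(t) is undefined (the call never returns) for some t before the
    first zero, this computation hangs forever; once we ask for alpha(t) we
    are committed, which is exactly the restriction discussed above.
    """
    t = 0
    while True:
        if alpha(t) == 0:   # a single sequential query; no way to cancel it
            return t
        t += 1

def total_alpha(t):
    # A total oracle: defined everywhere.
    return 0 if t == 3 else 1

def partial_alpha(t):
    # A partial oracle: alpha(2) is "undefined" (the call loops forever).
    if t == 2:
        while True:
            pass
    return 0 if t == 3 else 1

print(F(total_alpha))   # prints 3
# F(partial_alpha) would hang, even though alpha(3) == 0 is perfectly defined.
```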
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8674883842468262, "perplexity": 454.8367417888793}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375096686.2/warc/CC-MAIN-20150627031816-00289-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/cylindrical-surface.21471/
# Cylindrical Surface!

1. Apr 19, 2004

### Xishan

What is the equation of a cylinder with its axis in the xy-plane, making an angle 'alpha' with the x-axis, where the axis intersects the y-axis at a distance of 'k'? Initially I thought this problem would be very simple, but I haven't had any success with it in the last few days.

Xishan

2. Apr 19, 2004

### Integral

Staff Emeritus
Take the expression for a cylinder aligned with an axis, then apply a rotation and translation of your coordinate system. For a translation
$$x = x' + h$$
$$y = y' + k$$
and for the rotation
$$x = x'\cos(\theta) - y'\sin(\theta)$$
$$y = x'\sin(\theta) + y'\cos(\theta)$$

Last edited: Apr 19, 2004

3. Apr 20, 2004

### Xishan

No sir! When the cylinder's axis lies in the xy plane and is NOT PARALLEL to any of the axes, shouldn't the equation involve all of the coordinates (i.e., x, y & z)? What you've given here is OK for an in-plane rotation or translation, but not for my case! Or is it? This way the cylinder is rotated about its own axis, which for a right circular cylinder doesn't need any axes transformation at all!

Last edited: Apr 20, 2004

4. Apr 20, 2004

### jdavel

Xishan, your original question said the axis is in the xy plane, but not parallel to x or y. Integral's rotation will make it lie along the new x (or new y, I can never tell which until I've done the rotation!) axis.

5. Apr 21, 2004

### Xishan

I've just managed to solve the problem. The equation of that cylindrical surface turns out to be

x^2 + y^2 sin(a)^2 + z^2 cos(a)^2 - yz sin(2a) <= r^2

This cylinder has its axis in the yz plane and makes an angle 'a' with the y axis in the ccw direction. This can now be verified: putting a = 0 gives the equation of a cylinder with its axis along the y axis, x^2 + z^2 <= r^2, and a = 90 gives x^2 + y^2 <= r^2, a cylinder with its axis along the z axis! Now, if the axis is moved away from the origin, the respective intercepts may be subtracted from x, y or z. Thanks everyone for considering this problem!
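The final equation can also be checked symbolically. The following is an independent sketch (not from the thread, and it assumes SymPy is available): it confirms that the left-hand side is exactly the squared distance from a point (x, y, z) to the line through the origin with unit direction (0, cos a, sin a), so the surface is indeed a cylinder of radius r about that axis.

```python
import sympy as sp

x, y, z, a, r = sp.symbols('x y z a r', real=True)
u = sp.Matrix([0, sp.cos(a), sp.sin(a)])   # unit vector along the cylinder's axis
p = sp.Matrix([x, y, z])

# Squared distance from p to the axis: |p|^2 - (p . u)^2
dist2 = p.dot(p) - (p.dot(u))**2
claimed = x**2 + y**2*sp.sin(a)**2 + z**2*sp.cos(a)**2 - y*z*sp.sin(2*a)

print(sp.simplify(dist2 - claimed))        # prints 0, so the two expressions agree
```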
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5587700605392456, "perplexity": 1274.4896771389883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886103910.54/warc/CC-MAIN-20170817185948-20170817205948-00096.warc.gz"}
http://export.arxiv.org/abs/1104.0920
math.CO (what is this?)

# Title: The Harary index of trees

Abstract: The Harary index of a graph $G$ is a recently introduced topological index, defined on the reverse distance matrix as $H(G)=\sum_{u,v \in V(G)}\frac{1}{d(u,v)}$, where $d(u,v)$ is the length of the shortest path between two distinct vertices $u$ and $v$. We present the partial ordering of starlike trees based on the Harary index and we describe the trees with the second maximal and the second minimal Harary index. In this paper, we investigate the Harary index of trees with $k$ pendent vertices and determine the extremal trees with maximal Harary index. We also characterize the extremal trees with maximal Harary index with respect to the number of vertices of degree two, matching number, independence number, radius and diameter. In addition, we characterize the extremal trees with minimal Harary index and given maximum degree. We conclude that in all presented classes, the trees with maximal Harary index are exactly those trees with the minimal Wiener index, and vice versa.

Comments: 14 pages, 2 figures
Subjects: Combinatorics (math.CO)
MSC classes: 92E10, 05C12
Cite as: arXiv:1104.0920 [math.CO] (or arXiv:1104.0920v3 [math.CO] for this version)

## Submission history

From: Aleksandar Ilic [view email]
[v1] Tue, 5 Apr 2011 19:43:04 GMT (81kb)
[v2] Wed, 6 Apr 2011 07:39:24 GMT (73kb)
[v3] Sat, 21 May 2011 09:01:12 GMT (73kb)

Link back to: arXiv, form interface, contact.
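As a concrete illustration of the definition (my own sketch, not code from the paper), the snippet below computes H(G) for two small trees on four vertices using a plain breadth-first search for the distances. Consistent with the paper's conclusion, the star, which has the smaller Wiener index, has the larger Harary index.

```python
from collections import deque
from itertools import combinations

def distances(adj, source):
    """BFS distances from source in an unweighted graph given as an adjacency dict."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def harary_index(adj):
    """H(G) = sum over unordered pairs {u, v} of 1 / d(u, v)."""
    return sum(1.0 / distances(adj, u)[v] for u, v in combinations(adj, 2))

path = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}    # the path P4
star = {1: [2, 3, 4], 2: [1], 3: [1], 4: [1]}    # the star on 4 vertices

print(harary_index(path))   # 1 + 1 + 1 + 1/2 + 1/2 + 1/3 = 4.333...
print(harary_index(star))   # 3*1 + 3*(1/2) = 4.5, larger than for the path
```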
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5160791873931885, "perplexity": 1405.8532184888725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540525781.64/warc/CC-MAIN-20191210013645-20191210041645-00214.warc.gz"}
https://www.bartleby.com/solution-answer/chapter-144-problem-148cyu-chemistry-and-chemical-reactivity-10th-edition/9781337399074/the-catalyzed-decomposition-of-hydrogen-peroxide-is-first-order-in-h2o2-it-was-found-that-the/18de5bbc-7309-11e9-8385-02ee952b546e
# The catalyzed decomposition of hydrogen peroxide is first-order in [H2O2]. It was found that the concentration of H2O2 decreased from 0.24 M to 0.060 M over a period of 282 minutes. What is the half-life of H2O2? What is the rate constant for this reaction? What is the initial rate of decomposition at the beginning of this experiment (when [H2O2] = 0.24 M)?

### Chemistry & Chemical Reactivity
10th Edition
John C. Kotz + 3 others
Publisher: Cengage Learning
ISBN: 9781337399074

Chapter 14.4, Problem 14.8CYU
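The textbook's own solution is not reproduced on this page, but the standard first-order treatment is short; the following sketch works through it (my arithmetic, not the publisher's posted answer).

```python
import math

c0, c, t = 0.24, 0.060, 282.0            # initial conc. (M), final conc. (M), elapsed time (min)

k = math.log(c0 / c) / t                 # first-order kinetics: ln([A]0/[A]) = k t
half_life = math.log(2) / k              # t_1/2 = ln 2 / k
initial_rate = k * c0                    # rate = k [H2O2] at the start of the experiment

print(round(k, 5))             # 0.00492  (min^-1)
print(round(half_life))        # 141      (min; 0.24 M -> 0.060 M is two half-lives in 282 min)
print(round(initial_rate, 5))  # 0.00118  (M/min)
```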
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8323324918746948, "perplexity": 3677.3888147998514}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141197278.54/warc/CC-MAIN-20201129063812-20201129093812-00214.warc.gz"}
http://piracy-studies.org/2lnwo/carbon-14-dating-formula-fb5bf7
When an organism dies, its carbon 14 (C14) atoms disintegrate at a known rate, with a half-life of about 5,730 years. A formula to calculate how old a sample is by carbon-14 dating is:

t = [ ln(Nf/No) / (-0.693) ] x t1/2

where ln is the natural logarithm, Nf/No is the fraction of carbon-14 in the sample compared to the amount in living tissue, and t1/2 is the half-life of carbon-14 (5,730 ± 40 years). For example, say a fossil is found that has 35% as much carbon 14 as a living sample: the formula gives about 8,680 years, so the organism died roughly 8,680 years ago. A sample of wood found to contain 1/8 as much C-14 as the wood of a living tree has passed through roughly three half-lives, and a sample with 1/4 the carbon-14 has gone through two half-lives, which is the same thing as 2 times 5,730 years. The word "estimates" is used because there is a significant amount of uncertainty in these measurements; the Miami-based carbon dating laboratory Beta Analytic, for example, does not report standard deviations of less than +/- 30 BP for single measurements. The size of the half-life also means that carbon 14 dating is not particularly helpful for very recent deaths or for deaths more than about 50,000 years ago. A quick numerical check of the formula appears at the end of this section.

Carbon-14 (14C), or radiocarbon, is a radioactive isotope of carbon with an atomic nucleus containing 6 protons and 8 neutrons. It was discovered on February 27, 1940, by Martin Kamen and Sam Ruben at the University of California Radiation Laboratory in Berkeley, California; its existence had been suggested by Franz Kurie in 1934. There are three naturally occurring isotopes of carbon on Earth: carbon-12, which makes up about 99% of all carbon; carbon-13, about 1%; and carbon-14, which occurs in trace amounts of roughly 1 to 1.5 atoms per 10^12 atoms of carbon in the atmosphere. The different isotopes of carbon do not differ appreciably in their chemical properties, which is why carbon-14 can be used as a radioactive tracer in medicine and for carbon labeling in chemical and biological research.

Carbon-14 is continually formed in the upper atmosphere, where neutrons produced by cosmic rays are absorbed by nitrogen-14 in the 14N(n,p)14C reaction. The highest rate of production takes place at altitudes of 9 to 15 km and at high geomagnetic latitudes, at a modelled rate of roughly 16,400 to 18,800 atoms of 14C per second per square meter of the Earth's surface. Carbon-14 can also be produced by other neutron reactions, including in particular 13C(n,γ)14C and 17O(n,α)14C with thermal neutrons and 15N(n,d)14C and 16O(n,3He)14C with fast neutrons, by lightning (in amounts negligible, globally, compared to cosmic ray production), by thermal neutron irradiation of targets in nuclear reactors, and, very rarely, radiogenically through cluster decay of 223Ra, 224Ra and 226Ra. After production, the carbon-14 atoms react rapidly to form mostly (about 93%) 14CO (carbon monoxide), which subsequently oxidizes at a slower rate to form 14CO2, radioactive carbon dioxide. The gas mixes rapidly and becomes evenly distributed throughout the atmosphere on a timescale of weeks, dissolves in the oceans (the transfer between the shallow ocean layer and the large reservoir of bicarbonates in the ocean depths occurs at a limited rate), and enters plants through photosynthesis. Since many sources of human food are ultimately derived from terrestrial plants, the relative concentration of carbon-14 in our bodies is nearly identical to the relative concentration in the atmosphere. The atmospheric half-life for removal of 14CO2 has been estimated at roughly 12 to 16 years in the northern hemisphere, and the inventory of carbon-14 in Earth's biosphere is about 300 megacuries (11 EBq), most of it in the oceans.

Carbon-14 goes through radioactive beta decay: by emitting an electron and an electron antineutrino, one of its neutrons decays to a proton, and the carbon-14 becomes stable, non-radioactive nitrogen-14. The emitted beta particles have a maximum energy of 156 keV and a weighted mean energy of 49 keV; these are relatively low energies, with a maximum range of about 22 cm in air and 0.27 mm in body tissue. A gram of carbon containing 1 atom of carbon-14 per 10^12 atoms emits about 0.2 beta particles per second, the Geiger-Müller counting efficiency for such low-energy betas is only about 3%, and liquid scintillation counting is therefore the preferred measurement method. For comparison, the rates of disintegration of potassium-40 and carbon-14 in the normal adult body are comparable (a few thousand disintegrating nuclei per second), and environmental radiocarbon contributes only about 0.01 mSv/year to a person's dose of ionizing radiation, small compared to the 0.39 mSv/year from potassium-40.

Radiocarbon dating (usually referred to simply as carbon-14 dating) is a radiometric dating method developed by Willard Libby and his colleagues in 1949, during his tenure as a professor at the University of Chicago; Libby was awarded the Nobel Prize in chemistry for this work in 1960, and carbon dating has since given archaeologists a much more accurate method for determining the age of ancient artifacts. Living organisms absorb carbon by eating and breathing, so at any particular time all living organisms have approximately the same ratio of carbon 12 to carbon 14 in their tissues as the atmosphere. When an organism dies it ceases to replenish carbon in its tissues, the decay of carbon 14 to nitrogen 14 changes the ratio of carbon 12 to carbon 14, and by measuring how much the ratio has been lowered it is possible to estimate how much time has passed since the plant or animal lived. The initial 14C level for the calculation can either be estimated or compared directly with year-by-year tree-ring data (dendrochronology) back to about 10,000 years ago, or with cave deposits (speleothems) back to about 45,000 years before the present; uranium-thorium dating provides another check, and Alan Zindler, a professor of geology at Columbia University and a member of the Lamont-Doherty research group, has noted that age estimates from carbon dating and uranium-thorium dating differ only slightly for the period from about 9,000 years ago to the present. Libby estimated the radioactivity of exchangeable carbon-14 at about 14 disintegrations per minute per gram of pure carbon, and this is still used as the activity of the modern radiocarbon standard. The original Oxalic acid standard (HOx 1) was made from a crop of 1955 sugar beet and is no longer commercially available; a second standard, Oxalic acid II (HOx 2, N.I.S.T. designation SRM 4990 C), was prepared when stocks of HOx 1 began to dwindle, and the isotopic ratio of HOx I is -19.3 per mille with respect to the PDB standard belemnite. Dating a specific sample of fossilized carbonaceous material is more complicated: production rates vary with the heliospheric modulation of the cosmic ray flux and with changes in the Earth's magnetic field, and occasional spikes occur, such as the unusually high production in AD 774-775 caused by an extreme solar energetic particle event. Each sample type also has its own problems with contamination and special environmental effects. Because the age of fossil fuels far exceeds the half-life of 14C, man-made chemicals derived from petroleum or coal are greatly depleted in carbon-14; the relative absence of 14CO2 is therefore used to determine the contribution of fossil fuel oxidation to the total carbon dioxide in a given region of the atmosphere, and apparent carbon-14 in very old geological samples usually indicates contamination by bacteria, secondary production by the 14N(n,p)14C reaction underground, or direct uranium decay. For the Borexino solar neutrino observatory, petroleum feedstock with a 14C/12C ratio as low as 1.94x10^-18 was obtained for synthesizing the primary scintillant. Radiocarbon is also used to detect disturbance in natural ecosystems; in peatland landscapes, for example, it can indicate that carbon previously stored in organic soils is being released by land clearance or climate change.

The above-ground nuclear tests that occurred in several countries between 1955 and 1980 dramatically increased the amount of carbon-14 in the atmosphere and subsequently in the biosphere; after the tests ended, the concentration began to decrease again as radioactive CO2 was fixed into plant and animal tissue and dissolved in the oceans, and by 2009 the activity of 14C in fresh terrestrial biomatter (238 Bq per kg carbon) was close to the value before atmospheric nuclear testing (226 Bq/kg C in 1950). One side effect of this "bomb pulse" is that it allows the birth year of an individual to be estimated from the carbon-14 in tooth enamel or in the lens of the eye, and in 2019 Scientific American reported that carbon-14 from nuclear bomb testing had been found in the bodies of aquatic animals living in the Mariana Trench. Carbon-14 is also produced in the coolant of boiling water reactors (BWRs) and pressurized water reactors (PWRs) and inside nuclear fuel itself (most significantly from transmutation of nitrogen-14 impurities); it is typically released to the atmosphere as carbon dioxide at BWRs and as methane at PWRs, best practice for plant operators includes releasing it at night when plants are not photosynthesizing, and if spent fuel is sent to reprocessing the carbon-14 is released, for example as CO2 during PUREX. In medicine, the initial variant of the urea breath test for Helicobacter pylori fed the patient urea labeled with approximately 37 kBq (1.0 μCi) of carbon-14; in the event of an infection, the bacterial urease enzyme breaks the urea down into ammonia and radioactively labeled carbon dioxide, which can be detected by low-level counting of the patient's breath. This test has been largely replaced by the 13C urea breath test, which has no radiation issues.
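As a quick numerical check of the dating formula given at the start of this section (a sketch added here, not part of the original page), the snippet below reproduces the worked examples using the -0.693 approximation for -ln 2 and a half-life of 5,730 years.

```python
import math

def carbon14_age(fraction_remaining, half_life=5730.0):
    """Age in years from the fraction of carbon-14 remaining, per the formula above."""
    return math.log(fraction_remaining) / (-0.693) * half_life

print(round(carbon14_age(0.35)))    # 8680  -- the 35% example worked above
print(round(carbon14_age(0.125)))   # 17194 -- the wood sample with 1/8 as much C-14, about three half-lives
print(round(carbon14_age(0.25)))    # 11462 -- essentially two half-lives (2 x 5730 = 11460)
```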
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7086018919944763, "perplexity": 1980.9231022170065}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056856.4/warc/CC-MAIN-20210919095911-20210919125911-00390.warc.gz"}
http://physionet.cps.unizar.es/physiotools/plt/plt/html/node49.html
# Using plt with pdfLaTeX

Most of the techniques described in this chapter for preparing PostScript output from LaTeX documents and .eps format plt figures will work without changes if you use pdfLaTeX to prepare PDF output from LaTeX documents and .pdf format plt figures.

If you are creating figures specifically for inclusion in a PDF document, use a command of the form

```plt -T lw ... | lwcat -pdf >fig.pdf
```

to make a PDF figure. If you have already generated a PostScript figure, use a command such as

```epstopdf fig.ps
```

to make fig.pdf from an existing fig.ps. (epstopdf is freely available from CTAN, http://www.ctan.org/.)

It is usually not necessary to make any changes to an existing LaTeX source document in order to format it using pdfLaTeX; thus, for example, your document should still use the epsfig package, even though your included figures will be in PDF rather than EPS format. When you specify the names of the figure files, always use the form

```\epsfig{file=fig}
```

If you avoid writing file=fig.ps or file=fig.pdf, then the correct version of your figure will be chosen automatically when formatting your document with latex and dvips or with pdflatex. The only feature of epsfig described in this appendix that is not currently supported by pdfLaTeX is the clip= option, which is ignored. If you are reading the PDF version of this book, the figure in section B.3 illustrates the results; you should avoid using the clip= option if you anticipate using pdfLaTeX.

Using pdfLaTeX to format myfile.tex is a one-step process:

```pdflatex myfile
```

Unless there are errors, this command should produce myfile.pdf, which can be viewed using gv, xpdf, Acrobat, or any other PDF reader.

George B. Moody ([email protected]) 2005-04-26
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.964835524559021, "perplexity": 4243.118174290171}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818688158.27/warc/CC-MAIN-20170922022225-20170922042225-00296.warc.gz"}
http://mathhelpforum.com/algebra/202484-how-root-problem-wrong-print.html
How is this root problem wrong? • Aug 23rd 2012, 04:15 PM Rissa0403 How is this root problem wrong? • Aug 23rd 2012, 05:21 PM Soroban Re: How is this root problem wrong? Hello, Rissa0403! You played the $8\heartsuit$ instead of the $J\spadesuit.$ And you transposed to $E\flat\text{ minor}$ instead of $F\sharp\text{ major}.$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8254393339157104, "perplexity": 6610.683707913445}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560284376.37/warc/CC-MAIN-20170116095124-00185-ip-10-171-10-70.ec2.internal.warc.gz"}
https://math.hecker.org/2017/12/27/linear-algebra-and-its-applications-exercise-3-4-14/
## Linear Algebra and Its Applications, Exercise 3.4.14 Exercise 3.4.14. Given the vectors $a = \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} \quad b = \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} \quad c = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix}$ find the corresponding orthonormal vectors $q_1$, $q_2$, and $q_3$. Answer: We first choose $a' = a$. We then have $b' = b - \frac{(a')^Tb}{(a')^Ta'}a' = b - \frac{1 \cdot 1 + 1 \cdot 0 + 0 \cdot 1}{1^2 + 1^2 + 0^2}a' = b - \frac{1}{2}a'$ $= \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} - \frac{1}{2} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{2} \\ -\frac{1}{2} \\ 1 \end{bmatrix}$ We then have $c' = c - \frac{(a')^Tc}{(a')^Ta'}a' - \frac{(b')^Tc}{(b')^Tb'}b' = c - \frac{1 \cdot 0 + 1 \cdot 1 + 0 \cdot 1}{1^2 + 1^2 + 0^2}a' - \frac{\frac{1}{2} \cdot 0 + (-\frac{1}{2}) \cdot 1 + 1 \cdot 1}{(\frac{1}{2})^2 + (-\frac{1}{2})^2+ 1^2}b'$ $= c' - \frac{1}{2}a' - \frac{\frac{1}{2}}{\frac{3}{2}}b' = c - \frac{1}{2}a' - \frac{1}{3}b'$ $= \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} - \frac{1}{2} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} - \frac{1}{3} \begin{bmatrix} \frac{1}{2} \\ -\frac{1}{2} \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} \frac{1}{2} \\ \frac{1}{2} \\ 0 \end{bmatrix} - \begin{bmatrix} \frac{1}{6} \\ -\frac{1}{6} \\ \frac{1}{3} \end{bmatrix}$ $= \begin{bmatrix} -\frac{2}{3} \\ \frac{2}{3} \\ \frac{2}{3} \end{bmatrix}$ Now that we have calculated the orthogonal vectors $a'$, $b'$, and $c'$, we can normalize them to create the orthonormal vectors $q_1$, $q_2$, and $q_3$. We have $\|a'\| = \sqrt{1^2+1^2 + 0^2} = \sqrt{2}$ $\|b'\| = \sqrt{(\frac{1}{2})^2 + (-\frac{1}{2})^2 + 1^2} = \sqrt{\frac{3}{2}} = \frac{\sqrt{3}}{\sqrt{2}}$ $\|c'\| = \sqrt{(-\frac{2}{3})^2 + (\frac{2}{3})^2 + (\frac{2}{3})^2} = \sqrt{\frac{12}{9}} = \sqrt{\frac{4}{3}} = \frac{2}{\sqrt{3}}$ so that $q_1 = a' / \|a'\| = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \\ 0 \end{bmatrix}$ $q_2 = b' / \|b'\| = \frac{\sqrt{2}}{\sqrt{3}} \begin{bmatrix} \frac{1}{2} \\ -\frac{1}{2} \\ 1 \end{bmatrix} = \begin{bmatrix} \frac{1}{\sqrt{2}\sqrt{3}} \\ -\frac{1}{\sqrt{2}\sqrt{3}} \\ \frac{2}{\sqrt{2}\sqrt{3}} \end{bmatrix}$ $q_3 = c' / \|c'\| = \frac{\sqrt{3}}{2} \begin{bmatrix} -\frac{2}{3} \\ \frac{2}{3} \\ \frac{2}{3} \end{bmatrix} = \begin{bmatrix} -\frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{3}} \end{bmatrix}$ NOTE: This continues a series of posts containing worked out exercises from the (out of print) book Linear Algebra and Its Applications, Third Edition by Gilbert Strang. If you find these posts useful I encourage you to also check out the more current Linear Algebra and Its Applications, Fourth Edition, Dr Strang’s introductory textbook Introduction to Linear Algebra, Fifth Edition and the accompanying free online course, and Dr Strang’s other books. This entry was posted in linear algebra and tagged , . Bookmark the permalink.
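As a quick numerical sanity check on this result (my own sketch, not part of the book or the original post), the following runs classical Gram-Schmidt on a, b, c with NumPy and verifies that the resulting columns are orthonormal and match q1, q2, q3 above.

```python
import numpy as np

a = np.array([1.0, 1.0, 0.0])
b = np.array([1.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 1.0])

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthogonalize, then normalize, each vector in turn."""
    basis = []
    for v in vectors:
        w = v - sum(np.dot(q, v) * q for q in basis)   # subtract projections onto earlier q's
        basis.append(w / np.linalg.norm(w))            # normalize to unit length
    return np.column_stack(basis)

Q = gram_schmidt([a, b, c])
print(Q)                                   # columns are q1, q2, q3 as derived above
print(np.allclose(Q.T @ Q, np.eye(3)))     # True: the columns are orthonormal
```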
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9189487099647522, "perplexity": 956.6910572171587}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000044.37/warc/CC-MAIN-20190626013357-20190626035357-00028.warc.gz"}
https://codegolf.stackexchange.com/questions/145118/i-am-greater-than-you
I am greater than you! [duplicate] Write a function or program that given a list of non negative integers, arranges them such that they form the largest possible number. INPUT [50, 2, 1, 9] OUTPUT 95021 INPUT 0 OUTPUT 0 INPUT (Interesting one) [7, 76] OUTPUT 776 RULES • standard loopholes apply. • Depending on your language you can use int(32) / int(64) or any other numeric datatype. (Please append the chosen type to your answer) • take a list as Input • you choose the behavior on empty input GL marked as duplicate by caird coinheringaahing, Laikoni, Sriotchilism O'Zaic code-golf Oct 12 '17 at 22:20 • Can we return a list rather than a number? [9,50,2,1] for instance? – Mr. Xcoder Oct 12 '17 at 12:27 • No, a "plain" number has to be returned, like in the example. – 0x45 Oct 12 '17 at 15:00 • Is there any reason why it must be a plain number and not a list? – Sriotchilism O'Zaic Oct 12 '17 at 15:34 • Because in my understanding a list can't be read as a number. Concatenating is also the task. – 0x45 Oct 12 '17 at 15:36 • – nimi Oct 12 '17 at 16:49 05AB1E, 3 bytes œJà Try it online! Python 2, 85807772 70 bytes lambda l:''.join(sorted(map(str,l),key=lambda i:i+i[-1]*max(l))[::-1]) Try it online! Sorts the numbers lexicographically, but each number is padded with its last digit. This means that 'shorter' numbers (string-wise) can be larger than 'longer' numbers: Example: Input: [76, 7] Each number gets padded with its last digit: ['76666..','7777..'] Sorted (descending): ['7777..','76666..'], which gives [7, 76] Joining the result gives: 776 Jelly, 4 bytes Œ!VṀ Explanation Input: list [1, 4, 5, 21, 4] Œ! Generate all permutations of input list V Eval those lists as Jelly code: every sublist is joined and interpreted as int Ṁ Pick the highest Try it online! Brachylog, 5 bytes pᶠcᵐ⌉ Try it online! Explanation pᶠ Find all permutations of the list cᵐ Concatenate each permutation into an integer ⌉ Take the biggest one JavaScript (ES6), 40 bytes a=>a.sort((a,b)=>+b+a-(+a+b)).join Sorts the numbers by their ordering when concatenated, and joins the result. • I like those ugly concatenations, it works in Squeak Smalltalk too [:x|(x sort:[:a :b|'',a,b>('',b,a)])join+0]the final +0 is for answering an Integer and avoiding quotes – aka.nice Oct 13 '17 at 1:11 Ohm v2, 4 bytes ψJì↑ Explanation: ψ All possible permutations J join sublists ì convert to int ↑ get maximum Try it online! Japt, 8 bytes ñ!îL w q Test it online! Look ma, no permutation built-in! Explanation ñ Sort the input as if each item !îL were repeated to 100 chars. (!îL -> LîX for each item X, L = 100) w Reverse. q Join into a single string. Implicit: output result of last expression Repeating each item to length 100 works because while '7' < '76', '7777...' > '7676...', and no number could possibly be length 100 when converted to a string. • Fails for 7, 78; output should be 787 but this returns 778. – Neil Oct 13 '17 at 8:00 • @Neil Whoops, thanks. Fixed at +0 bytes.
– ETHproductions Oct 13 '17 at 13:08 • Now fails for 7, 776; output should be 7776 but this returns 7767. – Neil Oct 13 '17 at 14:24 • '78' is before '7', '76' after '7' and, '7' and '77' can be in any order , more generally when a number start with sequence of another wrapped the digit which comes after must be compared to the next number in shorter: for example '7625' comes before '762575' but '76257625762' before '7625' – Nahuel Fouilleul Oct 18 '17 at 13:50 • @NahuelFouilleul thanks, fixed at +0 bytes. I promise it's finally fixed this time, for real... – ETHproductions Oct 18 '17 at 21:35 Actually, 11 bytes ;l@╨⌠εj≈⌡MM Try it online! Explanation: ;l@╨⌠εj≈⌡MM ;l@╨ all permutations of input ⌠εj≈⌡M concatenate each permutation M maximum Pyth, 9 bytes eSmsjkd.p Try it here! or: eSsMjLk.p jkeojkN.p • .p - Generate all the permutations. • msjkd - Map on the above with the following function that takes d as a variable: • jkd - Concatenate the integers into a single string. • s - Convert to integer. • S - Sort the input. • e - Get the last element. Octave / MATLAB, 65 bytes @(x)max(cellfun(@(s)str2num(s(s>32)),cellstr(num2str(perms(x))))) Try it online! Java (OpenJDK 8), 113220 87 bytes l->l.stream().map(i->""+i).sorted((i,j)->(j+i).compareTo(i+j)).reduce((i,j)->i+j).get() Try it online! • what about l->l.stream().sorted((i,j)->(j+""+i).compareTo(i+""+j)).forEach(i->System.out.print(i)) – Nahuel Fouilleul Oct 12 '17 at 13:36 • should be a function rather than a consumer l->l.stream().map(i->""+i).sorted((i,j)->(j+i).compareTo(i+j)).reduce((i,j)->i+j).get() – Nahuel Fouilleul Oct 12 '17 at 13:52 K (oK), 18 bytes Solution: ,/x@>(|/#:'x)#'x:$ Try it online! Examples: > ,/x@>(|/#:'x)#'x:$50 2 1 9 "95021" > ,/x@>(|/#:'x)#'x:$7 76 "776" Explanation: ,/x@>(|/#:'x)#'x:$ / the solution $/ convert to string, 50 2 1 9 -> "50","2","1","9" x: / store in x ( ) / do this together #:'x / count (#:) each (') x, "50","2","1","9" -> 2 1 1 1 |/ / max over, 2 1 1 1 -> 2, #' / take each parallel, 2#'"50","2","1","9" -> "50","22","11","99" > / return sorted indices (descending), "50","22","11","99" -> 3 0 1 2 x@ / apply these indices to x, "50","2","1","9" -> "9","50","2","1" ,/ / flatten, "9","50","2","1" -> "95021" Japt, 13 7 bytes á m¬ñ Ì Try it here. -6 thanks to ETHproductions. • Don't think you'll need the ms if you change ®r+} to m¬ – ETHproductions Oct 12 '17 at 14:37 • @ETHproductions Oh so there is such a builtin :p – Erik the Outgolfer Oct 12 '17 at 17:22 • A built-in to join an array on the empty string? Of course :P – ETHproductions Oct 12 '17 at 17:25 • @ETHproductions yeah I was referring to q I thought it surprisingly didn't exist but japt isn't that insane so as not to include it :p – Erik the Outgolfer Oct 12 '17 at 17:25 Ruby, 4841 36 bytes Similar implementation to others, generates all permutations and takes the max. Shame that "permutation" needs to be spelled out in full . . . f=->l{l.permutation.map(&:join).max} Calling it: f.call([50, 2, 1, 9]) => "95021" f.call([7, 76]) => "776" Or f.([50, 2, 1, 9]) => "95021" • I have assumed that the output has to be an integer in the given language - I can drop the .to_i at the end and save 5 bytes if this is not the case. However OP has stated output should be a "plain number" – Neil Slater Oct 12 '17 at 16:12 • you can also output it as a String, which s generally readable like an int – 0x45 Oct 12 '17 at 16:18 • You can also use the f.([50, 2, 1, 9]) syntax to call lambdas. 
– Jonah Oct 12 '17 at 17:25 C#, 113 Bytes It's not very short, but hey, it's still C# we're talking about. int F(List<int>n)=>n.Max(i=>{var l=new List<int>(n);l.Remove(i);return int.Parse(""+i+(l.Count>0?""+F(l):""));}); Formatted: int F (List<int>n) => n.Max (i => { var l = new List<int> (n); l.Remove (i); return int.Parse ("" + i + (l.Count > 0 ? "" + F (l) : "")); }); It simply recursively tries all possible permutations of the input and returns the largest one. It uses a 32 Bit integer as input and output numerical datatype. If anybody has an idea on how to improve this solution, feel free to comment. Gaia, 4 bytes f$¦⌉ Try it online! Perl 5, 27 bytes join"",sort{"$b$a"cmp$a.$b} TIO Groovy, 41 bytes {it.permutations().max{it.join()}.join()} If commands were 3 bytes in groovy instead of a full word I halve the size lol: {it.p().max{it.j()}.j()} J, 29 25 bytes [:>./-.&' '&.":"1@(A.~i.@!@#) [:>./,&.":/"1@(A.~i.@!@#) • -.&' '&.":"1 smashes a list of numbers together to produce a single number. -. is "set minus" and ": is format, so ": turns, eg, the list 7 76 into the single string (aka list of chars) into '7 76', and -.&' ' removes the spaces from that string. Since ": was applied using Under &. the inverse is automatically applied at the end, turning the single string-now-without-spaces back into a number. • ,&.":/"1 smashes a list of numbers together to produce a single number. • (A.~i.@!@#) all permutations of the list • >./ maximum of Try it online! MATLAB/Octave, 65 62 bytes @(n)max(str2num(sprintf([repmat('%d',size(n)) 10],perms(n)'))) Try it online! Anonymous function which takes an array as an input, and spits out an integer representing the largest number that can be made. Permutations are found, then the result is formatted to a 2D array where each line contains only the digits from the values in a given permutation in order. This is done by sprintf with enough %d markers to absorb the whole permutation. The result is then converted back to an array of integers where each line becomes its own value. The maximum from this array is returned. • Save 3 bytes using size(n) instead of 1,numel(n) in the repmat() call Note: This was developed completely independently from the other Octave answer. bash, 96 bytes f(){ local l g p=$1;shift&&{ for i;{ (($i$p>$p$i))&&l+=\$i||g+=\ $i;};echo$(f $l)$p$(f$g);};} Try It Online • Doesn't work for inputs of 7 and 78. – Neil Oct 13 '17 at 8:01 • indeed, from ETHproductions solution (to append 'a' so that '7' > '76') doesn't work because shorter numbers always placed after number which begins with same sequence however it doesn't work for all numbers : '7' > '76', '7' > '77', '7' > '78' but (should be '7' < '76', '7' = '77' and '7' < '78') – Nahuel Fouilleul Oct 13 '17 at 8:14 • fixed using bash builtins instead of coreutils – Nahuel Fouilleul Oct 18 '17 at 13:19 Python 2, 9086 80 bytes lambda l:max("".join(map(str,x))for x in permutations(l)) from itertools import* Try it online! -4 bytes thanks to FlipTack -6 bytes thanks to i cri everytim I believe this can be golfed a ton, but I don't really know how. Stupid type conversions added a ton of bytes. • IIRC you don't need any of your square brackets as max and join can take generators, which should save you four bytes – FlipTack Oct 12 '17 at 19:08 • 80 bytes. – totallyhuman Oct 17 '17 at 10:14 Mathematica, 57 bytes Max[FromDigits[Join@@IntegerDigits/@#]&/@Permutations@#]& Try it online! 
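For readers following the comparator discussion in the answers and comments above, here is an ungolfed reference sketch in Python 3. It is not a submission, just an illustration of the pairwise-concatenation sort that several of the answers golf down; the function and variable names are mine, not taken from any answer.

```python
# Ungolfed reference sketch: order the numbers so that, for any pair,
# the concatenation x+y is not smaller than y+x, then join.
from functools import cmp_to_key

def largest_number(nums):
    strs = [str(n) for n in nums]
    strs.sort(key=cmp_to_key(lambda a, b: (a + b < b + a) - (a + b > b + a)))
    joined = "".join(strs)
    return str(int(joined)) if joined else joined  # collapse e.g. "00" to "0"

print(largest_number([50, 2, 1, 9]))  # 95021
print(largest_number([7, 76]))        # 776
print(largest_number([0]))            # 0
```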
Matlab, 135 53 bytes New Solution: a=input('') b=sprintf('%d', a) c=sort(b,'descend') Explanation: It takes a user matrix input, then separates it into digits, and finally orders those digits from greatest to least. Old Solution: a=input(''); b=sprintf('%d',a) - '0'; c=[];b=sort(b,'descend'); for i=1:size(b,2) c(i)=b(i); end for y=1:size(c,2) e(y)=num2str(c(y)) end Explanation: First it takes a user input, then separates all of the numbers into individual digits. The %d in sprintf essentially converts the numbers in the string, which is necessary for sorting into each digit (afaik). I tried simply doing num2str(a) but that leaves a space between each number. From there, it is just a matter of sorting and arranging the numbers with proper formatting. The actual code has a lower byte count because it has no semi-colons and everything on one line. • I think that this might not work (based on the description your provided) because he doesn't want the individual DIGITS ordered (for example, if I'm understanding right, you would return 95210 and split up the 50 from the example, when the user requested that it return 95021). – phroureo Oct 12 '17 at 23:01
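To make the distinction raised in the last comment concrete, here is a small, purely illustrative Python sketch (not from the thread) contrasting a per-digit sort with an arrangement of the whole numbers for the example input.

```python
# Illustrative only: why sorting digits differs from arranging whole numbers.
from itertools import permutations

nums = [50, 2, 1, 9]

# Per-digit sort, as in the MATLAB answer above:
digit_sort = "".join(sorted("".join(map(str, nums)), reverse=True))
print(digit_sort)  # 95210 -- the 50 gets split, so this is not a valid arrangement

# Brute force over arrangements of the whole numbers (the permutation
# approach used by several answers):
best = max("".join(p) for p in permutations(map(str, nums)))
print(best)        # 95021
```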
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22301438450813293, "perplexity": 5502.46307927113}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232260358.69/warc/CC-MAIN-20190527005538-20190527031538-00060.warc.gz"}
https://mathematica.stackexchange.com/questions/179697/convert-decibels-to-amplitude-built-in-function
# Convert decibels to amplitude - Built-in function Is there a built-in function to convert decibels to amplitude in Mathematica? I tried with: UnitConvert[Quantity[40, "decibels"], "amplitude"] but it does not work. Online I found only a widget for Wolfram Alpha: http://www.wolframalpha.com/widgets/view.jsp?id=21e1ea77bd91aaa0fc4d01a943a654e • 10^(amplitudeIndB/20.) – andre314 Aug 8 '18 at 16:14 • Hello @andre, I am looking for a built-in function that does the same thing. – Gennaro Arguzzi Aug 8 '18 at 16:17 • Why insist on built-in? I think the answer will be no such built-in, although the external resources (databases, W|A) continue to evolve, I think. W|A does not seem currently to be able to recognize "amplitude ratio" or the ISO term "root-power quantity," which does not seem to be an ISO unit per se or have a unit name/symbol. – Michael E2 Aug 8 '18 at 17:27 • @MichaelE2 The confusing part is, the documentation of Quantity claims that "Supported units include all those specified by NIST Special Publication 811. " And decibel is indeed mentioned there – xzczd Aug 8 '18 at 17:48 • @xzczd Yes, I knew. I thought you would point out that UnitConvert cannot convert decibels to bels, though. It says they are incompatible (Quantity::compat), which seems an error to me. It seems an IndependentUnit[] is treated as having "no relationship to other units within a Quantity." – Michael E2 Aug 8 '18 at 18:38 There is nothing in the documentation, and below you can see what you can find in the internal functions, none of which seems to do the job. So the answer to your question is: No, there is no built-in function to convert decibels to amplitude in Mathematica. ## Speculation on "why" We can only guess why the developers didn't implement this; a possible narrative may be similar to the case of the now deprecated Units Package. There the documentation said: "The conversion of temperature units is different from most other unit conversions because it is not multiplicative. This is simply because the zeros of various systems are set at different values. For example, zero degrees Centigrade is the same as 32 degrees Fahrenheit." So you had to use ConvertTemperature instead of Convert. How is that similar? It's similar because both cases are somehow special. Wikipedia says The decibel (symbol: dB) is a unit of measurement used to express the ratio of one value of a physical property to another on a logarithmic scale. dB is NOT an amount of a physical quantity, but the logarithm of a ratio of two quantities in the same units, therefore unitless and non-linear. Also, depending on whether you are talking about amplitudes or power (amplitude squared), the factor is 20 or 10. Therefore, a non-linear relationship of something that arguably may not even be a unit, with a potentially ambiguous definition... better leave it to the users to define their own, rather simple, solutions. 
## A solution dB2lin[x_] := N[Power[10, x/20]]; lin2dB[x_] := N[20 Log10[x]]; ## Internal functions Some functions with matching names Names["**Decibel*", IgnoreCase -> True] (* {"SignalUtilsdecibelQ", "SignalUtilsdecibelQ$", \ "CalculateUnitsUnitCommonSymbolsDecibelsMuUnit", \ "CalculateUnitsUnitCommonSymbolsCalculateUnitsUnitCommonSymbols\ DecibelsMuUnit", "CalculateUnitsUnitCommonSymbolsDecibelsRUnit", \ "CalculateUnitsUnitCommonSymbolsDecibelsVUnit", \ "CalculateUnitsUnitCommonSymbolsDecibelsZUnit"} *) Names["*db*"] (* {"ImageColorOperationsDumpdb", \ "ImageColorOperationsDumpImageColorOperationsDumpdb", \ "PredictionsPrivatedb", \ "PredictionsPrivatePredictionsPrivatedb", \ "StatisticsLibraryDumpdbag", "StatisticsLibraryDumpdbag$", \ "PacletManagerLayoutDocsCollectionPrivatedbFile", \ "SystemFourierTransformDumpdbgPrintFT", \ "ChartingChartLabelingDumpdbgstyle", \ "ChartingChartLabelingDumpdbgstyle$", \ "ChartingChartLabelingDumpdbox", "ChartingdbPrint", \ "ImageColorOperationsDumpdbPrint", "ImageHumanDumpdbPrint", \ "ImageSpatialOperationsDumpdbPrint", \ "VisualizationVectorFieldsVectorFieldsDumpdbPrint", \ "WaveletsWaveletUtilitiesdbPrint", \ "PacletManagerLayoutDocsCollectionPrivatedbStrm", \ "PacletManagerLayoutDocsCollectionPrivatedbStrm$", \ "ChartingCommonDumpdbstyle", \ "ChartingChartLabelingDumpdbTimingReap", \ "ChartingChartLabelingDumpdbVpp", "ChartingParserDumpdbVpp", \ "SystemListPointPlot3DDumpdbVpp", \ "SystemListPointPlot3DDumpSystemListPointPlot3DDumpdbVpp"} *) • Hello @rhermans, thank you for your extremely clear answer. – Gennaro Arguzzi Aug 8 '18 at 17:06 • "It's not an amount of a physical quantity, but a the logarithm of a ratio of two quantities in the same units, therefore unitless. " Hmm… but UnitConvert[Quantity[40, "AngularDegrees"], "Radians"] works. – xzczd Aug 8 '18 at 17:12 • @xzczd remember that with the Units package "The conversion of temperature units is different from most other unit conversions because it is not multiplicative. This is simply because the zeros of various systems are set at different values. For example, zero degrees Centigrade is the same as 32 degrees Fahrenheit." so you had to use ConvertTemperature instead of Convert. Is not that there are no ways around, but these are special cases. And radians is closer to the idea of a units, as they are amounts of $\pi$ – rhermans Aug 8 '18 at 17:17 • Well, "AngularDegrees" isn't a unit for temperature, it's a unit for angle, which is dimensionless. With this example I just want to illustrate that dimensionless or unitless doesn't seem to be a reason for the "failure" of Mathematica. – xzczd Aug 8 '18 at 17:25 • UnitConvert will call InternalMWACompute["MWAToQuantity", name] to use WolframAlpha to assist with units when the unit name is not among the built-in ones. Now W|A can convert between decibels and bels, but UnitConvert cannot. So the call to W|A might only result in an IndependentUnit[] that cannot be converted. – Michael E2 Aug 8 '18 at 18:36
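The underlying arithmetic is easy to sanity-check outside Mathematica. Below is a small, purely illustrative Python sketch mirroring the dB2lin/lin2dB definitions above, assuming the amplitude (root-power) convention with its factor of 20.

```python
# Purely illustrative sketch mirroring dB2lin / lin2dB above,
# assuming the amplitude (root-power) convention, i.e. a factor of 20.
import math

def db_to_amplitude(db):
    return 10 ** (db / 20)

def amplitude_to_db(ratio):
    return 20 * math.log10(ratio)

print(db_to_amplitude(40))    # 100.0 -> 40 dB corresponds to a 100x amplitude ratio
print(amplitude_to_db(100))   # 40.0
```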
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.58612060546875, "perplexity": 1952.1044925998513}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251696046.73/warc/CC-MAIN-20200127081933-20200127111933-00374.warc.gz"}
https://cob.silverchair.com/jeb/article/207/16/2769/14775/The-dg2-for-gene-confers-a-renal-phenotype-in
Fluid transport in Drosophila melanogaster tubules is regulated by guanosine 3′,5′-cyclic monophosphate (cGMP) signalling. Here we compare the functional effects on tubules of different alleles of the dg2 (foraging or for) gene encoding a cGMP-dependent protein kinase (cGK), and show that the fors allele confers an epithelial phenotype. This manifests itself as hypersensitivity of epithelial fluid transport to the nitridergic neuropeptide, capa-1, which acts through nitric oxide and cGMP. However, there was no significant difference in tubule cGK activity between fors and forR adults. Nonetheless, fors tubules contained higher levels of cGMP-specific phosphodiesterase (cG-PDE) activity compared to forR. This increase in cGMP-PDE activity sufficed to decrease cGMP content in fors tubules compared to forR. Challenge of tubules with capa-1 increases cGMP content in both fors and forR tubules, although the increase from resting cGMP levels is greater in forstubules. Capa-1 stimulation of tubules reveals a potent inhibition of cG-PDE in both lines, although this is greater in fors; and is sufficient to explain the hypersensitive transport phenotype observed. Thus, polymorphisms at the dg2 locus do indeed confer a cGMP-dependent transport phenotype, but this can best be ascribed to an indirect modulation of cG-PDE activity, and thence cGMP homeostasis, rather than a direct effect on cGK levels. An important role of guanosine 3′,5′-cyclic monophosphate(cGMP) in epithelial fluid transport has been demonstrated in the insect equivalent of the vertebrate renal system, the Malpighian tubules(Davies, 2000). Malpighian tubules are fluid transporting, osmoregulatory organs that are critical for insect life (Dow and Davies,2001). Drosophila melanogaster tubules, which constitute an important genetic model for transporting epithelia(Dow and Davies, 2003),display elevated rates of fluid transport when stimulated by either exogenous cGMP, nitric oxide or neuropeptide-generated nitric oxide/cGMP (Davies et al., 1995, 1997; Dow et al., 1994b; Kean et al., 2002). An autocrine role for NO/cGMP has been proposed for tubule principal cells(Broderick et al., 2003), with NO/GMP signalling being compartmentalised to principal cells in the main,fluid-secreting segment of tubules. These cells contain the electrogenic vacuolar H+-ATPase (V-ATPase) pump(Dow, 1999), which energises fluid transport. Furthermore, electrophysiological studies show that cGMP signalling modulates V-ATPase activity(Davies et al., 1995),suggesting that cGMP signalling may regulate ion transport in tubules. Major effectors of cGMP signalling, including cGMP-dependent protein kinases (cGK)(Vaandrager and de Jonge,1996) have previously been described in tubules. Furthermore,pharmacological and transgenic modulation of cGMP-specific phosphodiesterase(cG-PDE) activity (Broderick et al., 2004, 2003; Dow et al., 1994b) both result in an epithelial phenotype. In Drosophila, cGK is encoded by two genes, dg1(Foster et al., 1996) and dg2. Both genes are expressed by Malpighian tubules(Dow et al., 1994b). dg2 was isolated and characterised during a search for cAMP-dependent kinase genes (Kalderon and Rubin,1989) and the putative cGK shown to be transcribed into three major RNA species of different size and several minor RNA species. These main transcripts (T1, T2 and T3) collectively code for at least three(de Belle et al., 1993), and possibly more, different polypeptides. 
The DG2 protein shares 64% overall homology with the prototypical bovine lung cGK, with 64% and 75% sequence identity to the cGMP-binding and kinase domains, respectively. Studies in Drosophila have revealed in vivo roles for dg2 and cGK. The naturally occurring rover/sitter foragingpolymorphism in Drosophila, which defines larval food search strategies, has been mapped to the dg2 gene (de Belle et al., 1989, 1993). Rovers(forR) have significantly longer path lengths than sitters(fors) in a nutritive environment, although both travel similar distances when food is absent. Similarly, adult fors animals travel shorter distances around nutrients(Pereira and Sokolowski,1993). Phosphorylation studies performed on samples from adult heads showed that fors contained slightly (10%) reduced cGK enzyme activity compared to forR. Also, northern and western analysis showed a small reduction in RNA and protein levels in fors compared to forR(Osborne et al., 1997). It has therefore been suggested that a reduction in amounts of cGK transcript and protein, together with reduced cGK activity, may account for the fors phenotype in larvae. Thus, the foragingpolymorphism points to the possibility that subtle alterations in cGK levels can have profound effects on the whole animal. Fluid transport assays performed on tubules from adult for lines has demonstrated that tubules from fors flies exhibit hypersensitivity to exogenously applied cGMP in comparison to forR or wild-type flies(Dow and Davies, 2001). However, stimulation of fluid transport by leucokinin, which stimulates fluid secretion via a calcium signal in the stellate cells, is unaltered in the for alleles (Dow and Davies,2001), suggesting that effects of alterations in cGK are confined to principal cells. We show here that the fors allele results in hypersensitivity of tubule fluid transport (compared with forR) in response to the neurohormone, capa-1. Intriguingly, the fors mutation does not appear to affect cGK activity in tubules; rather it impacts on cGMP content, and on cG-PDE activity. Capa-1 inhibits cG-PDE, which results in increased cGMP content, and the transport phenotype observed; this also demonstrates modulation of cG-PDE activity by a neurohormone in insects, for the first time. ### Drosophila stocks All strains were maintained at 22°C on standard Drosophiladiet over a 12 h:12 h photoperiod at 55% humidity. Lines used in this study were forR and fors (naturally occurring polymorphisms of dg2; a kind gift of M. Sokolowski,University of Toronto at Missisauga, Canada). ### Materials Schneider's medium (Gibco) was obtained from Invitrogen (Renfrew, UK). The nitridergic neuropeptide capa-1 was used in this study (GANMGLYAFPRVamide)because of its identical mode of action but slightly greater potency than capa-2 (Kean et al., 2002) and was synthesised by Research Genetics, Inc., now Invitrogen. Radiochemicals were obtained from Amersham Biosciences (Chalfont St Giles, UK). All other chemicals were obtained from Sigma-Aldrich (Gillingham, UK) unless stated otherwise. ### Locomotion activity monitoring Fly lines were assessed by monitoring activity of adult flies using the Drosophila locomotor activity monitor IV, (TriKinetics Inc., Waltham,MA, USA) in order to verify previously published adult behavioural phenotypes ascribed to for alleles. Flies were maintained at 22°C on standard Drosophila diet over a 12 h:12 h photoperiod, with lights on at 11.30 am. 
Tubes were made from 7.5 cm lengths of Tygon (Charny, France) clear flexible plastic tubing (R3603,i.d. 5/32”, o.d. 7/32”, wall 1/32”), plugged at one end with normal fly food and sealed with clear tape. 7-day-old male flies were anaesthetized and placed singly into each tube. Ends of tubes were then plugged with cotton wool. Tubes were placed in the monitor and flies allowed to recover overnight prior to monitoring. Readings were taken every 30 min over a period of 7 days. ### Transport assays Flies were used 1 week post-emergence, cooled on ice, then decapitated before dissection to isolate whole Malpighian tubules. Tubules were isolated into 10 ml drops of Schneider's medium under liquid paraffin and fluid secretion rates measured in tubules as detailed elsewhere(Dow et al., 1994a) under various conditions, as described in the text. Basal rates of fluid transport were measured for 30 min, and capa-1 (10-7 mol l-1; Kean et al., 2002) added as indicated, after which transport rates were measured for a further 30 min. ### Assay for tubule cGK activity The protocol, based on quantification of 32P-labelled phosphopeptide under conditions where cGK is active, has been adapted from the SignaTECT™ cyclic AMP-dependent protein kinase assay system (Promega,Southampton, UK) and from Osborne et al.(1997). Approximately 80 tubules from either forR or fors 1-week-old adults were dissected and placed in 20μl buffer (25 mmol l-1 Tris, pH 7.4, 150 mmol l-1sucrose, 2 mmol l-1 EDTA, 100 mmol l-1 NaCl, 50 mmol l-1 β-mercaptoethanol, 2 mg ml-1 leupeptin, 5 mg ml-1 aprotinin, 1 mg ml-1 phenylmethylsulphonyl fluoride). Tubules were either treated with hormone for 10 min, or left untreated, before being homogenised and centrifuged for 5 min at 13 000 g. Protein concentration of tubule homogenates was determined by the Lowry assay and homogenates adjusted to equivalent protein concentrations for use in cGK assays. cGK activity was assayed with and without cGMP for each tubule preparation. Two reaction mixes were prepared with and without the addition of 1 μmol l-1 cGMP, containing 25 mmol l-1 Tris, pH 7.4, 7 mmol l-1 magnesium acetate, 1 mmol l-1 EDTA, 2 mmol l-1 EGTA, 0.2 mg ml-1 GLASStide [RKRSRAE, a heptapeptide cGK-specific substrate; Calbiochem, Beeston, UK(Hall et al., 1999)], 20μmol l-1 ATP, 0.5-2 ml [γ-32P]dATP (370 MBqμl-1, to an approximate specific activity of 4000 c.p.m. pmol-1 ATP), 1 nmol l-1 PKI (PKA inhibitor,TYADFIASGRTGRRNAI-NH2) and 1 mmol l-1 dithiothreitol(DTT). For each reaction, 40 μl reaction buffer was added to 5 μl(approximately 30 μg protein) tubule sample. This was done with both cGMP-containing (+cGMP) and cGMP-absent (-cGMP) buffer. Each tubule preparation was assayed as 2-4 separate reactions within the experiment. However, all cGK experiments were carried out on several separate biological replicates in order to obtain statistically sound data. Sample blanks were generated using 40 μl reaction buffer and 5 μl of homogenisation buffer. Reactions were incubated for 30 min at 30°C, after which 35 μl of each sample was spotted onto individual squares of P81 paper (Whatman, Maidstone,Kent). These squares of paper are referred to as reaction samples. In order to determine the specific activity of the radiolabelled ATP at the end of the reaction, several reactions were chosen randomly and 5 μl samples(representative of 1/9 of total counts) of each spotted onto individual squares of P81 paper (total count'), allowed to dry and set aside. 
The reaction samples were washed for 3 × 5 min in 75 mmol l-1 phosphoric acid, then washed once for 15-20 s in ethanol and allowed to dry. All squares of paper, including the 'total count' samples, were then transferred to scintillation vials, with the addition of 3 ml scintillation fluid and counted in a scintillation counter (Beckman, High Wycombe, UK) for 60 s. Specific activity of [γ-32P]ATP was calculated (9 × mean c.p.m. of 'total count' squares / [ATP] in reaction) and used to calculate protein kinase activity (pmol ATP min-1 μg-1 protein), as follows: (sample c.p.m. - sample blank c.p.m.) / (sample volume on filter × reaction time × protein amount × specific activity). Values for 'sample c.p.m.' were based on those obtained by subtracting mean -cGMP values from mean +cGMP values for each set of replicate reactions. ### cG-PDE activity assays Assays for cG-PDE activity in tubules were performed essentially as previously described (Broderick et al., 2003) using 50 tubules (20-30 μg protein) for each sample, assayed in 0.185 kBq ml-1 3H-cGMP in 20 μmol l-1 cGMP, 10 mmol l-1 Tris, 5 mmol l-1 MgCl2, pH 7.4. For cG-PDE assays in heads, six heads from each line were dissected into 100 μl KHEM buffer (50 mmol l-1 KCl, 10 mmol l-1 EGTA, 1.92 mmol l-1 MgCl2, 1 mmol l-1 DTT, 50 mmol l-1 Hepes, pH 7.21, 1 μl Sigma P8340 protease inhibitor cocktail), disrupted with a pestle, sonicated for 10 s and centrifuged at 15 000 g for 5 min at 4°C. Supernatants were assayed for protein concentration, and 50 μl samples (containing 10 μg protein) assayed for cG-PDE activity as for tubule samples. A final substrate concentration of 10 μmol l-1 cGMP was used in reactions, as endogenous Drosophila cG-PDEs are enzymes with high Km (Broderick et al., 2004; Day et al., 2003). Final activity was expressed per mg protein. Protein concentrations were assayed according to standard protocols (Lowry Assay). ### Tubule cyclic GMP assays Cyclic GMP levels were measured in pooled samples of 20 tubules by radioimmunoassay (Amersham Biotrak Amerlex M), as previously described (Dow et al., 1994b). Tubules were pre-incubated with 10-8 mol l-1 of the cG-PDE inhibitor Zaprinast (Calbiochem, Beeston, UK) for 10 min. Where required, capa-1 (10-7 mol l-1) was added to tubules for a further 10 min. Incubations were terminated with ice-cold ethanol and homogenised. Samples were dried down and dissolved in 0.05 mol l-1 sodium acetate buffer (Amersham) and processed for cGMP content according to the manufacturer's protocol. ### Statistics Data are presented as mean ± s.e.m. Where appropriate, the significance of differences between data points was analysed using Student's t-test for unpaired samples, taking P<0.05 as the critical level. ### forR and fors adults show distinct locomotor patterns Published work on the for alleles shows distinct behavioural differences between forR and fors adults on nutritive substances (Pereira and Sokolowski, 1993). We investigated locomotor function of forR and fors adults in small food-containing tubes to determine the behavioural phenotype of the lines in our hands. Flies were assessed using a Trikinetics activity monitor over a 7-day period. Fig. 1 shows the results of such analysis, on day 1 and day 3 of trials, and pooled data for each day. All lines tested (including wild-type Oregon R flies, data not shown) display an overall reduction in activity during the course of the trial. forR animals are significantly more active compared to fors animals, on two different days of testing (Fig. 1Ai,ii, Bi,ii).
Furthermore, these differences were observed throughout the 7-day trial period (data not shown). Although cGK has been shown to be involved with resetting the circadian clock in mammals (Oster et al., 2003), the data shown in Fig. 1 do not indicate any shift in the phase of activity but only in the amplitude: both lines showed similar cycling of activity at all times tested. Finally, analysis of the peak area shows that on both days, forR flies are significantly more active than fors (Fig. 1Aiii, Biii). Thus, fors display the known 'sitter' phenotype in our hands. Fig. 1. Adult forR and fors display distinct locomotor activities. Adult flies were put through activity monitor trials as described. The results show activity over the course of the first day (A) after the initial rest period, of forR (Ai) and fors (Aii) flies, and on day 3 (Bi,ii). Results of analysis of the area under the major peaks (data points 10-25 on day 1; 110-125 on day 3) are shown in (Aiii, Biii). Values (y axis) are arbitrary units ± s.e.m. (N=9-10). *P<0.05. ### Basal and capa-1 stimulated fluid transport rates are modulated in a dg2 allele A significant increase in neuropeptide-induced transport is observed at 50 and 60 min in tubules from fors animals, compared to those from forR (Fig. 2). This is especially pronounced at 60 min, when maximum fluid transport rates are approximately 2 nl min-1 in fors tubules, and approximately 1 nl min-1 in forR tubules. However, no differences are observed in basal transport rates between the two lines. Fig. 2. The fors allele results in a transport phenotype in tubules. Fluid secretion assays were performed on intact tubules from 7-day-old adult forR and fors flies. Basal rates of secretion were measured and capa-1 (10-7 mol l-1) (Kean et al., 2002) added at 30 min (arrow). Secretion rates were monitored for a further 30 min. Fluid secretion rates (nl min-1) are means ± s.e.m. (N=8) for forR tubules (broken line) and fors tubules (solid line). *P<0.05, using t-tests on experimental versus control at each time point separately. ### cGK activity is not significantly perturbed in fors tissue It has been previously shown (Osborne et al., 1997) that cGK activity in the heads of fors mutants is slightly downregulated. In order to test the effect of the fors mutation in tubules, we assayed tubule cGK activity, as well as that from head and body. Measurements of tubule cGK activity using a cGK-specific phosphorylation substrate showed this to be unchanged in fors tubules compared to forR (Fig.
3, P=0.109, unpaired t-test). Similar analyses of bodies also failed to show any difference in cGK activity (Fig. 3, P=0.577, unpaired t-test); however, whilst analyses of head extract appear to show a slight decrease in cGK activity (consistent with published results), this proved not to be statistically significant (Fig. 3, P=0.148, unpaired t-test). Fig. 3. Cyclic GMP-dependent kinase (cGK) activity in fors tissue. cGK activity was assayed in head, body and tubule preparations from forR and fors 7-day-old adults, as described, in the absence and presence of cGMP. Data were corrected for specific cGMP-dependent activity, and fors data expressed as a % of forR. Values are means ± s.e.m., N=8-12. Mean cGMP-dependent activities (pmol ATP min-1 mg-1 protein) for forR flies were: 15.4±1.3 (heads), 1.1±0.1 (bodies), 12.9±1.5 (tubules); and fors flies: 12.84±1.1 (heads), 1.2±0.1 (bodies), 13.57±2.6 (tubules). In all experiments, cGK activity (pmol ATP min-1 mg-1 protein) was consistently lower in bodies than in either head or tubules. Given the similarities in cGK activity between forR and fors tubules, the epithelial phenotype characterised in the studies shown in Fig. 2 would appear to be due to modulation of other regulatory components. ### The fors allele impacts on cG-PDE activity We have previously shown that the degradation of cGMP by cG-PDE activity is critical in the regulation of fluid transport by the tubule (Broderick et al., 2003, 2004). Given this, we set out to assess any impact of the fors allele on cG-PDE activity in both tubules and heads. We identified a small, but nevertheless significant, increase in the cG-PDE activity of tubules from fors compared to forR flies (Fig. 4). In contrast to this, there is no significant difference in cG-PDE activity between heads from the forR and fors lines (P=0.73, unpaired t-test). Fig. 4. fors tubules display increased basal cGMP-phosphodiesterase (cG-PDE) activity. cG-PDE activity was assayed in preparations from head (open bars) and tubule (grey bars) from forR and fors animals as described. To aid comparison between head and tubule samples, cG-PDE activity in forR is taken as 100%, and fors activity normalised against this, for each tissue (means ± s.e.m., N=4-6), *P<0.05. forR cG-PDE activities: head, 211±23 pmol min-1 mg-1; tubule, 642±72 pmol min-1 mg-1 protein. 
### fors tubules contain reduced cGMP cGMP is a direct modulator of fluid transport by Malpighian tubules (Dow et al., 1994b), where the use of selective inhibitors has identified cG-PDE activity as playing a key regulatory role by manipulating cGMP levels in tubules. Here we identify differences in cG-PDE activity in fors flies (Fig. 4). To determine if this could influence resting cGMP levels, we assessed the cGMP content in tubules from both forR and fors lines (Fig. 5). Intriguingly, basal cGMP levels are significantly reduced in fors (28±6 fmol 20-1 tubules) compared to forR tubules (40±4 fmol 20-1 tubules). By contrast, cGMP levels are elevated to the same levels in both lines upon stimulation by the nitridergic peptide, capa-1 (forR: 76±11; fors: 69±12 fmol 20-1 tubules, Fig. 5). This suggests that in fors tubules there is a greater increase in cGMP content compared to that in forR tubules, in response to capa-1 (forR: approx. 187% stimulation; fors: approx. 250% stimulation). Also, the fors allele does not compromise the ability of these tubules to synthesise cGMP upon hormonal stimulation. Fig. 5. Resting cGMP content is reduced in fors tubules. cGMP content of forR (unshaded bars) and fors (grey bars) tubules was assayed (20 per sample) as described. Tubule samples were either untreated or treated with capa-1 (10-7 mol l-1) (Kean et al., 2002) for 10 min prior to terminating the reaction. Data are expressed as cGMP content (fmol 20-1 tubules), mean ± s.e.m., N=4. *Statistically significant data between forR and fors (P<0.05). ### Capa modulation of fluid transport via cGMP: downregulating cG-PDE The novel transport phenotype identified here in fors tubules is observed upon stimulation with exogenously added cGMP (Dow and Davies, 2001) and also with capa-1 (Fig. 2). We thus assayed capa-1 stimulated cGK and cG-PDE activity to determine if a change in their activities may play a role in the phenotype. Fig. 6 shows a small reduction in cGK activity in capa-1-stimulated forR tubules (Fig. 6A). Interestingly, however, no change in capa-1-stimulated cGK activity is observed in fors flies (Fig. 6A). Thus, capa-1 modulation of cGK activity in fors tubules is not measurable. Fig. 6. cG-PDE activity is significantly inhibited by capa-1 peptide. cGK (A) and cG-PDE (B) activities were assayed in tubule preparations from forR and fors animals as described, under control and capa-1-stimulated conditions. cGK activity was assessed in the presence of cGMP. Tubules were pre-treated with capa-1 (10-7 mol l-1) (Kean et al., 2002) for 10 min, prior to homogenisation and sample preparation. In order to aid comparison, data for both cGK and cG-PDE activity in the presence of capa-1 are expressed as % of untreated activity, mean ± s.e.m. (N=4-6). *Statistically significant data between controls (100%) and capa-1-treated samples (P<0.05).
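As a quick arithmetic check of the fold-stimulation figures quoted above, the sketch below (illustrative only, using the rounded group means given in the text) reproduces the quoted percentages to within rounding.

```python
# Quick arithmetic check of the fold-stimulation figures quoted above,
# using the rounded group means from the text (fmol cGMP per 20 tubules).
basal = {"forR": 40, "fors": 28}   # resting cGMP content
capa1 = {"forR": 76, "fors": 69}   # capa-1 stimulated cGMP content

for line in ("forR", "fors"):
    pct = 100 * capa1[line] / basal[line]
    print(f"{line}: {pct:.0f}% of basal")
# forR: 190% of basal, fors: 246% of basal -- consistent with the
# ~187% and ~250% stimulation quoted in the text (which used unrounded means).
```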
In contrast to this, capa-1 treatment results in a significant reduction in cG-PDE activity in both forR (approx. 37% reduction, Fig. 6B) and fors (approx. 70% reduction, Fig. 6B). cG-PDE activity was also reduced in wild-type Oregon R tubules to a similar extent as forR (data not shown). The extent of inhibition of cG-PDE activity was greater in fors tubules compared to forR (Fig. 6B; cG-PDE activities, expressed as pmol GMP min-1 mg-1 protein: fors control, 798±31; fors capa-1 treated, 238±21; forR control, 642±72; forR capa-1 treated, 407±64). Under such conditions, reduced cG-PDE activity can be expected to result in maintenance of high intracellular cGMP levels (Broderick et al., 2003), ultimately resulting in elevated fluid transport rates upon capa-1 stimulation in fors. Previous work has shown that cGMP signalling, and the action of cG-PDE, is critical for epithelial transport (Broderick et al., 2003, 2004; Davies et al., 1995, 1997; Dow et al., 1994b). Work in vertebrates has also shown that cGKII is necessary for the correct physiological function of several epithelial tissues (French et al., 1995; Gambaryan et al., 1996; Pfeifer et al., 1996). Thus, in order to further define the role of cGMP signalling in epithelial transport, we compared two naturally occurring polymorphic alleles of Drosophila dg2, forR and fors (de Belle et al., 1989; Sokolowski and Hansell, 1992), as a 10% reduction in head cGK activity in fors animals has been noted by others and suggested as being associated with a behavioural phenotype (Osborne et al., 1997). While we did observe a small inhibition of cGK in the heads of fors adults (and despite phenotypic confirmation of the 'sitter' phenotype), the reduction was not, however, statistically significant (Fig. 2). Nevertheless, this result does not necessarily exclude that changes in dg2 underlie the foraging polymorphism. It is possible that subtle environmental effects could lead to small differences in cGK assessments in different laboratories, or that the difference in cGK activity associated with the polymorphism is relatively modest. Furthermore, compartmentalisation of cAMP and cGMP signalling pathways (Edwards and Scott, 2000; Schlossmann et al., 2000) results in changes in phosphorylation status of proteins associated with such 'pools'. Therefore, measurements of bulk, as opposed to localised, cGK activity may not be sufficient to monitor subtle changes in cGK. Perhaps most obviously, there are two cGK genes in Drosophila, and so even substantial changes in dg2 levels might be undetectable against a high background of dg1 protein. Notwithstanding this, our data indicated that cGK activity was unchanged in tubule and so clearly did not seem to form the basis of the dg2 phenotype observed in tubules. As cGMP signalling has been clearly shown to play a pivotal role in tubule functioning, we reasoned that other components of the cGMP pathway might be involved.
In this regard, cG-PDE provides the sole route for the degradation of the second messenger, cGMP, and as such is poised to play a key regulatory role in controlling cGMP signalling in cells. Indeed, we have shown this to be the case in tubules (Broderick et al., 2004, 2003; Dow et al., 1994b), prompting us to probe for any role of cG-PDE activity in this tubule phenotype. Analysis of cG-PDE activity in forR and fors adults showed that cG-PDE activity is affected in several ways in the dg2 mutation. Firstly, in fors animals, basal cG-PDE activity is increased in tubules, although not in head, which results in decreased cGMP levels. This is consistent with the role of cG-PDE in maintaining cGMP homeostasis. Secondly, capa-1 stimulated cGMP levels are increased to similar amounts in both lines, suggesting that the increase in cGMP is greater in fors compared to forR, which may implicate differential regulation of cG-PDE activity by capa-1 in fors tubules. This is confirmed by the data on capa-1 modulation of cGK and cG-PDE (Fig. 6). cG-PDE activity is depressed by capa-1, resulting in increased cGMP content. This occurs to a greater extent in fors, however, resulting in the greater fold stimulation in cGMP content. This would appear to provide a rationale for the transport phenotype (hypersensitivity to capa-1) observed in fors tubules. The effects of capa-1 on cG-PDE activity are consistent with its role in stimulated fluid transport, and show that the mode of action of this peptide is not merely to stimulate cGMP production via soluble guanylate cyclase (Kean et al., 2002), but also to act via the potent inhibition of cGMP breakdown. Manduca sexta CAP2b (Davies et al., 1995), another member of the capa family (Kean et al., 2002), also acts to inhibit cG-PDE in Oregon R, forR and fors tubules (data not shown). Thus, the inhibition of cG-PDE may be a general mechanism of action by the capa family of nitridergic peptides. Previous work shows that regulation of cGMP breakdown via cG-PDE, as opposed to cGMP synthesis, is a powerful modulator of fluid transport in tubules (Broderick et al., 2003), suggesting that cG-PDE(s) have a central role in epithelial transport and are thus candidate targets for nitridergic peptide action. An analogous situation exists for cAMP signalling in tubules. A cAMP-mobilising hormone, Corticotrophin-like Releasing Factor (CRF), has been shown to modulate cAMP-specific PDE activity in tubules (Cabrero et al., 2002). Thus in insects, as in vertebrates, modulation of PDEs by specific hormones is an effective signalling mechanism (Dousa, 1999). No measurable change in cGK activity was observed in capa-stimulated fors tubules, which suggests that the neuropeptide-stimulated epithelial phenotype in fors tubules is entirely due to cG-PDE. Modulation of cGK activity is not implicated in this process. However, in forR, a small but significant decrease in tubule cGK activity is observed upon capa-1 stimulation, for which there is currently no explanation. How can polymorphism at the for locus act on a functionally related, but physically remote, gene? The mapping of for to the region containing dg2 is authoritative (Osborne et al., 1997); and although some alleles (e.g. gamma irradiation-induced) might be expected to impact on neighbouring genes as well as dg2 (there are several genes within 10 kb of for), there is no cyclic nucleotide phosphodiesterase within megabases of the for locus.
Additionally, differences in cGK levels between the non-lethal alleles of for are either modest(Osborne et al., 1997), or undetectable (this work), yet there is still an impact on cGMP signalling. We propose that a solution is offered by the concept of feedback; in order to maintain signal integrity, relatively modest changes in cGK activity elicit relatively large changes in cG-PDE, so compensating for differences in kinase levels. PDEs undergo post-translational modification by phosphorylation,interactions with other proteins and by proteolytic cleavage(Francis et al., 2001). It is possible that small changes in cGK can profoundly affect the activity of cG-PDE in tubule. Thus the polymorphism at the for locus may indeed act to modulate cGMP signalling, but through an unexpected route. This concept is consistent with a previous observation that tubules in which nitric oxide synthase was overexpressed by around twofold showed only a modest increase in stimulated secretion, because cG-PDE was upregulated by nearly tenfold(Broderick et al., 2003). Accordingly, there is evidence that, in at least this tissue, cG-PDE activity can vary over quite a wide range in order to compensate for relatively modest perturbations elsewhere in the pathway. We have thus uncovered a central role for cG-PDE in tubules of a dg2 allele. Furthermore, we show that the capa peptides modulate cG-PDE activity as an effective mechanism of increasing cGMP content in vivo. There are now obvious and exciting avenues for further study: it may be that modulation of cG-PDE may provide an interesting and general explanation for the effects of the foraging polymorphisms in other contexts, such as susceptibility to parasitism and neuronal activity. We would like to thank Professor N. Pyne, University of Strathclyde for reagents for PDE assays, Professor M. Sokolowski, University of Toronto at Mississauga, for the kind gift of forR and forS lines, and Dr P. Rosay, University of Bordeaux, for helpful advice on locomotion assays. This work was supported by the Biotechnology and Biological Sciences Research Council (BBSRC), UK and a Wellcome Trust Studentship to J.P.D. Broderick, K. E., Kean, L., Dow, J. A. T., Pyne, N. J. and Davies, S. A. ( 2004 ). Ectopic expression of bovine type 5 phosphodiesterase confers a renal phenotype in Drosophila. J. Biol. Chem. 279 , 8159 -8168. Broderick, K. E., MacPherson, M. R., Regulski, M., Tully, T.,Dow, J. A. T. and Davies, S. A. ( 2003 ). Interactions between epithelial nitric oxide signaling and phosphodiesterase activity in Drosophila. Am. J. Physiol. 285 , C1207 -C1218. Cabrero, P., Radford, J. C., Broderick, K. E., Veenstra, J.,Spana, E., Davies, S. and Dow, J. A. T. ( 2002 ). The CRF gene of Drosophila melanogaster encodes a diuretic peptide that activates cAMP signalling. J. Exp. Biol. 205 , 3799 -3807. Davies, S.-A. ( 2000 ). Nitric oxide signalling in insects. Insect Biochem. Mol. Biol. 30 , 1123 -1138. Davies, S. A., Huesmann, G. R., Maddrell, S. H. P., O'Donnell,M. J., Skaer, N. J. V., Dow, J. A. T. and Tublitz, N. J.( 1995 ). CAP2b, a cardioacceleratory peptide, is present in Drosophila and stimulates tubule fluid secretion via cGMP. Am. J. Physiol. 269 , R1321 -R1326. Davies, S. A., Stewart, E. J., Huesmann, G. R., Skaer, N. J. V.,Maddrell, S. H. P., Tublitz, N. J. and Dow, J. A. T. ( 1997 ). Neuropeptide stimulation of the nitric oxide signaling pathway in Drosophila melanogaster Malpighian tubules. Am. J. Physiol. 42 , R823 -R827. Day, J. P., Houslay, M. D. and Davies, S. 
A.( 2003 ). Cloning and characterisation of a novel cGMP-specific phosphodiesterase from Drosophila melanogaster. BMC Meeting Abstracts: 1st International Conference on cGMP: NO/sGC Interaction and its Therapeutic Implications 1 , p: 0014 . de Belle, J. S., Hilliker, A. J. and Sokolowski, M. B.( 1989 ). Genetic localization of foraging (for): a major gene for larval behavior in Drosophila melanogaster. Genetics 123 , 157 -163. de Belle, J. S., Sokolowski, M. B. and Hilliker, A. J.( 1993 ). Genetic analysis of the foraging microregion of Drosophila melanogaster. Genome 36 , 94 -101. Dousa, T. P. ( 1999 ). Cyclic-3′,5′-nucleotide phosphodiesterase isozymes in cell biology and pathophysiology of the kidney. Kidney Int. 55 , 29 -62. Dow, J. A. T. ( 1999 ). The multifunctional Drosophila melanogaster V-ATPase is encoded by a multigene family. J. Bioenerget. Biomembr. 31 , 75 -83. Dow, J. A. T. and Davies, S. A. ( 2001 ). The Drosophila melanogaster Malpighian tubule. 28 , 1 -83. Dow, J. A. T. and Davies, S. A. ( 2003 ). Integrative physiology, functional genomics and epithelial function in a genetic model organism. Physiol. Rev. 83 , 687 -729. Dow, J. A. T., Maddrell, S. H., Davies, S. A., Skaer, N. J. and Kaiser, K. ( 1994b ). A novel role for the nitric oxide-cGMP signaling pathway: the control of epithelial function in Drosophila. Am. J. Physiol. 266 , R1716 -1719. Dow, J. A. T., Maddrell, S. H., Gortz, A., Skaer, N. J., Brogan,S. and Kaiser, K. ( 1994a ). The malpighian tubules of Drosophila melanogaster: a novel phenotype for studies of fluid secretion and its control. J. Exp. Biol. 197 , 421 -428. Edwards, A. S. and Scott, J. D. ( 2000 ). A-kinase anchoring proteins: protein kinase A and beyond. Curr. Opin. Cell Biol. 12 , 217 -221. Foster, J. L., Higgins, G. C. and Jackson, F. R.( 1996 ). Biochemical properties and cellular localization of the Drosophila DG1 cGMP-dependent protein kinase. J. Biol. Chem. 271 , 23322 -23328. Francis, S. H., Turko, I. V. and Corbin, J. D.( 2001 ). Cyclic nucleotide phosphodiesterases: relating structure and function. Prog. Nucl. Acid Res. Mol. Biol. 65 , 1 -52. French, P. J., Bijman, J., Edixhoven, M., Vaandrager, A. B.,Scholte, B. J., Lohmann, S. M., Nairn, A. C. and de Jonge, H. R.( 1995 ). Isotypespecific activation of cystic fibrosis transmembrane conductance regulatorchloride channels by cGMP-dependent protein kinase II. J. Biol. Chem. 270 , 26626 -26631. Gambaryan, S., Hausler, C., Markert, T., Pohler, D., Jarchau,T., Walter, U., Haase, W., Kurtz, A. and Lohmann, S. M.( 1996 ). Expression of type II cGMP-dependent protein kinase in rat kidney is regulated by dehydration and correlated with renin gene expression. J. Clin. Invest. 98 , 662 -670. Hall, K. U., Collins, S. P., Gamm, D. M., Massa, E.,DePaoli-Roach, A. A. and Uhler, M. D. ( 1999 ). Phosphorylation-dependent inhibition of protein phosphatase-1 by G-substrate. A Purkinje cell substrate of the cyclic GMP-dependent protein kinase. J. Biol. Chem. 274 , 3485 -3495. Kalderon, D. and Rubin, G. M. ( 1989 ). cGMP-dependent protein kinase genes in Drosophila. J. Biol. Chem. 264 , 10738 -10748. Kean, L., Cazenave, W., Costes, L., Broderick, K. E., Graham,S., Pollock, V. P., Davies, S. A., Veenstra, J. A. and Dow, J. A. T.( 2002 ). Two nitridergic peptides are encoded by the gene capability in Drosophila melanogaster. Am. J. Physiol. 282 , R1297 -R1307. Osborne, K. A., Robichon, A., Burgess, E., Butland, S., Shaw, R. A., Coulthard, A., Pereira, H. S., Greenspan, R. J. and Sokolowski, M. B.( 1997 ). 
Natural behaviour polymorphism due to a cGMP-dependent protein kinase of Drosophila. Science 277 , 834 -836. Oster, H., Werner, C., Magnone, M. C., Mayser, H., Feil, R.,Seeliger, M. W., Hofmann, F. and Albrecht, U. ( 2003 ). cGMP-dependent protein kinase II modulates mPer1 and mPer2gene induction and influences phase shifts of the circadian clock. Curr. Biol. 13 , 725 -733. Pereira, H. S. and Sokolowski, M. B. ( 1993 ). Mutations in the larval foraging gene affect adult locomotory behavior after feeding in Drosophila melanogaster. 90 , 5044 -5046. Pfeifer, A., Aszodi, A., Seidler, U., Ruth, P., Hofmann, F. and Fassler, R. ( 1996 ). Intestinal secretory defects and dwarfism in mice lacking cGMP-dependent protein kinase II. Science 274 , 2082 -2086. Schlossmann, J., Ammendola, A., Ashman, K., Zong, X., Huber, A.,Neubauer, G., Wang, G. X., Allescher, H. D., Korth, M., Wilm, M. et al.( 2000 ). Regulation of intracellular calcium by a signalling complex of IRAG, IP3 receptor and cGMP kinase Ibeta. Nature 404 , 197 -201. Sokolowski, M. B. and Hansell, K. P. ( 1992 ). The foraging locus: behavioral tests for normal muscle movement in rover and sitter Drosophila melanogaster larvae. Genetica 85 , 205 -209. Vaandrager, A. B. and de Jonge, H. R. ( 1996 ). Signalling by cGMP-dependent protein kinases. Mol. Cell. Biochem. 157 , 23 -30.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7008695602416992, "perplexity": 23897.312738233803}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030331677.90/warc/CC-MAIN-20220924151538-20220924181538-00101.warc.gz"}
http://mathhelpforum.com/math-topics/4581-complex-number-finding-roots-print.html
# Complex number-finding roots

• July 31st 2006, 01:32 AM
kingkaisai2
Complex number-finding roots
Find the roots of the equation z^2 = 21 - 20i

• July 31st 2006, 04:30 AM
Soroban
Hello, kingkaisai2!
Edit: I corrected my wrong sign below ... Sorry for any confusion it caused.
I don't know what methods you've been taught . . .

Quote:
Find the roots of the equation $z^2 = 21 - 20i$

Let $z = a + bi$, where $a$ and $b$ are real.
We have: $(a + bi)^2 = 21 - 20i$
Then: $a^2 + 2abi + b^2i^2 = 21 - 20i \quad\Rightarrow\quad (a^2 - b^2) + 2abi = 21 - 20i$
Equate real and imaginary components: $\begin{array}{cc}a^2 - b^2 = 21 & (1)\\ 2ab = -20 & (2)\end{array}$
From (2), we have: $b = -\frac{10}{a}$
Substitute into (1): $a^2 - \left(-\frac{10}{a}\right)^2 = 21 \quad\Rightarrow\quad a^2 - \frac{100}{a^2} = 21$
Multiply by $a^2$: $a^4 - 100 = 21a^2 \quad\Rightarrow\quad a^4 - 21a^2 - 100 = 0$
Factor: $(a^2 + 4)(a^2 - 25) = 0$
We have two equations to solve:
. . $a^2 + 4 = 0 \quad\Rightarrow\quad a^2 = -4\;\cdots\text{ but } a \text{ must be real.}$
. . $a^2 - 25 = 0 \quad\Rightarrow\quad a^2 = 25 \quad\Rightarrow\quad a = \pm 5$
Substitute into (2): $b = -\frac{10}{\pm 5} = \mp 2$
Therefore: $\boxed{z \;=\; \{5 - 2i,\; -5 + 2i\}}$

• July 31st 2006, 11:02 AM
Quick
Quote:
Originally Posted by Soroban
Hello, kingkaisai2!
I don't know what methods you've been taught . . .
Let $z = a + bi$, where $a$ and $b$ are real.
We have: $(a + bi)^2 = 21 - 20i$
Then: $a^2 + 2abi - b^2i^2 = 21 - 20i \quad\Rightarrow\quad (a^2 - b^2) + 2abi = 21 - 20i$

shouldn't it be $(a^2+b^2)+2abi=21-20i$? because i^2 equals -1
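Not part of the original thread: a quick numerical check of the result with std::complex. Nothing below comes from the posts themselves; it just squares the candidate root and asks the library for the principal square root of 21 - 20i.

```cpp
#include <complex>
#include <iostream>

int main() {
    const std::complex<double> w(21.0, -20.0);   // the right-hand side from the problem
    const std::complex<double> z(5.0, -2.0);     // candidate root worked out in the thread

    std::cout << "z*z       = " << z * z << '\n';        // expect (21,-20)
    std::cout << "(-z)*(-z) = " << (-z) * (-z) << '\n';  // the other root squares to the same value
    std::cout << "sqrt(w)   = " << std::sqrt(w) << '\n'; // principal square root, approximately (5,-2)
    return 0;
}
```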
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 26, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9882603883743286, "perplexity": 4247.262601359869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982292181.27/warc/CC-MAIN-20160823195812-00173-ip-10-153-172-175.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/converting-binary-to-mips.468938/
# Converting Binary to MIPS

1. The problem statement, all variables and given/known data
Hey everybody! Here is my question:
Translate the following machine code instructions to MIPS assembly. What is the format of each instruction?
1. 0000 0000 0000 1010 0100 1010 1000 0000
2. 0000 0000 0010 0010 0010 0000 0010 0101

2. Relevant equations

3. The attempt at a solution
Now when trying to figure out the solution of the first, I looked into my book and found an opcode which was the same as the first 6, which was add... Now I figured out the format and got this as a result: add $t1, $zero, $t2
Is that right? I was unsure because I kinda felt like I left the last part out... I figured out 0000 00 | 00000 | 01010 | 01001 | but I didn't know if I had to do anything with the ending function --- 101 1000 0000?
Also, on the second question I am having trouble figuring it out... I know that the opcode and the function match up to encode a certain type, but when I'm looking for both the opcode and function together in my book/online they only have the opcode? If this is the case then it is another add, right?
Your help is appreciated!

2. ### Mark44 (Staff: Mentor)
I found 22 separate instructions that start with 000000. Go through those and find the one that matches the bits at the end.

3. ### basketball853
Okay... so then is that a shift? I'm looking at this website http://www.student.cs.uwaterloo.ca/~isg/res/mips/opcodes
For the last part of the function, am I just matching the last 6, which would be 000000? And the 5 before that would be the shamt (shift) amount?

4. ### Mark44 (Staff: Mentor)
Looks like it to me - SLL (shift left logical) for the first one.
0000 00ss ssst tttt dddd dhhh hh00 0000
hhhhh is the shift amount
ttttt is the register whose value is shifted left
sssss is the register where the shifted result is stored

5. ### basketball853
Okay cool, so I have:
OP = sll
$t2 = ttttt
$t1 = dddd
$t2 = h
sll $t1, $zero, $t2
Would that be correct?
Last edited: Feb 1, 2011

6. ### basketball853
Also, along with the second one, I figured it would be: or $a0, $at, $v0
Thank you for everything so far!
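Not part of the thread, but the bit-slicing Mark44 describes is easy to sanity-check with a few shifts and masks. The sketch below assumes the standard MIPS R-format field layout (6/5/5/5/5/6 bits); the two hex constants are simply the problem's binary words re-encoded in hexadecimal.

```cpp
#include <cstdint>
#include <cstdio>

// Split a 32-bit word into the standard MIPS R-format fields:
// opcode(6) | rs(5) | rt(5) | rd(5) | shamt(5) | funct(6)
void decode_rtype(std::uint32_t w) {
    unsigned opcode = (w >> 26) & 0x3F;
    unsigned rs     = (w >> 21) & 0x1F;
    unsigned rt     = (w >> 16) & 0x1F;
    unsigned rd     = (w >> 11) & 0x1F;
    unsigned shamt  = (w >> 6)  & 0x1F;
    unsigned funct  =  w        & 0x3F;
    std::printf("opcode=%u rs=%u rt=%u rd=%u shamt=%u funct=0x%02X\n",
                opcode, rs, rt, rd, shamt, funct);
}

int main() {
    decode_rtype(0x000A4A80u);  // first instruction word from the problem statement
    decode_rtype(0x00222025u);  // second instruction word from the problem statement
    return 0;
}
```

The printed register numbers can then be looked up in the opcode table linked in the thread.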
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6202511787414551, "perplexity": 1043.3997100026743}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065488.33/warc/CC-MAIN-20150827025425-00333-ip-10-171-96-226.ec2.internal.warc.gz"}
https://www.fauca.org/producto/social-security-for-dummies/
On sale!
# Merchants In The Temple
2.93 out of 5 based on 14 customer ratings (406 customer reviews)
£45.00 £40.00
SKU: SSFD100
## Product description
This book by the Heath brothers takes a deep dive into how our brains make decisions, the ways we make mistakes in our decisions, and how to fix these issues. While to some extent you might frame this as a "behavioral finance" book, the reality is that "Decisive" is not just a bunch of theory and research about the dumb mistakes we make because of how our brains are wired, but a remarkably practical look at how we might actually do things differently to combat these challenges.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.18119017779827118, "perplexity": 16453.5074826172}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500294.64/warc/CC-MAIN-20230205224620-20230206014620-00465.warc.gz"}
https://yingnanwang.com/misc/2020-10-06-low-pass-filter/
# A Low-Pass Filter Guide for Non-DSP Engineer

A simple implementation of a digital low-pass filter in C++

Life without powerful DSP tools like MATLAB can be very tough, especially for engineers who need to process real-world data in a production environment. Digital filtering of continuous data is a very common use case in a lot of user-interface rendering. Among all the digital signal processing techniques, the low-pass filter is the most fundamental one and can smooth out noise or unwanted jitter in a data sequence. This article will discuss designing a C++ low-pass filter from scratch.

### Digital Filter Basics

Filter design is all about needs. There are many choices and parameters to tweak, so knowing the design requirements is very important before we even start. Some typical digital filter spec parameters are passband cutoff frequency, stopband cutoff frequency, peak ripple, attenuation, and gain. The cutoff frequency is defined as the frequency at which the gain magnitude drops by 3 dB.

There are basically two types of digital filters: the Infinite Impulse Response (IIR) filter and the Finite Impulse Response (FIR) filter.

| IIR | FIR |
| --- | --- |
| Designed around a feedback loop; not guaranteed to be stable | Designed around a sliding window; hard to design but stable. Image Gaussian blur can be understood as a 2D FIR filter |
| Usually faster in response because of the feedback-loop design | Usually worse than an IIR of the same order |
| For IIR design, you can read the rest of this article | For FIR kernel design, you can use the scipy Python package |

Also, we cannot break the laws of physics, so there must be some delay in digital filtering in the time domain. The delay is correlated with the order of the filter. For example, if the order is 3, then we expect to see 3 frames of delay. To obtain the full sequence of processed data, we need some phase compensation for the delay at the end. Also, the higher the order, the faster the response damping.

### Low-pass Filter

A low-pass filter is a filter that only allows low-frequency components of the signal to pass. We can use it to remove spikes in a curve, erase high-frequency components to blur images, and denoise audio. Other kinds of filters, like high-pass or band-pass, are designed in a similar fashion, so we only focus on the low-pass filter in this article.

There are multiple flavors of low-pass filter. The two most common are Butterworth and Chebyshev; the difference between them is that they use different mathematical formulas to characterize the frequency response curve.

| Butterworth | Chebyshev Type I |
| --- | --- |
| Slow damping | Faster damping, but has ripple |

Here we choose the Butterworth low-pass filter. To design a Butterworth filter we just need the order N and the cutoff frequency Wc, or we can use the passband cutoff and stopband cutoff to calculate N and Wc. Notice that when the Butterworth order is 2, the filter is also called a Biquad filter; it is very handy for cascading and building filter blocks. The magnitude response of an N-th order Butterworth low-pass filter is $|H(j\omega)| = \frac{1}{\sqrt{1 + (\omega/W_c)^{2N}}}$, and this is what the Bode plots of the Butterworth low-pass filter and the other filter types compare. From this frequency response formula, we can see that when w = Wc the gain is 0.707, which is -3 dB. Also, the higher the frequency, the lower the gain.

### S-Domain

In the above section, we designed our Butterworth filter in the frequency domain, because it is straightforward to characterize the response curve there to meet our requirements.
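The frequency-domain characterization above is easy to check numerically. The small sketch below is not part of the original post; it evaluates the Butterworth magnitude formula for a few orders, and the 100 Hz cutoff is an arbitrary example value.

```cpp
#include <cmath>
#include <cstdio>

// |H(jw)| for an N-th order Butterworth low-pass filter with cutoff wc (rad/s)
double butterworth_gain(double w, double wc, int N) {
    return 1.0 / std::sqrt(1.0 + std::pow(w / wc, 2 * N));
}

int main() {
    const double wc = 2.0 * M_PI * 100.0;  // example cutoff: 100 Hz
    for (int N : {1, 2, 3}) {
        // at w = wc the gain is always ~0.707 (-3 dB); above wc, higher orders fall off faster
        std::printf("N=%d  gain at wc = %.3f  gain at 4*wc = %.4f\n",
                    N, butterworth_gain(wc, wc, N), butterworth_gain(4.0 * wc, wc, N));
    }
    return 0;
}
```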
However, when implementing the actual digital filter as an LTI system, engineers usually analyze the transfer function in the S-domain, where we can perform Nyquist stability criterion analysis. Also, since our data is digital and therefore discrete, we will then convert the transfer function to the Z-domain.

To determine the transfer function in the S-domain, we use the complex-conjugate symmetry property of the frequency response. We then obtain the constraint equation $H(s)H(-s)\big|_{s=j\omega} = |H(j\omega)|^2 = \frac{1}{1 + (\omega/W_c)^{2N}}$.

Then we solve this equation to find the poles of $H(s)H(-s)$. To keep the system stable, we assign to H(s) only the poles in the left (negative real) half of the S-plane, and from those poles we can eventually write down the transfer function in the S-domain.

Probably you, like me, are already lost in the last paragraph. The good news is that there is a normalized form of this transfer function for each filter order. You can find the reference chart here: Butterworth filter - Wikipedia. To use it we just need to select an order and substitute s with s/Wc. Now we successfully have our low-pass filter in the S-domain as H(s).

### Z-Domain

Since the H(s) we obtained is an analog filter, we need to map it to the discrete Z-domain to obtain a digital filter. The common methods are Impulse Invariance and the Bilinear Transform. Here we use the Bilinear Transform, which is a first-order approximation that maps the S-domain to the Z-domain. Once we are in the Z-domain, it is very easy to convert the filter to a digital circuit or a software algorithm. The Bilinear Transform just substitutes the s in H(s) with $s \leftarrow \frac{2}{T}\cdot\frac{1 - z^{-1}}{1 + z^{-1}}$; with the prototype normalized to Wc = 1 and the cutoff prewarped, this becomes $s \leftarrow \frac{1 - z^{-1}}{K\,(1 + z^{-1})}$ with $K = \tan(\pi F_c)$, where $F_c$ is the cutoff frequency divided by the sample rate.

The Z-domain representation of a Biquad low-pass filter is $H(z) = \frac{a_0 + a_1 z^{-1} + a_2 z^{-2}}{1 + b_1 z^{-1} + b_2 z^{-2}}$. A complete example can be found in the EarLevel article.

### Digital Representation of the Filter

Now we need to convert this discrete-domain filter into a network of logic blocks. There are multiple design principles here, like the Direct Form and the Transposed Direct Form. Either form can work; the major difference is how many actual logic blocks each form uses. The Z^-1 block is the delay block. In order to reduce the number of delay blocks, and thus the memory usage of the filter, we can use the Transposed Direct Form II design.

Using that block diagram, we can convert the Z-domain transfer function into a difference equation. For the Biquad the difference equation is $y[n] = a_0 x[n] + a_1 x[n-1] + a_2 x[n-2] - b_1 y[n-1] - b_2 y[n-2]$. We can then resolve the unknown parameters a and b, substitute them into the difference equation, and we have our final digital low-pass filter in the time domain.
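To bridge the prose above to the code below, here is a sketch of the order-3 algebra (not from the original post), assuming the normalized Butterworth prototype and the prewarped bilinear substitution described above:

```latex
% 3rd-order normalized (Wc = 1) Butterworth prototype
H(s) = \frac{1}{(s+1)(s^2+s+1)} = \frac{1}{s^3 + 2s^2 + 2s + 1}

% Bilinear substitution s <- (1 - z^{-1}) / (K (1 + z^{-1})), with K = tan(pi * Fc),
% then multiply numerator and denominator by K^3 (1 + z^{-1})^3:
H(z) = \frac{K^3\,(1 + 3z^{-1} + 3z^{-2} + z^{-3})}
            {(K^3 + 2K^2 + 2K + 1)
             + (3K^3 + 2K^2 - 2K - 3)\,z^{-1}
             + (3K^3 - 2K^2 - 2K + 3)\,z^{-2}
             + (K^3 - 2K^2 + 2K - 1)\,z^{-3}}
```

Dividing numerator and denominator by the constant term K^3 + 2K^2 + 2K + 1 (the `norm` factor) gives exactly the a0-a3 and b1-b3 coefficients used in the code below.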
### C++ code for 3rd-order Butterworth Filter for a float sequence

#include <cmath>   // std::tan; M_PI may require _USE_MATH_DEFINES on some compilers
#include <vector>

constexpr float Wc = 0.2f; // normalized cutoff frequency, i.e. fc / sample rate
const float K = std::tan(static_cast<float>(M_PI) * Wc); // bilinear prewarp (std::tan is not constexpr)
const float norm = 1 / (K*K*K + 2*K*K + 2*K + 1);
const float a0 = K*K*K*norm;
const float a1 = 3 * a0;
const float a2 = a1;
const float a3 = a0;
const float b1 = (3*K*K*K + 2*K*K - 2*K - 3) * norm;
const float b2 = (3*K*K*K - 2*K*K - 2*K + 3) * norm;
const float b3 = (K*K*K - 2*K*K + 2*K - 1) * norm;

// z1..z3 are the delay memory blocks of the Transposed Direct Form II structure,
// p0..p2 cache the three most recent raw inputs for phase compensation
float z1 = 0, z2 = 0, z3 = 0;
float p0 = 0, p1 = 0, p2 = 0;

bool onReceiveData(float input, float& output) {
    output = input * a0 + z1;
    z1 = input * a1 + z2 - b1 * output;
    z2 = input * a2 + z3 - b2 * output;
    z3 = input * a3 - b3 * output;

    p0 = p1;
    p1 = p2;
    p2 = input;

    // Since the LPF is not stable on the first N frames:
    // 1) bypass the input for the first N-3 frames
    // 2) cache the input between [N-3, N), no output at all
    // 3) between [N, N+3), blend the output with the cached inputs using index-based weights
    // 4) for the following frames, use the LPF normally
    return true; // the original leaves the flag unspecified; here it simply reports that output was written
}

std::vector<float> getPhaseCompensation() {
    // return the latest 3 inputs, since they have not been processed yet
    return {p0, p1, p2};
}
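A minimal usage sketch, not from the original post: it assumes the declarations above sit in the same translation unit and simply pushes a noisy step through the filter (the warm-up blending described in the comments above is left out).

```cpp
#include <cstdio>

int main() {
    float y = 0.0f;
    for (int n = 0; n < 40; ++n) {
        // a noisy step: 0 for the first 10 samples, then 1 plus a small jitter
        const float x = (n < 10 ? 0.0f : 1.0f) + 0.02f * static_cast<float>((n % 3) - 1);
        onReceiveData(x, y);
        std::printf("%2d  in=%6.3f  out=%6.3f\n", n, x, y);
    }
    // the three most recent inputs are still unfiltered and need phase compensation
    for (float v : getPhaseCompensation()) {
        std::printf("unprocessed tail sample: %6.3f\n", v);
    }
    return 0;
}
```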
http://www.progressingeography.com/CN/abstract/abstract14634.shtml
### Climate change science and sustainable development

Dahe Qin 1,2

1. State Key Laboratory of Cryosphere Science, Chinese Academy of Sciences, Lanzhou 730000, China
2. China Meteorological Administration, Beijing 100081, China

Online: 2014-07-25   Published: 2014-07-25

About the author: Dahe Qin (b. 1947, Tai'an, Shandong) is an academician of the Chinese Academy of Sciences; his research focuses on climate change, the cryosphere, and global change. E-mail: [email protected]

Funding: National Major Scientific Research Program of China (2013CBA01808); National Key Basic Research and Development Program of China (973 Program, 2007CB411507); China Meteorological Administration special program on climate change

Abstract: Since the Fourth Assessment Report (AR4) was released by the Intergovernmental Panel on Climate Change (IPCC) in 2007, new observations have further proved that the warming of the global climate system is unequivocal. Each of the last three decades before 2012 was successively warmer at the global mean surface than any preceding decade since 1850, and 1983-2012 was likely the warmest 30-year period of the last 1400 years. From 1998 to 2012 the rate of warming of the global land surface slowed down, but this does not reflect the long-term trend in climate change. The ocean has warmed: the upper 75 m warmed by more than 0.11℃ per decade since 1970, and over the period 1971 to 2010, 93% of the net energy increase in the Earth's climate system was stored in the oceans. The rate of global mean sea level rise has accelerated, reaching 3.2 mm yr-1 between 1993 and 2010. The anthropogenic carbon stock of the global ocean has likely increased, causing acidification of ocean surface water. Since 1971, the glaciers and the Greenland and Antarctic ice sheets have been losing mass. Since 1979, Arctic sea ice extent decreased by 3.5% to 4.1% per decade, while Antarctic sea ice extent increased by 1.2% to 1.8% per decade over the same period. The extent of Northern Hemisphere snow cover has decreased, and since the early 1980s permafrost temperatures have increased in most regions. Human influence has been detected in the warming of the atmosphere and the ocean, changes in the water cycle, reductions in snow and ice, global mean sea level rise, and changes in climate extremes. The largest contribution to the increase in anthropogenic radiative forcing since 1750 came from the rise in the atmospheric concentration of CO2, which caused more than half of the global warming observed since the 1950s (with 95% confidence). Projections based on the Coupled Model Intercomparison Project Phase 5 (CMIP5) and the Representative Concentration Pathways (RCPs) indicate that the global mean surface temperature will continue to rise through the end of this century, the frequency of extreme events such as heat waves and heavy precipitation will increase, and precipitation will follow a trend of "the dry becomes drier, the wet becomes wetter". The temperature of the upper ocean will increase by 0.6 to 2.0℃ relative to the period 1986 to 2005, heat will penetrate from the surface to the deep ocean and affect ocean circulation, and sea level will rise by 0.26 to 0.82 m by 2100. The cryosphere will continue to warm. To control global warming, humanity needs to reduce greenhouse gas emissions. If the temperature increase exceeds 2℃ above pre-industrial levels, mean annual economic losses worldwide will reach 0.2% to 2.0% of income and cause large-scale irreversible effects, including death, disease, food insecurity, inland flooding and waterlogging, and rural drinking-water and irrigation difficulties that affect human security. With prompt action, however, it is still possible to limit the temperature increase to within 2℃. To keep global warming from running out of control and to achieve sustainable development of human society, global efforts to reduce emissions are needed.
https://converths.com/1500-meters-to-miles-online-calculator-for-units/
How do you usually convert between different units of length, for example 1500 meters to miles? How many miles is 1500 meters?

## 1500 meters is equal to about 0.9320565 miles.

Online unit conversion: 1500 meters in miles

## What is 1500 meters in miles?

Different countries usually use different units of length. There are several internationally agreed systems of measurement, for example the metric system, Imperial units (also known as British Imperial), and the Chinese system of weights and measures. Each system of units is common in particular countries and regions.

## How long is 1500 meters in miles?

So, 1500 meters is equal to how many miles? To get the result we need the basic conversion factors: 1 meter is 0.000621371 miles, and 1 mile is about 1609.3 meters. To convert meters to miles we either multiply by 0.000621371 or divide by 1609.3. Also check out the video below for details about the conversion. You can always round decimals to fewer places; for instance, 0.000621371 miles can be rounded to 0.000621 miles. A short code sketch of the same conversion follows the methods below. Anyway, 1500 meters is how many miles? So,

# Method No. 1:

1 meter = 0.000621371 miles (1 m = 0.000621371 mi)

1500 meters = 1500 × 0.000621371 miles = 0.9320565 miles

1500 meters ≈ 0.932 miles

(PS: m = meter (plural: meters), mi = mile (plural: miles))

# Method No. 2:

1 mile = 1609.3 meters (1 mi = 1609.3 m)

1500 meters = 1500 ÷ 1609.3 miles ≈ 0.93208 miles

1500 meters ≈ 0.932 miles

# Method No. 3:

1 kilometer = 1000 meters (1 km = 1000 m)

1500 meters = 1500 ÷ 1000 kilometers = 1.5 kilometers

1 mile = 1.609 kilometers (1 mi = 1.609 km)

1.5 kilometers = 1.5 ÷ 1.609 miles ≈ 0.932 miles

1500 meters = 1.5 kilometers ≈ 0.932 miles

## 1500 Meters is How Many Miles – Video

Got a different answer? Which unit system do you use or prefer? Leave your comment below, share with a friend and never stop wondering. ❤️
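To make the arithmetic above concrete, here is a small sketch of the same conversion; the function names are purely illustrative. It multiplies by 0.000621371 (Method No. 1) and, as a cross-check, divides by 1609.344 meters per mile (the article rounds this to 1609.3).

#include <cstdio>

constexpr double kMilesPerMeter = 0.000621371;
constexpr double kMetersPerMile = 1609.344;

// Method No. 1: multiply by the miles-per-meter factor.
double metersToMilesByFactor(double meters) {
    return meters * kMilesPerMeter;
}

// Method No. 2: divide by the number of meters in a mile.
double metersToMilesByDivision(double meters) {
    return meters / kMetersPerMile;
}

int main() {
    const double meters = 1500.0;
    std::printf("%.0f m = %.7f mi (multiply)\n", meters, metersToMilesByFactor(meters));
    std::printf("%.0f m = %.7f mi (divide)\n",   meters, metersToMilesByDivision(meters));
    return 0; // both lines print roughly 0.932 miles
}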
https://worldwidescience.org/topicpages/p/puzzle+comparative+bac-mapping.html
#### Sample records for puzzle comparative bac-mapping 1. The European sea bass Dicentrarchus labrax genome puzzle: comparative BAC-mapping and low coverage shotgun sequencing Directory of Open Access Journals (Sweden) Volckaert Filip AM 2010-01-01 Full Text Available Abstract Background Food supply from the ocean is constrained by the shortage of domesticated and selected fish. Development of genomic models of economically important fishes should assist with the removal of this bottleneck. European sea bass Dicentrarchus labrax L. (Moronidae, Perciformes, Teleostei is one of the most important fishes in European marine aquaculture; growing genomic resources put it on its way to serve as an economic model. Results End sequencing of a sea bass genomic BAC-library enabled the comparative mapping of the sea bass genome using the three-spined stickleback Gasterosteus aculeatus genome as a reference. BAC-end sequences (102,690 were aligned to the stickleback genome. The number of mappable BACs was improved using a two-fold coverage WGS dataset of sea bass resulting in a comparative BAC-map covering 87% of stickleback chromosomes with 588 BAC-contigs. The minimum size of 83 contigs covering 50% of the reference was 1.2 Mbp; the largest BAC-contig comprised 8.86 Mbp. More than 22,000 BAC-clones aligned with both ends to the reference genome. Intra-chromosomal rearrangements between sea bass and stickleback were identified. Size distributions of mapped BACs were used to calculate that the genome of sea bass may be only 1.3 fold larger than the 460 Mbp stickleback genome. Conclusions The BAC map is used for sequencing single BACs or BAC-pools covering defined genomic entities by second generation sequencing technologies. Together with the WGS dataset it initiates a sea bass genome sequencing project. This will allow the quantification of polymorphisms through resequencing, which is important for selecting highly performing domesticated fish. 2. A Comparative BAC Map for the Gilthead Sea Bream (Sparus aurata L. Directory of Open Access Journals (Sweden) Heiner Kuhl 2011-01-01 Full Text Available This study presents the first comparative BAC map of the gilthead sea bream (Sparus aurata, a highly valuated marine aquaculture fish species in the Mediterranean. High-throughput end sequencing of a BAC library yielded 92,468 reads (60.6 Mbp. Comparative mapping was achieved by anchoring BAC end sequences to the three-spined stickleback (Gasterosteus aculeatus genome. BACs that were consistently ordered along the stickleback chromosomes accounted for 14,265 clones. A fraction of 5,249 BACs constituted a minimal tiling path that covers 73.5% of the stickleback chromosomes and 70.2% of the genes that have been annotated. The N50 size of 1,485 “BACtigs” consisting of redundant BACs is 337,253 bp. The largest BACtig covers 2.15 Mbp in the stickleback genome. According to the insert size distribution of mapped BACs the sea bream genome is 1.71-fold larger than the stickleback genome. These results represent a valuable tool to researchers in the field and may support future projects to elucidate the whole sea bream genome. 3. A comparative study of the A* heuristic search algorithm used to solve efficiently a puzzle game Science.gov (United States) Iordan, A. E. 2018-01-01 The puzzle game presented in this paper consists in polyhedra (prisms, pyramids or pyramidal frustums) which can be moved using the free available spaces. 
The problem requires to be found the minimum number of movements in order the game reaches to a goal configuration starting from an initial configuration. Because the problem is enough complex, the principal difficulty in solving it is given by dimension of search space, that leads to necessity of a heuristic search. The improving of the search method consists into determination of a strong estimation by the heuristic function which will guide the search process to the most promising side of the search tree. The comparative study is realized among Manhattan heuristic and the Hamming heuristic using A* search algorithm implemented in Java. This paper also presents the necessary stages in object oriented development of a software used to solve efficiently this puzzle game. The modelling of the software is achieved through specific UML diagrams representing the phases of analysis, design and implementation, the system thus being described in a clear and practical manner. With the purpose to confirm the theoretical results which demonstrates that Manhattan heuristic is more efficient was used space complexity criterion. The space complexity was measured by the number of generated nodes from the search tree, by the number of the expanded nodes and by the effective branching factor. From the experimental results obtained by using the Manhattan heuristic, improvements were observed regarding space complexity of A* algorithm versus Hamming heuristic. 4. Puzzling Mechanisms Science.gov (United States) van Deventer, M. Oskar 2009-01-01 The basis of a good mechanical puzzle is often a puzzling mechanism. This article will introduce some new puzzling mechanisms, like two knots that engage like gears, a chain whose links can be interchanged, and flat gears that do not come apart. It illustrates how puzzling mechanisms can be transformed into real mechanical puzzles, e.g., by… 5. Phthalate Puzzle Abstract. The most common plasticizer, phthalates, are facing stricterregulations due to their omnipresence and possible effects onhuman health, and environment. But high cost, lack of applicationrange, and unknown long-term effects of non-phthalatealternatives make the scenario puzzling. 6. Idea Puzzle OpenAIRE Parente, C.; Ferro, L. 2016-01-01 WOS:000387124100017 (Nº de Acesso Web of Science) The Idea Puzzle is a software application created in 2007. It is a support tool to assist PhD students and researchers in the process of designing research projects through a focus on three central dimensions of research that are collectively represented by a triangle. Each side of the Idea Puzzle triangle corresponds to one of the three dimensions that every empirical research project should ideally include: ontology (data), epistemology (... 7. Deductive Puzzling Science.gov (United States) Wanko, Jeffrey J. 2010-01-01 To help fifth- through eighth-grade students develop their deductive reasoning skills, the author used a ten-week supplementary curriculum so that students could answer logic questions. The curriculum, a series of lessons built around language-independent logic puzzles, has been used in classrooms of fifth through eighth grades. In most cases,… 8. Incomplete Puzzle Science.gov (United States) 2006-01-01 15 April 2006 This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) image shows a mid-summer view of a portion of the south polar residual cap of Mars. The large, relatively flat-lying, puzzle-like pieces in this scene are mesas composed largely of solid carbon dioxide. 
Location near: 85.5oS, 76.8oW Image width: 3 km (1.9 mi) Illumination from: upper left Season: Southern Summer 9. Puzzle-based versus traditional lecture: comparing the effects of pedagogy on academic performance in an undergraduate human anatomy and physiology II lab. Science.gov (United States) Stetzik, Lucas; Deeter, Anthony; Parker, Jamie; Yukech, Christine 2015-06-23 A traditional lecture-based pedagogy conveys information and content while lacking sufficient development of critical thinking skills and problem solving. A puzzle-based pedagogy creates a broader contextual framework, and fosters critical thinking as well as logical reasoning skills that can then be used to improve a student's performance on content specific assessments. This paper describes a pedagogical comparison of traditional lecture-based teaching and puzzle-based teaching in a Human Anatomy and Physiology II Lab. Using a single subject/cross-over design half of the students from seven sections of the course were taught using one type of pedagogy for the first half of the semester, and then taught with a different pedagogy for the second half of the semester. The other half of the students were taught the same material but with the order of the pedagogies reversed. Students' performance on quizzes and exams specific to the course, and in-class assignments specific to this study were assessed for: learning outcomes (the ability to form the correct conclusion or recall specific information), and authentic academic performance as described by (Am J Educ 104:280-312, 1996). Our findings suggest a significant improvement in students' performance on standard course specific assessments using a puzzle-based pedagogy versus a traditional lecture-based teaching style. Quiz and test scores for students improved by 2.1 and 0.4% respectively in the puzzle-based pedagogy, versus the traditional lecture-based teaching. Additionally, the assessments of authentic academic performance may only effectively measure a broader conceptual understanding in a limited set of contexts, and not in the context of a Human Anatomy and Physiology II Lab. In conclusion, a puzzle-based pedagogy, when compared to traditional lecture-based teaching, can effectively enhance the performance of students on standard course specific assessments, even when the assessments only test a limited 10. Solving a binary puzzle NARCIS (Netherlands) Utomo, P.H.; Makarim, R.H. 2017-01-01 A Binary puzzle is a Sudoku-like puzzle with values in each cell taken from the set {0,1} {0,1}. Let n≥4 be an even integer, a solved binary puzzle is an n×n binary array that satisfies the following conditions: (1) no three consecutive ones and no three consecutive zeros in each row and each 11. Tangrams: Puzzles of Art Science.gov (United States) Fee, Brenda 2009-01-01 Challenging one's brain is the beginning of making great art. Tangrams are a great way to keep students thinking about their latest art project long after leaving the classroom. A tangram is a Chinese puzzle. The earliest known reference to tangrams appears in a Chinese book dated 1813, but the puzzles existed long before that date. The puzzle… 12. The Anatomy Puzzle Book. Science.gov (United States) Jacob, Willis H.; Carter, Robert, III This document features review questions, crossword puzzles, and word search puzzles on human anatomy. 
Topics include: (1) Anatomical Terminology; (2) The Skeletal System and Joints; (3) The Muscular System; (4) The Nervous System; (5) The Eye and Ear; (6) The Circulatory System and Blood; (7) The Respiratory System; (8) The Urinary System; (9) The… 13. New Sliding Puzzle with Neighbors Swap Motion OpenAIRE Prihardono, Ariyanto; Kawagoe, Kenichi 2015-01-01 The sliding puzzles (15-puzzle, 8-puzzle, 5-puzzle) are known to have 2 kind of puz-zle: solvable puzzle and unsolvable puzzle. In this thesis, we make a new puzzle with only 1 kind of it, solvable puzzle. This new puzzle is made by adopting sliding puzzle with several additional rules from M13 puzzle; the puzzle that is formed form The Mathieu group M13. This puzzle has a movement that called a neighbors swap motion, a rule of movement that enables every neighboring points to swap. This extr... 14. PSQP: Puzzle Solving by Quadratic Programming. Science.gov (United States) Andalo, Fernanda A; Taubin, Gabriel; Goldenstein, Siome 2017-02-01 In this article we present the first effective method based on global optimization for the reconstruction of image puzzles comprising rectangle pieces-Puzzle Solving by Quadratic Programming (PSQP). The proposed novel mathematical formulation reduces the problem to the maximization of a constrained quadratic function, which is solved via a gradient ascent approach. The proposed method is deterministic and can deal with arbitrary identical rectangular pieces. We provide experimental results showing its effectiveness when compared to state-of-the-art approaches. Although the method was developed to solve image puzzles, we also show how to apply it to the reconstruction of simulated strip-shredded documents, broadening its applicability. 15. Blood Type Puzzle. Science.gov (United States) Kelly, Janet 1997-01-01 Presents a blood type puzzle that provides a visual, hands-on mechanism by which students can examine blood group reactions. Offers students an opportunity to construct their own knowledge about blood types. (JRH) 16. The Entrepreneurial Earnings Puzzle DEFF Research Database (Denmark) Chen, Jing; Åstebro, Thomas 2014-01-01 A review of recent evidence on relative earnings from entrepreneurship versus wage work presents a puzzle: why do individuals become entrepreneurs when entrepreneurs on average apparently earn less than employees? After considering several potential explanations, we empirically analyze one: income... 17. The PPP Puzzle DEFF Research Database (Denmark) Juselius, Katarina The persistent movements away from long-run benchmark values in real exchange rates, dubbed the PPP puzzle, observed in many real exchange rates during periods of currency float have been subject to much empirical research without resolving the puzzle. The paper demonstrates how the cointegrated...... VAR approach by grouping together components of similar persistence can be used to uncover structures in the data that ultimately may help to explain theoretically the forces underlying such puzzling movements. The charaterization of the data into components which are empirically I(0), I(1), and I(2......) is shown to be a powerful organizing principle allowing us to structure the data in long-run, medium-run, and short-run behavior. Its main advantage is the ability to associate persistent movements away from fundamental benchmark values in one variable/relation with similar persistent movements somewhere... 18. On IBM's Millennial Puzzle Home; Journals; Resonance – Journal of Science Education; Volume 5; Issue 10. 
On IBM's Millennial Puzzle. A Sarangarajan. Classroom Volume 5 Issue 10 October 2000 pp 81-89. Fulltext. Click here to view fulltext PDF. Permanent link: https://www.ias.ac.in/article/fulltext/reso/005/10/0081-0089. Author Affiliations. 19. Idiosyncratic Volatility Puzzle DEFF Research Database (Denmark) Aslanidis, Nektarios; Christiansen, Charlotte; Lambertides, Neophytos from a large pool of macroeconomic and Önancial variables. Cleaning for macro-Önance e§ects reverses the puzzling negative relation between returns and idiosyncratic volatility documented previously. Portfolio analysis shows that the e§ects from macro-Önance factors are economically strong... 20. Puzzles in B physics Home; Journals; Pramana – Journal of Physics; Volume 67; Issue 5. Puzzles in physics. Hsiang-Nan Li ... Author Affiliations. Hsiang-Nan Li1 2. Institute of Physics, Academia Sinica, Taipei, Taiwan 115, Republic of China; Department of Physics, National Cheng-Kung University, Tainan, Taiwan 701, Republic of China ... 1. La Francophonie. Puzzle Corner. Science.gov (United States) Andrews, Ian A. 2000-01-01 Discusses the organization La Francophonie, which is an international community of people who speak French and convene to address issues. Presents a crossword puzzle that introduces readers to some of the nations involved in La Francophonie. Provides the across and down clues, a word list, and answer key. (CMK) 2. Nature's Greatest Puzzles International Nuclear Information System (INIS) Quigg, Chris 2005-01-01 It is a pleasure to be part of the SLAC Summer Institute again, not simply because it is one of the great traditions in our field, but because this is a moment of great promise for particle physics. I look forward to exploring many opportunities with you over the course of our two weeks together. My first task in talking about Nature's Greatest Puzzles, the title of this year's Summer Institute, is to deconstruct the premise a little bit 3. The infinity puzzle CERN Document Server Close, Frank 2011-01-01 We are living in a Golden Age of Physics. Forty or so years ago, three brilliant, yet little-known scientists - an American, a Dutchman, and an Englishman - made breakthroughs which later inspired the construction of the Large Hadron Collider at CERN in Geneva: a 27 kilometer-long machine which has already costs ten billion dollars, taken twenty years to build, and now promises to reveal how the universe itself came to be. The Infinity Puzzle is the inside story of those forty years of research, breakthrough, and endeavour. Peter Higgs, Gerard 't Hooft and James Bjorken, were the three scienti 4. Isotope puzzle in sputtering International Nuclear Information System (INIS) Zheng Liping 1998-01-01 Mechanisms affecting multicomponent material sputtering are complex. Isotope sputtering is the simplest in the multicomponent materials sputtering. Although only mass effect plays a dominant role in the isotope sputtering, there is still an isotope puzzle in sputtering by ion bombardment. The major arguments are as follows: (1) At the zero fluence, is the isotope enrichment ejection-angle-independent or ejection-angle-dependent? (2) Is the isotope angular effect the primary or the secondary sputter effect? (3) How to understand the action of momentum asymmetry in collision cascade on the isotope sputtering? 5. Musings on the puzzle piece. 
Science.gov (United States) Goin-Kochel, Robin P 2016-02-01 Following is a brief musing on Roy Grinker's discussion of what the puzzle piece symbolizes for autism during his presentation at the 2015 International Meeting for Autism Research. In his words, "The puzzle piece is ubiquitous." It likely holds a different meaning for each of us, and this is how one autism researcher sees it. © The Author(s) 2015. 6. Basic Functional Analysis Puzzles of Spectral Flow DEFF Research Database (Denmark) Booss-Bavnbek, Bernhelm 2011-01-01 We explain an array of basic functional analysis puzzles on the way to general spectral flow formulae and indicate a direction of future topological research for dealing with these puzzles.......We explain an array of basic functional analysis puzzles on the way to general spectral flow formulae and indicate a direction of future topological research for dealing with these puzzles.... Science.gov (United States) Bonesini, Maurizio 2017-12-01 The FAMU (Fisica degli Atomi Muonici) experiment has the goal to measure precisely the proton Zemach radius, thus contributing to the solution of the so-called proton radius "puzzle". To this aim, it makes use of a high-intensity pulsed muon beam at RIKEN-RAL impinging on a cryogenic hydrogen target with an high-Z gas admixture and a tunable mid-IR high power laser, to measure the hyperfine (HFS) splitting of the 1S state of the muonic hydrogen. From the value of the exciting laser frequency, the energy of the HFS transition may be derived with high precision ( 10-5) and thus, via QED calculations, the Zemach radius of the proton. The experimental apparatus includes a precise fiber-SiPMT beam hodoscope and a crown of eight LaBr3 crystals and a few HPGe detectors for detection of the emitted characteristic X-rays. Preliminary runs to optimize the gas target filling and its operating conditions have been taken in 2014 and 2015-2016. The final run, with the pump laser to drive the HFS transition, is expected in 2018. 8. The birth order puzzle. Science.gov (United States) Zajonc, R B; Markus, H; Markus, G B 1979-08-01 Studies relating intellectual performance to birth order report conflicting results, some finding intellectual scores to increase, others to decrease with birth order. In contrast, the relationship between intellectual performance and family size is stable and consistently replicable. Why do these two highly related variables generate such divergent results? This birth order puzzle is resolved by means of the confluence model that quantifies the influences upon intellectual growth arising within the family context. At the time of a new birth, two opposing influences act upon intellectual growth of the elder sibling: (a) his or her intellectual environment is "diluted" and (b) he or she loses the "last-born's handicap" and begins serving as an intellectual resource to the younger sibling. Since these opposite effects are not equal in magnitude, the differences in intellectual performance among birth ranks are shown to be age dependent. While elder children may surpass their younger siblings in intellectual performance at some ages, they may be overtaken by them at others. Thus when age is taken into consideration, the birth order literature loses its chaotic character and an orderly pattern of results emerges. 9. 
Current puzzles in nuclear physics International Nuclear Information System (INIS) 1985-01-01 A meeting on ''Current puzzles in nuclear physics'' was held at Research Center for Nuclear Physics, Osaka University, on June 27 - 28, 1984. The meeting put emphasis on several puzzles which have not been solved for a long time in nuclear physics, and also on the puzzles. This collective report is composed of following eleven papers presented at the meeting. Almost all the papers are witten in English : (1) M1, GT excitations and configuration mixing (in Japanese). (2) Hadronic excitation of pionic states. (3) Microscopic analyses of 28 Si(α,α') 28 Si scattering and single particle strength in A = 29 nuclei. (4) Few-body physics and its incentives to nuclear physics. (5) Is it necessary to introduce three body interactions ? (in Japanese). (6) Puzzles in the neutron-deuteron elastic scattering. (7) Puzzles in NN, NΔ, πN and Nanti N interactions. (8) Problems in Hadron-Nucleus interaction. (9) Unified approach to the meson- and quark- theory of nuclear forces and currents. (10) Pion photoproduction in two Chiral bag models. (11) The dynamic bag model : The electromagnetic properties of nucleon. (Aoki, K.) 10. Astroparticle physics: puzzles and discoveries International Nuclear Information System (INIS) Berezinsky, V 2008-01-01 Puzzles often give birth to the great discoveries, the false discoveries sometimes stimulate the exiting ideas in theoretical physics. The historical examples of both are described in Introduction and in section 'Cosmological Puzzles'. From existing puzzles most attention is given to Ultra High Energy Cosmic Ray (UHECR) puzzle and to cosmological constant problem. The 40-years old UHECR problem consisted in absence of the sharp steepening in spectrum of extragalactic cosmic rays caused by interaction with CMB radiation. This steepening is known as Greisen-Zatsepin-Kuzmin (GZK) cutoff. It is demonstrated here that the features of interaction of cosmic ray protons with CMB are seen now in the spectrum in the form of the dip and beginning of the GZK cutoff. The most serious cosmological problem is caused by large vacuum energy of the known elementary-particle fields which exceeds at least by 45 orders of magnitude the cosmological vacuum energy. The various ideas put forward to solve this problem during last 40 years, have weaknesses and cannot be accepted as the final solution of this puzzle. The anthropic approach is discussed 11. Imaginary Cubes and Their Puzzles Directory of Open Access Journals (Sweden) Hideki Tsuiki 2012-05-01 Full Text Available Imaginary cubes are three dimensional objects which have square silhouette projections in three orthogonal ways just as a cube has. In this paper, we study imaginary cubes and present assembly puzzles based on them. We show that there are 16 equivalence classes of minimal convex imaginary cubes, among whose representatives are a hexagonal bipyramid imaginary cube and a triangular antiprism imaginary cube. Our main puzzle is to put three of the former and six of the latter pieces into a cube-box with an edge length of twice the size of the original cube. Solutions of this puzzle are based on remarkable properties of these two imaginary cubes, in particular, the possibility of tiling 3D Euclidean space. 12. 
Famous puzzles of great mathematicians CERN Document Server Petković, Miodrag S 2009-01-01 This entertaining book presents a collection of 180 famous mathematical puzzles and intriguing elementary problems that great mathematicians have posed, discussed, and/or solved. The selected problems do not require advanced mathematics, making this book accessible to a variety of readers. Mathematical recreations offer a rich playground for both amateur and professional mathematicians. Believing that creative stimuli and aesthetic considerations are closely related, great mathematicians from ancient times to the present have always taken an interest in puzzles and diversions. The goal of this 13. Sleep for Kids: Games and Puzzles Science.gov (United States) 14. Do Puzzle Pieces and Autism Puzzle Piece Logos Evoke Negative Associations? Science.gov (United States) Gernsbacher, Morton Ann; Raimond, Adam R.; Stevenson, Jennifer L.; Boston, Jilana S.; Harp, Bev 2018-01-01 Puzzle pieces have become ubiquitous symbols for autism. However, puzzle-piece imagery stirs debate between those who support and those who object to its use because they believe puzzle-piece imagery evokes negative associations. Our study empirically investigated whether puzzle pieces evoke negative associations in the general public.… 15. À chacun son puzzle Directory of Open Access Journals (Sweden) Jean-Noël Ferrié 2012-05-01 Full Text Available Le texte soutient que le « tournant naturaliste » que l’on nous invite à négocier ne donne aucun moyen supplémentaire pour parvenir à une description perspicace de ce que les gens font dans des circonstances précises, l’existence humaine pouvant être considérée comme une collection de circonstances précises. Sans doute le naturalisme nous permet-il de comprendre comment certaines actions humaines sont possibles, mais cela ne nous dit pas pourquoi et comment elles font sens pour tout un chacun. La méthodologie nécessaire pour éclaircir le premier point obscurcit généralement le second. Le mieux est donc de considérer que les deux approches ne vont pas de pair. Ce point de vue est soutenu à partir d’exemple tirés de l’anthropologie de la religion.To each one his puzzle. For a serene methodological pluralismThe text argues that the « naturalistic turn » that we are invited to negotiate does not give any additional means to achieve an insightful description of what people do in specific circumstances, and human existence can be considered as a collection of specific circumstances. Probably naturalism allows us to understand how some human actions are possible, but that does not tell us why and how they make sense for everyone. The methodology needed to clarify the first point usually obscures the second one. The best way is to consider that the two approaches do not go together. The text supports this view from an example drawn from the anthropology of religion.A cada uno su rompecabezas. En favor de un pluralismo metodológico serenoEl texto argumenta que la inflexión naturalista a la que se nos invita a participar no proporciona ningún medio suplementario que desemboque en una descripción perspicaz de lo que la gente hace en circunstancias concretas ya que la existencia humana puede ser considerada como una concatenación de circunstancias concretas. Sin duda el naturalismo nos permite comprender como son 16. Teaching Inductive Reasoning with Puzzles Science.gov (United States) Wanko, Jeffrey J. 
2017-01-01 Working with language-independent logic structures can help students develop both inductive and deductive reasoning skills. The Japanese publisher Nikoli (with resources available both in print and online) produces a treasure trove of language-independent logic puzzles. The Nikoli print resources are mostly in Japanese, creating the extra… 17. Current puzzles and future possibilities International Nuclear Information System (INIS) Nagamiya, S. 1982-02-01 Four current puzzles and several future experimental possibilities in high-energy nuclear collision research are discussed. These puzzles are (1) entropy, (2) hydrodynamic flow, (3) anomalon, and (4) particle emission at backward angles in proton-nucleus collisions. The last one seems not to be directly related to the subject of the present school. But it is, because particle emission into the region far beyond the nucleon-nucleon kinematical limit is an interesting subject common for both proton-nucleus and nucleus-nucleus collisions, and the basic mechanism involved is strongly related in these two cases. Future experimental possibilities are described which include: (1) possibilities of studying multibaryonic excited states, (2) applications of neutron-rich isotopes, and (3) other needed experimental tasks. 72 references 18. Neutron star news and puzzles International Nuclear Information System (INIS) 2014-01-01 Gerry Brown has had the most influence on my career in Physics, and my life after graduate studies. This article gives a brief account of some of the many ways in which Gerry shaped my research. Focus is placed on the significant strides on neutron star research made by the group at Stony Brook, which Gerry built from scratch. Selected puzzles about neutron stars that remain to be solved are noted 19. Construction-Paper Puzzle Masterpieces Science.gov (United States) Vance, Shelly 2010-01-01 Creating an appreciation of art history in her junior-high students has always been one of the author's greatest challenges as an art teacher. In this article, the author describes how her eighth-grade students re-created a famous work of art--piece by piece, like a puzzle or a stained-glass window--out of construction paper. (Contains 1 resource.) 20. Puzzles of large scale structure and gravitation International Nuclear Information System (INIS) Sidharth, B.G. 2006-01-01 We consider the puzzle of cosmic voids bounded by two-dimensional structures of galactic clusters as also a puzzle pointed out by Weinberg: How can the mass of a typical elementary particle depend on a cosmic parameter like the Hubble constant? An answer to the first puzzle is proposed in terms of 'Scaled' Quantum Mechanical like behaviour which appears at large scales. The second puzzle can be answered by showing that the gravitational mass of an elementary particle has a Machian character (see Ahmed N. Cantorian small worked, Mach's principle and the universal mass network. Chaos, Solitons and Fractals 2004;21(4)) 1. The puzzle of neutron lifetime International Nuclear Information System (INIS) Paul, Stephan 2009-01-01 In this paper we review the role of the neutron lifetime and discuss the present status of measurements. In view of the large discrepancy observed by the two most precise individual measurements so far we describe the different techniques and point out the principle strengths and weaknesses. In particular we discuss the estimation of systematic uncertainties and its correlation to the statistical ones. 
In order to solve the present puzzle, many new experiments are either ongoing or being proposed. An overview on their possible contribution to this field will be given. International Nuclear Information System (INIS) Guenster, N.; Derwall, J.; Bauer, R.; Koedijk, K. 2004-09-01 Conventional investment theory suggests that socially responsible investing (SRI) leads to inferior, rather than superior, portfolio performance. Using Innovest's well-established corporate eco-efficiency scores, we provide evidence supporting the contrary. We compose two equity portfolios that differ in ecoefficiency characteristics and find that our high-ranked portfolio provided substantially higher average returns compared to its low-ranked counterpart over the period 1995-2003. Using a wide range of performance attribution techniques to address common methodological concerns, we show that this performance differential cannot be explained by differences in market sensitivity, investment style, or industry-specific components. We finally investigate whether this ecoefficiency premium puzzle withstands the inclusion of transaction costs scenarios, and evaluate how excess returns can be earned in a practical setting via a best-in-class stock selection strategy. The results remain significant under all levels of transactions costs, thus suggesting that the incremental benefits of SRI can be substantial 3. The Magnets Puzzle is NP-Complete DEFF Research Database (Denmark) Kölker, Jonas 2012-01-01 In a Magnets puzzle, one must pack magnets in a box subjet to polarity and numeric constraints. We show that solvability of Magnets instances is NP-complete.......In a Magnets puzzle, one must pack magnets in a box subjet to polarity and numeric constraints. We show that solvability of Magnets instances is NP-complete.... 4. Hadronic decay puzzle in charmonium physics International Nuclear Information System (INIS) Gu Yifan 1996-01-01 Recent experimental results obtained at Beijing Electron-proton Collider sensitivity level the crisply defined nature of the hadronic decay puzzle in charmonium physics. Discovery of new anomalous decay modes breaks with the previously established pattern of the puzzle, and poses new challenges for its theoretical understanding 5. Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students--a pilot study. Science.gov (United States) Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary 2009-01-13 Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills. But, published data suggests that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess efficacy of teaching puzzle in EKG interpretation skills among medical students. This is a reader blinded crossover trial. Third year medical students from College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam and two extra sessions with the teaching puzzle and a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first followed by the traditional teaching. 
EKG interpretation scores on final test were used as main outcome measure. The average score after only traditional teaching was 4.07 +/- 2.08 while after only the puzzle session was 4.04 +/- 2.36 (p = 0.97). The average improvement after the traditional session was followed up with a puzzle session was 2.53 +/- 1.94 while the average improvement after the puzzle session was followed with the traditional session was 2.08 +/- 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1 compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle session are more interactive and relaxing, and warrant further investigations on larger scale. 6. Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students – a pilot study Science.gov (United States) Rubinstein, Jack; Dhoble, Abhijeet; Ferenchick, Gary 2009-01-01 Background Most medical professionals are expected to possess basic electrocardiogram (EKG) interpretation skills. But, published data suggests that residents' and physicians' EKG interpretation skills are suboptimal. Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess efficacy of teaching puzzle in EKG interpretation skills among medical students. Methods This is a reader blinded crossover trial. Third year medical students from College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9) received two traditional EKG interpretation skills lectures followed by a standardized exam and two extra sessions with the teaching puzzle and a different exam. Two other groups (n = 6) received identical courses and exams with the puzzle session first followed by the traditional teaching. EKG interpretation scores on final test were used as main outcome measure. Results The average score after only traditional teaching was 4.07 ± 2.08 while after only the puzzle session was 4.04 ± 2.36 (p = 0.97). The average improvement after the traditional session was followed up with a puzzle session was 2.53 ± 1.94 while the average improvement after the puzzle session was followed with the traditional session was 2.08 ± 1.73 (p = 0.67). The final EKG exam score for this cohort (n = 15) was 84.1 compared to 86.6 (p = 0.22) for a comparable sample of medical students (n = 15) at a different campus. Conclusion Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle session are more interactive and relaxing, and warrant further investigations on larger scale. PMID:19144134 7. Puzzle based teaching versus traditional instruction in electrocardiogram interpretation for medical students – a pilot study Directory of Open Access Journals (Sweden) Dhoble Abhijeet 2009-01-01 Full Text Available Abstract Background Most medical professionals are expected to possess basic electrocardiogram (EKG interpretation skills. But, published data suggests that residents' and physicians' EKG interpretation skills are suboptimal. 
Learning styles differ among medical students; individualization of teaching methods has been shown to be viable and may result in improved learning. Puzzles have been shown to facilitate learning in a relaxed environment. The objective of this study was to assess efficacy of teaching puzzle in EKG interpretation skills among medical students. Methods This is a reader blinded crossover trial. Third year medical students from College of Human Medicine, Michigan State University participated in this study. Two groups (n = 9 received two traditional EKG interpretation skills lectures followed by a standardized exam and two extra sessions with the teaching puzzle and a different exam. Two other groups (n = 6 received identical courses and exams with the puzzle session first followed by the traditional teaching. EKG interpretation scores on final test were used as main outcome measure. Results The average score after only traditional teaching was 4.07 ± 2.08 while after only the puzzle session was 4.04 ± 2.36 (p = 0.97. The average improvement after the traditional session was followed up with a puzzle session was 2.53 ± 1.94 while the average improvement after the puzzle session was followed with the traditional session was 2.08 ± 1.73 (p = 0.67. The final EKG exam score for this cohort (n = 15 was 84.1 compared to 86.6 (p = 0.22 for a comparable sample of medical students (n = 15 at a different campus. Conclusion Teaching EKG interpretation with puzzles is comparable to traditional teaching and may be particularly useful for certain subgroups of students. Puzzle session are more interactive and relaxing, and warrant further investigations on larger scale. 8. Solving the BM Camelopardalis puzzle Science.gov (United States) Teke, Mathias; Busby, Michael R.; Hall, Douglas S. 1989-01-01 BM Camelopardalis (=12 Cam) is a chromospherically active binary star with a relatively large orbital eccentricity. Systems with large eccentricities usually rotate pseudosynchronously. However, BM Cam has been a puzzle since its observed rotation rate is virtually equal to its orbital period indicating synchronization. All available photometry data for BM Cam have been collected and analyzed. Two models of modulated ellipticity effect are proposed, one based on equilibrium tidal deformation of the primary star and the other on a dynamical tidal effect. When the starspot variability is removed from the data, the dynamical tidal model was the better approximation to the real physical situation. The analysis indicates that BM Cam is not rotating pseudosynchronously but rotating in virtual synchronism after all. 9. The RPA Atomization Energy Puzzle. Science.gov (United States) Ruzsinszky, Adrienn; Perdew, John P; Csonka, Gábor I 2010-01-12 There is current interest in the random phase approximation (RPA), a "fifth-rung" density functional for the exchange-correlation energy. RPA has full exact exchange and constructs the correlation with the help of the unoccupied Kohn-Sham orbitals. In many cases (uniform electron gas, jellium surface, and free atom), the correction to RPA is a short-ranged effect that is captured by a local spin density approximation (LSDA) or a generalized gradient approximation (GGA). Nonempirical density functionals for the correction to RPA were constructed earlier at the LSDA and GGA levels (RPA+), but they are constructed here at the fully nonlocal level (RPA++), using the van der Waals density functional (vdW-DF) of Langreth, Lundqvist, and collaborators. 
While they make important and helpful corrections to RPA total and ionization energies of free atoms, they correct the RPA atomization energies of molecules by only about 1 kcal/mol. Thus, it is puzzling that RPA atomization energies are, on average, about 10 kcal/mol lower than those of accurate values from experiment. We find here that a hybrid of 50% Perdew-Burke-Ernzerhof GGA with 50% RPA+ yields atomization energies much more accurate than either one does alone. This suggests a solution to the puzzle: While the proper correction to RPA is short-ranged in some systems, its contribution to the correlation hole can spread out in a molecule with multiple atomic centers, canceling part of the spread of the exact exchange hole (more so than in RPA or RPA+), making the true exchange-correlation hole more localized than in RPA or RPA+. This effect is not captured even by the vdW-DF nonlocality, but it requires the different kind of full nonlocality present in a hybrid functional. 10. The impact of memory load and perceptual cues on puzzle learning by 24-month olds. Science.gov (United States) Barr, Rachel; Moser, Alecia; Rusnak, Sylvia; Zimmermann, Laura; Dickerson, Kelly; Lee, Herietta; Gerhardstein, Peter 2016-11-01 Early childhood is characterized by memory capacity limitations and rapid perceptual and motor development [Rovee-Collier (1996). Infant Behavior & Development, 19, 385-400]. The present study examined 2-year olds' reproduction of a sliding action to complete an abstract fish puzzle under different levels of memory load and perceptual feature support. Experimental groups were compared to baseline controls to assess spontaneous rates of production of the target actions; baseline production was low across all experiments. Memory load was manipulated in Exp. 1 by adding pieces to the puzzle, increasing sequence length from 2 to 3 items, and to 3 items plus a distractor. Although memory load did not influence how toddlers learned to manipulate the puzzle pieces, it did influence toddlers' achievement of the goal-constructing the fish. Overall, girls were better at constructing the puzzle than boys. In Exp. 2, the perceptual features of the puzzle were altered by changing shape boundaries to create a two-piece horizontally cut puzzle (displaying bilateral symmetry), and by adding a semantically supportive context to the vertically cut puzzle (iconic). Toddlers were able to achieve the goal of building the fish equally well across the 2-item puzzle types (bilateral symmetry, vertical, iconic), but how they learned to manipulate the puzzle pieces varied as a function of the perceptual features. Here, as in Exp. 1, girls showed a different pattern of performance from the boys. This study demonstrates that changes in memory capacity and perceptual processing influence both goal-directed imitation learning and motoric performance. © 2016 Wiley Periodicals, Inc. 11. The Monotonicity Puzzle: An Experimental Investigation of Incentive Structures Directory of Open Access Journals (Sweden) Jeannette Brosig 2010-05-01 Full Text Available Non-monotone incentive structures, which - according to theory - are able to induce optimal behavior, are often regarded as empirically less relevant for labor relationships. We compare the performance of a theoretically optimal non-monotone contract with a monotone one under controlled laboratory conditions. 
Implementing some features relevant to real-world employment relationships, our paper demonstrates that, in fact, the frequency of income-maximizing decisions made by agents is higher under the monotone contract. Although this observed behavior does not change the superiority of the non-monotone contract for principals, they do not choose this contract type in a significant way. This is what we call the monotonicity puzzle. Detailed investigations of decisions provide a clue for solving the puzzle and a possible explanation for the popularity of monotone contracts. 12. The Puzzle of Male Chronophilias. Science.gov (United States) Seto, Michael C 2017-01-01 13. The Incomplete Glutathione Puzzle: Just Guessing at Numbers and Figures? Science.gov (United States) Deponte, Marcel 2017-11-20 Glutathione metabolism is comparable to a jigsaw puzzle with too many pieces. It is supposed to comprise (i) the reduction of disulfides, hydroperoxides, sulfenic acids, and nitrosothiols, (ii) the detoxification of aldehydes, xenobiotics, and heavy metals, and (iii) the synthesis of eicosanoids, steroids, and iron-sulfur clusters. In addition, glutathione affects oxidative protein folding and redox signaling. Here, I try to provide an overview on the relevance of glutathione-dependent pathways with an emphasis on quantitative data. Recent Advances: Intracellular redox measurements reveal that the cytosol, the nucleus, and mitochondria contain very little glutathione disulfide and that oxidative challenges are rapidly counterbalanced. Genetic approaches suggest that iron metabolism is the centerpiece of the glutathione puzzle in yeast. Furthermore, recent biochemical studies provide novel insights on glutathione transport processes and uncoupling mechanisms. Which parts of the glutathione puzzle are most relevant? Does this explain the high intracellular concentrations of reduced glutathione? How can iron-sulfur cluster biogenesis, oxidative protein folding, or redox signaling occur at high glutathione concentrations? Answers to these questions not only seem to depend on the organism, cell type, and subcellular compartment but also on different ideologies among researchers. A rational approach to compare the relevance of glutathione-dependent pathways is to combine genetic and quantitative kinetic data. However, there are still many missing pieces and too little is known about the compartment-specific repertoire and concentration of numerous metabolites, substrates, enzymes, and transporters as well as rate constants and enzyme kinetic patterns. Gathering this information might require the development of novel tools but is crucial to address potential kinetic competitions and to decipher uncoupling mechanisms to solve the glutathione puzzle. Antioxid. Redox Signal 14. Learning structural bioinformatics and evolution with a snake puzzle Directory of Open Access Journals (Sweden) Gonzalo S. Nido 2016-12-01 Full Text Available We propose here a working unit for teaching basic concepts of structural bioinformatics and evolution through the example of a wooden snake puzzle, strikingly similar to toy models widely used in the literature of protein folding. In our experience, developed at a Master’s course at the Universidad Autónoma de Madrid (Spain, the concreteness of this example helps to overcome difficulties caused by the interdisciplinary nature of this field and its high level of abstraction, in particular for students coming from traditional disciplines. 
The puzzle allows us to discuss a simple algorithm for finding folded solutions, through which we will introduce the concept of the configuration space and the contact matrix representation. This is a central tool for comparing protein structures, for studying simple models of protein energetics, and even for a qualitative discussion of folding kinetics, through the concept of the Contact Order. It also allows a simple representation of misfolded conformations and their free energy. These concepts will motivate evolutionary questions, which we will address by simulating a structurally constrained model of protein evolution, again modelled on the snake puzzle. In this way, we can discuss the analogy between evolutionary concepts and statistical mechanics that facilitates the understanding of both concepts. The proposed examples and literature are accessible, and we provide supplementary material (see ‘Data Availability’) to reproduce the numerical experiments. We also suggest possible directions to expand the unit. We hope that this work will further stimulate the adoption of games in teaching practice. 15. Lepton mixing and the 'solar neutrino puzzle' International Nuclear Information System (INIS) Bilenky, S.M.; Pontecorvo, B. 1977-01-01 The results of the well-known solar neutrino experiment of Davis et al. are discussed, in which the Cl-Ar method is used. The result of the experiment, a too small neutrino signal (the so-called 'solar neutrino puzzle'), has been tentatively accounted for in a number of quite exotic explanations. It appears that the explanation in terms of lepton mixing and neutrino sterility is quite attractive from the point of view of present day elementary particle physics and is much more natural than the other explanations of the 'puzzle'. 16. Difficult Sudoku Puzzles Created by Replica Exchange Monte Carlo Method OpenAIRE Watanabe, Hiroshi 2013-01-01 An algorithm to create difficult Sudoku puzzles is proposed. An Ising spin-glass-like Hamiltonian describing the difficulty of puzzles is defined, and difficult puzzles are created by minimizing the energy of the Hamiltonian. We adopt the replica exchange Monte Carlo method with simultaneous temperature adjustments to search lower-energy states efficiently, and we succeed in creating a puzzle which is the world's hardest ever created, by our definition and to the best of our knowledge. (A schematic sketch of a replica-exchange search of this kind is given after entry 18 below.) (Added on Mar. 11, the ... 17. Puzzle Pedagogy: A Use of Riddles in Mathematics Education Science.gov (United States) Farnell, Elin 2017-01-01 In this article, I present a collection of puzzles appropriate for use in a variety of undergraduate courses, along with suggestions for relevant discussion. Logic puzzles and riddles have long been sources of amusement for mathematicians and the general public alike. I describe the use of puzzles in a classroom setting, and argue for their use as… 18. Algorithmic Puzzles: History, Taxonomies, and Applications in Human Problem Solving Science.gov (United States) Levitin, Anany 2017-01-01 The paper concerns an important but underappreciated genre of algorithmic puzzles, explaining what these puzzles are, reviewing milestones in their long history, and giving two different ways to classify them. Also covered are major applications of algorithmic puzzles in cognitive science research, with an emphasis on insight problem solving, and…
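The Sudoku record above (entry 16) describes its method only in prose: define a difficulty Hamiltonian and minimize it with replica-exchange Monte Carlo. The following Python sketch illustrates only the generic replica-exchange (parallel-tempering) loop on a stand-in energy function over binary vectors; the toy `energy` function, the temperatures, and the move set are invented for illustration and are not the difficulty Hamiltonian or the temperature-adjustment scheme of the cited paper.

```python
import math
import random

def energy(state):
    # Stand-in "difficulty" score (NOT the Hamiltonian of the cited paper):
    # any cheap function of the state works for demonstrating the search loop.
    ones = sum(state)
    return -(ones % 7) - 0.1 * ones

def metropolis_step(state, beta):
    """Flip one random bit and accept or reject it with the Metropolis rule."""
    i = random.randrange(len(state))
    proposal = list(state)
    proposal[i] ^= 1
    delta = energy(proposal) - energy(state)
    if delta <= 0 or random.random() < math.exp(-beta * delta):
        return proposal
    return state

def replica_exchange(n_bits=81, betas=(0.2, 0.5, 1.0, 2.0), sweeps=2000, swap_every=10):
    """Run one Metropolis chain per inverse temperature and periodically swap neighbours."""
    replicas = [[random.randint(0, 1) for _ in range(n_bits)] for _ in betas]
    best = min(replicas, key=energy)
    for sweep in range(sweeps):
        for r, beta in enumerate(betas):
            replicas[r] = metropolis_step(replicas[r], beta)
        if sweep % swap_every == 0:
            for r in range(len(betas) - 1):
                # Standard parallel-tempering acceptance for exchanging replicas r and r+1:
                # accept with probability min(1, exp[(beta_r - beta_{r+1}) * (E_r - E_{r+1})]).
                delta = (betas[r] - betas[r + 1]) * (energy(replicas[r]) - energy(replicas[r + 1]))
                if delta >= 0 or random.random() < math.exp(delta):
                    replicas[r], replicas[r + 1] = replicas[r + 1], replicas[r]
        best = min(best, min(replicas, key=energy), key=energy)
    return best, energy(best)

if __name__ == "__main__":
    random.seed(0)
    _, best_energy = replica_exchange()
    print("lowest energy found:", round(best_energy, 2))
```

In the cited work the state presumably encodes a puzzle's set of clues and the energy its estimated difficulty; the loop structure above is the generic part of the approach.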
19. Forever Undecided: A Puzzle Guide to Gödel. R Ramanujam. Book Review, Resonance – Journal of Science Education, Volume 6, Issue 7, July 2001, pp. 97-98. Permanent link: https://www.ias.ac.in/article/fulltext/reso/006/07/0097-0098 ... 20. Solving jigsaw puzzles using image features DEFF Research Database (Denmark) Nielsen, Ture R.; Drewsen, Peter; Hansen, Klaus 2008-01-01 In this article, we describe a method for automatically solving the jigsaw puzzle problem based on image features instead of the shape of the pieces. The image features are used for obtaining an accurate measure of edge similarity to be used in a new edge matching algorithm. The algorithm i... 1. New data and an old puzzle DEFF Research Database (Denmark) Lee, S Hong; Byrne, Enda M; Hultman, Christina M 2015-01-01 BACKGROUND: A long-standing epidemiological puzzle is the reduced rate of rheumatoid arthritis (RA) in those with schizophrenia (SZ) and vice versa. Traditional epidemiological approaches to determine if this negative association is underpinned by genetic factors would test for reduced rates of one... 2. A generalization of the Pasadena puzzle NARCIS (Netherlands) Peterson, M.B. 2013-01-01 By generalizing the Pasadena puzzle introduced by Nover and Hájek (2004) we show that the sum total of value produced by an act can be made to converge to any real number by applying the Riemann rearrangement theorem, even if the scenario faced by the decision maker is non-probabilistic and fully 3. Puzzles in studies of quantum chaos International Nuclear Information System (INIS) Xu Gongou 1994-01-01 Puzzles in studies of quantum chaos are discussed. From the viewpoint of global properties of quantum states, it is clarified that quantum chaos originates from the breakdown of invariant properties of quantum canonical transformations. There exist precise correspondences between quantum and classical chaos. 4. The Crossword Puzzle as a Teaching Tool. Science.gov (United States) Crossman, Edward K. 1983-01-01 In courses such as the history of psychology, it is necessary to learn a variety of relationships, events, and sequences, in addition to the task of having to pair certain key concepts with related names, e.g., phrenology–Hall. One tool useful in this type of learning is the crossword puzzle. (RM) 5. Mathematical History: Activities, Puzzles, Stories, and Games. Science.gov (United States) Mitchell, Merle Based on the history of mathematics, these materials have been planned to enrich the teaching of mathematics in grades four, five, and six. Puzzles and games are based on stories about topics such as famous mathematicians, numerals of ancient peoples, and numerology. The sheets are arranged by grade level and are designed for easy duplication.… 6. Peelle's pertinent puzzle: Way of solution International Nuclear Information System (INIS) Pronyaev, V.G. 2003-01-01 The effect whereby evaluated data fall visibly below the majority of the experimental data in general least-squares model fitting is called Peelle's Pertinent Puzzle (PPP). Since the transformation of the central values is trivial, the solution by transformation of covariance matrices is deduced. 7. Bullet-Block Science Video Puzzle Science.gov (United States) Shakur, Asif 2015-01-01 A science video blog, which has gone viral, shows a wooden block shot by a vertically aimed rifle. The video shows that the block hit dead center goes exactly as high as the one shot off-center. (Fig. 1).
The puzzle is that the block shot off-center carries rotational kinetic energy in addition to the gravitational potential energy. This leads a… 8. Effect of a puzzle on the process of students' learning about cardiac physiology. Science.gov (United States) Cardozo, Lais Tono; Miranda, Aline Soares; Moura, Maria José Costa Sampaio; Marcondes, Fernanda Klein 2016-09-01 9. Lepton mixing and the 'solar neutrino puzzle' International Nuclear Information System (INIS) Bilenky, S.M.; Pontecorvo, B. 1977-01-01 The results of the well-known solar neutrino experiments in which the Cl-Ar method was employed are discussed; the results of this experiment gave a too-small neutrino signal and were referred to as the 'solar neutrino puzzle'. A number of explanations have been offered to account for the results, but it is stated that the explanation in terms of lepton mixing and neutrino sterility is attractive in terms of present day elementary particle physics and much more natural than the other explanations offered. Headings are as follows: neutrino oscillations and lepton charge, oscillations and solar neutrino experiments, lepton mixing according to old and present ideas, neutrino oscillations and the 'solar neutrino puzzle'. (U.K.) 10. Mankiw's Puzzle on Consumer Durables: A Misspecification OpenAIRE Tam Bang Vu 2005-01-01 Mankiw (1982) shows that consumer durables expenditures should follow a linear ARMA(1,1) process, but the data analyzed supports an AR(1) process instead; thus, a puzzle. In this paper, we employ a more general utility function than Mankiw's quadratic one. Further, the disturbance and depreciation rate are respecified, respectively, as multiplicative and stochastic. The analytical consequence is a nonlinear ARMA(∞,1) process, which implies that the linear ARMA(1,1) is a misspecificatio... (A small simulation contrasting AR(1) and ARMA(1,1) autocorrelations is given after entry 12 below.) 11. Social Security and the Equity Premium Puzzle OpenAIRE Olovsson, Conny 2004-01-01 This paper shows that social security may be an important factor in explaining the equity premium puzzle. In the absence of shortselling constraints, the young shortsell bonds to the middle-aged and buy equity. Social security reduces the bond demand of the middle-aged, thereby restricting the possibilities of the young to finance their equity purchases. Their equity demand increases as does the average return to equity. Social security also increases the covariance between future consumption... 12. The $A_y$ puzzle and the nuclear force International Nuclear Information System (INIS) Hueber, D. 1999-01-01 The nucleon-deuteron analyzing power $A_y$ in elastic nucleon-deuteron scattering poses a longstanding puzzle. At energies $E_{\rm lab}$ below approximately 30 MeV, $A_y$ cannot be described by any realistic NN force. The inclusion of existing three-nucleon forces does not improve the situation. Because of recent questions about the $^3P_J$ NN phases, we examine whether reasonable changes in the NN force can resolve the puzzle. In order to do this, we investigate the effect on the $^3P_J$ waves produced by changes in different parts of the potential (viz., the central force, tensor force, etc.), as well as on the 2-body observables and on $A_y$. We find that it is not possible with reasonable changes in the NN potential to increase the 3-body $A_y$, and at the same time to keep the 2-body observables unchanged. We therefore conclude that the $A_y$ puzzle is likely to be solved by new three-nucleon forces, such as those of spin-orbit type, which have not yet been taken into account. Refs. 7, tab. 1 (author)
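Entry 10 above (Mankiw's consumer-durables puzzle) hinges on telling an AR(1) apart from an ARMA(1,1). As a purely illustrative aside, with made-up parameters rather than estimates from any durables data, the short numpy simulation below shows the diagnostic: both processes have geometrically decaying autocorrelations, but the ARMA(1,1) has a shifted lag-1 value before the common decay factor takes over.

```python
import numpy as np

def simulate_arma11(phi, theta, n=20000, seed=0):
    """x_t = phi*x_{t-1} + e_t + theta*e_{t-1}; theta = 0 gives a pure AR(1)."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + e[t] + theta * e[t - 1]
    return x

def acf(x, max_lag=4):
    """Sample autocorrelations at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

if __name__ == "__main__":
    phi, theta = 0.9, 0.5                       # illustrative values only
    ar1 = simulate_arma11(phi, 0.0)
    arma11 = simulate_arma11(phi, theta)
    print("AR(1)     ACF:", np.round(acf(ar1), 3))      # roughly phi, phi**2, ...
    print("ARMA(1,1) ACF:", np.round(acf(arma11), 3))   # lag-1 shifted, then *phi per lag
```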
13. Dissolving the Puzzle of Resultant Moral Luck. Science.gov (United States) Levy, Neil The puzzle of resultant moral luck arises when we are disposed to think that an agent who caused a harm deserves to be blamed more than an otherwise identical agent who did not. One popular (but controversial) perspective on resultant moral luck explains our dispositions to produce different judgments with regard to the agents who feature in these cases as a product not of what they genuinely deserve but of our epistemic situation. On this account, there is no genuine resultant moral luck; there is only luck in what evidence becomes available to observers. In this paper, I develop an evolutionary account of our inclination to take the results of actions as evidence for the mental states of agents, thereby explaining why the resulting intuitions are recalcitrant to correction. The account explains why the puzzle of resultant moral luck arises: because our disposition to take the harms agents cause as evidence of their mental states can produce intuitions which conflict with those that arise when we examine agents' mental states without reference to the results of their actions. The account also helps to solve the puzzle of resultant moral luck, by providing a strong reason to ignore the intuitions caused by our disposition to regard actual harms as evidence of mental states. Since these intuitions arise using an unreliable proxy for agents' mental states, they ought to be trumped by more reliable evidence. 14. The $A_y$ puzzle and the nuclear force International Nuclear Information System (INIS) Hueber, D.; Friar, J.L. 1998-01-01 The nucleon-deuteron analyzing power $A_y$ in elastic nucleon-deuteron scattering poses a longstanding puzzle. At energies $E_{\rm lab}$ below approximately 30 MeV $A_y$ cannot be described by any realistic nucleon-nucleon (NN) force. The inclusion of existing three-nucleon forces does not improve the situation. Because of recent questions about the $^3P_J$ NN phases, we examine whether reasonable changes in the NN force can resolve the puzzle. In order to do this we investigate the effect on the $^3P_J$ waves produced by changes in different parts of the potential (viz., the central force, tensor force, etc.), as well as on the two-body observables and on $A_y$. We find that it is not possible with reasonable changes in the NN potential to increase the three-body $A_y$ and at the same time to keep the two-body observables unchanged. We therefore conclude that the $A_y$ puzzle is likely to be solved by new three-nucleon forces, such as those of the spin-orbit type, which have not yet been taken into account. copyright 1998 The American Physical Society 15. A puzzle form of a non-verbal intelligence test gives significantly higher performance measures in children with severe intellectual disability. Science.gov (United States) Bello, Katrina D; Goharpey, Nahal; Crewther, Sheila G; Crewther, David P 2008-08-01 Assessment of 'potential intellectual ability' of children with severe intellectual disability (ID) is limited, as current tests designed for normal children do not maintain their interest. Thus a manual puzzle version of the Raven's Coloured Progressive Matrices (RCPM) was devised to appeal to the attentional and sensory preferences and language limitations of children with ID. It was hypothesized that performance on the book and manual puzzle forms would not differ for typically developing children but that children with ID would perform better on the puzzle form.
The first study assessed the validity of this puzzle form of the RCPM for 76 typically developing children in a test-retest crossover design, with a 3 week interval between tests. A second study tested performance and completion rate for the puzzle form compared to the book form in a sample of 164 children with ID. In the first study, no significant difference was found between performance on the puzzle and book forms in typically developing children, irrespective of the order of completion. The second study demonstrated a significantly higher performance and completion rate for the puzzle form compared to the book form in the ID population. Similar performance on book and puzzle forms of the RCPM by typically developing children suggests that both forms measure the same construct. These findings suggest that the puzzle form does not require greater cognitive ability but demands sensory-motor attention and limits distraction in children with severe ID. Thus, we suggest the puzzle form of the RCPM is a more reliable measure of the non-verbal mentation of children with severe ID than the book form. 16. Early puzzle play: a predictor of preschoolers' spatial transformation skill. Science.gov (United States) Levine, Susan C; Ratliff, Kristin R; Huttenlocher, Janellen; Cannon, Joanna 2012-03-01 Individual differences in spatial skill emerge prior to kindergarten entry. However, little is known about the early experiences that may contribute to these differences. The current study examined the relation between children's early puzzle play and their spatial skill. Children and parents (n = 53) were observed at home for 90 min every 4 months (6 times) between 2 and 4 years of age (26 to 46 months). When children were 4 years 6 months old, they completed a spatial task involving mental transformations of 2-dimensional shapes. Children who were observed playing with puzzles performed better on this task than those who did not, controlling for parent education, income, and overall parent word types. Moreover, among those children who played with puzzles, frequency of puzzle play predicted performance on the spatial transformation task. Although the frequency of puzzle play did not differ for boys and girls, the quality of puzzle play (a composite of puzzle difficulty, parent engagement, and parent spatial language) was higher for boys than for girls. In addition, variation in puzzle play quality predicted performance on the spatial transformation task for girls but not for boys. Implications of these findings as well as future directions for research on the role of puzzle play in the development of spatial skill are discussed. PsycINFO Database Record (c) 2012 APA, all rights reserved. 17. Early Puzzle Play: A predictor of preschoolers’ spatial transformation skill Science.gov (United States) Levine, S.C.; Ratliff, K.R.; Huttenlocher, J.; Cannon, J. 2011-01-01 Individual differences in spatial skill emerge prior to kindergarten entry. However, little is known about the early experiences that may contribute to these differences. The current study examines the relation between children’s early puzzle play and their spatial skill. Children and parents (n = 53) were observed at home for 90 minutes every four months (six times) between 2 and 4 years of age (26 to 46 months). When children were 4 years 6 months old, they completed a spatial task involving mental transformations of 2D shapes. 
Children who were observed playing with puzzles performed better on this task than those who did not, controlling for parent education, income, and overall parent word types. Moreover, among those children who played with puzzles, frequency of puzzle play predicted performance on the spatial transformation task. Although the frequency of puzzle play did not differ for boys and girls, the quality of puzzle play (a composite of puzzle difficulty, parent engagement, and parent spatial language) was higher for boys than girls. In addition, variation in puzzle play quality predicted performance on the spatial transformation task for girls but not boys. Implications of these findings as well as future directions for research on the role of puzzle play in the development of spatial skill are discussed. PMID:22040312 18. The Gran Sasso muon puzzle CERN Document Server Fernandez-Martinez, Enrique 2012-01-01 We carry out a time-series analysis of the combined data from three experiments measuring the cosmic muon flux at the Gran Sasso laboratory, at a depth of 3800 m.w.e. These data, taken by the MACRO, LVD and Borexino experiments, span a period of over 20 years, and correspond to muons with a threshold energy, at sea level, of around 1.3 TeV. We compare the best-fit period and phase of the full muon data set with the combined DAMA/NaI and DAMA/LIBRA data, which spans the same time period, as a test of the hypothesis that the cosmic ray muon flux is responsible for the annual modulation detected by DAMA. We find in the muon data a large-amplitude fluctuation with a period of around one year, and a phase that is incompatible with that of the DAMA modulation at 5.2 sigmas. Aside from this annual variation, the muon data also contains a further significant modulation with a period between 10 and 11 years and a power well above the 99.9% C.L. threshold for noise, whose phase corresponds well with the solar cycle: a s... (A minimal sketch of this kind of modulation fit is given after entry 20 below.) 19. Nuclear clustering and the electron screening puzzle Science.gov (United States) Bertulani, C. A.; Spitaleri, C. 2018-01-01 Electron screening changes appreciably the magnitude of astrophysical nuclear reactions within stars. This effect is also observed in laboratory experiments on Earth, where atomic electrons are present in the nuclear targets. Theoretical models were developed over the past 30 years and experimental measurements have been carried out to study electron screening in thermonuclear reactions. None of the theoretical models were able to explain the high values of the experimentally determined screening potentials. We explore the possibility that the "electron screening puzzle" is due to nuclear clusterization and polarization effects in the fusion reactions. We will discuss the supporting arguments for this scenario. 20. Nature's Greatest Puzzles Energy Technology Data Exchange (ETDEWEB) Quigg, Chris; /Fermilab 2005-02-01 It is a pleasure to be part of the SLAC Summer Institute again, not simply because it is one of the great traditions in our field, but because this is a moment of great promise for particle physics. I look forward to exploring many opportunities with you over the course of our two weeks together. My first task in talking about Nature's Greatest Puzzles, the title of this year's Summer Institute, is to deconstruct the premise a little bit.
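Entry 18 above (the Gran Sasso muon puzzle) relies on extracting a best-fit period and phase from a long muon-rate time series. The sketch below shows the generic step with scipy's curve_fit on synthetic data; the rate, amplitude, noise level, and binning are invented for illustration and are not the MACRO/LVD/Borexino values.

```python
import numpy as np
from scipy.optimize import curve_fit

def modulation(t, mean_rate, amplitude, period, phase):
    """Constant rate plus a cosine modulation; t, period and phase are in days."""
    return mean_rate + amplitude * np.cos(2.0 * np.pi * (t - phase) / period)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    t = np.arange(0.0, 20 * 365.25, 10.0)                 # ~20 years of 10-day bins
    true = dict(mean_rate=100.0, amplitude=1.5, period=365.25, phase=180.0)
    rate = modulation(t, **true) + rng.normal(0.0, 0.5, t.size)   # synthetic "data"

    p0 = [np.mean(rate), 1.0, 360.0, 150.0]               # rough starting guesses
    popt, pcov = curve_fit(modulation, t, rate, p0=p0)
    errs = np.sqrt(np.diag(pcov))
    for name, val, err in zip(["mean", "amplitude", "period", "phase"], popt, errs):
        print(f"{name:9s} = {val:8.2f} +/- {err:.2f}")
```

Comparing a fitted phase (with its uncertainty from the covariance matrix) against another experiment's phase is then a straightforward significance test, which is the kind of comparison the record describes.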
1. Last piece of the puzzle for ATLAS CERN Multimedia Clare Ryan At around 15.40 on Friday 29th February the ATLAS collaboration cracked open the champagne as the second of the small wheels was lowered into the cavern. Each of ATLAS' small wheels is 9.3 metres in diameter and weighs 100 tonnes including the massive shielding elements. They are the final parts of ATLAS' muon spectrometer. The first piece of ATLAS was installed in 2003 and since then many detector elements have journeyed down the 100 metre shaft into the ATLAS underground cavern. This last piece completes this gigantic puzzle. 2. THE PUZZLE TECHNIQUE, COOPERATIVE LEARNING STRATEGY TO IMPROVE ACADEMIC PERFORMANCE Directory of Open Access Journals (Sweden) M.ª José Mayorga Fernández 2012-04-01 This article presents an innovative experience carried out in the subject Pedagogical Bases of Special Education, a 4.5-credit core subject taught in the second year of the Degree in Physical Education Teacher Training (now being phased out), based on the use of a methodological strategy in accordance with the new demands of the EEES. The experience pursues a double purpose: firstly, to present the jigsaw or puzzle technique as a useful methodological strategy for university learning and, on the other hand, to show whether this strategy improves students' results. Comparing the results with those of the previous year's students shows that the performance of students who participated in the innovative experience improved considerably, increasing their motivation and involvement in the task. 3. Sampling Random Bioinformatics Puzzles using Adaptive Probability Distributions DEFF Research Database (Denmark) Have, Christian Theil; Appel, Emil Vincent; Bork-Jensen, Jette 2016-01-01 We present a probabilistic logic program to generate an educational puzzle that introduces the basic principles of next generation sequencing, gene finding and the translation of genes to proteins following the central dogma in biology. In the puzzle, a secret "protein word" must be found by asse... (A toy illustration of the codon-translation step is given after entry 5 below.) 4. Decoding Codewords: Statistical Analysis of a Newspaper Puzzle Science.gov (United States) Meacock, Susan; Meacock, Geoff 2012-01-01 In recent years English newspapers have started featuring a number of puzzles other than the ubiquitous crossword. Many of the puzzles are of Japanese origin such as Sudoku, Kakuro or Hidato. However, one recent one is very English and is called variously Cross-code, Alphapuzzle or some other name. In this article, it will be known as Codeword.… 5. A Puzzle Used to Teach the Cardiac Cycle Science.gov (United States) Marcondes, Fernanda K.; Moura, Maria J. C. S.; Sanches, Andrea; Costa, Rafaela; Oliveira de Lima, Patricia; Groppo, Francisco Carlos; Amaral, Maria E. C.; Zeni, Paula; Gaviao, Kelly Cristina; Montrezor, Luís H. 2015-01-01 The aim of the present article is to describe a puzzle developed for use in teaching cardiac physiology classes. The puzzle presents figures of phases of the cardiac cycle and a table with five columns: phases of cardiac cycle, atrial state, ventricular state, state of atrioventricular valves, and pulmonary and aortic valves. Chips are provided…
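Entry 3 above (the bioinformatics puzzle generator) is organized around the central dogma: reads are assembled into a gene and the gene is translated into a hidden "protein word". The fragment below illustrates only the translation step, with a deliberately tiny codon table that covers just the example sequence; it is not the probabilistic logic program described in that record.

```python
# Minimal codon-to-amino-acid translation, illustrating the central-dogma step
# used by the puzzle. The table below is intentionally incomplete (toy example).
CODON_TABLE = {
    "ATG": "M", "GAA": "E", "GAG": "E", "AAT": "N", "GAT": "D",
    "CTG": "L", "TAA": "*", "TAG": "*", "TGA": "*",   # '*' marks a stop codon
}

def translate(dna: str) -> str:
    """Translate an open reading frame until a stop codon or an unknown triplet."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE.get(dna[i:i + 3])
        if aa is None or aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

if __name__ == "__main__":
    print(translate("ATGGAAAATGATCTGTAA"))  # prints "MENDL"
```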
6. The Clock Is Ticking: Library Orientation as Puzzle Room Science.gov (United States) 2017-01-01 Tripp Reade is the school librarian at Cardinal Gibbons High School in Raleigh, North Carolina. This article describes how he redesigned his school's library orientation program after learning about escape rooms and a variant known as puzzle rooms. Puzzle rooms present players with a set of challenges to solve; they require "teamwork,… 7. Crossword Puzzles as Learning Tools in Introductory Soil Science Science.gov (United States) Barbarick, K. A. 2010-01-01 Students in introductory courses generally respond favorably to novel approaches to learning. To this end, I developed and used three crossword puzzles in spring and fall 2009 semesters in Introductory Soil Science Laboratory at Colorado State University. The first hypothesis was that crossword puzzles would improve introductory soil science… 8. Matter-antimatter puzzle: LHCb improves resolution CERN Multimedia Antonella Del Rosso 2012-01-01 In 2010, Fermilab’s DØ experiment reported a one percent difference in the properties of matter and antimatter in decays of B mesons (that is, particles containing beauty quarks) to muons. Saturday, at the ICHEP Conference in Melbourne, the LHCb experiment at CERN presents new results, which do not confirm this anomaly and are consistent with the Standard Model predictions. The same experiment has also presented the first evidence of asymmetry arising in other decays of the same family of mesons. The image becomes clearer but the puzzle has not yet been solved. (Photo caption: Inside the LHCb detector.) The matter-antimatter imbalance in the Universe is a very hot topic in physics. The conundrum arises from the fact that, although objects made of antimatter are not observed in the Universe, theory predicts that matter and antimatter be created equally in particle interactions and in the Big Bang. Only small deviations from this very symmetric behaviour are incorporated in the theory. E... 9. A new piece of the puzzle CERN Multimedia 2005 (Photo caption: The team responsible for the installation of the hadronic calorimeter's central barrel after completion of the assembly work.) Assembly of the great ATLAS puzzle continues underground. On 10 December, the final module of the central barrel of the tile hadronic calorimeter was assembled. This piece of the tile calorimeter had already been assembled above ground during a "dress rehearsal" in 2003 (see Bulletin no 46/2003, 10 November 2003). The hadronic calorimeter's two other barrels, the so-called "extended barrels", remain to be assembled with this first central barrel, which now surrounds the electromagnetic calorimeter barrel that was lowered into the cavern at the end of October. At the end of November, the second of the eight barrel toroid coils was also installed. 10.
Heavy quarkonium: progress, puzzles, and opportunities CERN Document Server Brambilla, N; Heltsley, B K; Vogt, R; Bodwin, G T; Eichten, E; Frawley, A D; Meyer, A B; Mitchell, R E; Papadimitriou, V; Petreczky, P; Petrov, A A; Robbe, P; Vairo, A; Andronic, A; Arnaldi, R; Artoisenet, P; Bali, G; Bertolin, A; Bettoni, D; Brodzicka, J; Bruno, G E; Caldwell, A; Catmore, J; Chang, C H; Chao, K T; Chudakov, E; Cortese, P; Crochet, P; Drutskoy, A; Ellwanger, U; Faccioli, P; Gabareen Mokhtar, A; Garcia i Tormo, X; Hanhart, C; Harris, F A; Kaplan, D M; Klein, S R; Kowalski, H; Lansberg, J P; Levichev, E; Lombardo, V; Lourenco, C; Maltoni, F; Mocsy, A; Mussa, R; Navarra, F S; Negrini, M; Nielsen, M; Olsen, S L; Pakhlov, P; Pakhlova, G; Peters, K; Polosa, A D; Qian, W; Qiu, J W; Rong, G; Sanchis-Lozano, M A; Scomparin, E; Senger, P; Simon, F; Stracka, S; Sumino, Y; Voloshin, M; Weiss, C; Wohri, H K; Yuan, C Z 2011-01-01 A golden age for heavy quarkonium physics dawned a decade ago, initiated by the confluence of exciting advances in quantum chromodynamics (QCD) and an explosion of related experimental activity. The early years of this period were chronicled in the Quarkonium Working Group (QWG) CERN Yellow Report (YR) in 2004, which presented a comprehensive review of the status of the field at that time and provided specific recommendations for further progress. However, the broad spectrum of subsequent breakthroughs, surprises, and continuing puzzles could only be partially anticipated. Since the release of the YR, the BESII program concluded only to give birth to BESIII; the $B$-factories and CLEO-c flourished; quarkonium production and polarization measurements at HERA, JLab, and the Tevatron matured; and heavy-ion collisions at RHIC have opened a window on the deconfinement regime. All these experiments leave legacies of quality, precision, and unsolved mysteries for quarkonium physics, and therefore beg for continuing ... 11. SOLVING THE PUZZLE OF SUBHALO SPINS Energy Technology Data Exchange (ETDEWEB) Wang, Yang; Lin, Weipeng [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Shanghai 200030 (China); Pearce, Frazer R.; Lux, Hanni; Onions, Julian [School of Physics and Astronomy, University of Nottingham, Nottingham, NG7 2RD (United Kingdom); Muldrew, Stuart I., E-mail: [email protected], E-mail: [email protected] [Department of Physics and Astronomy, University of Leicester, University Road, Leicester, LE1 7RH (United Kingdom) 2015-03-10 Investigating the spin parameter distribution of subhalos in two high-resolution isolated halo simulations, recent work by Onions et al. suggested that typical subhalo spins are consistently lower than the spin distribution found for field halos. To further examine this puzzle, we have analyzed simulations of a cosmological volume with sufficient resolution to resolve a significant subhalo population. We confirm the result of Onions et al. and show that the typical spin of a subhalo decreases with decreasing mass and increasing proximity to the host halo center. We interpret this as the growing influence of tidal stripping in removing the outer layers, and hence the higher angular momentum particles, of the subhalos as they move within the host potential. Investigating the redshift dependence of this effect, we find that the typical subhalo spin is smaller with decreasing redshift. This indicates a temporal evolution, as expected in the tidal stripping scenario. 12. 
Solar Twins and the Barium Puzzle International Nuclear Information System (INIS) Reddy, Arumalla B. S.; Lambert, David L. 2017-01-01 Several abundance analyses of Galactic open clusters (OCs) have shown a tendency for Ba but not for other heavy elements (La−Sm) to increase sharply with decreasing age such that Ba was claimed to reach [Ba/Fe] ≃ +0.6 in the youngest clusters (ages < 100 Myr) rising from [Ba/Fe] = 0.00 dex in solar-age clusters. Within the formulation of the s-process, the difficulty of replicating the higher Ba abundance and normal La−Sm abundances in young clusters is known as the barium puzzle. Here, we investigate the barium puzzle using extremely high-resolution and high signal-to-noise spectra of 24 solar twins and measured the heavy elements Ba, La, Ce, Nd, and Sm with a precision of 0.03 dex. We demonstrate that the enhanced Ba II relative to La−Sm seen among solar twins, stellar associations, and OCs at young ages (<100 Myr) is unrelated to aspects of stellar nucleosynthesis but has resulted from overestimation of Ba by standard methods of LTE abundance analysis in which the microturbulence derived from the Fe lines formed deep in the photosphere is insufficient to represent the true line broadening imposed on Ba II lines by the upper photospheric layers from where the Ba II lines emerge. Because the young stars have relatively active photospheres, Ba overabundances most likely result from the adoption of too low a value of microturbulence in the spectrum synthesis of the strong Ba II lines, but the change of microturbulence in the upper photosphere has only a minor effect on La−Sm abundances measured from the weak lines. 13. Exfoliation syndrome: assembling the puzzle pieces. Science.gov (United States) Pasquale, Louis R; Borrás, Terete; Fingert, John H; Wiggs, Janey L; Ritch, Robert 2016-09-01 To summarize various topics and the cutting-edge approaches to refine XFS pathogenesis that were discussed at the 21st annual Glaucoma Foundation Think Tank meeting in New York City, Sept. 19-20, 2014. The highlights of three categories of talks on cutting-edge research in the field were summarized. Exfoliation syndrome (XFS) is a systemic disorder with a substantial ocular burden, including high rates of cataract, cataract surgery complications, glaucoma and retinal vein occlusion. New information about XFS is akin to puzzle pieces that do not quite join together to reveal a clear picture regarding how exfoliation material (XFM) forms. Meeting participants concluded that it is unclear how the mild homocysteinemia seen in XFS might contribute to the disarrayed extracellular aggregates characteristic of this syndrome. Lysyl oxidase-like 1 (LOXL1) variants are unequivocally genetic risk factors for XFS, but exactly how these variants contribute to the assembly of exfoliation material (XFM) remains unclear. Variants in a new genomic region, CACNA1A, associated with XFS may alter calcium concentrations at the cell surface and facilitate XFM formation, but much more work is needed before we can place this new finding in proper context. It is hoped that various animal model and ex vivo systems will emerge that will allow for proper assembly of the puzzle pieces into a coherent picture of XFS pathogenesis. A clear understanding of XFS pathogenesis may lead to 'upstream solutions' to reduce the ocular morbidity produced by XFS. © 2015 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd. 14.
Solar Twins and the Barium Puzzle Energy Technology Data Exchange (ETDEWEB) Reddy, Arumalla B. S.; Lambert, David L., E-mail: [email protected] [W.J. McDonald Observatory and Department of Astronomy, The University of Texas at Austin, Austin, TX 78712-1205 (United States)] 2017-08-20 Several abundance analyses of Galactic open clusters (OCs) have shown a tendency for Ba but not for other heavy elements (La−Sm) to increase sharply with decreasing age such that Ba was claimed to reach [Ba/Fe] ≃ +0.6 in the youngest clusters (ages < 100 Myr) rising from [Ba/Fe] = 0.00 dex in solar-age clusters. Within the formulation of the s-process, the difficulty of replicating the higher Ba abundance and normal La−Sm abundances in young clusters is known as the barium puzzle. Here, we investigate the barium puzzle using extremely high-resolution and high signal-to-noise spectra of 24 solar twins and measured the heavy elements Ba, La, Ce, Nd, and Sm with a precision of 0.03 dex. We demonstrate that the enhanced Ba II relative to La−Sm seen among solar twins, stellar associations, and OCs at young ages (<100 Myr) is unrelated to aspects of stellar nucleosynthesis but has resulted from overestimation of Ba by standard methods of LTE abundance analysis in which the microturbulence derived from the Fe lines formed deep in the photosphere is insufficient to represent the true line broadening imposed on Ba II lines by the upper photospheric layers from where the Ba II lines emerge. Because the young stars have relatively active photospheres, Ba overabundances most likely result from the adoption of too low a value of microturbulence in the spectrum synthesis of the strong Ba II lines, but the change of microturbulence in the upper photosphere has only a minor effect on La−Sm abundances measured from the weak lines. 15. A Resolution of the Purchasing Power Parity Puzzle DEFF Research Database (Denmark) Frydman, Roman; Goldberg, Michael D.; Johansen, Søren Asset prices undergo long swings that revolve around benchmark levels. In currency markets, fluctuations involve real exchange rates that are highly persistent and that move in near-parallel fashion with nominal rates. The inability to explain these two regularities with one model has been called the "Purchasing Power Parity puzzle". In this paper, we trace the puzzle to exchange rate modelers' use of the "Rational Expectations Hypothesis". We show that once imperfect knowledge is recognized, a monetary model is able to account for the puzzle, as well as other salient features of the data, including... 16. Real exchange rate persistence and the excess return puzzle DEFF Research Database (Denmark) Juselius, Katarina; Assenmacher, Katrin 2017-01-01 The PPP puzzle refers to the wide swings of nominal exchange rates around their long-run equilibrium values whereas the excess return puzzle represents the persistent deviation of the domestic-foreign interest rate differential from the expected change in the nominal exchange rate. Using the I(2) cointegrated VAR model, much of the excess return puzzle disappears when an uncertainty premium in the foreign exchange market, proxied by the persistent PPP gap, is introduced. Self-reinforcing feedback mechanisms seem to cause the persistence in the Swiss-US parity conditions. These results support imperfect... 17. Teaching the Blue-Eyed Islanders Puzzle in a Liberal Arts Mathematics Course Science.gov (United States) Shea, Stephen 2012-01-01 The blue-eyed islanders puzzle is an old and challenging logic puzzle.
This is a narrative of an experience introducing a variation of this puzzle on the first day of classes in a liberal arts mathematics course for non-majors. I describe an exercise that was used to facilitate the class's understanding of the puzzle. 18. Puzzling through General Chemistry: A Light-Hearted Approach to Engaging Students with Chemistry Content Science.gov (United States) Boyd, Susan L. 2007-01-01 Several puzzles are designed to be used by chemistry students as learning tools and teach them basic chemical concepts. The topics of the puzzles are based on the chapters from Chemistry, The Central Science used in the general chemistry course, and the puzzles are in various forms like crosswords, word searches, number searches, puzzles based on… 19. An Analysis of Closed-end Fund Puzzle for Emerging Capital Markets Directory of Open Access Journals (Sweden) Victor Dragota 2008-10-01 This paper analyzes the closed-end fund puzzle for an emerging capital market, namely the Romanian one. Compared to more developed markets, small markets are often very illiquid, so specific valuation techniques have to be used in order to estimate the market values of closed-end funds. Another problem is that this estimation can be made only at certain (punctual) moments. 20. Formative Assessment Probes: Mountaintop Fossil: A Puzzling Phenomenon Science.gov (United States) Keeley, Page 2015-01-01 This column focuses on promoting learning through assessment. This month's issue describes using formative assessment probes to uncover several ways of thinking about the puzzling discovery of a marine fossil on top of a mountain. 1. Stateless Puzzles for Real Time Online Fraud Preemption OpenAIRE Rahman, Mizanur; Recabarren, Ruben; Carbunar, Bogdan; Lee, Dongwon 2017-01-01 The profitability of fraud in online systems such as app markets and social networks marks the failure of existing defense mechanisms. In this paper, we propose FraudSys, a real-time fraud preemption approach that imposes Bitcoin-inspired computational puzzles on the devices that post online system activities, such as reviews and likes. We introduce and leverage several novel concepts that include (i) stateless, verifiable computational puzzles, that impose minimal performance overhead, but e... (A generic proof-of-work sketch is given after entry 2 below.) 2. The Puzzle of the Scandinavian Welfare State and Social Trust DEFF Research Database (Denmark) Svendsen, Gunnar Lind Haase; Svendsen, Gert Tinggaard 2015-01-01 The Scandinavian welfare model is a puzzle to economists: It works economically, even though free-riding should prevail with its explosive cocktail of high taxation and high social benefits. One overlooked solution to the puzzle could be the unique stock of social trust present in Scandinavia. Here, the four Scandinavian countries (Norway, Denmark, Sweden, and Finland) form the top three with scores above 60 percent social trust on a ranking that covers 94 countries from all over the world.
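Entry 1 above (FraudSys) rate-limits fraudulent activity by attaching Bitcoin-inspired computational puzzles to posted actions. The snippet below is a generic hashcash-style proof-of-work, shown only to make the underlying asymmetry concrete: finding a nonce whose hash has enough leading zero bits is costly, while checking it is instant. It is not the stateless, verifiable construction proposed in that paper, and the difficulty value is arbitrary.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(activity: str, difficulty: int) -> int:
    """Find a nonce so that sha256(activity || nonce) has `difficulty` leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{activity}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(activity: str, nonce: int, difficulty: int) -> bool:
    """Verification needs a single hash evaluation."""
    digest = hashlib.sha256(f"{activity}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= difficulty

if __name__ == "__main__":
    action = "review:device42:item7"          # hypothetical activity identifier
    nonce = solve(action, difficulty=20)      # ~2**20 hash evaluations on average
    print(nonce, verify(action, nonce, 20))
```

Each additional bit of difficulty doubles the expected number of hash evaluations, which is how schemes of this kind throttle high-volume abuse while leaving single legitimate actions cheap.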
3. CROSSWORD PUZZLE INCREASE ATTENTION OF CHILDREN WITH ADHD Directory of Open Access Journals (Sweden) Ah. Yusuf 2017-07-01 Introduction: Attention deficit is one of the three main problems of children with Attention Deficit Hyperactivity Disorder (ADHD). These children have difficulty paying attention and concentrating on one or more things or objects, and as a result they cannot perform tasks well. The crossword puzzle is one of the games that may increase attention and concentration. The aim of this study was to analyze the effect of crossword puzzles on the attention of children with ADHD. Method: A pre-experimental design was employed in this study. The population was ADHD students at Cakra Autisme Therapy. Seven students were recruited by means of purposive sampling. The independent variable was the crossword puzzle and the dependent variable was the increase in attention. Data were collected using an observation sheet and analyzed using the Wilcoxon Signed Rank Test with a level of significance of α ≤ 0.05. Result: Results showed that crossword puzzles could increase attention. Respondents' attention improved from poor to good, particularly in playing activities (p = 0.014). Analysis: This finding suggests that there were differences in attention between pre- and post-test. It can be concluded that crossword puzzles can increase the attention of students with ADHD. Discussion: It is recommended that teachers and parents of children with ADHD give them a crossword puzzle game every day at school or at home. Further studies should involve a larger sample size and employ other games, not only to increase attention but also to reduce the hyperactivity and impulsivity of children with ADHD. 4. LEARNING VOCABULARY THROUGH COLOURFUL PUZZLE GAME Directory of Open Access Journals (Sweden) Risca Dwiaryanti 2014-05-01 Vocabulary plays an important role because it is linked to the four skills of listening, speaking, reading, and writing, and those aspects should be integrated in the teaching and learning of English. Students must know the meaning of each English word in order to master the four skills, since vocabulary is the means of creating sentences in daily communication to express feelings, opinions, ideas and desires so that both speakers understand each other. However, English as a second language in Indonesia seems very hard for students when it comes to mastering vocabulary; it is not easy for them to be understood directly and to speak fluently. Students sometimes have difficulty understanding and memorizing the meaning of vocabulary and get confused when using new words. There must be an effective strategy to attract students' interest, break the boredom, and make the class more lively. Based on the writer's experience, the Colourful Puzzle Game is able to make students learn vocabulary quickly. It requires the teacher's creativity to create the materials of this game based on the class condition: the teacher just needs a game board made from colourful papers with command and prohibition words written on it, a dice to decide where a player should stop, and some pins as counters to mark each player. 5. Immature stages and phylogenetic importance of Astrapaeus, a rove beetle genus of puzzling systematic position (Coleoptera, Staphylinidae, Staphylinini) NARCIS (Netherlands) Pietrykowska-Tudruj, E.; Staniec, B.; Wojas, T.; Alexey, A. 2014-01-01 For the first time eggs, larvae and pupae obtained by rearing are described for Astrapaeus, a monotypic West Palearctic rove beetle genus of a puzzling phylogenetic position within the megadiverse tribe Staphylinini. Morphology of the immature stages of Astrapaeus ulmi is compared to that of other 6. Brodmann area 12: an historical puzzle relevant to FTLD.
Science.gov (United States) Kawamura, M; Miller, M W; Ichikawa, H; Ishihara, K; Sugimoto, A 2011-05-03 Brodmann brain maps, assembled in 1909, are still in use, but understanding of their animal-human homology is uncertain. Furthermore, in 1909, Brodmann did not identify human area 12 (BA12), a location now important to understanding of frontotemporal lobar degeneration (FTLD). We re-examined Brodmann's areas, both animal and human, in his 1909 monograph and other literature, both historical and contemporary, and projected BA12 onto the medial surface of a fixed human brain to show its location. We found Brodmann did identify human BA12 in later maps (1910 and 1914), but that his brain areas, contrary to his own aims as a comparative anatomist, are now used as physiologic loci in human brain. Because of its current link with frontotemporal dementia, BA12's transition from animal (1909) to human (1910 and 1914) is not only an historical puzzle. It impacts how Brodmann's areas, based on comparative animal-human cytoarchitecture, are widely used in current research as functional loci in human brain. 7. Heavy quarkonium: progress, puzzles, and opportunities Energy Technology Data Exchange (ETDEWEB) Brambilla, N; Heltsley, B K; Vogt, R; Bodwin, G T; Eichten, E; Frawley, A D; Meyer, A B; Mitchell, R E; Papdimitriou, V; Petreczky, P; Petrov, A A; Robbe, P; Vairo, A; Andronic, A; Arnaldi, R; Artoisenet, P; Bali, G; Bertolin, A; Bettoni, D; Brodzicka, J; Bruno, G E; Caldwell, A; Catmore, J; Chang, C -H; Chao, K -T; Chudakov, E; Cortese, P; Crochet, P; Drutskoy, A; Ellwanger, U; Faccioli, P; Gabareen Mokhtar, A; Garcia i Tormo, X; Hanhart, C; Harris, F A; Kaplan, D M; Klein, S R; Kowalski, H; Lansberg, J -P; Levichev, E; Lombardo, V; Loureno, C; Maltoni, F; Mocsy, A; Mussa, R; Navarra, F S; Negrini, M; Nielsen, M; Olsen, S L; Pakhlov, P; Pakhlova, G; Peters, K; Polosa, A D; Qian, W; Qiu, J -W; Rong, G; Sanchis-Lozano, M A; Scomparin, E; Senger, P; Simon, F; Stracka, S; Sumino, Y; Voloshin, M; Weiss, C; Wohri, H K; Yuan, C -Z 2011-02-01 A golden age for heavy quarkonium physics dawned a decade ago, initiated by the confluence of exciting advances in quantum chromodynamics (QCD) and an explosion of related experimental activity. The early years of this period were chronicled in the Quarkonium Working Group (QWG) CERN Yellow Report (YR) in 2004, which presented a comprehensive review of the status of the field at that time and provided specific recommendations for further progress. However, the broad spectrum of subsequent breakthroughs, surprises, and continuing puzzles could only be partially anticipated. Since the release of the YR, the BESII program concluded only to give birth to BESIII; the $B$-factories and CLEO-c flourished; quarkonium production and polarization measurements at HERA, JLab, and the Tevatron matured; and heavy-ion collisions at RHIC have opened a window on the deconfinement regime. All these experiments leave legacies of quality, precision, and unsolved mysteries for quarkonium physics, and therefore beg for continuing investigations. The plethora of newly-found quarkonium-like states unleashed a flood of theoretical investigations into new forms of matter such as quark-gluon hybrids, mesonic molecules, and tetraquarks. Measurements of the spectroscopy, decays, production, and in-medium behavior of c\\bar{c}, b\\bar{b}, and b\\bar{c} bound states have been shown to validate some theoretical approaches to QCD and highlight lack of quantitative success for others. 
The intriguing details of quarkonium suppression in heavy-ion collisions that have emerged from RHIC have elevated the importance of separating hot- and cold-nuclear-matter effects in quark-gluon plasma studies. This review systematically addresses all these matters and concludes by prioritizing directions for ongoing and future efforts. 8. Modular Extracellular Matrices: Solutions for the Puzzle Science.gov (United States) Serban, Monica A.; Prestwich, Glenn D. 2008-01-01 The common technique of growing cells in two-dimensions (2-D) is gradually being replaced by culturing cells on matrices with more appropriate composition and stiffness, or by encapsulation of cells in three-dimensions (3-D). The universal acceptance of the new 3-D paradigm has been constrained by the absence of a commercially available, biocompatible material that offers ease of use, experimental flexibility, and a seamless transition from in vitro to in vivo applications. The challenge – the puzzle that needs a solution – is to replicate the complexity of the native extracellular matrix (ECM) environment with the minimum number of components necessary to allow cells to rebuild and replicate a given tissue. For use in drug discovery, toxicology, cell banking, and ultimately in reparative medicine, the ideal matrix would therefore need to be highly reproducible, manufacturable, approvable, and affordable. Herein we describe the development of a set of modular components that can be assembled into biomimetic materials that meet these requirements. These semi-synthetic ECMs, or sECMs, are based on hyaluronan derivatives that form covalently crosslinked, biodegradable hydrogels suitable for 3-D culture of primary and stem cells in vitro, and for tissue formation in vivo. The sECMs can be engineered to provide appropriate biological cues needed to recapitulate the complexity of a given ECM environment. Specific applications for different sECM compositions include stem cell expansion with control of differentiation, scar-free wound healing, growth factor delivery, cell delivery for osteochondral defect and liver repair, and development of vascularized tumor xenografts for personalized chemotherapy. PMID:18442709 9. Is the proton radius puzzle evidence of extra dimensions? Energy Technology Data Exchange (ETDEWEB) Dahia, F.; Lemos, A.S. [Universidade Federal da Paraiba, Department of Physics, Joao Pessoa, PB (Brazil) 2016-08-15 The proton charge radius inferred from muonic hydrogen spectroscopy is not compatible with the previous value given by CODATA-2010, which, on its turn, essentially relies on measurements of the electron-proton interaction. The proton's new size was extracted from the 2S-2P Lamb shift in the muonic hydrogen, which showed an energy excess of 0.3 meV in comparison to the theoretical prediction, evaluated with the CODATA radius. Higher-dimensional gravity is a candidate to explain this discrepancy, since the muon-proton gravitational interaction is stronger than the electron-proton interaction and, in the context of braneworld models, the gravitational potential can be hugely amplified in short distances when compared to the Newtonian potential. Motivated by these ideas, we study a muonic hydrogen confined in a thick brane. We show that the muon-proton gravitational interaction modified by extra dimensions can provide the additional separation of 0.3 meV between the 2S and 2P states. 
In this scenario, the gravitational energy depends on the higher-dimensional Planck mass and indirectly on the brane thickness. Studying the behavior of the gravitational energy with respect to the brane thickness in a realistic range, we find constraints for the fundamental Planck mass that solve the proton radius puzzle and are consistent with previous experimental bounds. (orig.) 10. Evaluation of the three-nucleon analyzing power puzzle International Nuclear Information System (INIS) Tornow, W.; Witala, H. 1998 The current status of the three-nucleon analyzing power puzzle is reviewed. Applying tight constraints on the allowed deviations between calculated predictions and accepted values for relevant nucleon-nucleon observables reveals that energy-independent correction factors applied to the $^3P_j$ nucleon-nucleon interactions cannot solve the puzzle. Furthermore, using the same constraints, charge-independence breaking in the $^3P_j$ nucleon-nucleon interactions can be ruled out as a possible tool to improve the agreement between three-nucleon calculations and data. The study of the energy dependence of the three-nucleon analyzing power puzzle gives clear evidence that the $^3P_j$ nucleon-nucleon interactions obtained from phase-shift analyses and used in potential models are correct above about 25 MeV, i.e., the $^3P_j$ nucleon-nucleon interactions have to be modified only at lower energies in order to solve the three-nucleon analyzing power puzzle, unless new three-nucleon forces can be found that account for the three-nucleon analyzing power puzzle without destroying the beautiful agreement between rigorous three-nucleon calculations and a large body of accurate three-nucleon data. (orig.) 11. Three Modes of Hydrogeophysical Investigation: Puzzles, Mysteries, and Conundrums Science.gov (United States) Ferre, P. A. 2011-12-01 In an article in the New Yorker in 2007, Malcolm Gladwell discussed the distinction that national security expert Gregory Treverton has made between puzzles and mysteries. Specifically, puzzles are problems that we understand and that will eventually be solved when we amass enough information. (Think crossword puzzles.) Mysteries are problems for which we have the necessary information, but it is often overwhelmed by irrelevant or misleading input. To solve a mystery, we require improved analysis. (Think find-a-word.) Gladwell goes on to explain that, in the national security realm, the Cold War was a puzzle while the current national security condition is a mystery. I will discuss the past, current, and future trajectories of hydrogeophysics in terms of puzzles and mysteries. I will also add a third class of problem: conundrums - those for which we lack sufficient information about their structure to know how to solve them. A conundrum is a mystery with an unexpected twist. I hope to make the case that the future growth of hydrogeophysics lies in our ability to address this more challenging and more interesting class of problem. 12. Evaluation of the three-nucleon analyzing power puzzle Energy Technology Data Exchange (ETDEWEB) Tornow, W. [Duke Univ., Durham, NC (United States). Dept. of Physics; Triangle Univ. Nuclear Lab., Durham, NC (United States)]; Witala, H. [Uniwersytet Jagiellonski, Cracow (Poland). Inst. Fizyki] 1998-07-20 The current status of the three-nucleon analyzing power puzzle is reviewed.
Applying tight constraints on the allowed deviations between calculated predictions and accepted values for relevant nucleon-nucleon observables reveals that energy-independent correction factors applied to the $^3P_j$ nucleon-nucleon interactions cannot solve the puzzle. Furthermore, using the same constraints, charge-independence breaking in the $^3P_j$ nucleon-nucleon interactions can be ruled out as a possible tool to improve the agreement between three-nucleon calculations and data. The study of the energy dependence of the three-nucleon analyzing power puzzle gives clear evidence that the $^3P_j$ nucleon-nucleon interactions obtained from phase-shift analyses and used in potential models are correct above about 25 MeV, i.e., the $^3P_j$ nucleon-nucleon interactions have to be modified only at lower energies in order to solve the three-nucleon analyzing power puzzle, unless new three-nucleon forces can be found that account for the three-nucleon analyzing power puzzle without destroying the beautiful agreement between rigorous three-nucleon calculations and a large body of accurate three-nucleon data. (orig.) 18 refs. 13. On puzzles and non-puzzles in B→ππ,πK decays International Nuclear Information System (INIS) Fleischer, R.; Recksiegel, S.; Schwab, F. 2007-01-01 Recently, we have seen interesting progress in the exploration of CP violation in $B_d^0\to\pi^+\pi^-$: the measurements of mixing-induced CP violation by the BaBar and Belle collaborations are now in good agreement with each other, whereas the picture of direct CP violation is still unclear. Using the branching ratio and direct CP asymmetry of $B_d^0\to\pi^-K^+$, this situation can be clarified. We predict $A_{\rm CP}^{\rm dir}(B_d\to\pi^+\pi^-)=-0.24\pm 0.04$, which favours the BaBar result, and we extract $\gamma=(70.0^{+3.8}_{-4.3})^\circ$, which agrees with the unitarity triangle fits. Extending our analysis to other B→πK modes and $B_s^0\to K^+K^-$ with the help of the SU(3) flavour symmetry and plausible dynamical assumptions, we find that all observables with colour-suppressed electroweak penguin contributions are measured to be in excellent agreement with the standard model. As far as the ratios $R_{\rm c,n}$ of the charged and neutral B→πK branching ratios are concerned, which are sizeably affected by electroweak penguin contributions, our standard-model predictions have almost unchanged central values but significantly reduced errors. Since the new data have moved quite a bit towards these results, the 'B→πK puzzle' for the CP-conserving quantities has been significantly reduced. However, the mixing-induced CP violation of $B_d^0\to\pi^0 K_S$ does look puzzling; if confirmed by future measurements, this effect could be accommodated through a modified electroweak penguin sector with a large CP-violating new-physics phase. Finally, we point out that the established difference between the direct CP asymmetries of $B^\pm\to\pi^0 K^\pm$ and $B_d\to\pi^\mp K^\pm$ appears to be generated by hadronic and not by new physics. (orig.) 14. Distress risk and leverage puzzles: Evidence from Taiwan Directory of Open Access Journals (Sweden) Kung-Cheng Ho 2016-05-01 Financial distress has been invoked in the asset pricing literature to explain the anomalous patterns in the cross-section of stock returns. The risk of financial distress can be measured using indexes. George and Hwang (2010) suggest that leverage can explain the distress risk puzzle and that firms with high costs choose low leverage to reduce distress intensities and earn high returns.
This study investigates whether this relationship exists in the Taiwan market. When examined separately, distress intensity is found to be negatively related to stock returns, but leverage is found to not be significantly related to stock returns. The results are the same when distress intensity and leverage are examined simultaneously. After assessing the robustness by using O-scores, distress risk puzzle is found to exist in the Taiwan market, but the leverage puzzle is not 15. International trade network: fractal properties and globalization puzzle. Science.gov (United States) Karpiarz, Mariusz; Fronczak, Piotr; Fronczak, Agata 2014-12-12 Globalization is one of the central concepts of our age. The common perception of the process is that, due to declining communication and transport costs, distance becomes less and less important. However, the distance coefficient in the gravity model of trade, which grows in time, indicates that the role of distance increases rather than decreases. This, in essence, captures the notion of the globalization puzzle. Here, we show that the fractality of the international trade system (ITS) provides a simple solution for the puzzle. We argue that the distance coefficient corresponds to the fractal dimension of ITS. We provide two independent methods, the box counting method and spatial choice model, which confirm this statement. Our results allow us to conclude that the previous approaches to solving the puzzle misinterpreted the meaning of the distance coefficient in the gravity model of trade. 16. The B→πK puzzle and supersymmetry International Nuclear Information System (INIS) Imbeault, Maxime; Baek, Seungwon; London, David 2008-01-01 At present, there are discrepancies between the measurements of several observables in B→πK decays and the predictions of the Standard Model (the 'B→πK puzzle'). Although the effect is not yet statistically significant-it is at the level of ≥3σ-it does hint at the presence of new physics. In this Letter, we explore whether supersymmetry (SUSY) can explain the B→πK puzzle. In particular, we consider the SUSY model of Grossman, Neubert and Kagan (GNK). We find that it is extremely unlikely that GNK explains the B→πK data. We also find a similar conclusion in many other models of SUSY. And there are serious criticisms of the two SUSY models that do reproduce the B→πK data. If the B→πK puzzle remains, it could pose a problem for SUSY models 17. THE EQUITY PREMIUM PUZZLE AND EMOTIONAL ASSET PRICING OpenAIRE MARC GÜRTLER; NORA HARTMANN 2007-01-01 "Since the equity premium as well as the risk-free rate puzzle question the concepts central to financial and economic modeling, we apply behavioral decision theory to asset pricing in view of solving these puzzles. U.S. stock market data for the period 1960-2003 and German stock market data for the period 1977-2003 show that emotional investors who act in accordance to Bell's (1985) disappointment theory -a special case of prospect theory- and additionally administer mental accounts demand a... 18. An overview of heavy quark energy loss puzzle at RHIC International Nuclear Information System (INIS) Djordjevic, Magdalena 2006-01-01 We give a theoretical overview of the heavy quark tomography puzzle posed by recent non-photonic single electron data from central Au+Au collisions at √s = 200A GeV. We show that radiative energy loss mechanisms alone are not able to explain large single electron suppression data, as long as realistic parameter values are assumed. 
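The trade-network entry above (no. 15) rests on a concrete computational idea: estimate the fractal dimension of the trade system with the box-counting method and compare it with the gravity model's distance coefficient. As a rough illustration of box counting only — the point set, box sizes, and function names below are invented for the sketch and are not taken from the paper:

```python
import numpy as np

def box_counting_dimension(points, box_sizes):
    """Estimate the box-counting (fractal) dimension of a 2-D point set.

    For each box size s, count how many grid cells of side s contain at least
    one point, then fit log N(s) against log(1/s); the slope is the estimate.
    """
    points = np.asarray(points, dtype=float)
    counts = []
    for s in box_sizes:
        # Assign every point to a grid cell of side s and count distinct cells.
        cells = np.floor(points / s).astype(int)
        counts.append(len({tuple(c) for c in cells}))
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for trading locations; real trade data would go here.
    pts = rng.random((5000, 2))
    print(box_counting_dimension(pts, box_sizes=[0.5, 0.25, 0.125, 0.0625, 0.03125]))
```

For uniformly scattered points the fitted slope comes out near 2; a genuinely fractal arrangement gives a non-integer value, which is the kind of exponent the authors identify with the gravity model's distance coefficient.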
We argue that a combined collisional and radiative pQCD approach can solve a substantial part of the non-photonic single electron puzzle 19. The Meissner effect puzzle and the quantum force in superconductor International Nuclear Information System (INIS) Nikulov, A.V. 2012-01-01 The puzzle of the acceleration of the mobile charge carriers and the ions in the superconductor in direction opposite to the electromagnetic force revealed formerly in the Meissner effect is considered in the case of the transition of a narrow ring from normal to superconducting state. It is elucidated that the azimuthal quantum force was deduced eleven years ago from the experimental evidence of this acceleration but it cannot solve this puzzle. This quantum force explains other paradoxical phenomena connected with reiterated switching of the ring between normal and superconducting states. 20. The Closed-End Funds Puzzle: A Survey Review Directory of Open Access Journals (Sweden) Marta Charrón 2009-09-01 Full Text Available The main objective of this paper is to explore the most salient research aimed at explaining the closed-end fund puzzle from both the traditional and behavioral finance perspectives. It provides a better understanding of closed-end fund behavior and motivates further research of closed-end funds, market efficiency, asset pricing and the traditional and behavioral finance paradigms. So far, none of the possible explanations from either traditional finance or behavioral finance have been able to fully account for the occurrence of the puzzle. It continues to be an important issue in the long standing debate between traditional finance and behavioral finance. 1. NICHD Research Networks Help Piece Together the Puzzle of Polycystic Ovary Syndrome Science.gov (United States) ... Print NICHD research networks help piece together the puzzle of polycystic ovary syndrome Many people think that ... more like putting together a thousand-piece jigsaw puzzle. Except that you can’t check the cover ... 2. Puzzles in modern biology. I. Male sterility, failure reveals design [version 1; referees: 2 approved Directory of Open Access Journals (Sweden) Steven A. Frank 2016-09-01 Full Text Available Many human males produce dysfunctional sperm. Various plants frequently abort pollen. Hybrid matings often produce sterile males. Widespread male sterility is puzzling. Natural selection prunes reproductive failure. Puzzling failure implies something that we do not understand about how organisms are designed. Solving the puzzle reveals the hidden processes of design. 3. The Effect of Monetary Policy on Exchange Rates : How to Solve the Puzzles NARCIS (Netherlands) Kumah, F.Y. 1996-01-01 Recent empirical research on the effects of monetary policy shocks on exchange rate fluctuations have encountered the exchange rate puzzle and th e forward discount bias puzzle.The exchange rate puzzle is the tendency of the domestic currency (of non-US G-7 countries) to depreciate against the US 4. Teaching Proofs and Algorithms in Discrete Mathematics with Online Visual Logic Puzzles Science.gov (United States) Cigas, John; Hsin, Wen-Jung 2005-01-01 Visual logic puzzles provide a fertile environment for teaching multiple topics in discrete mathematics. Many puzzles can be solved by the repeated application of a small, finite set of strategies. Explicitly reasoning from a strategy to a new puzzle state illustrates theorems, proofs, and logic principles. These provide valuable, concrete… 5. 
Games and puzzles for English as a second language CERN Document Server Fremont, Victoria 2011-01-01 Students can hone their verbal and grammatical skills with this entertaining workbook. Search-a-words, crossword puzzles, anagrams, and other challenges build vocabulary and spelling skills. They also help students understand and identify idioms, irregular past tenses and participles, and other linguistic stumbling blocks. Perfect for individual study or as a course supplement. 6. Towards a security model for computational puzzle schemes NARCIS (Netherlands) Tang, Qiang; Jeckmans, Arjan 2011-01-01 In the literature, computational puzzle schemes have been considered as a useful tool for a number of applications, such as constructing timed cryptography, fighting junk emails, and protecting critical infrastructure from denial-of-service attacks. However, there is a lack of a general security 7. General intelligence is an emerging property, not an evolutionary puzzle. Science.gov (United States) Ramus, Franck 2017-01-01 Burkart et al. contend that general intelligence poses a major evolutionary puzzle. This assertion presupposes a reification of general intelligence - that is, assuming that it is one "thing" that must have been selected as such. However, viewing general intelligence as an emerging property of multiple cognitive abilities (each with their own selective advantage) requires no additional evolutionary explanation. 8. Pedagogy Corner: The Architect's Puzzle Science.gov (United States) Lovitt, Charles 2017-01-01 Some years back, the author found the following problem in a spatial puzzle book: how many ways can you put four blocks together, face to face (with no vertical rotation symmetry)? He gave each student just four blocks and they collectively tried combinations to eventually agree on the answer of 15. He used to think it was a halfway decent task,… Science.gov (United States) Wang, Feihong; Algina, James; Snyder, Patricia; Cox, Martha 2017-01-01 We examined children's task engagement during a challenging puzzle task in the presence of their primary caregivers by using a representative sample of rural children from six high-poverty counties across two states. Weighted longitudinal confirmatory factor analysis and structural equation modeling were used to identify a task engagement factor… 10. Asset pricing puzzles explained by incomplete Brownian equilibria DEFF Research Database (Denmark) Christensen, Peter Ove; Larsen, Kasper We examine a class of Brownian based models which produce tractable incomplete equilibria. The models are based on finitely many investors with heterogeneous exponential utilities over intermediate consumption who receive partially unspanned income. The investors can trade continuously on a finit...... markets. Consequently, our model can simultaneously help explaining the risk-free rate and equity premium puzzles.... 11. What do we learn from the ρ-π puzzle International Nuclear Information System (INIS) Li Xueqian 2010-01-01 The experimental observation indicates that the branching ratio of ψ' →ρπ is very small while the ρ-π channel is a main one in J/ψ decays. To understand the puzzle, various interpretations have been proposed. Meanwhile according to the hadronic helicity selection rule, this decay mode should be suppressed. Numerical calculations are needed to determine how it is suppressed.We calculate the branching ratios of J/ψ→ρπ and ππ in the framework of QCD. 
The results show that the branching ratios are proportional to [(m_u + m_d)/M_(J/ψ)]^2 for the ρπ mode and [(m_u − m_d)/M_(J/ψ)]^2 for the isospin-violating ππ mode. The theoretical prediction for the ratio of J/ψ → ρπ is smaller than the data, but not so small as to require a completely new mechanism. Thus the puzzle still stands, even though much has been learned about it; this knowledge will help to finally interpret the puzzle and to gain a deeper insight into the heavy quarkonia. (author) 12. Crossword Puzzles as a Learning Tool for Vocabulary Development Science.gov (United States) Orawiwatnakul, Wiwat 2013-01-01 Introduction: Since vocabulary is a key basis on which reading achievement depends, various vocabulary acquisition techniques have become pivotal. Among the many teaching approaches, traditional or otherwise, the use of crossword puzzles seems to offer potential and a solution for the problem of learning vocabulary. Method: This study was… 13. Adding a Piece to the Leaf Epidermal Cell Shape Puzzle. Science.gov (United States) von Wangenheim, Daniel; Wells, Darren M; Bennett, Malcolm J 2017-11-06 The jigsaw puzzle-shaped pavement cells in the leaf epidermis collectively function as a load-bearing tissue that controls organ growth. In this issue of Developmental Cell, Majda et al. (2017) shed light on how the jigsaw shape can arise from localized variations in wall stiffness between adjacent epidermal cells. Copyright © 2017 Elsevier Inc. All rights reserved. 14. Unraveling "Braid": Puzzle Games and Storytelling in the Imperative Mood Science.gov (United States) Arnott, Luke 2012-01-01 "Unraveling Braid" analyzes how unconventional, non-linear narrative fiction can help explain the ways in which video games signify. Specifically, this essay looks at the links between the semiotic features of Jonathan Blow's 2008 puzzle-platform video game Braid and similar elements in Georges Perec's 1978 novel "Life A User's Manual," as well as… 15. The Potential of Crossword Puzzles in Aiding English Language Learners Science.gov (United States) Merkel, Warren 2016-01-01 In an academic environment, teachers utilize crossword puzzles to help students learn or remember terminology. Outside the classroom, typically in daily newspapers, crosswords aid in vocabulary development, used as a learning tool, a leisure activity, or both. However, both the content and the grid structure of the crosswords in these two… 16. Experimental status of the E/ι puzzle International Nuclear Information System (INIS) Lanaro, A. 1996-11-01 Despite the prolonged experimental effort devoted to the spectroscopy of the E/ι mesons, and the intense theoretical debate around this subject, many puzzling issues still prevent a full understanding of the true scenario. The advent of new and more precise experimental measurements motivates a review of this topic 17. Puzzling with potential : dynamic testing of analogical reasoning in children NARCIS (Netherlands) Stevenson, Claire Elisabeth 2012-01-01 Assessment procedures are frequent in children's school careers; however, measuring potential for learning has remained a puzzle. Dynamic testing is a method to assess cognitive potential that includes training in the assessment process. The goal of this thesis project was to develop a new dynamic 18. HIV in Japan: Epidemiologic puzzles and ethnographic explanations Directory of Open Access Journals (Sweden) Anthony S.
DiStefano 2016-12-01 Full Text Available Japan is widely perceived to have a low level of HIV occurrence; however, its HIV epidemics also have been the subject of considerable misunderstanding globally. I used a ground truthing conceptual framework to meet two aims: first, to determine how accurately official surveillance data represented Japan's two largest epidemics (urban Kansai and Tokyo as understood and experienced on the ground; and second, to identify explanations for why the HIV epidemics were unfolding as officially reported. I used primarily ethnographic methods while drawing upon epidemiology, and compared government surveillance data to observations at community and institutional sites (459 pages of field notes; 175 persons observed, qualitative interviews with stakeholders in local HIV epidemics (n = 32, and document research (n = 116. This revealed seven epidemiologic puzzles involving officially reported trends and conspicuously missing information. Ethnographically grounded explanations are presented for each. These included factors driving the epidemics, which ranged from waning government and public attention to HIV, to gaps in sex education and disruptive leadership changes in public institutions approximately every two years. Factors constraining the epidemics also contributed to explanations. These ranged from subsidized medical treatment for most people living with HIV, to strong partnerships between government and a well-developed, non-governmental sector of HIV interventionists, and protective norms and built environments in the sex industry. Local and regional HIV epidemics were experienced and understood as worse than government reports indicated, and ground-level data often contradicted official knowledge. Results thus call into question epidemiologic trends, including recent stabilization of the national epidemic, and suggest the need for revisions to the surveillance system and strategies that address factors driving and constraining the epidemics. Based 19. Solving the puzzle of pluripotent stem cell-derived cardiomyocyte maturation: piece by piece. Science.gov (United States) Lundy, David J; Lee, Desy S; Hsieh, Patrick C H 2017-03-01 There is a growing need for in vitro models which can serve as platforms for drug screening and basic research. Human adult cardiomyocytes cannot be readily obtained or cultured, and so pluripotent stem cell-derived cardiomyocytes appear to be an attractive option. Unfortunately, these cells are structurally and functionally immature-more comparable to foetal cardiomyocytes than adult. A recent study by Ruan et al ., provides new insights into accelerating the maturation process and takes us a step closer to solving the puzzle of pluripotent stem cell-derived cardiomyocyte maturation. 20. Meson spectroscopy experiment at KEK - E/iota puzzle International Nuclear Information System (INIS) Tsuru, Tsuneaki 1985-01-01 Physics interests at the KEK (National Laboratory for High Energy Physics) are (1) search for exotic mesons such as glueballs (gg), meiktons (q anti q g) and multiquark states (q sup(2 - )q 2 ), (2) search for missing ordinary mesons (q anti q) and confirmation of unestablished mesons, and (3) new informations of quark contents of mesons, mixing angles of SU(3) singlet-octet and tests of conservations law. Special interest is in search for exotics such as glueballs and meiktons. (2) is a so-called meson spectroscopy experiment. 
This is important not only in itself but also in identifying newly discovered states as exotics because exotics have often same quantum numbers as ordinary mesons. Contents are the following: glueballs and E/iota puzzles, spectrometer system, experiments, performance of the spectrometer, physics outputs, E/iota puzzles and πI experiment, future plans. (Mori, K.) 1. Spectroscopy of muonic atoms and the proton radius puzzle Science.gov (United States) Antognini, Aldo 2017-09-01 We have measured several 2 S -2 P transitions in muonic hydrogen (μp), muonic deuterium (μd) and muonic helium ions (μ3He, μ4He). From muonic hydrogen we extracted a proton charge radius 20 times more precise than obtained from electron-proton scattering and hydrogen high-precision laser spectroscopy but at a variance of 7 σ from these values. This discrepancy is nowadays referred to as the proton radius puzzle. New insight has been recently provided by the first determination of the deuteron charge radius from laser spectroscopy of μd. The status of the proton charge radius puzzle including the new insights obtained by μd spectroscopy will be discussed. Work supported by the Swiss National Science Foundation SNF-200021-165854 and the ERC CoG. #725039. 2. The puzzle of the ultra-high energy cosmic rays CERN Document Server Tkachev, I I 2003-01-01 In early years the cosmic ray studies were ahead of accelerator research, starting from the discovery of positrons, through muons, to that of pions and strange particles. Today we are facing the situation that the puzzling saga of cosmic rays of the highest energies may again unfold in the discovery of new physics, now beyond the Standard Model; or it may bring to life an "extreme" astrophysics. After a short review of the Greisen-Zatsepin-Kuzmin puzzle, I discuss different models which were suggested for its resolution. Are there any hints pointing to the correct model? I argue that the small-scale clustering of arrival directions of cosmic rays gives a clue, and BL Lacs are the probable sources of the observed events. (58 refs). 3. Does Intrinsic Habit Formation Actually Resolve the Equity Premium Puzzle? OpenAIRE David A. Chapman 2002-01-01 Constantinides (1990) describes a simple model of intrinsic habit formation that appears to resolve the "equity premium puzzle" of Mehra and Prescott (1985). This finding is particularly important, since it has motivated a broader consideration of the implications of habit formation preferences in dynamic equilibrium models. However, consumption growth actually behaves very differently pre- and post-1948, and the explanatory power of the habit formation model is driven by the pre-1948 data. U... 4. Can Equity Volatility Explain the Global Loan Pricing Puzzle? OpenAIRE Lewis Gaul; Pinar Uysal 2013-01-01 This paper examines whether unobservable differences in firm volatility are responsible for the global loan pricing puzzle, which is the observation that corporate loan interest rates appear to be lower in Europe than in the United States. We analyze whether equity volatility, an error prone measure of firm volatility, can explain this difference in loan spreads. We show that using equity volatility in OLS regressions will result in biased and inconsistent estimates of the difference in U.S. ... 5. Can the magnetic moment contribution explain the Ay puzzle? International Nuclear Information System (INIS) Stoks, V.G. 1998-01-01 We evaluate the full one-photon-exchange Born amplitude for Nd scattering. 
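Entry 4 above turns on a standard econometric point: regressing loan spreads on a noisily measured proxy (equity volatility standing in for firm volatility) biases the OLS slope toward zero. A tiny simulation of that attenuation effect, with entirely synthetic numbers and no connection to the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_vol = rng.normal(0.0, 1.0, n)                   # unobserved firm volatility
spread = 2.0 * true_vol + rng.normal(0.0, 1.0, n)    # loan spread, true slope = 2
noisy_vol = true_vol + rng.normal(0.0, 1.0, n)       # error-prone proxy (equity volatility)

def ols_slope(x, y):
    # Simple univariate OLS slope: cov(x, y) / var(x).
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

print("slope with true regressor :", round(ols_slope(true_vol, spread), 3))   # ~2.0
print("slope with noisy proxy    :", round(ols_slope(noisy_vol, spread), 3))  # ~1.0 (attenuated)
```

With measurement-error variance equal to the signal variance, the estimated slope falls to roughly half its true value, which illustrates why such estimates are biased and inconsistent, as the abstract notes.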
We include the contributions due to the magnetic moment of the proton or neutron, and the magnetic moment and quadrupole moment of the deuteron. It is found that the inclusion of the magnetic-moment interaction in the theoretical description of the Nd scattering observables cannot resolve the long-standing A y puzzle. copyright 1998 The American Physical Society 6. Puzzling with potential: dynamic testing of analogical reasoning in children OpenAIRE Stevenson, Claire Elisabeth 2012-01-01 Assessment procedures are frequent in children's school careers; however, measuring potential for learning has remained a puzzle. Dynamic testing is a method to assess cognitive potential that includes training in the assessment process. The goal of this thesis project was to develop a new dynamic test of analogical reasoning for school children. The main aims were to (1) investigate factors that influence children’s differences in performance and change during dynamic testing and (2) examine... 7. PUZZLES – A CREATIVE WAY OF DEVELOPMENT OF LOGICAL THINKING Directory of Open Access Journals (Sweden) Milková, Eva 2011-12-01 Full Text Available Logical thinking of students should be enhanced at all levels of their studies. There are many possibilities how to achieve it. In the paper one possible way within the subjects “Discrete Mathematics” and “Discrete Methods and Optimization” dealing with graph theory and combinatorial optimization will be presented. These mathematical disciplines are powerful tools for teachers allowing them to develop logical thinking of students, increase their imagination and make them familiar with solutions to various problems. Thanks the knowledge gained within the subjects students should be able to describe various practical situations with the aid of graphs, solve the given problem expressed by the graph, and translate the solution back into the initial situation. Student engagement is crucial for successful education. Practical tasks and puzzles attract students to know more about the explained subject matter and to apply gained knowledge. There are an endless number of enjoyable tasks, puzzles and logic problems in books like “Mathematics is Fun”, in riddles magazines and on the Internet. In the paper, as an inspiration, four puzzles developing logical thinking appropriate to be solved using graph theory and combinatorial optimization will be introduced. On these puzzles of different level of difficulty the students’ ability to find out the appropriate graph-representation of the given task and solve it will be discussed as well. The author of the paper has been prepared with her students various multimedia applications dealing with objects appropriate to subject matter for more than 15 years. In the paper we also discuss a benefit of multimedia applications used as a support of subjects “Discrete Mathematics” and “Discrete Methods and Optimization”. 8. Understanding the Puzzling Risk-Return Relationship for Housing OpenAIRE Lu Han 2013-01-01 Standard theory predicts a positive relationship between risk and return, yet recent data show that housing returns vary positively with risk in some markets but negatively in others. This paper rationalizes these cross-market differences in the risk-return relationship for housing, and in so doing, explains the puzzling negative relationship. The paper shows that when the current house provides a hedge against the risk associated with the future housing consumption, households are willing to... 9. 
Puzzles, paradoxes, and problem solving an introduction to mathematical thinking CERN Document Server Reba, Marilyn A 2014-01-01 Graphs: Puzzles and Optimization Graphical Representation and Search Greedy Algorithms and Dynamic Programming Shortest Paths, DNA Sequences, and GPS Systems Routing Problems and Optimal Circuits Traveling Salesmen and Optimal Orderings Vertex Colorings and Edge Matchings Logic: Rational Inference and Computer Circuits Inductive and Deductive Arguments Deductive Arguments and Truth-Tables Deductive Arguments and Derivations Deductive Logic and Equivalence Modeling Using Deductive Logic Probability: Predictions and Expectations Probability and Counting Counting and Unordered Outcomes Independen 10. Supersymmetry, the flavour puzzle and rare B decays Energy Technology Data Exchange (ETDEWEB) Straub, David Michael 2010-07-14 The gauge hierarchy problem and the flavour puzzle belong to the most pressing open questions in the Standard Model of particle physics. Supersymmetry is arguably the most popular framework of physics beyond the Standard Model and provides an elegant solution to the gauge hierarchy problem; however, it aggravates the flavour puzzle. In the first part of this thesis, I discuss several approaches to address the flavour puzzle in the minimal supersymmetric extension of the Standard Model and experimental tests thereof: supersymmetric grand unified theories with a unification of Yukawa couplings at high energies, theories with minimal flavour violation and additional sources of CP violation and theories with gauge mediation of supersymmetry breaking and a large ratio of Higgs vacuum expectation values. In the second part of the thesis, I discuss the phenomenology of two rare B meson decay modes which are promising probes of physics beyond the Standard Model: The exclusive B {yields} K{sup *}l{sup +}l{sup -} decay, whose angular decay distribution will be studied at LHC and gives access to a large number of observables and the b{yields}s{nu}anti {nu} decays, which are in the focus of planned high-luminosity Super B factories. I discuss the predictions for these observables in the Standard Model and their sensitivity to New Physics. (orig.) 11. Food puzzles for cats: Feeding for physical and emotional wellbeing. Science.gov (United States) Dantas, Leticia Ms; Delgado, Mikel M; Johnson, Ingrid; Buffington, Ca Tony 2016-09-01 Many pet cats are kept indoors for a variety of reasons (eg, safety, health, avoidance of wildlife predation) in conditions that are perhaps the least natural to them. Indoor housing has been associated with health issues, such as chronic lower urinary tract signs, and development of problem behaviors, which can cause weakening of the human-animal bond and lead to euthanasia of the cat. Environmental enrichment may mitigate the effects of these problems and one approach is to take advantage of cats' natural instinct to work for their food. In this article we aim to equip veterinary professionals with the tools to assist clients in the use of food puzzles for their cats as a way to support feline physical health and emotional wellbeing. We outline different types of food puzzles, and explain how to introduce them to cats and how to troubleshoot challenges with their use. The effect of food puzzles on cats is a relatively new area of study, so as well as reviewing the existing empirical evidence, we provide case studies from our veterinary and behavioral practices showing health and behavioral benefits resulting from their use. 
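Entries 7 and 9 above teach the same move: restate a puzzle as a graph and let a standard search solve it (compare the "Shortest Paths, DNA Sequences, and GPS Systems" chapter listed in entry 9). A minimal breadth-first-search sketch of that idea, with a made-up toy maze rather than any example from those sources:

```python
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search: returns a shortest path in an unweighted graph, or None."""
    queue = deque([[start]])
    visited = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph.get(node, []):
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(path + [neighbour])
    return None

if __name__ == "__main__":
    # A toy "puzzle" graph: rooms in a small maze and the doors between them.
    maze = {
        "entrance": ["hall"],
        "hall": ["entrance", "library", "kitchen"],
        "library": ["hall", "vault"],
        "kitchen": ["hall"],
        "vault": ["library"],
    }
    print(shortest_path(maze, "entrance", "vault"))  # ['entrance', 'hall', 'library', 'vault']
```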
© The Author(s) 2016. 12. Decodoku: Quantum error correction as a simple puzzle game Science.gov (United States) Wootton, James To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction. At the hardware level, QEC is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qubit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focussing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT. 13. Supersymmetry, the flavour puzzle and rare B decays International Nuclear Information System (INIS) Straub, David Michael 2010-01-01 The gauge hierarchy problem and the flavour puzzle belong to the most pressing open questions in the Standard Model of particle physics. Supersymmetry is arguably the most popular framework of physics beyond the Standard Model and provides an elegant solution to the gauge hierarchy problem; however, it aggravates the flavour puzzle. In the first part of this thesis, I discuss several approaches to address the flavour puzzle in the minimal supersymmetric extension of the Standard Model and experimental tests thereof: supersymmetric grand unified theories with a unification of Yukawa couplings at high energies, theories with minimal flavour violation and additional sources of CP violation, and theories with gauge mediation of supersymmetry breaking and a large ratio of Higgs vacuum expectation values. In the second part of the thesis, I discuss the phenomenology of two rare B meson decay modes which are promising probes of physics beyond the Standard Model: the exclusive B → K* l+ l− decay, whose angular decay distribution will be studied at the LHC and gives access to a large number of observables, and the b → s ν ν̄ decays, which are in the focus of planned high-luminosity Super B factories. I discuss the predictions for these observables in the Standard Model and their sensitivity to New Physics. (orig.) 14. A possible explanation of the 'exchange rate disconnect puzzle': A common solution to three major macroeconomic puzzles? OpenAIRE Horioka, Charles Yuji; Ford, Nicholas 2016-01-01 Meese and Rogoff (1983) and subsequent studies find that economic fundamentals are apparently not able to explain exchange rate movements, but we argue that this so-called "Exchange Rate Disconnect Puzzle" arose because researchers such as Meese and Rogoff (1983) did not use the right fundamentals and because they did not allow for the forward-looking nature of exchange rate determination.
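The Decodoku entry above (no. 12) frames surface-code decoding as a puzzle on a grid of numbers: the errors themselves are invisible and only parity-check ("syndrome") values are shown. The toy below is not the surface code, just a one-dimensional repetition-code analogue that shows the same inference step — turning visible check values back into a likely error; every detail is illustrative:

```python
import random

def syndrome(bits):
    """Neighbouring parity checks: a 1 marks a place where the error pattern changes."""
    return [bits[i] ^ bits[i + 1] for i in range(len(bits) - 1)]

def decode(syn):
    """Minimum-weight decoding for the repetition code.

    Any error pattern consistent with the syndrome can be rebuilt up to a
    global flip; pick whichever of the two candidates has fewer flips.
    """
    candidate = [0]
    for s in syn:
        candidate.append(candidate[-1] ^ s)
    complement = [1 - b for b in candidate]
    return candidate if sum(candidate) <= sum(complement) else complement

if __name__ == "__main__":
    random.seed(3)
    n = 15
    errors = [1 if random.random() < 0.1 else 0 for _ in range(n)]
    syn = syndrome(errors)
    correction = decode(syn)
    residual = [e ^ c for e, c in zip(errors, correction)]
    print("errors    :", errors)
    print("syndrome  :", syn)
    print("correction:", correction)
    # residual is either all zeros (success) or all ones (an undetected logical flip)
    print("residual  :", residual)
```

The real Decodoku puzzles do this on two-dimensional grids with number-valued (qudit) syndromes, but the game is the same.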
Further, because they apparently were not aware that financial markets by themselves could not equalise ... 15. 3D satellite puzzles for young and old kids Science.gov (United States) Biondi, Riccardo; Galoforo, Germana 2017-04-01 The Italian Space Agency (ASI) is active in outreach willing to increase the interest of young generations and general public toward the space activities. ASI proposes educational programmes for supporting and encouraging the development of European society based on knowledge, inspiring and motivating the young generations. One of the initiatives promoted by ASI on this regards is the 3D satellite puzzles. The idea was born in 2007 from the will to conceive an educational product for promoting and explaining to students the small all-Italian mission AGILE (Astrorivelatore Gamma ad Immagini ultra Leggero) thought as a tool for students aged 8-13. Working with this slot of students is very productive in terms of the imprints left on the kids, in fact it is useful to produce things they can use, touch and play with, with an active approach instead of a passive one. Therefore it was decided to produce something that kids could build and use at home with their parents or friends, or all together at school with teachers and mates. Other puzzles followed AGILE, one about the COSMO-SkyMED satellites about Earth Observation and also a broader one of the International Space Station. During these 10 years the puzzles were mostly used as outreach tools for school children, but they surprisingly received a great success also within older generations. So far the 3D puzzles have been printed in more than 10 thousand copies and distributed for free to students of hundreds of schools in Italy, and to the general public through science associations, planetaria and museums. Recently they have been used also during special events such as the international Geoscience Communication School (as best practice outreach tool), the EXPO 2015 and the European Researcheŕs Night at the Parlamentarium in Brussels 2016. While the students are building the puzzles, the tutor explains them the different components that they are assembling, what the importance of the satellite is and how it works 16. Quantum top secret. The solution of the quantum puzzle. Metamorphosis of a picture of world International Nuclear Information System (INIS) Wingert, M. 2008-01-01 Many physicists believe that because of unexplained causes, which must anyway be concerned with the quantum puzzle and the mysterious consciousness, it would be no more possible to understand the real structure of the reality - this subtle smiling of the nature, which irritates the physicists since 100 years and the disturbed the theoretical physics so much that they threw the towel. Since nature is considered as absurd, strange, and crazy - and quantum theory as very complicated. But in reality the basic experiments are of a touching simplicity, which seems only completely unintelligible in the picture of world of mechanics. For these experiments show that the concept of body of mechanics and the body conceptions of the thinking cannot at all match the structure of nature. If this is objectively taken notice of without doubting on the existence of a reality, the experiments show the real, unveiled face of the nature. Light and matter must then consist of fields, which can themselves divide by non-mechanical way, so with wholeness, comparable only with cell division and branching processes in biology. 
Either it is completely crazy - or it is the only logical interpretation, one that no physicist has hitherto dared to think. For these experiments disprove the atom and elementary-particle hypothesis, the mechanical picture of the world, and also the quantum-mechanical interpretation - and indeed uniquely. This knowledge could break the Gordian knot, solve the quantum puzzle, and also reveal the secret of the thinking mind. 17. Interference between a fast-paced spatial puzzle task and verbal memory demands. Science.gov (United States) Epling, Samantha L; Blakely, Megan J; Russell, Paul N; Helton, William S 2017-06-01 18. COMPAR International Nuclear Information System (INIS) Kuefner, K. 1976-01-01 COMPAR works on FORTRAN arrays with four indices: A = A(i,j,k,l) where, for each fixed k_0, l_0, only the 'plane' [A(i,j,k_0,l_0), i = 1, i_max, j = 1, j_max] is held in fast memory. Given two arrays A, B of this type, COMPAR has the capability to 1) re-norm A and B in different ways; 2) calculate the deviations epsilon defined as epsilon(i,j,k,l) := [A(i,j,k,l) − B(i,j,k,l)] / GEW(i,j,k,l), where GEW(i,j,k,l) may be chosen in three different ways; 3) calculate mean, standard deviation and maximum in the array epsilon (by several intermediate stages); 4) determine traverses in the array epsilon; 5) plot these traverses on a printer; 6) simplify plots of these traverses with the PLOTEASY system by creating input data blocks for this system. The main application of COMPAR is given (so far) by the comparison of two- and three-dimensional multigroup neutron flux fields. (orig.) 19. Europe vs. the U.S. A New Look at the Syndicated Loan Pricing Puzzle OpenAIRE Burietz, Aurore; Oosterlinck, Kim; Szafarz, Ariane 2017-01-01 According to the syndicated loan pricing puzzle (Carey and Nini, Journal of Finance, 2007) interest rates charged to corporate borrowers are lower in Europe than in the U.S. Our investigation suggests that controlling for region-specific credit ratings makes the Europe-U.S. gap insignificant, and solves the puzzle. We speculate that the puzzle originates from the lack of uniformity of accounting standards. 20. Robust Sex Differences in Jigsaw Puzzle Solving-Are Boys Really Better in Most Visuospatial Tasks? Science.gov (United States) Kocijan, Vid; Horvat, Marina; Majdic, Gregor 2017-01-01 Sex differences are consistently reported in different visuospatial tasks with men usually performing better in mental rotation tests while women are better on tests for memory of object locations. In the present study, we investigated sex differences in solving jigsaw puzzles in children. In total, 22 boys and 24 girls were tested using a custom-built tablet application representing a jigsaw puzzle consisting of 25 pieces and featuring three different pictures. Girls outperformed boys in solving jigsaw puzzles regardless of the picture. Girls were faster than boys in solving the puzzle, made fewer incorrect moves with the pieces of the puzzle, and spent less time moving the pieces around the tablet. It appears that the strategy of solving the jigsaw puzzle was the main factor affecting differences in success, as girls tend to solve the puzzle more systematically while boys performed more trial-and-error attempts, thus having more incorrect moves with the puzzle pieces. Results of this study suggest a very robust sex difference in solving the jigsaw puzzle with girls outperforming boys by a large margin. 1.
Robust Sex Differences in Jigsaw Puzzle Solving—Are Boys Really Better in Most Visuospatial Tasks? Science.gov (United States) Kocijan, Vid; Horvat, Marina; Majdic, Gregor 2017-01-01 Sex differences are consistently reported in different visuospatial tasks with men usually performing better in mental rotation tests while women are better on tests for memory of object locations. In the present study, we investigated sex differences in solving jigsaw puzzles in children. In total 22 boys and 24 girls were tested using custom build tablet application representing a jigsaw puzzle consisting of 25 pieces and featuring three different pictures. Girls outperformed boys in solving jigsaw puzzles regardless of the picture. Girls were faster than boys in solving the puzzle, made less incorrect moves with the pieces of the puzzle, and spent less time moving the pieces around the tablet. It appears that the strategy of solving the jigsaw puzzle was the main factor affecting differences in success, as girls tend to solve the puzzle more systematically while boys performed more trial and error attempts, thus having more incorrect moves with the puzzle pieces. Results of this study suggest a very robust sex difference in solving the jigsaw puzzle with girls outperforming boys by a large margin. PMID:29109682 2. Iran irritates and puzzles the world International Nuclear Information System (INIS) Pressburg, A. P. 2006-01-01 In this paper author analyses political and economic situation in the Iran. Iran is endangered by economic sanctions and military attacks on nuclear facilities, the president threatens by retributive military attacks. Situation in Iran is compared with Iraq. Perspectives of application of political and economic embargo against Iran are discussed. USA and co-adjuster would probably meet the Russian or Chinese veto. On the international markets or on the bilateral level the good volition to disable to Iran the sale of its oil and natural gas would by scarcely expected. Iran exports from 2.5 to 3 million barrels of oil daily. It is impossible to substitute Iran essentially. The country is the fourth biggest exporter of oil in the world and has the second biggest stocks of oil after Saudi Arabia. Iran is endangered by the fact that big question mark will stay to hang above nuclear program. The part of uranium is imported from African countries like Gabon and Niger through France and the sanctions of UN would stop this import. Iran has its own uranium fields, even which belongs to the largest in the world. However, Iran cannot exploit them effectively without help of western companies. Nuclear program, which is supported by big hope and sources from the country, probably collide with unexpected barriers. Even if UN Safety Council would not confirmed anti-Iran sanctions, the seven states that produce nuclear technologies would stop their sale to Iran. It would freeze its civilian and military nuclear program from a few years or some decades 3. QALYs, euthanasia and the puzzle of death. Science.gov (United States) Barrie, Stephen 2015-08-01 This paper considers the problems that arise when death, which is a philosophically difficult concept, is incorporated into healthcare metrics, such as the quality-adjusted life year (QALY). These problems relate closely to the debate over euthanasia and assisted suicide because negative QALY scores can be taken to mean that patients would be 'better off dead'. 
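Returning to the COMPAR entry above (no. 18): its core operations — forming weighted deviations between two four-index arrays and summarising them — translate directly into modern array code. The sketch below is a loose NumPy analogue of capabilities 2) and 3) only, not a port of the FORTRAN program; the default weight choice (GEW = B, i.e. relative deviations) is an assumption:

```python
import numpy as np

def deviations(a, b, weights=None):
    """Pointwise deviations epsilon = (A - B) / GEW for two arrays A(i,j,k,l), B(i,j,k,l).

    COMPAR offers several weight (GEW) choices; here we simply default to B itself,
    i.e. relative deviations, as one plausible option.
    """
    if weights is None:
        weights = b
    return (a - b) / weights

def summarize(eps):
    """Mean, standard deviation and maximum of the deviation array (capability 3)."""
    return float(eps.mean()), float(eps.std()), float(np.abs(eps).max())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((4, 4, 3, 2)) + 1.0                    # e.g. one multigroup flux field
    b = a * (1.0 + 0.01 * rng.standard_normal(a.shape))   # a slightly perturbed second field
    eps = deviations(a, b)
    print("mean, std, max |eps|:", summarize(eps))
```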
There is confusion in the literature about the meaning of 0 QALY, which is supposed to act as an 'anchor' for the surveyed preferences on which QALYs are based. In the context of the debate over euthanasia, the QALY assumes an ability to make meaningful comparisons between life-states and death. Not only is this assumption questionable, but the ethical debate is much more broad than the question of whether death is preferable to a state of living. QALYs are derived from preferences about health states, so do not necessarily reflect preferences about events (eg, dying) or actions (eg, killing). This paper presents a new kind of problem for the QALY. As it stands, the QALY provides confused and unreliable information when it reports zero or negative values, and faces further problems when it appears to recommend death. This should preclude its use in the debate over euthanasia and assisted suicide. These problems only apply where the QALY involves or seems to involve a comparison between life-states and death, and are not relevant to the more general discussion of the use of QALYs as a tool for comparing the benefits derived from treatment options. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions. 4. Mercury concentration in coal - Unraveling the puzzle Science.gov (United States) Toole-O'Neil, B.; Tewalt, S.J.; Finkelman, R.B.; Akers, D.J. 1999-01-01 Based on data from the US Geological Survey's COALQUAL database, the mean concentration of mercury in coal is approximately 0.2 ??gg-1. Assuming the database reflects in-ground US coal resources, values for conterminous US coal areas range from 0.08 ??gg-1 for coal in the San Juan and Uinta regions to 0.22 ??gg-1 for the Gulf Coast lignites. Recalculating the COALQUAL data to an equal energy basis unadjusted for moisture differences, the Gulf Coast lignites have the highest values (36.4 lb of Hg/1012 Btu) and the Hams Fork region coal has the lowest value (4.8 lb of Hg/1012Btu). Strong indirect geochemical evidence indicates that a substantial proportion of the mercury in coal is associated with pyrite occurrence. This association of mercury and pyrite probably accounts for the removal of mercury with the pyrite by physical coal cleaning procedures. Data from the literature indicate that conventional coal cleaning removes approximately 37% of the mercury on an equal energy basis, with a range of 0% to 78%. When the average mercury reduction value is applied to in-ground mercury values from the COALQUAL database, the resulting 'cleaned' mercury values are very close to mercury in 'as-shipped' coal from the same coal bed in the same county. Applying the reduction fact or for coal cleaning to eastern US bituminous coal, reduces the mercury input load compared to lower-rank non-deaned western US coal. In the absence of analytical data on as-shipped coal, the mercury data in the COALQUAL database, adjusted for deanability where appropriate, may be used as an estimator of mercury contents of as-shipped coal. ?? 1998 Published by Elsevier Science Ltd. All rights reserved. 5. The Hiroshima neutron dosimetry enigma: Missing puzzle piece No. 6 International Nuclear Information System (INIS) Gold, Raymond 1999-01-01 More than a decade has elapsed since the serious nature of the discrepancy between neutron dosimetry experiments (E) and neutron transport calculations (C) for the Hiroshima site was identified. 
Since that time extensive efforts to resolve this Hiroshima neutron dosimetry enigma have not only failed, but now demonstrate that the magnitude of this discrepancy is much greater than initially estimated. The currently evaluated E/C ratio for thermal neutron fluence at the Hiroshima site increases rapidly with increasing slant range from the epicenter. In the slant range region beyond 1000 m, E/C exceeds unity by one to two orders of magnitude depending on the specific dosimetry data that are utilized. Principal features that characterize the Hiroshima neutron dosimetry enigma are summarized. Puzzle Piece No. 6: In-situ production and prompt fallout of radionuclides from Little Boy is advanced as a possible contributory phenomenon to this enigma. (The atom bomb detonated over Hiroshima was called Little Boy.) Measurements of 60 Co and 152 Eu specific activity at the Hiroshima site are used to obtain order-of-magnitude numerical estimates that show this conjecture is plausible. Comparison of different 60 Co measurements at the Hiroshima site reveals that the variation of E/C with slant range depends on the method used to quantify 60 Co specific activity as well as the type of dosimetry samples that are employed. These 60 Co comparisons lend additional qualitative credence to this conjecture. Within the limits of presently available data, these assessments show that Puzzle Piece No. 6 qualitatively satisfies the principal features that characterize the Hiroshima neutron dosimetry enigma. Nevertheless, the current lack of data prevents this conjecture from being conclusively confirmed or refuted. Consequently, specific recommendations are advanced to resolve the Hiroshima neutron dosimetry enigma, with emphasis on experimental tests that can quantitatively evaluate Puzzle Piece No. 6. 6. Mysteries, Puzzles, and Paradoxes in Quantum Mechanics. Proceedings International Nuclear Information System (INIS) Rodolfo, B. 1999-01-01 These proceedings represent papers presented at the Mysteries, Puzzles, and Paradoxes in Quantum Mechanics Workshop held in Italy in August 1998. The Workshop was devoted to recent experimental and theoretical advances such as new interference effects, the quantum eraser, non-disturbing and Schroedinger-cat-like state experiments, EPR correlations, teleportation, superluminal effects, quantum information and computing, locality and causality, decoherence and measurement theory. Tachyonic information transfer was also discussed. There were 45 papers presented at the conference, out of which 2 have been abstracted for the Energy, Science and Technology database 7. Precautionary Borrowing and the Credit Card Debt Puzzle DEFF Research Database (Denmark) Druedahl, Jeppe; Jørgensen, Casper Nordal 2015-01-01 This paper addresses the credit card debt puzzle using a generalization of the buffer-stock consumption model with long-term revolving debt contracts. Closely resembling actual US credit card law, we assume that card issuers can always deny their cardholders access to new debt, but that they cannot...... to simultaneously hold positive gross debt and positive gross assets even though the interest rate on the debt is much higher than the return rate on the assets. Including a risk of being excluded from new borrowing which is positively correlated with unemployment, we are able to simultaneously explain... 8. The Puzzle of a Marble in a Spinning Pipe Science.gov (United States) 2015-05-01
What trajectory does a marble follow if it is held... Physics Education 50 (3) 279. Problem statement: A marble is placed one-third of the length along a 9. Hybrid Charmonium and the ρ-π Puzzle International Nuclear Information System (INIS) Kisslinger, L.S.; Parno, D.; Riordan, S. 2008-01-01 Using the method of QCD sum rules, we estimate the energy of the lowest hybrid charmonium state, and find it to be at the energy of the Ψ(2S) state, about 600 MeV above the J/Ψ(1S) state. Since our solution is not consistent with a pure hybrid at this energy, we conclude that the Ψ(2S) state is probably an admixed c c̄ and hybrid c c̄ g state. From this conjecture, we find a possible explanation of the famous ρ-π puzzle. 10. The Puzzle of Non-proliferation and Disarmament (Part II) International Nuclear Information System (INIS) Ponga, J. de 2011-01-01 Since, in 1945, the world became aware of the devastating power of nuclear weapons, there have been many initiatives at the international level to avoid nuclear weapon proliferation: the foundation of the IAEA, the NPT, the Safeguards Agreements, the Nuclear Weapon Free Zones, the treaties banning nuclear tests, and the export control regime of the NSG, among others. This article aims to offer a general picture of all of them as pieces of a puzzle whose purpose is not to allow gaps that would permit non-peaceful uses of nuclear energy. (Author) 11. Multiple sclerosis pathogenesis: missing pieces of an old puzzle. Science.gov (United States) 2018-06-08 Traditionally, multiple sclerosis (MS) was considered to be a CD4 T cell-mediated CNS autoimmunity, compatible with the experimental autoimmune encephalitis model, which can be characterized by focal lesions in the white matter. However, studies of recent decades revealed several missing pieces of the MS puzzle and showed that MS pathogenesis is more complex than the traditional view and may include the following: a primary degenerative process (e.g. oligodendroglial pathology), generalized abnormality of normal-appearing brain tissue, pronounced gray matter pathology, involvement of innate immunity, and CD8 T cells and B cells. Here, we review these findings and discuss their implications in MS pathogenesis. 12. Tiny bubbles challenge giant turbines: Three Gorges puzzle. Science.gov (United States) Li, Shengcai 2015-10-06 Since the birth of the first prototype of the modern reaction turbine, cavitation, as conjectured by Euler in 1754, has always presented a challenge. Following his theory, the evolution of modern reaction (Francis and Kaplan) turbines has been completed by adding the final element, the 'draft tube', which enables turbines to exploit water energy at efficiencies of almost 100%. However, during the last two and a half centuries, with increasing unit capacity and specific speed, the problem of cavitation has been manifested and complicated by draft-tube surges rather than being solved. Particularly, during the last 20 years, the fierce competition in the international market for extremely large turbines with compact design has encouraged the development of giant Francis turbines of 700-1000 MW. The first group (24 units) of such giant turbines of 700 MW each was installed in the Three Gorges project.
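The marble-in-a-spinning-pipe entry above (no. 8) is truncated, but the usual idealisation of such problems — a frictionless marble free to slide along a pipe rotating at constant angular speed ω about a perpendicular axis — has a short closed-form answer. The lines below give that textbook treatment only as a plausible reading of the puzzle; the paper's actual set-up (friction, pivot position relative to the one-third point) may well differ:

```latex
% Radial motion of a frictionless marble in a pipe rotating at constant angular speed \omega,
% written in the co-rotating frame, where only the centrifugal term acts along the pipe:
\[ \ddot{r} = \omega^{2} r \]
% Released from rest at radius r_0 (say, one-third of the pipe length from the axis):
\[ r(t) = r_0 \cosh(\omega t), \qquad \dot{r}(t) = r_0\,\omega\,\sinh(\omega t) \]
% so the marble runs outward ever faster until it leaves the end of the pipe.
```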
Immediately after commissioning, a strange erosion phenomenon appeared on the guide vanes of the machines that has puzzled professionals. From a multi-disciplinary analysis, this Three Gorges puzzle could reflect an unknown type of cavitation inception, presumably triggered by turbulence production from the boundary-layer streak transitional process. It thus presents a fresh challenge not only to this old turbine industry, but also to the fundamental sciences. 13. Yet another possible explanation of the solar-neutrino puzzle International Nuclear Information System (INIS) Kolb, E.W.; Turner, M.S.; Walker, T.P. 1986-01-01 Mikheyev and Smirnov have shown that the interactions of neutrinos with matter can result in the conversion of electron neutrinos produced in the center of the sun to muon neutrinos. Bethe has exploited this and has pointed out that the solar-neutrino puzzle can be resolved if the mass difference squared of the two neutrinos is m_2^2 − m_1^2 ≈ 6 × 10^-5 eV^2, and the mixing angle satisfies sin θ_v > 0.0065. We discuss a qualitatively different solution to the solar-neutrino puzzle which requires 1.0 × 10^-8 eV^2 < (m_2^2 − m_1^2)(sin^2 2θ_v / cos 2θ_v) < … × 10^-8 eV^2. Our solutions result in a much smaller flux of neutrinos from the p-p process than predicted by standard solar models, while Bethe's solution results in a flux of neutrinos from the p-p process that is about the same as in standard solar models. (orig.) 14. May heavy neutrinos solve underground and cosmic-ray puzzles? International Nuclear Information System (INIS) Belotsky, K. M.; Fargion, D.; Khlopov, M. Yu.; Konoplich, R. V. 2008-01-01 Primordial heavy neutrinos of the fourth generation might explain different astrophysical puzzles. The simplest fourth-neutrino scenario is consistent with known fourth-neutrino physics, cosmic ray antimatter, cosmic gamma fluxes, and positive signals in underground detectors for a very narrow neutrino mass window (46–47 GeV). However, accounting for the constraint of the underground experiment CDMS prohibits solution of cosmic-ray puzzles in this scenario. We have analyzed extended heavy-neutrino models related to the clumpiness of neutrino density, new interactions in heavy-neutrino annihilation, neutrino asymmetry, and neutrino decay. We found that, in these models, the cosmic-ray imprint may fit the positive underground signals in the DAMA/NaI experiment in the entire mass range 46–70 GeV allowed by uncertainties of electroweak parameters, while satisfaction of the CDMS constraint reduces the mass range to around 50 GeV, where all data can be reconciled in the framework of the considered hypothesis. 15. Status of particle physics solutions to the UHECR puzzle International Nuclear Information System (INIS) Kachelrieß, M. 2004-01-01 The status of solutions to the ultra-high energy cosmic ray (UHECR) puzzle that involve particle physics beyond the standard model is reviewed. Signatures and experimental constraints are discussed for most proposals such as the Z burst model and topological defects (both allowed only as sub-dominant contributions), supermassive dark matter (no positive evidence from its key signatures, galactic anisotropy and photon dominance), strongly interacting neutrinos or new primaries (no viable models known), and violation of Lorentz invariance (viable).
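Entry 13 above (and its duplicate record further down the list) builds on the Mikheyev–Smirnov mechanism of matter-enhanced neutrino conversion. For orientation, the standard textbook resonance condition is reproduced below; it is not a formula quoted from the abstract:

```latex
% MSW resonance: the matter-induced potential matches the vacuum oscillation term
\[ \sqrt{2}\, G_F\, n_e \;=\; \frac{(m_2^{2}-m_1^{2})\cos 2\theta_v}{2E_\nu} \]
% G_F: Fermi constant, n_e: local electron density, E_\nu: neutrino energy,
% \theta_v: vacuum mixing angle; electron neutrinos crossing this density convert efficiently.
```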
Lorentz invariance violation should be considered seriously as an explanation for the UHECR puzzle if there is no considerable fraction of photon primaries at the highest energies, if correlations with sources at cosmological distances can be established, and if the spectrum extends well beyond the GZK (Greisen-Zatsepin-Kuzmin) cutoff. If only the first two conditions are found to be true, and the UHECR spectrum is close to the one measured in the HiRes experiment, then bottom-up scenarios are a sufficient explanation for the data. 16. May heavy neutrinos solve underground and cosmic-ray puzzles? International Nuclear Information System (INIS) Belotsky, K. M.; Fargion, D.; Khlopov, M. Yu.; Konoplich, R. V. 2008-01-01 Primordial heavy neutrinos of the fourth generation might explain different astrophysical puzzles. The simplest fourth-neutrino scenario is consistent with known fourth-neutrino physics, cosmic ray antimatter, cosmic gamma fluxes, and positive signals in underground detectors for a very narrow neutrino mass window (46-47 GeV). However, accounting for the constraint from the underground experiment CDMS excludes a solution of the cosmic-ray puzzles in this scenario. We have analyzed extended heavy-neutrino models related to the clumpiness of neutrino density, new interactions in heavy-neutrino annihilation, neutrino asymmetry, and neutrino decay. We found that, in these models, the cosmic-ray imprint may fit the positive underground signals in the DAMA/NaI experiment in the entire mass range 46-70 GeV allowed by uncertainties in the electroweak parameters, while satisfying the CDMS constraint reduces the mass range to around 50 GeV, where all data can be reconciled within the framework of the considered hypothesis. 17. The puzzling unsolved mysteries of liquid water: Some recent progress Science.gov (United States) Stanley, H. E.; Kumar, P.; Xu, L.; Yan, Z.; Mazza, M. G.; Buldyrev, S. V.; Chen, S.-H.; Mallamace, F. 2007-12-01 Water is perhaps the most ubiquitous, and the most essential, of any molecule on earth. Indeed, it defies the imagination of even the most creative science fiction writer to picture what life would be like without water. Despite decades of research, however, water's puzzling properties are not understood, and 63 anomalies that distinguish water from other liquids remain unsolved. We introduce some of these unsolved mysteries, and demonstrate recent progress in solving them. We present evidence from experiments and computer simulations supporting the hypothesis that water displays a special transition point (which is not unlike the “tipping point” immortalized by Malcolm Gladwell). The general idea is that when the liquid is near this “tipping point,” it suddenly separates into two distinct liquid phases. This concept of a new critical point is finding application to other liquids as well as water, such as silicon and silica. We also discuss related puzzles, such as the mysterious behavior of water near a protein. 18. Yet another possible explanation of the solar-neutrino puzzle International Nuclear Information System (INIS) Kolb, E.W.; Turner, M.S.; Walker, T.P. 1986-04-01 Mikheyev and Smirnov have shown that the interactions of neutrinos with matter can result in the conversion of electron neutrinos produced in the center of the sun to muon neutrinos. Bethe has exploited this and has pointed out that the solar-neutrino puzzle can be resolved if the mass difference squared of the two neutrinos is m₂² − m₁² ≈ 6×10⁻⁵ eV², and the mixing angle satisfies sin θ_v > 0.0065. We discuss a qualitatively different solution to the solar-neutrino puzzle, which requires 1.0×10⁻⁸ eV² < (m₂² − m₁²)(sin² 2θ_v/cos 2θ_v) < …×10⁻⁸ eV². Our solutions result in a much smaller flux of neutrinos from the p-p process than predicted by standard solar models, while Bethe's solution results in a flux of neutrinos from the p-p process that is about the same as in standard solar models. 19. Simultaneous explanation of the R_K and R(D(*)) puzzles Science.gov (United States) Bhattacharya, Bhubanjyoti; Datta, Alakabha; London, David; Shivashankara, Shanmuka 2015-03-01 At present, there are several hints of lepton flavor non-universality. The LHCb Collaboration has measured R_K ≡ B(B⁺ → K⁺μ⁺μ⁻)/B(B⁺ → K⁺e⁺e⁻), and the BaBar Collaboration has measured R(D(*)) ≡ B(B̄ → D(*)⁺τ⁻ν̄_τ)/B(B̄ → D(*)⁺ℓ⁻ν̄_ℓ) (ℓ = e, μ). In all cases, the experimental results differ from the standard model predictions by 2–3σ. Recently, an explanation of the R_K puzzle was proposed in which new physics (NP) generates a neutral-current operator involving only third-generation particles. Now, assuming the scale of NP is much larger than the weak scale, this NP operator must be made invariant under the full SU(3)_C × SU(2)_L × U(1)_Y gauge group. In this Letter, we note that, when this is done, a new charged-current operator can appear, and this can explain the R(D(*)) puzzle. A more precise measurement of the double ratio R(D)/R(D*) can rule out this model. 20. Application-Based Crossword Puzzles: Players’ Perception and Vocabulary Retention Directory of Open Access Journals (Sweden) Dzulfikri Dzulfikri 2016-09-01 Full Text Available This study investigates the perceptions of students towards Application-Based Crossword Puzzles and how playing this game can affect the development of vocabulary amongst students. Drawing on Vygotsky's Socio-Cultural Theory, which states that the human mind is mediated by cultural artifacts, the nature of this game poses challenges and builds curiosity, allowing players to pay more attention to the words to fill in the boxes, which subsequently enhances their retention of vocabulary. This game has very good potential to build positive perceptions and to develop cognition in the linguistic domain of players, i.e. the amount of their vocabulary. In this study, the researcher conducted interviews with eligible or selected student players to find out their perceptions toward this game and administered a vocabulary test to find out how this game had added to the retention in memory of new words acquired by the players from the game. The study findings showed that the participants perceive this game positively and that it affects the players' vocabulary retention positively, as indicated by their test results. It is recommended that English teachers consider using Application-Based Crossword Puzzles to help students build their vocabularies, especially as part of extracurricular activities. 1. Using the Tower of Hanoi Puzzle to Infuse Your Mathematics Classroom with Computer Science Concepts Science.gov (United States) Marzocchi, Alison S. 2016-01-01 This article suggests that logic puzzles, such as the well-known Tower of Hanoi puzzle, can be used to introduce computer science concepts to mathematics students of all ages.
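The computer-science concept most commonly drawn out of the Tower of Hanoi is recursion. As a minimal illustrative sketch (not taken from the article; the function and peg names are chosen here purely for the example), the classic recursive solution in Python is:

def hanoi(n, source, target, spare):
    """Print the moves that transfer n disks from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # move the smaller tower out of the way
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # re-stack the smaller tower on top

hanoi(3, "A", "C", "B")   # the 3-disk puzzle takes 2**3 - 1 = 7 moves

The 2**n - 1 move count that falls out of this recursion is the kind of result the puzzle lets teachers surface without any formal programming background.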
Mathematics teachers introduce their students to computer science concepts that are enacted spontaneously and subconsciously throughout the solution to the Tower of Hanoi… 2. An Alternative Evaluation: Online Puzzle as a Course-End Activity Science.gov (United States) Genç, Zülfü; Aydemir, Emrah 2015-01-01 Purpose: The purpose of this study is to determine whether the use of online puzzles in the instructional process has an effect on student achievement and learning retention. This study examined students' perceptions and experiences of the use of puzzles as an alternative evaluation tool. To achieve this aim, the following hypotheses were tested: using… 3. Studying the proton 'radius' puzzle with μp elastic scattering International Nuclear Information System (INIS) Gilman, R. 2013-01-01 The disagreement between the proton radius determined from muonic hydrogen and from electronic measurements is called the proton radius puzzle. The resolution of the puzzle remains unclear and appears to require new experimental results. An experiment to measure muon-proton elastic scattering is presented here. 4. Crossword Puzzle Makes It Fun: Introduce Green Manufacturing in Wood Technology Courses Science.gov (United States) Iley, John L.; Hague, Doug 2012-01-01 Sustainable, or "green," manufacturing and its practices are becoming more and more a part of today's industry, including wood product manufacturing. This article provides introductory information on green manufacturing in wood technology and a crossword puzzle based on green manufacturing terms. The authors use the puzzle at the college level to… 5. The King and Prisoner Puzzle: A Way of Introducing the Components of Logical Structures Science.gov (United States) Roh, Kyeong Hah; Lee, Yong Hah; Tanner, Austin 2016-01-01 The purpose of this paper is to present issues related to student understanding of logical components that arise when solving word problems. We designed a logic problem called the King and Prisoner Puzzle--a linguistically simple, yet logically challenging problem. In this paper, we describe various student solutions to the puzzle and discuss the… 6. Puzzle-Based Learning in Engineering Mathematics: Students' Attitudes Science.gov (United States) Klymchuk, Sergiy 2017-01-01 The article reports on the results of two case studies on the impact of the regular use of puzzles as a pedagogical strategy in the teaching and learning of engineering mathematics. The intention of using puzzles is to engage students' emotions, creativity and curiosity and also to enhance their generic thinking skills and lateral thinking… 7. What Puzzles Teachers in Rio de Janeiro, and What Keeps Them Going? Science.gov (United States) Lyra, Isolina; Fish, Solange; Braga, Walewska Gomes 2003-01-01 Focuses on the key mechanism of "puzzling" in Exploratory Practice (EP), a form of practitioner research, and the critical issue of sustainability in the context of volunteer teacher development work in Rio de Janeiro, Brazil. Investigated puzzles (concerns) of language teachers and grouped them into six categories: motivation, anxiety,… 8. On Non-Parallelizable Deterministic Client Puzzle Scheme with Batch Verification Modes NARCIS (Netherlands) Tang, Qiang; Jeckmans, Arjan. A (computational) client puzzle scheme enables a client to prove to a server that a certain amount of computing resources (CPU cycles and/or memory look-ups) has been dedicated to solving a puzzle.
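To make the client-puzzle idea concrete, here is a minimal hash-based proof-of-work sketch in Python. It is a generic illustration only, not the non-parallelizable batch-verification scheme of the record above; the function names and the difficulty parameter are invented for the example.

import hashlib
import os

def make_challenge() -> bytes:
    # The server issues a fresh random challenge per client request.
    return os.urandom(16)

def solve(challenge: bytes, difficulty_bits: int = 20) -> int:
    # The client searches for a counter whose SHA-256 hash falls below a target,
    # spending roughly 2**difficulty_bits hash evaluations on average.
    target = 1 << (256 - difficulty_bits)
    counter = 0
    while True:
        digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return counter
        counter += 1

def verify(challenge: bytes, counter: int, difficulty_bits: int = 20) -> bool:
    # The server verifies with a single hash evaluation.
    digest = hashlib.sha256(challenge + counter.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

challenge = make_challenge()
assert verify(challenge, solve(challenge))

Note that this brute-force search parallelizes trivially across counter ranges, which is precisely the property a non-parallelizable puzzle scheme such as the one in the record is designed to avoid.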
Researchers have identified a number of potential applications, such as constructing timed cryptography, … 9. Making Peer-Assisted Content Distribution Robust to Collusion Using Bandwidth Puzzles Science.gov (United States) Reiter, Michael K.; Sekar, Vyas; Spensky, Chad; Zhang, Zhenghao Many peer-assisted content-distribution systems reward a peer based on the amount of data that this peer serves to others. However, validating that a peer did so is, to our knowledge, an open problem; e.g., a group of colluding attackers can earn rewards by claiming to have served content to one another, when they have not. We propose a puzzle mechanism to make contribution-aware peer-assisted content distribution robust to such collusion. Our construction ties solving the puzzle to possession of specific content and, by issuing puzzle challenges simultaneously to all parties claiming to have that content, our mechanism prevents one content-holder from solving many others' puzzles. We prove (in the random oracle model) the security of our scheme, describe our integration of bandwidth puzzles into a media streaming system, and demonstrate the resulting attack resilience via simulations. 10. An Empirical Evaluation of Puzzle-Based Learning as an Interest Approach for Teaching Introductory Computer Science Science.gov (United States) Merrick, K. E. 2010-01-01 This correspondence describes an adaptation of puzzle-based learning to teaching an introductory computer programming course. Students from two offerings of the course--with and without the puzzle-based learning--were surveyed over a two-year period. Empirical results show that the synthesis of puzzle-based learning concepts with existing course… 11. Gamma-ray bursts, a puzzle being resolved CERN Multimedia Piran, T 1999-01-01 Gamma Ray Bursts (GRBs), short and intense bursts of gamma rays, have puzzled astrophysicists since their accidental discovery in the seventies. BATSE, launched in 1991, has established the cosmological origin of GRBs and has shown that they involve energies much higher than previously expected, corresponding to the most powerful explosions known in the Universe. The fireball model, which has been developed during the last ten years, explains most of the observed features of GRBs. According to this model, GRBs are produced in internal collisions of ejected matter flowing at ultra-relativistic energy. This ultra-relativistic motion reaches Lorentz factors of order 100 or more, higher than seen elsewhere in the Universe. The GRB afterglow was discovered in 1997. It was predicted by this model and it takes place when this relativistic flow is slowed down by the surrounding material. This model was confirmed recently with the discovery last January of the predicted prompt optical emission from GRB 990123. Unfort... 12. The puzzling resilience of transnational organized criminal networks DEFF Research Database (Denmark) Leuprecht, Christian; Aulthouse, Andrew; Walther, Olivier 2016-01-01 Why is transnational organized crime so difficult to dismantle? While organized crime networks within states have received some attention, actual transnational operations have not. In this article, we study the transnational drug and gun trafficking operations of the Shower Posse, a violent international organized crime syndicate based in Jamaica, whose resilience proves particularly puzzling. We were curious to know whether there is any evidence that international borders have an effect on the structure of illicit networks that cross them. It turns out that transnational drug distribution networks such as the Shower Posse rely on a small number of brokers whose role is to connect otherwise distinct domestic markets. Due to the high transaction costs associated with developing and maintaining transnational movement, the role of such brokers appears particularly important in facilitating… 13. Interfacial depinning transitions in disordered media: revisiting an old puzzle International Nuclear Information System (INIS) Moglia, Belén; Albano, Ezequiel V; Villegas, Pablo; Muñoz, Miguel A 2014-01-01 Interfaces advancing through random media represent a number of different problems in physics, biology and other disciplines. Here, we study the pinning/depinning transition of the prototypical non-equilibrium interfacial model, i.e. the Kardar–Parisi–Zhang equation, advancing in a disordered medium. We separately analyze the cases of positive and negative non-linearity coefficients, which are believed to exhibit qualitatively different behavior: the positive case shows a continuous transition that can be related to directed-percolation depinning, while in the negative case there is a discontinuous transition and faceted interfaces appear. Some studies have argued from different perspectives that both cases share the same universal behavior. By using a number of computational and scaling techniques we shed light on this puzzling situation and conclude that the two cases are intrinsically different. (paper) 14. 'Super' Japanese site gears up to solve neutrino puzzle International Nuclear Information System (INIS) Normile, D. 1995-01-01 Ever since Wolfgang Pauli proposed the existence of neutrinos in 1930 to explain some puzzling features of the radioactive decay of certain atoms, experimentalists have labored hard to detect enough of the elusive particles to determine their properties. It took 26 years to prove that Pauli's particle even exists, a feat for which Frederick Reines of the University of California (UC), Irvine, won the Nobel Prize last month. Soon, however, physicists will be capturing neutrinos in unprecedented numbers in a 50,000-metric-ton tank that will fill with water starting next month. Researchers hope that this colossal water bath will yield an answer to one of the most pressing questions in cosmology and high-energy physics: Do neutrinos have mass? The $100 million experiment, called Super-Kamiokande, is located in a lead mine west of Tokyo. This article describes the work to be conducted. 15. Ambiguity aversion and household portfolio choice puzzles: Empirical evidence. Science.gov (United States) Dimmock, Stephen G; Kouwenberg, Roy; Mitchell, Olivia S; Peijnenburg, Kim 2016-03-01 We test the relation between ambiguity aversion and five household portfolio choice puzzles: nonparticipation in equities, low allocations to equity, home-bias, own-company stock ownership, and portfolio under-diversification. In a representative US household survey, we measure ambiguity preferences using custom-designed questions based on Ellsberg urns. As theory predicts, ambiguity aversion is negatively associated with stock market participation, the fraction of financial assets in stocks, and foreign stock ownership, but it is positively related to own-company stock ownership.
Conditional on stock ownership, ambiguity aversion is related to portfolio under-diversification, and during the financial crisis, ambiguity-averse respondents were more likely to sell stocks. 16. Ambiguity aversion and household portfolio choice puzzles: Empirical evidence* Science.gov (United States) Dimmock, Stephen G.; Kouwenberg, Roy; Mitchell, Olivia S.; Peijnenburg, Kim 2017-01-01 We test the relation between ambiguity aversion and five household portfolio choice puzzles: nonparticipation in equities, low allocations to equity, home-bias, own-company stock ownership, and portfolio under-diversification. In a representative US household survey, we measure ambiguity preferences using custom-designed questions based on Ellsberg urns. As theory predicts, ambiguity aversion is negatively associated with stock market participation, the fraction of financial assets in stocks, and foreign stock ownership, but it is positively related to own-company stock ownership. Conditional on stock ownership, ambiguity aversion is related to portfolio under-diversification, and during the financial crisis, ambiguity-averse respondents were more likely to sell stocks. PMID:28458446 17. Diagnosing the Cause of Scientific Standstill, Unravelling the Needham Puzzle Institute of Scientific and Technical Information of China (English) 刘迎秋; 刘春江 2007-01-01 There are diverse opinions about how to solve the Needham Puzzle. Such opinions or schools of thought can be roughly classified into three theories of a) geographical conditions, b) empirical trial and error, and c) private property rights. Although each school of thought makes sense, they all fail to fully uncover the main reason why, in modern history, China lagged behind western countries in the development of science and technology. In our opinion, the correct solution is to draw on historical experiences, integrate all schools of thought, proceed with an emphasis on the definition and protection of property rights, boost government investment in basic scientific research, strengthen government service functionality, actively develop NGOs, and open more widely to the outside world, with a view of pushing forward China’s scientific and technological innovation and accelerating the pace of China’s modernization. 18. Estrogen, Angiogenesis, Immunity and Cell Metabolism: Solving the Puzzle. Science.gov (United States) Trenti, Annalisa; Tedesco, Serena; Boscaro, Carlotta; Trevisi, Lucia; Bolego, Chiara; Cignarella, Andrea 2018-03-15 Estrogen plays an important role in the regulation of cardiovascular physiology and the immune system by inducing direct effects on multiple cell types including immune and vascular cells. Sex steroid hormones are implicated in cardiovascular protection, including endothelial healing in case of arterial injury and collateral vessel formation in ischemic tissue. Estrogen can exert potent modulation effects at all levels of the innate and adaptive immune systems. Their action is mediated by interaction with classical estrogen receptors (ERs), ERα and ERβ, as well as the more recently identified G-protein coupled receptor 30/G-protein estrogen receptor 1 (GPER1), via both genomic and non-genomic mechanisms. Emerging data from the literature suggest that estrogen deficiency in menopause is associated with an increased potential for an unresolved inflammatory status. In this review, we provide an overview through the puzzle pieces of how 17β-estradiol can influence the cardiovascular and immune systems. 19. 
Why plants make puzzle cells, and how their shape emerges. Science.gov (United States) Sapala, Aleksandra; Runions, Adam; Routier-Kierzkowska, Anne-Lise; Das Gupta, Mainak; Hong, Lilan; Hofhuis, Hugo; Verger, Stéphane; Mosca, Gabriella; Li, Chun-Biu; Hay, Angela; Hamant, Olivier; Roeder, Adrienne Hk; Tsiantis, Miltos; Prusinkiewicz, Przemyslaw; Smith, Richard S 2018-02-27 The shape and function of plant cells are often highly interdependent. The puzzle-shaped cells that appear in the epidermis of many plants are a striking example of a complex cell shape, however their functional benefit has remained elusive. We propose that these intricate forms provide an effective strategy to reduce mechanical stress in the cell wall of the epidermis. When tissue-level growth is isotropic, we hypothesize that lobes emerge at the cellular level to prevent formation of large isodiametric cells that would bulge under the stress produced by turgor pressure. Data from various plant organs and species support the relationship between lobes and growth isotropy, which we test with mutants where growth direction is perturbed. Using simulation models we show that a mechanism actively regulating cellular stress plausibly reproduces the development of epidermal cell shape. Together, our results suggest that mechanical stress is a key driver of cell-shape morphogenesis. © 2018, Sapala et al. 20. The orthopositronium lifetime puzzle and its final solution International Nuclear Information System (INIS) Liu Feng; Wu Jianda; Zhan Liang; Ye Bangjiao 2004-01-01 The ortho-positronium (o-Ps), which consists of an electron and positron, is a pure lepton bound system. The o-Ps lifetime can be calculated accurately by quantum electrodynamics, but there is a long-standing discrepancy between the theoretical calculations and the experimental results. Theoretical and experimental physicists have worked hard for a long time to solve the problem, and recently finally solved this lifetime puzzle. The authors briefly outline the discrepancy between the theoretical calculations of the o-Ps annihilation decay rate and some of the experimental measurements, as well as recent developments of experimental techniques, and its final solution. In particular, the final results of the Tokyo and michigan groups are discussed 1. Gaia's view of the λ Boo star puzzle Science.gov (United States) Murphy, Simon J.; Paunzen, Ernst 2017-04-01 The evolutionary status of the chemically peculiar class of λ Boo stars has been intensely debated. It is now agreed that the λ Boo phenomenon affects A stars of all ages, from star formation to the terminal age main sequence, but the cause of the chemical peculiarity is still a puzzle. We revisit the debate of their ages and temperatures in order to shed light on the phenomenon, using the new parallaxes in Gaia Data Release 1 with existing Hipparcos parallaxes and multicolour photometry. We find that no single formation mechanism is able to explain all the observations, and suggest that there are multiple channels producing λ Boo spectra. The relative importance of these channels varies with age, temperature and environment. 2. 
200 more puzzling physics problems with hints and solutions CERN Document Server Gnädig, Péter; Vigh, Máté 2016-01-01 Like its predecessor, 200 Puzzling Physics Problems, this book is aimed at strengthening students' grasp of the laws of physics by applying them to situations that are practical, and to problems that yield more easily to intuitive insight than to brute-force methods and complex mathematics. The problems are chosen almost exclusively from classical, non-quantum physics, but are no easier for that. They are intriguingly posed in accessible non-technical language, and require readers to select an appropriate analysis framework and decide which branches of physics are involved. The general level of sophistication needed is that of the exceptional school student, the good undergraduate, or the competent graduate student; some physics professors may find some of the more difficult questions challenging. By contrast, the mathematical demands are relatively minimal, and seldom go beyond elementary calculus. This further book of physics problems is not only instructive and challenging, but also enjoyable. 3. Lambda-nuclear interactions and hyperon puzzle in neutron stars Energy Technology Data Exchange (ETDEWEB) Haidenbauer, J. [Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik and Juelich Center for Hadron Physics, Juelich (Germany); Universitaet Bonn, Helmholtz Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany)]; Meissner, U.G. [Universitaet Bonn, Helmholtz Institut fuer Strahlen- und Kernphysik and Bethe Center for Theoretical Physics, Bonn (Germany); Forschungszentrum Juelich, Institute for Advanced Simulation, Institut fuer Kernphysik and Juelich Center for Hadron Physics, Juelich (Germany)]; Kaiser, N.; Weise, W. [Technische Universitaet Muenchen, Physik Department, Garching (Germany)] 2017-06-15 Brueckner theory is used to investigate the in-medium properties of a Λ-hyperon in nuclear and neutron matter, based on hyperon-nucleon interactions derived within SU(3) chiral effective field theory (EFT). It is shown that the resulting Λ single-particle potential U_Λ(p_Λ = 0, ρ) becomes strongly repulsive for densities ρ of two-to-three times that of normal nuclear matter. Adding a density-dependent effective ΛN-interaction constructed from chiral ΛNN three-body forces increases the repulsion further. Consequences of these findings for neutron stars are discussed. It is argued that for hyperon-nuclear interactions with properties such as those deduced from the SU(3) EFT potentials, the onset for hyperon formation in the core of neutron stars could be shifted to much higher density which, in turn, could pave the way for resolving the so-called hyperon puzzle. (orig.) 4. Clarifying some remaining questions in the anomaly puzzle International Nuclear Information System (INIS) Huang, Xing; Parker, Leonard 2011-01-01 We discuss several points that may help to clarify some questions that remain about the anomaly puzzle in supersymmetric theories. In particular, we consider a general N=1 supersymmetric Yang-Mills theory. The anomaly puzzle concerns the question of whether there is a consistent way in the quantized theory to put the R-current and the stress tensor in a single supermultiplet called the supercurrent, even though in the classical theory they are in the same supermultiplet.
It was proposed that the classically conserved supercurrent bifurcates into two supercurrents having different anomalies in the quantum regime. The most interesting result we obtain is an explicit expression for the lowest component of one of the two supercurrents in 4-dimensional spacetime, namely the supercurrent that has the energy-momentum tensor as one of its components. This expression for the lowest component is an energy-dependent linear combination of two chiral currents, which itself does not correspond to a classically conserved chiral current. The lowest component of the other supercurrent, namely, the R-current, satisfies the Adler-Bardeen theorem. The lowest component of the first supercurrent has an anomaly, which we show is consistent with the anomaly of the trace of the energy-momentum tensor. Therefore, we conclude that there is no consistent way to construct a single supercurrent multiplet that contains the R-current and the stress tensor in the straightforward way originally proposed. We also discuss and try to clarify some technical points in the derivations of the two supercurrents in the literature. These latter points concern the significance of infrared contributions to the NSVZ β-function and the role of the equations of motion in deriving the two supercurrents. (orig.) 5. A Resolution of the Purchasing Power Parity Puzzle: Imperfect Knowledge and Long Swings DEFF Research Database (Denmark) Frydman, Roman; Goldberg, Michael D.; Johansen, Søren 2009-01-01 Asset prices undergo long swings that revolve around benchmark levels. In currency markets, fluctuations involve real exchange rates that are highly persistent and that move in near-parallel fashion with nominal rates. The inability to explain these two regularities with one model has been called...... the "purchasing power parity puzzle." In this paper, we trace the puzzle to exchange rate modelers' use of the "Rational Expectations Hypothesis." We show that once imperfect knowledge is recognized, a monetary model is able to account for the puzzle, as well as other salient features of the data, including... 6. The 20th anniversary of the three-nucleon analyzing power puzzle - a personal recollection International Nuclear Information System (INIS) Tornow, W. . Author 2008-01-01 The history of the three-nucleon analyzing power puzzle is described by an experimentalist who has been collaborating with few-body theoreticians in trying to unravel the physics of this long-standing phenomenon. Although surprising effects have been discovered along the way, the puzzle is still unexplained. Hopefully, some of the long-range three-nucleon force terms predicted by chiral effective field theory in N 3 LO will eventually solve the puzzle. Presented at the 20th Few-Body Conference, Pisa, Italy, 10-14 September 2007. (author) 7. Family caregivers of palliative cancer patients at home: the puzzle of pain management. Science.gov (United States) Mehta, Anita; Cohen, S Robin; Carnevale, Franco A; Ezer, Hélène; Ducharme, Francine 2010-01-01 The purpose of this grounded theory study was to understand the processes used by family caregivers to manage the pain of cancer patients at home. A total of 24 family caregivers participated. They were recruited using purposeful then theoretical sampling. The data sources were taped, transcribed (semi-structured) interviews and field notes. Data analysis was based on Strauss and Corbin's (1998) requirements for open, axial, and selective coding. 
The result was an explanatory model titled "the puzzle of pain management," which includes four main processes: "drawing on past experiences"; "strategizing a game plan"; "striving to respond to pain"; and "gauging the best fit," a decision-making process that joins the puzzle pieces. Understanding how family caregivers assemble their puzzle pieces can help health care professionals make decisions related to the care plans they create for pain control and help them to recognize the importance of providing information as part of resolving the puzzle of pain management. 8. The Retrofit Puzzle Extended: Optimal Fleet Owner Behavior over Multiple Time Periods Science.gov (United States) 2009-08-04 In "The Retrofit Puzzle: Optimal Fleet Owner Behavior in the Context of Diesel Retrofit Incentive Programs" (1) an integer program was developed to model profit-maximizing diesel fleet owner behavior when selecting pollution reduction retrofits. Flee... 9. Well-Defined Cyclic Triblock Terpolymers: A Missing Piece of the Morphology Puzzle KAUST Repository Polymeropoulos, George; Bilalis, Panayiotis; Hadjichristidis, Nikolaos 2016-01-01 Two well-defined cyclic triblock terpolymers, missing pieces of the terpolymer morphology puzzle, consisting of poly(isoprene), polystyrene, and poly(2-vinylpyridine), were synthesized by combining the Glaser coupling reaction with anionic 10. Piecing together the puzzle: Improving event content coverage for real-time sub-event detection using adaptive microblog crawling. Directory of Open Access Journals (Sweden) Laurissa Tokarchuk Full Text Available In an age when people are predisposed to report real-world events through their social media accounts, many researchers value the benefits of mining user generated content from social media. Compared with the traditional news media, social media services, such as Twitter, can provide more complete and timely information about the real-world events. However events are often like a puzzle and in order to solve the puzzle/understand the event, we must identify all the sub-events or pieces. Existing Twitter event monitoring systems for sub-event detection and summarization currently typically analyse events based on partial data as conventional data collection methodologies are unable to collect comprehensive event data. This results in existing systems often being unable to report sub-events in real-time and often in completely missing sub-events or pieces in the broader event puzzle. This paper proposes a Sub-event detection by real-TIme Microblog monitoring (STRIM framework that leverages the temporal feature of an expanded set of news-worthy event content. In order to more comprehensively and accurately identify sub-events this framework first proposes the use of adaptive microblog crawling. Our adaptive microblog crawler is capable of increasing the coverage of events while minimizing the amount of non-relevant content. We then propose a stream division methodology that can be accomplished in real time so that the temporal features of the expanded event streams can be analysed by a burst detection algorithm. In the final steps of the framework, the content features are extracted from each divided stream and recombined to provide a final summarization of the sub-events. The proposed framework is evaluated against traditional event detection using event recall and event precision metrics. 
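As background for the burst-detection step referred to in this record, a very simple sliding-window rate test in Python is sketched below. It is an illustrative stand-in only, not the algorithm used in the STRIM framework, and the window, factor and baseline parameters are assumptions made for the example.

from collections import deque

def detect_bursts(timestamps, window=60.0, factor=3.0, baseline_rate=0.5):
    # Flag the times at which the posting rate over the trailing `window` seconds
    # exceeds `factor` times an assumed baseline rate (messages per second).
    bursts = []
    recent = deque()
    for t in timestamps:                      # timestamps sorted, in seconds
        recent.append(t)
        while recent and recent[0] < t - window:
            recent.popleft()
        if len(recent) / window > factor * baseline_rate:
            bursts.append(t)
    return bursts

# A quiet stream followed by a sudden cluster of posts triggers detections
# once the cluster dominates the trailing window.
stream = [float(t) for t in range(0, 300, 10)] + [300 + 0.2 * i for i in range(200)]
print(detect_bursts(stream)[:3])

Real systems typically compare against a rate estimated from the stream itself rather than a fixed baseline, but the windowed comparison is the basic idea behind many burst detectors.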
Results show that improving the quality and coverage of event contents contribute to better event detection by identifying additional valid sub-events. The 11. Piecing together the puzzle: Improving event content coverage for real-time sub-event detection using adaptive microblog crawling. Science.gov (United States) Tokarchuk, Laurissa; Wang, Xinyue; Poslad, Stefan 2017-01-01 In an age when people are predisposed to report real-world events through their social media accounts, many researchers value the benefits of mining user generated content from social media. Compared with the traditional news media, social media services, such as Twitter, can provide more complete and timely information about the real-world events. However events are often like a puzzle and in order to solve the puzzle/understand the event, we must identify all the sub-events or pieces. Existing Twitter event monitoring systems for sub-event detection and summarization currently typically analyse events based on partial data as conventional data collection methodologies are unable to collect comprehensive event data. This results in existing systems often being unable to report sub-events in real-time and often in completely missing sub-events or pieces in the broader event puzzle. This paper proposes a Sub-event detection by real-TIme Microblog monitoring (STRIM) framework that leverages the temporal feature of an expanded set of news-worthy event content. In order to more comprehensively and accurately identify sub-events this framework first proposes the use of adaptive microblog crawling. Our adaptive microblog crawler is capable of increasing the coverage of events while minimizing the amount of non-relevant content. We then propose a stream division methodology that can be accomplished in real time so that the temporal features of the expanded event streams can be analysed by a burst detection algorithm. In the final steps of the framework, the content features are extracted from each divided stream and recombined to provide a final summarization of the sub-events. The proposed framework is evaluated against traditional event detection using event recall and event precision metrics. Results show that improving the quality and coverage of event contents contribute to better event detection by identifying additional valid sub-events. The novel combination of 12. A Play on Words: Using Cognitive Computing as a Basis for AI Solvers in Word Puzzles Science.gov (United States) Manzini, Thomas; Ellis, Simon; Hendler, James 2015-12-01 In this paper we offer a model, drawing inspiration from human cognition and based upon the pipeline developed for IBM's Watson, which solves clues in a type of word puzzle called syllacrostics. We briefly discuss its situation with respect to the greater field of artificial general intelligence (AGI) and how this process and model might be applied to other types of word puzzles. We present an overview of a system that has been developed to solve syllacrostics. 13. The Forward-Bias Puzzle: A Solution Based on Covered Interest Parity OpenAIRE Pippenger, John 2009-01-01 The forward-bias puzzle is probably the most important puzzle in international macroeconomics. After more than 20 years, there is no accepted solution. My solution is based on covered interest parity (CIP). CIP implies: (1) Forward rates are not rational expectations of future spot rates. Those expectations depend on future spot rates and interest rate differentials. 
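For reference, covered interest parity ties the forward rate to the spot rate and the two nominal interest rates. A small numerical sketch (illustrative numbers only, not taken from the paper) is:

def cip_forward(spot, domestic_rate, foreign_rate):
    # Covered interest parity: F = S * (1 + i_domestic) / (1 + i_foreign),
    # with S and F quoted as domestic currency per unit of foreign currency.
    return spot * (1 + domestic_rate) / (1 + foreign_rate)

print(round(cip_forward(1.10, 0.03, 0.01), 4))   # 1.1218

With a 3% domestic and 1% foreign interest rate, the forward price of the foreign currency sits above the spot price, so the higher-interest (domestic) currency trades at a forward discount; under CIP this is an arbitrage relation, not a forecast of the future spot rate.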
(2) The forward bias is the result of a specification error, replacing future forward exchange rates with current forward ... 14. The π+-emission puzzle in 4ΛHe decay International Nuclear Information System (INIS) Gibson, B.F.; Timmermans, R.G.E. 1998-01-01 We re-examine the puzzling π + emission from the weak decay of 4 Λ He and propose an explanation in terms of a three-body decay of the virtual Σ + . Such a resolution of the π + decay puzzle is consistent with the calculated Σ + probability in light Λ hypernuclei as well as the experimentally observed π + energy spectrum and s-wave angular distribution. (orig.) 15. A Hierarchical Interface Design of a Puzzle Game for Elementary Education OpenAIRE Eun-Young Park; Young-Ho Park 2010-01-01 A basic instinct of humans for perfect completion usually drives us happy. Basically, humans purchase a certain complete match for scattered facts. The satisfaction of completing the scattered pieces gives us great pleasure. Thus many people put in their time and effort in the puzzle, and they gain strong satisfaction. The paper solves the importance of the general effects of a puzzle in building the edu-game design. Legacy online education has following problems. First, educational effects b... 16. JIGSAW PUZZLE IMPROVE FINE MOTOR ABILITIES OF UPPER EXTREMITIES IN POST-STROKE ISCHEMIC CLIENTS Directory of Open Access Journals (Sweden) Kusnanto Kusnanto 2017-06-01 Full Text Available Introduction: Ischemic stroke is a disease caused by focal cerebral ischemia, where is a decline in blood flow that needed for neuronal metabolism, leading to neurologic deficit include motor deficit such as fine motor skills impairment. Therapy of fine motor skills disorders is to improve motor function, prevent contractures and complications. These study aimed to identify the effect of playing Jigsaw Puzzle on muscle strength, extensive motion, and upper extremity fine motor skills in patients with ischemic stroke at Dr. Moewardi Hospital, Surakarta. Methods: Experimental Quasi pre-posttest one group control. The number of samples were 34 respondents selected using purposive sampling technique. The samples were divided into intervention and control groups. The intervention group was 17 respondents who were given standard treatment hospital and played Jigsaw Puzzle 2 times a day for six days. Control group is one respondent given by hospital standard therapy without given additional Jigsaw Puzzle game. Evaluation of these research is done on the first and seventh day for those groups. Result: The results showed that muscle strength, the range of joint motion and fine motor skills of upper extremities increased (p = 0.001 significantly after being given the Jigsaw Puzzle games. These means playing Jigsaw Puzzle increase muscle strength, the range of joint motion and upper extremity fine motor skill of ischemic stroke patients. Discussion and conclusion: Jigsaw puzzle game administration as additional rehabilitation therapy in upper extremity fine motor to minimize the occurrence of contractures and motor disorders in patients with ischemic stroke. Jigsaw puzzle game therapy capable of creating repetitive motion as a key of neurological rehabilitation in Ischemic Stroke. This study recommends using jigsaw puzzle game as one of intervention in the nursing care of Ischemic Stroke patients. 17. 
Association of Crossword Puzzle Participation with Memory Decline in Persons Who Develop Dementia Science.gov (United States) Pillai, Jagan A.; Hall, Charles B.; Dickson, Dennis W.; Buschke, Herman; Lipton, Richard B.; Verghese, Joe 2013-01-01 Participation in cognitively stimulating leisure activities such as crossword puzzles may delay onset of the memory decline in the preclinical stages of dementia, possibly via its effect on improving cognitive reserve. We followed 488 initially cognitively intact, community-residing individuals with clinical and cognitive assessments every 12–18 months in the Bronx Aging Study. We assessed the influence of crossword puzzle participation on the onset of accelerated memory decline, as measured by the Buschke Selective Reminding Test, in 101 individuals who developed incident dementia, using a change point model. Crossword puzzle participation at baseline delayed onset of accelerated memory decline by 2.54 years. Inclusion of education or participation in other cognitively stimulating activities did not significantly add to the fit of the model beyond the effect of puzzles. Our findings show that late-life crossword puzzle participation, independent of education, was associated with delayed onset of memory decline in persons who developed dementia. Given the wide availability and accessibility of crossword puzzles, their role in preventing cognitive decline should be validated in future clinical trials. PMID:22040899 18. PENERAPAN JIGSAW PUZZLE COMPETITION DALAM PEMBELAJARAN KONTEKSTUAL UNTUK MENINGKATKAN MINAT DAN HASIL BELAJAR FISIKA SISWA SMP Directory of Open Access Journals (Sweden) D. Yulianti 2012-01-01 To address students' low interest and achievement in physics, a study was carried out through contextual physics learning assisted by a jigsaw puzzle competition. The subjects of this study were the students of class VII H of SMP Negeri 18 Semarang. The study implemented learning with a contextual approach assisted by a jigsaw puzzle competition. The results show that contextual learning assisted by a jigsaw puzzle competition significantly increased the interest and learning outcomes of the class VII H students of SMP Negeri 18 Semarang in the 2008/2009 school year. To be more effective, contextual learning should be developed with other methods so that further gains in interest and learning outcomes are obtained; this model also needs to be applied to other topics in physics teaching. To overcome the problem of lack of students' interest as well as their achievements, a Jigsaw Puzzle Competition in a physics contextual learning process was done. The students from class VII H of Junior High School 18 Semarang, academic year 2008/2009, were chosen as the subjects. The result of this research shows that contextual teaching and learning using the Jigsaw Puzzle Competition approach not only increased the students' interest but also improved their achievements. In order to get a more effective result, it is necessary to develop contextual teaching and learning by combining it with other methods. Because of the great benefit of this model, it is necessary to apply this model to other physics learning concepts. Keywords: Jigsaw Puzzle Competition, contextual, interest;
Many discoveries have led to the notion of a canonical pathway, termed the FA pathway, where all FA proteins function sequentially in different protein complexes to repair DNA cross-link damages. Although a detailed architecture of this DNA cross-link repair pathway is emerging, the question of how a defective DNA cross-link repair process translates into the disease phenotype is unresolved. Other areas of research including oxidative metabolism, cell cycle progression, apoptosis, and transcriptional regulation have been studied in the context of FA, and some of these areas were investigated before the fervent enthusiasm in the DNA repair field. These other molecular mechanisms may also play an important role in the pathogenesis of this disease. In addition, several FA-interacting proteins have been identified with roles in these “other” nonrepair molecular functions. Thus, the goal of this paper is to revisit old ideas and to discuss protein-protein interactions related to other FA-related molecular functions to try to give the reader a wider perspective of the FA molecular puzzle. PMID:22737580 20. Hyperon puzzle of neutron stars with Skyrme force models International Nuclear Information System (INIS) Lim, Yeunhwan; Hyun, Chang Ho; Kwak, Kyujin; Lee, Chang-Hwan 2015-01-01 We consider the so-called hyperon puzzle of neutron star (NS). We employ Skyrme force models for the description of in-medium nucleon–nucleon (NN), nucleon–Lambda hyperon (NΛ) and Lambda–Lambda (ΛΛ) interactions. A phenomenological finite-range force (FRF) for the ΛΛ interaction is considered as well. Equation of state (EoS) of NS matter is obtained in the framework of density functional theory, and Tolman–Oppenheimer–Volkoff (TOV) equations are solved to obtain the mass-radius relations of NSs. It has been generally known that the existence of hyperons in the NS matter is not well supported by the recent discovery of large-mass NSs (M ≃ 2M⊙) since hyperons make the EoS softer than the one without them. For the selected interaction models, NΛ interactions reduce the maximum mass of NS by about 30%, while ΛΛ interactions can give about 10% enhancement. Consequently, we find that some Skyrme force models predict the maximum mass of NS consistent with the observation of 2M ⊙ NSs, and at the same time satisfy observationally constrained mass-radius relations. (author) 1. The Puzzle of Visual Development: Behavior and Neural Limits. Science.gov (United States) Kiorpes, Lynne 2016-11-09 The development of visual function takes place over many months or years in primate infants. Visual sensitivity is very poor near birth and improves over different times courses for different visual functions. The neural mechanisms that underlie these processes are not well understood despite many decades of research. The puzzle arises because research into the factors that limit visual function in infants has found surprisingly mature neural organization and adult-like receptive field properties in very young infants. The high degree of visual plasticity that has been documented during the sensitive period in young children and animals leaves the brain vulnerable to abnormal visual experience. Abnormal visual experience during the sensitive period can lead to amblyopia, a developmental disorder of vision affecting ∼3% of children. This review provides a historical perspective on research into visual development and the disorder amblyopia. 
The mismatch between the status of the primary visual cortex and visual behavior, both during visual development and in amblyopia, is discussed, and several potential resolutions are considered. It seems likely that extrastriate visual areas further along the visual pathways may set important limits on visual function and show greater vulnerability to abnormal visual experience. Analyses based on multiunit, population activity may provide useful representations of the information being fed forward from primary visual cortex to extrastriate processing areas and to the motor output. Copyright © 2016 the authors 0270-6474/16/3611384-10$15.00/0. 2. Cohabitants' perspective on housing adaptations: a piece of the puzzle. Science.gov (United States) Granbom, Marianne; Taei, Afsaneh; Ekstam, Lisa 2017-12-01 As part of the Swedish state-funded healthcare system, housing adaptations are used to promote safe and independent living for disabled people in ordinary housing through the elimination of physical environmental barriers in the home. The aim of this study was to describe the cohabitants' expectations and experiences of how a housing adaptation, intended for the partner, would impact their everyday life. In-depth interviews were conducted with cohabitants of nine people applying for a housing adaptation, initially at the time of the application and then again 3 months after the housing adaptation was installed. A longitudinal analysis was performed including analysis procedures from Grounded Theory. The findings revealed the expectations and experiences in four categories: partners' activities and independence; cohabitants' everyday activities and caregiving; couples' shared recreational/leisure activities; and housing decisions. A core category putting the intervention into perspective was called 'Housing adaptations - A piece of the puzzle'. From the cohabitants' perspective, new insights on housing adaptations emerged, which are important to consider when planning and carrying out successful housing adaptations. © 2017 Nordic College of Caring Science. 3. Puzzling Two-Proton Decay of 67Kr Science.gov (United States) Wang, S. M.; Nazarewicz, W. 2018-05-01 Ground-state two-proton (2 p ) radioactivity is a rare decay mode found in a few very proton-rich isotopes. The 2 p decay lifetime and properties of emitted protons carry invaluable information on nuclear structure in the presence of a low-lying proton continuum. The recently measured 2 p decay of 67Kr turned out to be unexpectedly fast. Since 67Kr is expected to be a deformed system, we investigate the impact of deformation effects on the 2 p radioactivity. We apply the recently developed Gamow coupled-channel framework, which allows for a precise description of three-body systems in the presence of rotational and vibrational couplings. This is the first application of a three-body approach to a two-nucleon decay from a deformed nucleus. We show that deformation couplings significantly increase the 2 p decay width of 67Kr; this finding explains the puzzling experimental data. The calculated angular proton-proton correlations reflect a competition between 1 p and 2 p decay modes in this nucleus. 4. Induced Hyperon-Nucleon-Nucleon Interactions and the Hyperon Puzzle. 
Science.gov (United States) Wirth, Roland; Roth, Robert 2016-10-28 We present the first ab initio calculations for p-shell hypernuclei including hyperon-nucleon-nucleon (YNN) contributions induced by a similarity renormalization group transformation of the initial hyperon-nucleon interaction. The transformation including the YNN terms conserves the spectrum of the Hamiltonian while drastically improving model-space convergence of the importance-truncated no-core model, allowing a precise extraction of binding and excitation energies. Results using a hyperon-nucleon interaction at leading order in chiral effective field theory for lower- to mid-p-shell hypernuclei show a good reproduction of experimental excitation energies while hyperon separation energies are typically overestimated. The induced YNN contributions are strongly repulsive and we show that they are related to a decoupling of the Σ hyperons from the hypernuclear system, i.e., a suppression of the Λ-Σ conversion terms in the Hamiltonian. This is linked to the so-called hyperon puzzle in neutron-star physics and provides a basic mechanism for the explanation of strong ΛNN three-baryon forces. 5. Global climate change and the equity-efficiency puzzle International Nuclear Information System (INIS) Manne, Alan S.; Stephan, Gunter 2005-01-01 There is a broad consensus that the costs of abatement of global climate change can be reduced efficiently through the assignment of quota rights and through international trade in these rights. There is, however, no consensus on whether the initial assignment of emissions permits can affect the Pareto-optimal global level of abatement. This paper provides some insight into the equity-efficiency puzzle. Qualitative results are obtained from a small-scale model; then quantitative evidence of separability is obtained from MERGE, a multiregion integrated assessment model. It is shown that if all the costs of climate change can be expressed in terms of GDP losses, Pareto-efficient abatement strategies are independent of the initial allocation of emissions rights. This is the case sometimes described as 'market damages'. If, however, different regions assign different values to nonmarket damages such as species losses, different sharing rules may affect the Pareto-optimal level of greenhouse gas abatement. Separability may then be demonstrated only in specific cases (e.g. identical welfare functions or quasi-linearity of preferences or small shares of wealth devoted to abatement) 6. Induced hyperon-nucleon-nucleon interactions and the hyperon puzzle Energy Technology Data Exchange (ETDEWEB) Wirth, Roland; Roth, Robert [Institut fuer Kernphysik, TU Darmstadt (Germany) 2016-07-01 There is a strong experimental and theoretical interest in determining the structure of hypernuclei and the effect of strangeness in strongly interacting many-body systems. Recently, we presented the first calculations of hypernuclei in the p shell from first principles. However, these calculations showed either slow convergence with respect to model-space size or, when the hyperon-nucleon potential is transformed via the Similarity Renormalization Group, strong induced three-body terms. By including these induced hyperon-nucleon-nucleon (YNN) terms explicitly, we get precise binding and excitation energies. We present first results for p-shell hypernuclei and discuss the origin of the YNN terms, which are mainly driven by the evolution of the Λ-Σ conversion terms. 
We find that they are tightly connected to the hyperon puzzle, a long-standing issue where the appearance of hyperons in models of neutron star matter lowers the predicted maximum neutron star mass below the bound set by the heaviest observed objects. 7. METODE BERMAIN PUZZLE BERPENGARUH PADA PERKEMBANGAN MOTORIK HALUS ANAK USIA PRASEKOLAH Directory of Open Access Journals (Sweden) Lilis Maghfuroh 2018-03-01 Full Text Available Pre-school is a period to increase fine motor development of children. This research aims to determine the increasing of fine motor development using the puzzle for preschoolers. his research is using one-group pre-post test design without control and procedures for statistical analysis through Wilcoxon Sign Rank Test with a confidence level of 95% and α: 5%. The subjects of this study were 40 children. The results of the analysis showed that there was effect of the intervention method by playing puzzle through the development of fine motor skills at pre-school children in mind that the value of Z sign p = 0.001 where significant value of p <0.05. Puzzle play method can improve child language development. The results of this research can be used as the basic for doing the puzzles therapy in children because it can improve fine motor skills development of children. Masa prasekolah merupakan masa peningkatan perkembangan motorik halus. Motorik halus adalah gerakan yang dilakukan oleh sekelompok otot-otot kecil seperti jari-jemari. Pada survey awal hampir sebagian anak mengalami perkembangan motorik suspek. Penelitian ini untuk mengetahui pengaruh metode puzzle terhadap perkembangan motorik halus anak pra sekolah. Penelitian ini menggunakan one-group pra-post test design tanpa control dan analisis statistik menggunakan Uji Wilcoxon Sign Rank Test dengan tingkat  kepercayaan 95% dan α : 5%. Populasi penelitian 50 anak dan sample 40 anak dengan tehnik Simple Random Sampling. Setelah data terkumpul dengan menggunakan DDST selanjutnya dianalisa. Hasil penelitian ini menunjukkan ada pengaruh metode bermain puzzle terhadap perkembangan motorik halus diketahui p sign = 0,001 dimana nilai signifikan p < 0,05. Hasil penelitian ini dapat dijadikan dasar untuk melakukan terapi puzzle pada anak untuk meningkatkan perkembangan motorik halus anak. 8. What cognitive strategies do orangutans (Pongo pygmaeus) use to solve a trial-unique puzzle-tube task incorporating multiple obstacles? Science.gov (United States) Tecwyn, Emma C; Thorpe, Susannah K S; Chappell, Jackie 2012-01-01 Apparently sophisticated behaviour during problem-solving is often the product of simple underlying mechanisms, such as associative learning or the use of procedural rules. These and other more parsimonious explanations need to be eliminated before higher-level cognitive processes such as causal reasoning or planning can be inferred. We presented three Bornean orangutans with 64 trial-unique configurations of a puzzle-tube to investigate whether they were able to consider multiple obstacles in two alternative paths, and subsequently choose the correct direction in which to move a reward in order to retrieve it. We were particularly interested in how subjects attempted to solve the task, namely which behavioural strategies they could have been using, as this is how we may begin to elucidate the cognitive mechanisms underpinning their choices. 
To explore this, we simulated performance outcomes across the 64 trials for various procedural rules and rule combinations that subjects may have been using based on the configuration of different obstacles. Two of the three subjects solved the task, suggesting that they were able to consider at least some of the obstacles in the puzzle-tube before executing action to retrieve the reward. This is impressive compared with the past performances of great apes on similar, arguably less complex tasks. Successful subjects may have been using a heuristic rule combination based on what they deemed to be the most relevant cue (the configuration of the puzzle-tube ends), which may be a cognitively economical strategy. 9. Electron-muon puzzle and the electromagnetic coupling constant International Nuclear Information System (INIS) Jehle, H. 1977-01-01 On the basis of a heuristic model we argued in an earlier paper (paper C of this series) electric field (and of course the magnetic field, too) of a lepton or of a quark may be formulated in terms of a closed loop of quantized magnetic flux whose alternative forms (''loopforms'') are superposed with probability amplitudes so as to represent the electromagnetic field of that lepton or quark. The Zitterbewegung of a single stationary (''elementary'') particle suggests a kind of quasiextension, which is assumed, in the present theory, to permit concepts of structuralization of the electromagnetic field even for leptons. Mesons and baryons may be represented by linked quantized flux loops, i.e., quark loops (as in paper B). The central problem now (in this paper D) is to formulate those probability-amplitude distributions in terms of wave functions to characterize the internal structure of the lepton or quark in question. As probability-amplitude functions one may choose bases of irreducible representations of the group with respect to which the model is to be invariant. It is seen that this implies the SO(4) group. As both the electron-muon mass ratio and the electromagnetic coupling constant depend, in this flux-quantization model, on the correct formulation of the structuralization of probability-amplitude distributions, we should expect to get an insight into both these puzzles from finding the right probability-amplitude wave functions. Furthermore, it is seen that this same structuralization of probability-amplitude distributions also permits one to estimate the rate of weak interactions, thus relating them to electromagnetic interactions 10. Hyperon puzzle, hadron-quark crossover and massive neutron stars International Nuclear Information System (INIS) Masuda, Kota; Hatsuda, Tetsuo; Takatsuka, Tatsuyuki 2016-01-01 Bulk properties of cold and hot neutron stars are studied on the basis of the hadron-quark crossover picture where a smooth transition from the hadronic phase to the quark phase takes place at finite baryon density. By using a phenomenological equation of state (EOS) ''CRover'', which interpolates the two phases at around 3 times the nuclear matter density (ρ 0 ), it is found that the cold NSs with the gravitational mass larger than 2M CircleDot can be sustained. This is in sharp contrast to the case of the first-order hadron-quark transition. The radii of the cold NSs with the CRover EOS are in the narrow range (12.5 ± 0.5) km which is insensitive to the NS masses. Due to the stiffening of the EOS induced by the hadron-quark crossover, the central density of the NSs is at most 4 ρ 0 and the hyperon-mixing barely occurs inside the NS core. 
This constitutes a solution of the long-standing hyperon puzzle. The effect of color superconductivity (CSC) on the NS structures is also examined with the hadron-quark crossover. For the typical strength of the diquark attraction, a slight softening of the EOS due to two-flavor CSC (2SC) takes place and the maximum mass is reduced by about 0.2 M☉. The CRover EOS is generalized to the supernova matter at finite temperature to describe the hot NSs at birth. The hadron-quark crossover is found to decrease the central temperature of the hot NSs under isentropic condition. The gravitational energy release and the spin-up rate during the contraction from the hot NS to the cold NS are also estimated. (orig.) 11. Desiccation tolerance of Sphagnum revisited: a puzzle resolved. Science.gov (United States) Hájek, T; Vicherová, E 2014-07-01 As ecosystem engineers, Sphagnum mosses control their surroundings through water retention, acidification and peat accumulation. Because water retention avoids desiccation, sphagna are generally intolerant to drought; however, the literature on Sphagnum desiccation tolerance (DT) provides puzzling results, indicating the inducible nature of their DT. To test this, various Sphagnum species and other mesic bryophytes were hardened to drought by (i) slow drying; (ii) ABA application and (iii) chilling or frost. DT was assessed as recovery of chlorophyll fluorescence parameters after severe desiccation. We monitored the seasonal course of DT in bog bryophytes. Under laboratory conditions, following initial de-hardening, untreated Sphagnum shoots lacked DT; however, DT was induced by all hardening treatments except chilling, notably by slow drying, and in Sphagnum species of the section Cuspidata. In the field, sphagna in hollows and lawns developed DT several times during the growing season, responding to reduced precipitation and a lowered water table. Hummock and aquatic species developed DT only in late autumn, probably as a response to frost. Sphagnum protonemata failed to develop DT; hence, desiccation may limit Sphagnum establishment in drier habitats with suitable substrate chemistry. Desiccation avoiders among sphagna form compact hummocks or live submerged; thus, they do not develop DT in the field, lacking the initial desiccation experience, which is frequent in hollow and lawn habitats. We confirmed the morpho-physiological trade-off: in contrast to typical hollow sphagna, hummock species invest more resources in water retention (desiccation avoidance), while they have a lower ability to develop physiological DT. © 2013 German Botanical Society and The Royal Botanical Society of the Netherlands. 12. Hyperon puzzle, hadron-quark crossover and massive neutron stars Energy Technology Data Exchange (ETDEWEB) Masuda, Kota [The University of Tokyo, Department of Physics, Tokyo (Japan); Nishina Center, RIKEN, Theoretical Research Division, Wako (Japan)]; Hatsuda, Tetsuo [Nishina Center, RIKEN, Theoretical Research Division, Wako (Japan); The University of Tokyo, Kavli IPMU (WPI), Chiba (Japan)]; Takatsuka, Tatsuyuki [Nishina Center, RIKEN, Theoretical Research Division, Wako (Japan)] 2016-03-15 Bulk properties of cold and hot neutron stars are studied on the basis of the hadron-quark crossover picture where a smooth transition from the hadronic phase to the quark phase takes place at finite baryon density.
By using a phenomenological equation of state (EOS) ''CRover'', which interpolates the two phases at around 3 times the nuclear matter density (ρ{sub 0}), it is found that the cold NSs with the gravitational mass larger than 2M {sub CircleDot} can be sustained. This is in sharp contrast to the case of the first-order hadron-quark transition. The radii of the cold NSs with the CRover EOS are in the narrow range (12.5 ± 0.5) km which is insensitive to the NS masses. Due to the stiffening of the EOS induced by the hadron-quark crossover, the central density of the NSs is at most 4 ρ{sub 0} and the hyperon-mixing barely occurs inside the NS core. This constitutes a solution of the long-standing hyperon puzzle. The effect of color superconductivity (CSC) on the NS structures is also examined with the hadron-quark crossover. For the typical strength of the diquark attraction, a slight softening of the EOS due to two-flavor CSC (2SC) takes place and the maximum mass is reduced by about 0.2M {sub CircleDot}. The CRover EOS is generalized to the supernova matter at finite temperature to describe the hot NSs at birth. The hadron-quark crossover is found to decrease the central temperature of the hot NSs under isentropic condition. The gravitational energy release and the spin-up rate during the contraction from the hot NS to the cold NS are also estimated. (orig.) 13. The puzzle of immune phenotypes of childhood asthma. Science.gov (United States) Landgraf-Rauf, Katja; Anselm, Bettina; Schaub, Bianca 2016-12-01 new immunological molecules, the complex puzzle of childhood asthma is still far from being completed. Addressing the current challenges of distinct clinical asthma and wheeze phenotypes, including their stability and underlying endotypes, involves addressing the interplay of innate and adaptive immune regulatory mechanisms in large, interdisciplinary cohorts. 14. Emergence of Life on Earth: A Physicochemical Jigsaw Puzzle. Science.gov (United States) Spitzer, Jan 2017-01-01 We review physicochemical factors and processes that describe how cellular life can emerge from prebiotic chemical matter; they are: (1) prebiotic Earth is a multicomponent and multiphase reservoir of chemical compounds, to which (2) Earth-Moon rotations deliver two kinds of regular cycling energies: diurnal electromagnetic radiation and seawater tides. (3) Emerging colloidal phases cyclically nucleate and agglomerate in seawater and consolidate as geochemical sediments in tidal zones, creating a matrix of microspaces. (4) Some microspaces persist and retain memory from past cycles, and others re-dissolve and re-disperse back into the Earth's chemical reservoir. (5) Proto-metabolites and proto-biopolymers coevolve with and within persisting microspaces, where (6) Macromolecular crowding and other non-covalent molecular forces govern the evolution of hydrophilic, hydrophobic, and charged molecular surfaces. (7) The matrices of microspaces evolve into proto-biofilms of progenotes with rudimentary but evolving replication, transcription, and translation, enclosed in unstable cell envelopes. (8) Stabilization of cell envelopes 'crystallizes' bacteria-like genetics and metabolism with low horizontal gene transfer-life 'as we know it.' These factors and processes constitute the 'working pieces' of the jigsaw puzzle of life's emergence. 
They extend the concept of progenotes as the first proto-cellular life, connected backward in time to the cycling chemistries of the Earth-Moon planetary system, and forward to the ancient cell cycle of first bacteria-like organisms. Supra-macromolecular models of 'compartments first' are preferred: they facilitate macromolecular crowding-a key abiotic/biotic transition toward living states. Evolutionary models of metabolism or genetics 'first' could not have evolved in unconfined and uncrowded environments because of the diffusional drift to disorder mandated by the second law of thermodynamics. 15. Physics and financial economics (1776-2014): puzzles, Ising and agent-based models Science.gov (United States) Sornette, Didier 2014-06-01 This short review presents a selected history of the mutual fertilization between physics and economics—from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. We formulate the ‘Emerging Intelligence Market Hypothesis’ to reconcile the pervasive presence of ‘noise traders’ with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets. 16. Physics and financial economics (1776–2014): puzzles, Ising and agent-based models International Nuclear Information System (INIS) Sornette, Didier 2014-01-01 This short review presents a selected history of the mutual fertilization between physics and economics—from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. 
We formulate the ‘Emerging Intelligence Market Hypothesis’ to reconcile the pervasive presence of ‘noise traders’ with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets. (key issues reviews) 17. The Use of the Puzzle Box as a Means of Assessing the Efficacy of Environmental Enrichment Science.gov (United States) O'Connor, Angela M.; Burton, Thomas J.; Leamey, Catherine A.; Sawatari, Atomu 2014-01-01 Environmental enrichment can dramatically influence the development and function of neural circuits. Further, enrichment has been shown to successfully delay the onset of symptoms in models of Huntington’s disease 1-4, suggesting environmental factors can evoke a neuroprotective effect against the progressive, cellular level damage observed in neurodegenerative disorders. The ways in which an animal can be environmentally enriched, however, can vary considerably. Further, there is no straightforward manner in which the effects of environmental enrichment can be assessed: most methods require either fairly complicated behavioral paradigms and/or postmortem anatomical/physiological analyses. This protocol describes the use of a simple and inexpensive behavioral assay, the Puzzle Box 5-7 as a robust means of determining the efficacy of increased social, sensory and motor stimulation on mice compared to cohorts raised in standard laboratory conditions. This simple problem solving task takes advantage of a rodent’s innate desire to avoid open enclosures by seeking shelter. Cognitive ability is assessed by adding increasingly complex impediments to the shelter’s entrance. The time a given subject takes to successfully remove the obstructions and enter the shelter serves as the primary metric for task performance. This method could provide a reliable means of rapidly assessing the efficacy of different enrichment protocols on cognitive function, thus paving the way for systematically determining the role specific environmental factors play in delaying the onset of neurodevelopmental and neurodegenerative disease. PMID:25590345 18. Physics and financial economics (1776-2014): puzzles, Ising and agent-based models. Science.gov (United States) Sornette, Didier 2014-06-01 This short review presents a selected history of the mutual fertilization between physics and economics--from Isaac Newton and Adam Smith to the present. The fundamentally different perspectives embraced in theories developed in financial economics compared with physics are dissected with the examples of the volatility smile and of the excess volatility puzzle. The role of the Ising model of phase transitions to model social and financial systems is reviewed, with the concepts of random utilities and the logit model as the analog of the Boltzmann factor in statistical physics. Recent extensions in terms of quantum decision theory are also covered. A wealth of models are discussed briefly that build on the Ising model and generalize it to account for the many stylized facts of financial markets. A summary of the relevance of the Ising model and its extensions is provided to account for financial bubbles and crashes. The review would be incomplete if it did not cover the dynamical field of agent-based models (ABMs), also known as computational economic models, of which the Ising-type models are just special ABM implementations. 
We formulate the 'Emerging Intelligence Market Hypothesis' to reconcile the pervasive presence of 'noise traders' with the near efficiency of financial markets. Finally, we note that evolutionary biology, more than physics, is now playing a growing role to inspire models of financial markets. 19. KEEFEKTIFAN MODEL PEMBELAJARAN WORD SQUARE BERBANTU MEDIA PUZZLE PADA MATA PELAJARAN IPS SD [The Effectiveness of the Word Square Learning Model Supported by Puzzle Media in Elementary Social Studies] Directory of Open Access Journals (Sweden) IBNATUL IZZATI 2018-01-01 Full Text Available Abstract The problem addressed in this research was: how effective is the Word Square learning model supported by Puzzle media in improving learning outcomes in the Social Studies (IPS) subject for the third grade of Public Elementary School (SDN) Wonopringgo 01? The research design was a True Experiment with a pretest-posttest control group and one kind of treatment. The samples were taken from third-grade students of SDN 01 Wonopringgo in the academic year 2016/2017. The data in this study were obtained through tests and documentation. The experimental class (third grade A) was taught with the Word Square learning model supported by Puzzle media, while the control class (third grade B) was not. Posttest results showed that 95% of students in the experimental class reached the mastery criterion, compared with 70% in the control class, and a one-tailed t test gave t_obs = 3.10 > t_table = 1.72. Thus, it could be concluded that learning with the Word Square model supported by Puzzle media was effective for student learning outcomes in Social Studies (IPS) in the third grade at SDN 01 Wonopringgo Pekalongan. Keywords: effectiveness, word square, puzzle 20. Cryptographic Puzzles and Game Theory against DoS and DDoS attacks in Networks DEFF Research Database (Denmark) Mikalas, Antonis; Komninos, Nikos; Prasad, Neeli R. 2008-01-01 In this chapter, we present techniques to defeat Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks. In the first part, we describe client puzzle techniques that are based on the idea of computationally exhausting a malicious user when he attempts to launch an attack. In the second part, we introduce some basic principles of game theory and discuss how game-theoretical frameworks can protect computer networks. Finally, we show techniques that combine client puzzles with game theory in order to provide DoS and DDoS resilience. 1. Piecing It Together: The Effect of Background Music on Children's Puzzle Assembly. Science.gov (United States) Koolidge, Louis; Holmes, Robyn M 2018-04-01 This study explored the effects of background music on cognitive (puzzle assembly) task performance in young children. Participants were 87 primarily European-American children (38 boys, 49 girls; mean age = 4.77 years) enrolled in early childhood classes in the northeastern United States. Children were given one minute to complete a 12-piece puzzle task in one of three background music conditions: music with lyrics, music without lyrics, and no music. The music selection was "You're Welcome" from the Disney movie "Moana."
Results revealed that children who heard the music without lyrics completed more puzzle pieces than children in either the music with lyrics or no music condition. Background music without distracting lyrics may be beneficial and superior to background music with lyrics for young children's cognitive performance even when they are engaged independently in a nonverbal task. 2. A puzzling aspect of the effect of advance notice on unemployment DEFF Research Database (Denmark) 1995-01-01 Displaced male workers with generous periods of advance notice tend to move directly into reemployment faster than their non-notified counterparts but, once unemployed, tend to escape from unemployment much more slowly. We examine three potential explanations for this puzzle, associated with unemployment insurance, the endogeneity of notice, and differential search intensity. Of these alternatives, the evidence suggests that it is the additional but less productive search time during the notice interval that creates the appearance of a puzzle. 3. The DS86 neutron dosimetry enigma: Some missing pieces to the puzzle International Nuclear Information System (INIS) Gold, R. 1994-01-01 International programs have been conducted over the last four decades to quantify the exposure of atom bomb survivors from Hiroshima and Nagasaki. Unfortunately, the quest for accurate gamma-ray and neutron exposure doses of atom bomb survivors has proven elusive. Efforts in the most recent of these programs, designated as Dosimetry System 1986 (DS86), have revealed a serious and persistent discrepancy between neutron transport calculations and thermal neutron activation measurements at the Hiroshima site, which will be called the DS86 neutron dosimetry enigma. It is established that this enigma is a complex puzzle that precludes simple solutions. This conclusion is deduced through the identification of a number of missing pieces to the puzzle. Implications and conclusions that can be inferred from these missing puzzle pieces are advanced. 4. Puzzle Imaging: Using Large-Scale Dimensionality Reduction Algorithms for Localization. Science.gov (United States) Glaser, Joshua I; Zamft, Bradley M; Church, George M; Kording, Konrad P 2015-01-01 Current high-resolution imaging techniques require an intact sample that preserves spatial relationships. We here present a novel approach, "puzzle imaging," that allows imaging a spatially scrambled sample. This technique takes many spatially disordered samples, and then pieces them back together using local properties embedded within the sample. We show that puzzle imaging can efficiently produce high-resolution images using dimensionality reduction algorithms. We demonstrate the theoretical capabilities of puzzle imaging in three biological scenarios, showing that (1) relatively precise 3-dimensional brain imaging is possible; (2) the physical structure of a neural network can often be recovered based only on the neural connectivity matrix; and (3) a chemical map could be reproduced using bacteria with chemosensitive DNA and conjugative transfer. The ability to reconstruct scrambled images promises to enable imaging based on DNA sequencing of homogenized tissue samples.
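The puzzle-imaging entry above (item 4) frames localization as a large-scale dimensionality-reduction problem: global positions are recovered from purely local relationships. The sketch below is only an illustration of that idea, not the authors' actual pipeline; the toy data, the neighborhood radius, and the use of a Laplacian eigenmap are assumptions made for the example.

```python
# Illustrative sketch only: recover approximate relative positions from a
# purely local "who-is-near-whom" matrix with a Laplacian eigenmap.
# Hypothetical toy data; not the pipeline used in the puzzle-imaging paper.
import numpy as np

rng = np.random.default_rng(0)
true_xy = rng.uniform(0, 10, size=(200, 2))        # unknown ground-truth layout

# Local information only: 1 if two samples lie within a small radius.
dists = np.linalg.norm(true_xy[:, None, :] - true_xy[None, :, :], axis=-1)
A = (dists < 1.5).astype(float)
np.fill_diagonal(A, 0.0)

# Unnormalized graph Laplacian L = D - A.
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors of the 2nd and 3rd smallest eigenvalues give a 2-D embedding.
eigvals, eigvecs = np.linalg.eigh(L)
embedding = eigvecs[:, 1:3]

print("embedding shape:", embedding.shape)   # (200, 2): layout recovered up to
                                             # rotation, reflection and scale
```

The recovered coordinates are only defined up to a rigid transformation and scale, which is exactly the regime the entry describes: local adjacency alone cannot fix a global frame, but it can reconstruct the relative arrangement. 5.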
Penyelesaian Masalah 8-Puzzle dengan Algoritma Steepest-Ascent Hill Climbing [Solving the 8-Puzzle with the Steepest-Ascent Hill Climbing Algorithm] Directory of Open Access Journals (Sweden) David Abraham 2016-03-01 Full Text Available The 8-puzzle is one implementation of Artificial Intelligence. Many search algorithms can be applied in the course of solving it. An 8-puzzle solution is obtained more quickly when an array representation is used with a variant of the Steepest-Ascent Hill Climbing algorithm (hill climbing that selects the steepest slope), with the correct-position heuristic and the distance heuristic as parameters, combined with a LogList that stores the states already visited, in order to counter the weaknesses of the hill climbing algorithm itself and to avoid looping over previously visited states. Among the search methods based on a heuristic function are Hill Climbing, Best First Search, and A* (A Star). The LogList records every visited puzzle state so that looping over, or returning to, states that have already been traversed is avoided, thereby addressing the problems of Steepest-Ascent Hill Climbing. 6. The Simple Past Puzzle. A Study of Some Aspects of the Syntax and Semantics of Tense Directory of Open Access Journals (Sweden) Nino Gulli 2014-05-01 Full Text Available In this paper, I claim that the so-called present perfect puzzle is, in reality, a puzzle about the simple past. It is the latter, I argue, that shows a puzzling behavior, given that it can be used not only in definite contexts but also in seemingly indefinite ones. I employ the notions of time frame and specifiability to show how the obvious distinction between the two tenses in terms of temporal logic can be accounted for. I also propose that the past morpheme -ed be considered a kind of verb determiner which selects a temporal XP as a complement. Such a complement can be (and usually is) expressed either in the sentence or in the larger discourse; however, it can also remain implicit, or covert. OpenAIRE Jiandong Ju; Ziru Wei; Hong Ma 2015-01-01 From 1992 to 2011, the total trade volume between the U.S. and China increased by 25 times, and China's share in U.S. total imports increased from 5% to 20%. However, the U.S.'s share in China's total imports dropped from 11% to 8% in the same period. In the major categories of U.S. exports to China, Waste & Scrap increased from 744 million dollars in 2000 to 7,562 million dollars in 2008, rising by 916% and becoming the No. 1 product that the U.S. exports to China. It is important to unders... 8. Jigsaw puzzle metasurface for multiple functions: polarization conversion, anomalous reflection and diffusion. Science.gov (United States) Zhao, Yi; Cao, Xiangyu; Gao, Jun; Liu, Xiao; Li, Sijia 2016-05-16 We demonstrate a simple reconfigurable metasurface with multiple functions. Anisotropic tiles are investigated and manufactured as fundamental elements. Then, the tiles are combined in a certain sequence to construct a metasurface. Each of the tiles can be adjusted independently, which is like a jigsaw puzzle, and the whole metasurface can achieve diverse functions by different layouts. For demonstration purposes, we realize polarization conversion, anomalous reflection and diffusion by a jigsaw puzzle metasurface with 6 × 6 pieces of anisotropic tile. Simulated and measured results prove that our method offers a simple and effective strategy for metasurface design.
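As a companion to item 5 above (the 8-puzzle entry), here is a minimal Python sketch of steepest-ascent hill climbing with a set of visited states standing in for the LogList idea. The tuple state encoding and the Manhattan-distance heuristic are illustrative choices for this sketch, not the author's implementation.

```python
# Minimal sketch: steepest-ascent hill climbing for the 8-puzzle with a
# Manhattan-distance heuristic and a visited-state log (the "LogList" idea).
# Illustrative only; plain hill climbing can still stall on local optima.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)          # 0 is the blank

def manhattan(state):
    dist = 0
    for idx, tile in enumerate(state):
        if tile == 0:
            continue
        goal_idx = tile - 1
        dist += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return dist

def neighbors(state):
    blank = state.index(0)
    r, c = divmod(blank, 3)
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            swap = nr * 3 + nc
            s = list(state)
            s[blank], s[swap] = s[swap], s[blank]
            yield tuple(s)

def steepest_ascent(start, max_steps=1000):
    current, log = start, {start}              # log of visited states
    for _ in range(max_steps):
        if current == GOAL:
            return current, True
        candidates = [n for n in neighbors(current) if n not in log]
        if not candidates:
            break                              # dead end: every move revisits a state
        best = min(candidates, key=manhattan)
        if manhattan(best) > manhattan(current):
            break                              # no move improves or matches the heuristic
        current = best
        log.add(current)
    return current, current == GOAL

print(steepest_ascent((1, 2, 3, 4, 5, 6, 0, 7, 8)))   # easy instance, reaches GOAL
```

Because hill climbing keeps only the best immediate move, the visited-state log prevents cycling but does not guarantee a solution from every start state; that limitation is exactly what the entry contrasts with Best First Search and A*. 9.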
PuzzleArt Therapy: Connecting the Pieces in Search of Answers Directory of Open Access Journals (Sweden) Jennifer Fortuna 2016-10-01 Full Text Available Alli Berman, a New York based artist, provided the cover art for the Fall 2016 issue of The Open Journal of Occupational Therapy (OJOT. “Sunlight Underwater” is a 12 piece PuzzleArt painting made from acrylic on American maple that measures 22x30. The PuzzleArt concept began as a simple exercise that evolved into a therapeutic modality. When a sudden stroke impacted Berman’s well-being and quality of life, it was art that helped her to make connections during recovery. 10. An Easy & Fun Way to Teach about How Science "Works": Popularizing Haack's Crossword-Puzzle Analogy Science.gov (United States) Pavlova, Iglika V.; Lewis, Kayla C. 2013-01-01 Science is a complex process, and we must not teach our students overly simplified versions of "the" scientific method. We propose that students can uncover the complex realities of scientific thinking by exploring the similarities and differences between solving the familiar crossword puzzles and scientific "puzzles."… 11. Having Fun and Accepting Challenges Are Natural Instincts: Jigsaw Puzzles to Challenge Students and Test Their Abilities While Having Fun! Science.gov (United States) Rodenbaugh, Hanna R.; Lujan, Heidi L.; Rodenbaugh, David W.; DiCarlo, Stephen E. 2014-01-01 Because jigsaw puzzles are fun, and challenging, students will endure and discover that persistence and grit are rewarded. Importantly, play and fun have a biological place just like sleep and dreams. Students also feel a sense of accomplishment when they have completed a puzzle. Importantly, the reward of mastering a challenge builds confidence… 12. Instructional Media Production for Early Childhood Education: A. B. C. Jig-Saw Puzzle, a Model Science.gov (United States) Yusuf, Mudashiru Olalere; Olanrewaju, Olatayo Solomon; Soetan, Aderonke K. 2015-01-01 In this paper, a. b. c. jig-saw puzzle was produced for early childhood education using local materials. This study was a production based type of research, to serve as a supplemental or total learning resource. Its production followed four phases of development referred to as information, design, production and evaluation. The storyboard cards,… 13. Gardner's Two Children Problems and Variations: Puzzles with Conditional Probability and Sample Spaces Science.gov (United States) Taylor, Wendy; Stacey, Kaye 2014-01-01 This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors… 14. Resolutions of Several Puzzles at Intermediate pT and Recent Developments in Correlation International Nuclear Information System (INIS) Hwa, Rudolph C. 2006-01-01 Some of the puzzles on hadron production at intermediate p T found at RHIC are explained as natural consequences of parton recombination. In that framework for hadronization the correlation among hadrons produced in jets can be calculated. Some new results on both near-side and away-side jet structures are presented 15. High-mass twins & resolution of the reconfinement, masquerade and hyperon puzzles of compact star interiors International Nuclear Information System (INIS) Blaschke, David; Alvarez-Castillo, David E. 
2016-01-01 We aim at contributing to the resolution of three of the fundamental puzzles related to the still unsolved problem of the structure of the dense core of compact stars (CS): (i) the hyperon puzzle: how to reconcile pulsar masses of 2 M ⊙ with the hyperon softening of the equation of state (EoS); (ii) the masquerade problem: modern EoS for cold, high density hadronic and quark matter are almost identical; and (iii) the reconfinement puzzle: what to do when after a deconfinement transition the hadronic EoS becomes favorable again? We show that taking into account the compositeness of baryons (by excluded volume and/or quark Pauli blocking) on the hadronic side and confining and stiffening effects on the quark matter side results in an early phase transition to quark matter with sufficient stiffening at high densities which removes all three present-day puzzles of CS interiors. Moreover, in this new class of EoS for hybrid CS falls the interesting case of a strong first order phase transition which results in the observable high mass twin star phenomenon, an astrophysical observation of a critical endpoint in the QCD phase diagram 16. The Role of Inhibitory Control in Children's Cooperative Behaviors during a Structured Puzzle Task Science.gov (United States) Giannotta, Fabrizia; Burk, William J.; Ciairano, Silvia 2011-01-01 This study examined the role of inhibitory control (measured by Stroop interference) in children's cooperative behaviors during a structured puzzle task. The sample consisted of 250 8-, 10-, and 12-year-olds (117 girls and 133 boys) attending classrooms in three primary schools in Northern Italy. Children individually completed an elaborated… 17. Free-style puzzle flap: the concept of recycling a perforator flap. Science.gov (United States) Feng, Kuan-Ming; Hsieh, Ching-Hua; Jeng, Seng-Feng 2013-02-01 Theoretically, a flap can be supplied by any perforator based on the angiosome theory. In this study, the technique of free-style perforator flap dissection was used to harvest a pedicled or free skin flap from a previous free flap for a second difficult reconstruction. The authors call this a free-style puzzle flap. For the past 3 years, the authors treated 13 patients in whom 12 pedicled free-style puzzle flaps were harvested from previous redundant free flaps and recycled to reconstruct soft-tissue defects at various anatomical locations. One free-style free puzzle flap was harvested from a previous anterolateral thigh flap for buccal cancer to reconstruct a foot defect. Total flap survival was attained in 12 of 13 flaps. One transferred flap failed completely. This patient had received postoperative radiotherapy after the initial cancer ablation and free anterolateral thigh flap reconstruction. Another free flap was used to close and reconstruct the wound. All the donor sites could be closed primarily. The free-style puzzle flap, harvested from a previous redundant free flap and used as a perforator flap to reconstruct a new defect, has proven to be versatile and reliable. When indicated, it is an alternative donor site for further reconstruction of soft-tissue defects. 18. Enhancing the Understanding of Government and Nonprofit Accounting with THE PUZZLE GAME: A Pilot Study Science.gov (United States) Elson, Raymond J.; Ostapski, S. Andrew; O'Callaghan, Susanne; Walker, John P. 2012-01-01 Nontraditional teaching aids such as crossword puzzles have been successfully used in the classroom to enhance student learning. 
Government and nonprofit accounting is a confusing course for students since it has strange terminologies and contradicts the accounting concepts learned in other courses. As such, it is an ideal course for a… 19. Precedents, Patterns and Puzzles: Feminist Reflections on the First Women Lawyers Directory of Open Access Journals (Sweden) Mary Jane Mossman 2016-10-01 Full Text Available This paper initially examines the historical precedents established by some of the first women who entered the “gentleman’s profession” of law in different jurisdictions, as well as the biographical patterns that shaped some women’s ambitions to enter the legal professions. The paper then uses feminist methods and theories to interpret “puzzles that remain unsolved” about early women lawyers, focusing especially on two issues. One puzzle is the repeated claims on the part of many of these early women lawyers that they were “lawyers”, and not “women lawyers”, even as they experienced exclusionary practices and discrimination on the part of male lawyers and judges—a puzzle that suggests how professional culture required women lawyers to conform to existing patterns in order to succeed. A second puzzle relates to the public voices of early women lawyers, which tended to suppress disappointments, difficulties and discriminatory practices. In this context, feminist theories suggest a need to be attentive to the “silences” in women’s stories, including the stories of the lives of early women lawyers. Moreover, these insights may have continuing relevance for contemporary women lawyers because it is at least arguable that, while there have been changes in women’s experiences, there has been very little transformation in their work status in relation to men. 20. Chinese American and Caucasian American Family Interaction Patterns in Spatial Rotation Puzzle Solutions. Science.gov (United States) Hutsinger, Carol S.; Jose, Paul E. 1995-01-01 Examined sociocultural influences on mathematics achievement. First generation Chinese American and Caucasian American mother-father-daughter triads were audiotaped as the fifth- and sixth-grade girls solved a spatial puzzle. Chinese American triads were quieter, more respectful, more serious, and more orderly, whereas Caucasian American triads… 1. Box-Cox transformation for resolving the Peelle's Pertinent Puzzle in a curve fitting International Nuclear Information System (INIS) Oh, S. Y.; Seo, C. G. 2004-01-01 Incorporating the Box-Cox transformation into a curve fitting is presented as one of methods for resolving an anomaly known as the Peelle's Pertinent Puzzle in the nuclear data community. The Box-Cox transformation is a strategy to make non-normal distribution data resemble normal distribution data. The proposed method consists of the following steps: transform the raw data to be fitted with the optimized Box-Cox transformation parameter, fit the transformed data using a conventional curve fitting tool, the least-squares method in this study, then inverse-transform the fitted results to the final estimates. Covariance matrices are correspondingly transformed and inverse-transformed with the aid of the law of error propagation. In addition to a sensible answer to the Puzzle, the proposed method resulted in reasonable estimates for a test evaluation with pseudo-experimental 6 Li(n, t) cross sections in several to 800 keV energy region, while the GMA code resulted in systematic underestimates that characterize the Puzzle. 
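The Box-Cox entry above (item 1, whose closing remarks continue just below) describes a three-step recipe: transform the data with an optimized Box-Cox parameter, fit with ordinary least squares in the transformed space, and inverse-transform the fitted results. The following is a minimal sketch of that recipe under stated assumptions: synthetic positive data stand in for the pseudo-experimental ⁶Li(n,t) cross sections, a quadratic is used as the fitting model, and the covariance propagation discussed in the entry is omitted.

```python
# Minimal sketch of the transform -> fit -> inverse-transform recipe from the
# Box-Cox curve-fitting entry above. Synthetic, strictly positive data only;
# the covariance transformation via error propagation is not shown here.
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
x = np.linspace(1.0, 800.0, 60)                                   # e.g. energy grid in keV
y = 5.0 * np.exp(-x / 300.0) * rng.lognormal(0.0, 0.1, x.size)    # positive "measurements"

# 1) Box-Cox transform with the maximum-likelihood lambda.
y_t, lam = boxcox(y)

# 2) Ordinary least-squares fit in the transformed (more nearly normal) space.
coeffs = np.polyfit(x, y_t, deg=2)

# 3) Inverse-transform the fitted curve back to the original scale.
y_fit = inv_boxcox(np.polyval(coeffs, x), lam)

print(f"optimal lambda = {lam:.3f}")
print("max relative deviation of fit:", float(np.max(np.abs(y_fit - y) / y)))
```

In a full evaluation, the covariance matrices would be transformed alongside the data at step 1 and inverse-transformed at step 3 via the law of error propagation, as the entry notes.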
Meanwhile, it is observed that the present method and the Chiba-Smith method yield almost the same estimates for the test evaluation on 6 Li(n, t). Conceptually, however, two methods are very different from each other and further discussions are needed for a consensus on the issue of how to resolve the Puzzle. (authors) 2. The Quark Puzzle: A Novel Approach to Visualizing the Color Symmetries of Quarks Science.gov (United States) Gettrust, Eric 2010-01-01 This paper describes a simple hands-on and visual-method designed to introduce physics students of many age groups to the topic of quarks and their role in forming composite particles (baryons and mesons). A set of puzzle pieces representing individual quarks that fit together in ways consistent with known restrictions of flavor, color, and charge… 3. (Mis)perception of Sleep in Insomnia: A Puzzle and a Resolution Science.gov (United States) Harvey, Allison G.; Tang, Nicole K. Y. 2012-01-01 Insomnia is prevalent, causing severe distress and impairment. This review focuses on illuminating the puzzling finding that many insomnia patients misperceive their sleep. They overestimate their sleep onset latency (SOL) and underestimate their total sleep time (TST), relative to objective measures. This tendency is ubiquitous (although not… 4. A dialog with a puzzled profile: Poetry of an old pedological discussion Science.gov (United States) Itkin, Danny 2017-04-01 Defining and classifying are fundamental needs in the everyday life of humans. Among quite a few relevant examples in pedology, stands the question of whether soils and some types of sediments should or can be distinct. This issue is as old as soil science itself and is possibly very much related to the never ending debate regarding "the definition of soil". As is the case in many fields, the necessity of humans to create and keep a uniform common language might collide with different cultural and/or scientific perspectives. Such is the case with the wide variety of soil classifications found throughout the world. One can easily note this diversity when reading publications that address two similar regolith profiles from different locations round the globe. In some cases it would be impossible to correlate two comparable profiles when using different classification systems. This contradictory situation is one of the most challenging topics in pedology. This whole background gave the inspiration for the following poem, titled "A dialog with a puzzled profile": Are you a soil or a sediment? Ask the oak, see if he knows. Ask him whether these are peds Or maybe a bedrock under his toes. And if you're a soil, what are you? It depends on the viewpoint, you see: Some define me with Soil Taxonomy, While others with WRB. For Hutton and Lyell I'm a weathered rock, Yet Hilgard and Dokuchaev dispute. If you ask me, well I'm a 'pedosediment', I couldn't care less for the suit. If it helps you I'll be whatever it takes,
Making sure that no one will lose. I know it depends very much on the platform Where experts are setting the rules. 5. Geoscience Data Puzzles: Developing Students' Ability to Make Meaning from Data Science.gov (United States) Kastens, K. A.; Turrin, M. 2010-12-01 One of the most fundamental aspects of geoscience expertise is the ability to extract insights from observational earth data. Where an expert might see trends, patterns, processes, and candidate causal relationships, a novice could look at the same data representation and see dots, wiggles and blotches of color. The problem is compounded when the student was not personally involved in collecting the data or samples and thus has no experiential knowledge of the Earth setting that the data represent. In other words, the problem is especially severe when students tap into the vast archives of professionally-collected data that the geoscience community has worked so hard to make available for instructional use over the internet. Moreover, most high school and middle school teachers did not themselves learn Earth Science through analyzing data, and they may lack skills and/or confidence needed to scaffold students through the process of learning to interpret realistically-complex data sets. We have developed “Geoscience Data Puzzles” with the paired goals of (a) helping students learn about the earth from data, and (b) helping teachers learn to teach with data. Geoscience Data Puzzles are data-using activities that purposefully present a low barrier-to-entry for teachers and a high ratio of insight-to-effort for students. Each Puzzle uses authentic geoscience data, but the data are carefully pre-selected in order to illuminate a fundamental Earth process within tractable snippets of data. Every Puzzle offers "Aha" moments, when the connection between data and process comes clear in a rewarding burst of insight. Every Puzzle is accompanied by a Pedagogical Content Knowledge (PCK) guide, which explicates the chain of reasoning by which the puzzle-solver can use the evidence provided by the data to construct scientific claims. Four types of reasoning are stressed: spatial reasoning, in which students make inferences from observations about location, orientation, shape 6. Action and puzzle video games prime different speed/accuracy tradeoffs. Science.gov (United States) Nelson, Rolf A; Strachan, Ian 2009-01-01 To understand the way in which video-game play affects subsequent perception and cognitive strategy, two experiments were performed in which participants played either a fast-action game or a puzzle-solving game. Before and after video-game play, participants performed a task in which both speed and accuracy were emphasized. In experiment 1 participants engaged in a location task in which they clicked a mouse on the spot where a target had appeared, and in experiment 2 they were asked to judge which of four shapes was most similar to a target shape. In both experiments, participants were much faster but less accurate after playing the action game, while they were slower but more accurate after playing the puzzle game. Results are discussed in terms of a taxonomy of video games by their cognitive and perceptual demands. 7. Puzzle of the particles and the universe. The inner life of the elementary particles IX d International Nuclear Information System (INIS) Geitner, Uwe W. 
2013-01-01 The series The Inner Life of the Elementary Particles attempts to develop the elementary particles along of a genealogical tree, which begins before the ''big bang''. The simple presentation without mathematics opens also for the interested layman a plastic understanding. Volume IX discusses the known puzzles of particle physics and cosmology and offers for many of them explanation models. Explanation approaches are among others the ''DNA'' of the elementary particles and the interpretation of the quanta and the spin. 8. The Loss Aversion / Narrow Framing Approach to the Equity Premium Puzzle OpenAIRE Nicholas Barberis; Ming Huang 2006-01-01 We review a recent approach to understanding the equity premium puzzle. The key elements of this approach are loss aversion and narrow framing, two well-known features of decision-making under risk in experimental settings. In equilibrium, models that incorporate these ideas can generate a large equity premium and a low and stable risk-free rate, even when consumption growth is smooth and only weakly correlated with the stock market. Moreover, they can do so for parameter values that correspo... 9. Review of Experimental and Theoretical Status of the Proton Radius Puzzle Energy Technology Data Exchange (ETDEWEB) Hill, Richard J. [TRIUMF 2017-01-01 The discrepancy between the measured Lamb shift in muonic hydrogen and expectations from electron-proton scattering and regular hydrogen spectroscopy has become known as the proton radius puzzle, whose most “mundane” resolution requires a > 5σ shift in the value of the fundamental Rydberg constant. I briefly review the status of spectroscopic and scattering measurements, recent theoretical developments, and implications for fundamental physics. 10. The height of Tennessee convicts: another piece of the "antebellum puzzle". Science.gov (United States) Sunder, Marco 2004-03-01 Average height of the free population in the United States born in the mid-1830s began to decline despite growing per capita incomes. Explanations for this "antebellum puzzle" revolve around a possibly deteriorating disease environment promoted by urban agglomeration and increases in the relative price of protein-rich foods. However, several groups were immune to the effect, including members of the middle class, whose income was high enough, and increased enough to overcome the adverse developments and maintain their nutritional status. Although at the opposite end of the social spectrum, the height of male slaves also increased, as it was in their owners' interest to raise their slaves' food allotments. The height of Tennessee convicts, analyzed in this article, also increased in the late-1830s, being the third exception to the "antebellum puzzle." Mid-19th century Tennessee was integrated into interstate commerce in cotton and tobacco and experienced considerable movement of people who would have brought with them diseases from elsewhere, hence, it would have been integrated into the US disease pool, and the fact that heights did not decline in the 1830s is therefore an indication that the antebellum puzzle cannot be explained exclusively by the spread of diseases. Yet, Tennessee's economy was quite different to that of the rest of the country. Although it did export live swine to the South, these exports did not increase during the antebellum decades. Hence, Tennessee remained self-sufficient in pork, and consumption of pork did not decline. 
Thus, the evidence presented here is consistent with the economic interpretation of the "antebellum puzzle": self-sufficiency in protein production protected even the members of the lower-classes of Tennessee from the negative externalities associated with the onset of industrialization. 11. The puzzle of new etiological agents in the Americas: Punta del Toro virus another piece? Directory of Open Access Journals (Sweden) Salim Mattar V 2017-01-01 Full Text Available In a recent study of undifferentiated tropical fevers in an endemic area of Colombia, it was shown that not all acute fevers are caused by the dengue virus (1. The complex clinical-epidemiological panorama of tropical fevers has become a puzzle of difficult resolution due to the appearance of new etiological agents in the Americas such as Chikungunya and Zika. For the differential diagnosis Hantavirus, Arenavirus, Orupuche, tick thrombocytopenic virus, Heartland virus, leptospira and malaria should be considered. 12. (Mis)perception of sleep in insomnia: a puzzle and a resolution. Science.gov (United States) Harvey, Allison G; Tang, Nicole K Y 2012-01-01 Insomnia is prevalent, causing severe distress and impairment. This review focuses on illuminating the puzzling finding that many insomnia patients misperceive their sleep. They overestimate their sleep onset latency (SOL) and underestimate their total sleep time (TST), relative to objective measures. This tendency is ubiquitous (although not universal). Resolving this puzzle has clinical, theoretical, and public health importance. There are implications for assessment, definition, and treatment. Moreover, solving the puzzle creates an opportunity for real-world applications of theories from clinical, perceptual, and social psychology as well as neuroscience. Herein we evaluate 13 possible resolutions to the puzzle. Specifically, we consider the possible contribution, to misperception, of (1) features inherent to the context of sleep (e.g., darkness); (2) the definition of sleep onset, which may lack sensitivity for insomnia patients; (3) insomnia being an exaggerated sleep complaint; (4) psychological distress causing magnification; (5) a deficit in time estimation ability; (6) sleep being misperceived as wake; (7) worry and selective attention toward sleep-related threats; (8) a memory bias influenced by current symptoms and emotions, a confirmation bias/belief bias, or a recall bias linked to the intensity/recency of symptoms; (9) heightened physiological arousal; (10) elevated cortical arousal; (11) the presence of brief awakenings; (12) a fault in neuronal circuitry; and (13) there being 2 insomnia subtypes (one with and one without misperception). The best supported resolutions were misperception of sleep as wake, worry, and brief awakenings. A deficit in time estimation ability was not supported. We conclude by proposing several integrative solutions. 13. Black-hole information puzzle: a generic string-inspired approach International Nuclear Information System (INIS) Nikolic, H. 2008-01-01 Given the insight stemming from string theory, the origin of the black-hole (BH) information puzzle is traced back to the assumption that it is physically meaningful to trace out the density matrix over negative-frequency Hawking particles. 
Instead, treating them as virtual particles necessarily absorbed by the BH in a manner consistent with the laws of BH thermodynamics, and tracing out the density matrix only over physical BH states, complete evaporation becomes compatible with unitarity. (orig.) 14. The Puzzle of Democratic Monopolies: Single Party Dominance and Decline in India OpenAIRE 2016-01-01 How to explain political monopolies in democratic institutional settings? Dominant parties in countries with robust formal democratic institutions are surprisingly frequent, yet poorly understood. Existing theories explain away the puzzle by characterizing dominant parties as catch-all' parties that survive on the basis of historically imbued mass voter legitimacy. This dissertation develops a theory of how dominant parties in fact routinely win free and fair elections despite counter-majori... 15. Understanding the proton radius puzzle: Nuclear structure effects in light muonic atoms Directory of Open Access Journals (Sweden) Ji Chen 2016-01-01 Full Text Available We present calculations of nuclear structure effects to the Lamb shift in light muonic atoms. We adopt a modern ab-initio approach by combining state-of-the-art nuclear potentials with the hyperspherical harmonics method. Our calculations are instrumental to the determination of nuclear charge radii in the Lamb shift measurements, which will shed light on the proton radius puzzle. 16. Puzzling the Jesus of the Parables: A response to Ruben Zimmermann Directory of Open Access Journals (Sweden) Llewellyn Howes 2017-07-01 Full Text Available This article responds to Ruben Zimmermann’s latest book, Puzzling the Parables of Jesus (2015. In particular, one aspect of his proposed method is challenged, namely, his conscious attempt to do away with considerations of the pre-Easter context when interpreting the parables. The article finishes by proposing a variant methodology of parable interpretation, featuring the parable of the Good Samaritan as a working example. 17. International Evidence on the Role of Monetary Policy in the Uncovered Interest Rate Parity Puzzle OpenAIRE Alfred V Guender 2015-01-01 CPI inflation targeting necessitates a flexible exchange rate regime. This paper embeds an endogenous target rule into a simple open economy macro model to explain the UIP puzzle. The model predicts that the change in the exchange rate is inversely related to the lagged interest rate differential. Openness and aversion to inflation variability determine the strength of this linkage. Foreign inflation and the foreign interest rate also affect exchange rate changes. This hypothesis is tested on... 18. A solution to B→ππ puzzle and B→KK International Nuclear Information System (INIS) Baek, Seungwon 2008-01-01 The large ratio of color-suppressed tree amplitude to color-allowed one in B→ππ decays is difficult to understand within the Standard Model, which is known as the 'B→ππ puzzle'. The two tree diagrams contain the up- and charm-quark component of penguin amplitude, P uc , which cannot be separated by measuring B→ππ decays alone. We show that the measurements of the branching ratio and direct CP asymmetry of B + →K + K 0 -bar decay enable one to disentangle the P uc with two-fold ambiguity. One of the two degenerate solutions of the P uc can solve the B→ππ puzzle by giving |C/T|∼0.3 which is consistent with the expectation in the Standard Model. We also show that the two solutions can be discriminated by the measurement of the indirect CP asymmetry of B 0 →K 0 K 0 -bar. 
We point out that the corresponding puzzle in B→πK decays is not solved in this way 19. Right frontal gamma and beta band enhancement while solving a spatial puzzle with insight. Science.gov (United States) Rosen, A; Reiner, M 2017-12-01 Solving a problem with an "a-ha" effect is known as insight. Unlike incremental problem solving, insight is sudden and unique, and the question about its distinct brain activity, intrigues many researchers. In this study, electroencephalogram signals were recorded from 12 right handed, human participants before (baseline) and while they solved a spatial puzzle known as the '10 coin puzzle' that could be solved incrementally or by insight. Participants responded as soon as they reached a solution and reported whether the process was incremental or by sudden insight. EEG activity was recorded from 19 scalp locations. We found significant differences between insight and incremental solvers in the Gamma and Beta 2 bands in frontal areas (F8) and in the alpha band in right temporal areas (T6). The right-frontal gamma indicates a process of restructuring which leads to an insight solution, in spatial problems, further suggesting a universal role of gamma in restructuring. These results further suggest that solving a spatial puzzle via insight requires exclusive brain areas and neurological-cognitive processes which may be important for meta-cognitive components of insight solutions, including attention and monitoring of the solution. Copyright © 2016 Elsevier B.V. All rights reserved. 20. A puzzle assembly strategy for fabrication of large engineered cartilage tissue constructs. Science.gov (United States) Nover, Adam B; Jones, Brian K; Yu, William T; Donovan, Daniel S; Podolnick, Jeremy D; Cook, James L; Ateshian, Gerard A; Hung, Clark T 2016-03-21 Engineering of large articular cartilage tissue constructs remains a challenge as tissue growth is limited by nutrient diffusion. Here, a novel strategy is investigated, generating large constructs through the assembly of individually cultured, interlocking, smaller puzzle-shaped subunits. These constructs can be engineered consistently with more desirable mechanical and biochemical properties than larger constructs (~4-fold greater Young׳s modulus). A failure testing technique was developed to evaluate the physiologic functionality of constructs, which were cultured as individual subunits for 28 days, then assembled and cultured for an additional 21-35 days. Assembled puzzle constructs withstood large deformations (40-50% compressive strain) prior to failure. Their ability to withstand physiologic loads may be enhanced by increases in subunit strength and assembled culture time. A nude mouse model was utilized to show biocompatibility and fusion of assembled puzzle pieces in vivo. Overall, the technique offers a novel, effective approach to scaling up engineered tissues and may be combined with other techniques and/or applied to the engineering of other tissues. Future studies will aim to optimize this system in an effort to engineer and integrate robust subunits to fill large defects. Copyright © 2016 Elsevier Ltd. All rights reserved. 1. Padrão dos fluxos de capitais: teoria, evidência e puzzle Directory of Open Access Journals (Sweden) 2014-04-01 2. Relative fault and efficient negligence: comparative negligence explained NARCIS (Netherlands) Dari-Mattiacci, G.; Hendriks, E.S. 2010-01-01 Comparative negligence poses a persisting puzzle in law & economics. 
Under standard assumptions, its performance is identical to other negligence rules, while its implementation is slightly more complex. If so, why is it the most common rule? In this paper, we advance a novel argument: comparative 3. Quantum top secret. The solution of the quantum puzzle. Metamorphosis of a picture of world; Quantum top secret. Die Loesung des Quantenraetsels. Metamorphose eines Weltbildes Energy Technology Data Exchange (ETDEWEB) Wingert, M. 2008-07-01 Many physicists believe that because of unexplained causes, which must anyway be concerned with the quantum puzzle and the mysterious consciousness, it would be no more possible to understand the real structure of the reality - this subtle smiling of the nature, which irritates the physicists since 100 years and the disturbed the theoretical physics so much that they threw the towel. Since nature is considered as absurd, strange, and crazy - and quantum theory as very complicated. But in reality the basic experiments are of a touching simplicity, which seems only completely unintelligible in the picture of world of mechanics. For these experiments show that the concept of body of mechanics and the body conceptions of the thinking cannot at all match the structure of nature. If this is objectively taken notice of without doubting on the existence of a reality, the experiments show the real, unveiled face of the nature. Light and matter must then consist of fields, which can themselves divide by non-mechanical way, so with wholeness, comparable only with cell division and branching processes in biology. Either it is completely crazy - or the only logic interpretation, which hitherto only no physicist risked to think. For these experiments disprove the atom and elementary-particle hypothesis, the picture of world of mechanics, and also the quantum-mechanical interpretation - and indeed uniquely. This knowledge could break the Gordian knot, solve the quantum puzzle, and also give away the secret of the thinking spirit. 4. Sc2C2@D3h(14246)-C74: A Missing Piece of the Clusterfullerene Puzzle. Science.gov (United States) Wang, Yaofeng; Tang, Qiangqiang; Feng, Lai; Chen, Ning 2017-02-20 Clusterfullerenes with variable carbon cages have been extensively studied in recent years. However, despite all these efforts, C 74 cage-based clusterfullerene remains a missing piece of the puzzle. Herein, we show that single-crystal X-ray crystallographic analysis unambiguously assigns the previously reported dimetallofullerene Sc 2 @C 76 to a novel carbide clusterfullerene, Sc 2 C 2 @D 3h (14246)-C 74 , the first experimentally proven clusterfullerene with a C 74 cage. In addition, Sc 2 C 2 @D 3h (14246)-C 74 was charaterized by mass spectrometry, ultraviolet-visible-near-infrared absorption spectroscopy, 45 Sc nuclear magnetic resonance, and cyclic voltammetry. Comparative studies of the motion of the carbide cluster in Sc 2 C 2 @D 3h (14246)-C 74 and Sc 2 C 2 @C 2n (n = 40-44) revealed that a combination of factors, involving both the shape and size of the cage, is crucial in dictating the cluster motion. Moreover, structural studies of D 3h (14246)-C 74 revealed that it can be easily converted to C s (10528)-C 72 and T d (19151)-C 76 cages via C 2 desertion/insertion and Stone-Wales transformation. This suggests that D 3h (14246)-C 74 might play an important role in the growth pathway of clusterfullerenes. 5. Solving the "Personhood Jigsaw Puzzle" in Residential Care Homes for the Elderly in the Hong Kong Chinese Context. 
Science.gov (United States) Kong, Sui-Ting; Fang, Christine Meng-Sang; Lou, Vivian W Q 2017-02-01 End-of-life care studies on the nature of personhood are burgeoning; however, the practices utilized for achieving personhood in end-of-life care, particularly in a cultural context in which interdependent being and collectivism prevail, remain underexplored. This study seeks to examine and conceptualize good practices for achieving the personhood of the dying elderly in residential care homes in a Chinese context. Twelve interviews were conducted with both medical and social care practitioners in four care homes to collect narratives of practitioners' practices. Those narratives were utilized to develop an "end-of-life case graph." Constant comparative analysis led to an understanding of the practice processes, giving rise to a process model of "solving the personhood jigsaw puzzle" that includes "understanding the person-in-relationship and person-in-time," "identifying the personhood-inhibiting experiences," and "enabling personalized care for enhanced psychosocial outcomes." Findings show how the "relational personhood" of the elderly can be maintained when physical deterioration and even death are inevitable. 6. Two-fluid limits on stellarator performance: Explanation of three stellarator puzzles and comparison to axisymmetric plasmas International Nuclear Information System (INIS) Sugiyama, L.E.; Strauss, H.R.; Park, W.; Fu, G.Y.; Breslau, J.A.; Chen, J. 2005-01-01 The basic two-fluid processes, those related to the nonlinearly self-consistent diamagnetic drifts of the electrons and ions, are shown to have fundamentally different effects on the steady state and beta limits of stellarator configurations, compared to MHD predictions. Nonlinear numerical simulation shows that the ideal MHD ballooning modes and the resistive MHD ballooning and interchange modes at relatively high mode numbers, which set the most severe theoretical limits on beta in stellarators with fixed boundary, are easily stabilized by two-fluid effects at realistic parameters, including finite Larmor radius effects related to the ion diamagnetic drift. Magnetic reconnection at low-order rational magnetic surfaces, on the other hand, is enhanced through the parallel component of the two-fluid electron pressure gradient in Ohm's law. The accelerated reconnection rates may impose the true intrinsic limit on beta in stellarators, as a 'soft' or confinement-mediated limit in β_e, due to steady confinement degradation in the presence of large magnetic islands. Study of the corresponding axisymmetric configurations shows that the helical component of the stellarator configuration provides an important amplifying factor for these effects. The two-fluid results may explain several previously puzzling experimental observations on stellarator behavior. (author) 7. Puzzles of J/Ψ production off nuclei International Nuclear Information System (INIS) Kopeliovich, B.Z. 2011-01-01 Nuclear effects for J/Ψ production in pA collisions are controlled by the coherence and color transparency effects. Color transparency sets in when the time of formation of the charmonium wave function becomes longer than the inter-nucleon spacing. In this energy regime the effective break-up cross section for a c̄c dipole depends on energy and nuclear path length, and agrees well with data from fixed target experiments, both in magnitude and energy dependence.
At the higher energies of RHIC and LHC, coherence in c̄c pair production leads to charm quark shadowing, which complements the higher-twist break-up cross section. These two effects explain well, with no adjusted parameters, the magnitude and rapidity dependence of the nuclear suppression of J/Ψ observed at RHIC in dAu collisions, while the contribution of leading-twist gluon shadowing is found to be vanishingly small. A novel mechanism of double color filtering for c̄c dipoles makes nuclei significantly more transparent in AA compared to pA collisions. This is one of the mechanisms which make a model-independent, 'data-driven' extrapolation from pA to AA impossible. This effect also explains the enhancement of nuclear suppression observed at forward rapidities in AA collisions at RHIC, which can hardly be related to the produced dense medium. J/Ψ is found to be a clean and sensitive tool for measuring the transport coefficient characterizing the dense matter created in AA collisions. RHIC data for the p_T dependence of J/Ψ production in nuclear collisions are well explained with a low value of the transport coefficient q̂_0 (in GeV²/fm). 8. Freestyle multiple propeller flap reconstruction (jigsaw puzzle approach) for complicated back defects. Science.gov (United States) Park, Sung Woo; Oh, Tae Suk; Eom, Jin Sup; Sun, Yoon Chi; Suh, Hyun Suk; Hong, Joon Pio 2015-05-01 The reconstruction of the posterior trunk remains a challenge, as defects can be extensive, with deep dead space and fixation devices exposed. Our goal was to achieve a tension-free closure for complex defects on the posterior trunk. From August 2006 to May 2013, 18 cases were reconstructed with multiple flaps combining perforator(s) and local skin flaps. The reconstructions were performed using a freestyle approach, starting with propeller flap(s) in a single or multilobed design and sequentially combining them with adjacent random-pattern flaps, like fitting a puzzle. All defects achieved tensionless primary closure. The final result resembled a jigsaw puzzle. The average defect size was 139.6 cm² (range, 36-345 cm²). A total of 26 perforator flaps were used in addition to 19 random-pattern flaps for 18 cases. In all cases, a single perforator was used for each propeller flap. The defect and the donor site all achieved tension-free closure. The reconstruction was 100% successful without flap loss. One case of late infection was noted at 12 months after surgery. Using multilobed propeller flaps in conjunction with random-pattern flaps in a freestyle approach, resembling putting a jigsaw puzzle together, we can achieve a tension-free closure by distributing the tension to multiple flaps, supplying sufficient volume to obliterate dead space, and maintaining reliable vascularity, as the flaps do not need to be oversized. This can be a viable approach to reconstruct extensive defects on the posterior trunk. 9. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation. Science.gov (United States) Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A 2016-01-01 To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative.
Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient specific data, and display that data to the end user using consumer level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glass, and the DK2 version Oculus Rift, as well as two different user interaction devices - a space mouse and traditional keyboard controls. 10. "It's like a puzzle": Pregnant women's perceptions of professional support in midwifery care. Science.gov (United States) Bäckström, Caroline A; Mårtensson, Lena B; Golsäter, Marie H; Thorstensson, Stina A 2016-12-01 Pregnant women are not always satisfied with the professional support they receive during their midwifery care. More knowledge is needed to understand what professional support pregnant women need for childbirth and parenting. Childbearing and the transition to becoming a parent is a sensitive period in one's life during which one should have the opportunity to receive professional support. Professional support does not always correspond to pregnant women's needs. To understand pregnant women's needs for professional support within midwifery care, it is crucial to further illuminate women's experiences of this support. To explore pregnant women's perceptions of professional support in midwifery care. A qualitative study using semi-structured interviews. Fifteen women were interviewed during gestational weeks 36-38. Data was analysed using phenomenography. The women perceived professional support in midwifery care to be reassuring and emotional, to consist of reliable information, and to be mediated with pedagogical creativity. The professional support facilitated new social contacts, partner involvement and contributed to mental preparedness. The findings of the study were presented in six categories and the category Professional support contributes to mental preparedness was influenced by the five other categories. Pregnant women prepare for childbirth and parenting by using several different types of professional support in midwifery care: a strategy that could be described as piecing together a puzzle. When the women put the puzzle together, each type of professional support works as a valuable piece in the whole puzzle. Through this, professional support could contribute to women's mental preparedness for childbirth and parenting. Copyright © 2016 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved. 11. 
Republic of South Africa: unraveling the population puzzle. Country profile. Science.gov (United States) Spain, D 1984-06-01 whites it was only 1.5%. The birthrate among whites in 1980 was 16.5 births/1000 whites compared with 40 births/1000 blacks. Immigration has played an important role in South Africa's white population growth. Net migration in 1981 was 33,000; 55% of new arrivals were from Europe and 40% were from other African countries. In 1980, just over half the population (53%) lived in urban areas. Approximately 90% of whites and Asians lived in urban areas, while 3/4 of coloureds and 38% of blacks were classified as urban. Understanding South Africa's racial and ethnic divisions is the key to understanding the country. Political and social interaction across racial lines is forbidden. Economic relationships are strictly controlled by the passbook system. The passbook, which must be carried by every black aged 16 and older, establishes a black's right to be in a particular area of the country. South Africa's population is fairly young due to a history of high fertility and high mortality among blacks, coloureds, and Asians. For the population as a whole, 38% were under age 15 in 1980 and 4% were aged 65 and older. Whites accounted for 22% and blacks 64% of the labor force in 1980. The government has tried to narrow the wage gap between white and black workers. In 1972 blacks earned only 15% of whites' salaries; by 1981 black wages were 24% of white wages. 12. Exchange rate regimes, saving glut and the Feldstein Horioka puzzle: The East Asian experience Science.gov (United States) Kaya-Bahçe, Seçil; Özmen, Erdal 2008-04-01 This paper investigates whether the recent experience of the emerging East Asian countries with current account surpluses is consistent with the "saving glut" hypothesis and the Feldstein and Horioka puzzle. The evidence suggests that the saving retention coefficients declined substantially in most of the countries after an endogenous break date coinciding with a major exchange rate regime change during the 1997-1998 crisis. Exchange rate flexibility appears to be enhancing financial integration. The results are consistent with an "investment slump" explanation rather than the "saving glut" postulation. 13. The Puzzle of Simultaneous Anti-Dumping and Anti-Subsidy Measures DEFF Research Database (Denmark) Nielsen, Jørgen Ulff-Møller; Hansen, Jørgen Drud be a surprise, as the same total level of protection may be obtained by using the anti-dumping procedure exclusively. When calculating the two duties in the EU the outcome depends on whether the subsidies are export subsidies or domestic subsidies and this may also cause surprise. This paper addresses...... these puzzles in a theoretical analysis based on a duopoly model for a horizontally differentiated product. We argue that the procedures of two investigations leading to a two-component duty may be rational because it provides an incentive for the offending country and companies to terminate their 'unfair... 14. Jigsaw Puzzles As Cognitive Enrichment (PACE) - the effect of solving jigsaw puzzles on global visuospatial cognition in adults 50 years of age and older: study protocol for a randomized controlled trial. Science.gov (United States) Fissler, Patrick; Küster, Olivia C; Loy, Laura S; Laptinskaya, Daria; Rosenfelder, Martin J; von Arnim, Christine A F; Kolassa, Iris-Tatjana 2017-09-06 Neurocognitive disorders are an important societal challenge and the need for early prevention is increasingly recognized.
Meta-analyses show beneficial effects of cognitive activities on cognition. However, high financial costs, low intrinsic motivation, logistic challenges of group-based activities, or the need to operate digital devices prevent their widespread application in clinical practice. Solving jigsaw puzzles is a cognitive activity without these hindering characteristics, but cognitive effects have not been investigated yet. With this study, we aim to evaluate the effect of solving jigsaw puzzles on visuospatial cognition, daily functioning, and psychological outcomes. The pre-posttest, assessor-blinded study will include 100 cognitively healthy adults 50 years of age or older, who will be randomly assigned to a jigsaw puzzle group or a cognitive health counseling group. Within the 5-week intervention period, participants in the jigsaw puzzle group will engage in 30 days of solving jigsaw puzzles for at least 1 h per day and additionally receive cognitive health counseling. The cognitive health counseling group will receive the same counseling intervention but no jigsaw puzzles. The primary outcome, global visuospatial cognition, will depict the average of the z-standardized performance scores in visuospatial tests of perception, constructional praxis, mental rotation, processing speed, flexibility, working memory, reasoning, and episodic memory. As secondary outcomes, we will assess the eight cognitive abilities, objective and subjective visuospatial daily functioning, psychological well-being, general self-efficacy, and perceived stress. The primary data analysis will be based on mixed-effects models in an intention-to-treat approach. Solving jigsaw puzzles is a low-cost, intrinsically motivating, cognitive leisure activity, which can be executed alone or with others and without the need to operate a digital device. In the case of positive results 15. Effects of age and type of picture on visuospatial working memory assessed with a computerized jigsaw-puzzle task. Science.gov (United States) Toril, Pilar; Reales, José M; Mayas, Julia; Ballesteros, Soledad 2017-09-15 We investigated the effect of age and color in a computerized version of the jigsaw-puzzle task. In Experiment 1, young and older adults were presented with puzzles in color and black-and-white line drawings, varying in difficulty from 4 to 9 pieces. Older adults performed the task better with the black-and-white stimuli and younger adults performed better with the color ones. In Experiment 2, new older and young adults identified the same fragmented pictures as fast and accurately as possible. The older group identified the black-and-white stimuli faster than those presented in color, while the younger adults identified both similarly. In Experiment 3A, new older and young groups performed the puzzle task with the same color pictures and their monochrome versions. In Experiment 3B, participants performed a speeded identification task with the two sets. The findings of these experiments showed that older adults have a memory not a perceptual difficulty. 16. Puzzling antimatter CERN Multimedia Francesco Poppi 2010-01-01 For many years, the absence of antimatter in the Universe has tantalised particle physicists and cosmologists: while the Big Bang should have created equal amounts of matter and antimatter, we do not observe any primordial antimatter today. Where has it gone? The LHC experiments have the potential to unveil natural processes that could hold the key to solving this paradox.   
Every time that matter is created from pure energy, equal amounts of particles and antiparticles are generated. Conversely, when matter and antimatter meet, they annihilate and produce light. Antimatter is produced routinely when cosmic rays hit the Earth's atmosphere, and the annihilations of matter and antimatter are observed during physics experiments in particle accelerators. If the Universe contained antimatter regions, we would be able to observe intense fluxes of photons at the boundaries of the matter/antimatter regions. "Experiments measuring the diffuse gamma-ray background in the Universe would be able... 17. Phthalate Puzzle difficulties in working with it and thereby limited its applicability. Plasticized PVC, a ... performance rather than physical properties as its incorporation increases ... Adult women had higher levels of urinary metabolites than men as phthalates ... 18. Puzzling asymmetries CERN Multimedia Antonella Del Rosso 2012-01-01 In a recently published paper, the LHCb collaboration reported on a possible deviation from the Standard Model. Theorists are now working to calculate precisely this effect and to evaluate the implications that such an unexpected result could have on the established theory. The Standard Model is able to predict the decay rates of particles with high precision. In most cases, experimentalists confirm the value predicted by theory and the figure is added to the official publications. However, this time, things seem to have taken a different route. Studying data collected in 2011, the LHCb collaboration found that in a specific decay – a B particle transforming into a K particle plus two charged muons (B -> Kμ-μ+) – the branching ratio of the neutral B in the corresponding decay (i.e. B0 -> K0μ-μ+) was different from that of the positively charged B (i.e. B+ -> K+μ-μ+). Such an "isospin asymmetry"... 19. Puzzling asymmetries CERN Multimedia Antonella Del Rosso 2012-01-01 In a recently published paper, the LHCb Collaboration has reported on a possible deviation from the Standard Model. Theorists are now working to calculate precisely this effect and to evaluate the implications that such an unexpected result could have on the established theory. The Standard Model is able to predict the decay rates of particles with high precision. In most cases, experimentalists confirm the value predicted by theory and the figure is added to the official publications. However, this time, things seem to have taken a different route. Studying data collected in 2011, the LHCb Collaboration found that in a specific decay – a B particle transforming into a K particle plus two charged muons (B -> Kμ-μ+) – the branching ratio of the neutral B in the corresponding decay (i.e. B0 -> K0μ-μ+) is different from that of the positively charged B (i.e. B+ -> K+μ-μ+). Such an "isospin asymmetry"... 20. The Equity Premium Puzzle: Analysis in Brazil after the Real Plan Directory of Open Access Journals (Sweden) Fábio Augusto Reis Gomes 2013-04-01 Full Text Available Our paper investigates whether there is evidence of an Equity Premium Puzzle (EPP) in Brazil, applying two different methodologies. The EPP was identified by Mehra and Prescott (1985), since the Consumption Capital Asset Pricing Model (CCAPM), when calibrated with reasonable preference parameters, could not explain high historical average risk premiums in the United States. In our first approach, we consider Mehra's (2003) model and calibrate the coefficient of risk aversion, using 1995:2-2012:1 quarterly data.
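As a rough illustration of what such a calibration involves: under the standard lognormal CCAPM approximation, the implied relative risk aversion can be backed out from the average excess return and the consumption-return covariance. The minimal sketch below uses synthetic quarterly series standing in for the Brazilian data, so the printed figure is purely illustrative and is not the paper's estimate.

```python
import numpy as np

# Synthetic quarterly log series standing in for the data described above.
rng = np.random.default_rng(3)
n = 68                                              # roughly 17 years of quarters
dc = 0.008 + 0.01 * rng.normal(size=n)              # log consumption growth
r_m = 0.03 + 0.04 * rng.normal(size=n) + 2.0 * dc   # log equity (market) return
r_f = np.full(n, 0.025)                             # log risk-free rate

# Lognormal CCAPM approximation: E[r_m - r_f] + var(r_m)/2 = gamma * cov(dc, r_m)
premium = np.mean(r_m - r_f) + 0.5 * np.var(r_m)
implied_gamma = premium / np.cov(dc, r_m)[0, 1]
print(implied_gamma)   # implausibly large values are the signature of the puzzle
```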
The Ibovespa index was used as a measure of the market return, whereas the risk-free rate was proxied by the Selic interbank rate and by the savings account rate. In our second approach, we propose a new method to test the puzzle. We jointly estimate, via generalized method of moments, the parameters of interest using a moment condition that has not been previously explored, as far as we are aware. The two approaches produced a high risk aversion coefficient; however, the second approach indicated that we cannot reject the hypothesis of the risk aversion coefficient being statistically equal to zero. A possible explanation for this result might be that in Brazil the equity premium is not statistically different from zero. Therefore there is no evidence of the EPP in Brazil for the studied period. 1. New Nuclear Magnetic Moment of 209Bi: Resolving the Bismuth Hyperfine Puzzle Science.gov (United States) Skripnikov, Leonid V.; Schmidt, Stefan; Ullmann, Johannes; Geppert, Christopher; Kraus, Florian; Kresse, Benjamin; Nörtershäuser, Wilfried; Privalov, Alexei F.; Scheibe, Benjamin; Shabaev, Vladimir M.; Vogel, Michael; Volotka, Andrey V. 2018-03-01 A recent measurement of the hyperfine splitting in the ground state of Li-like 208Bi80+ has established a "hyperfine puzzle"—the experimental result exhibits a 7σ deviation from the theoretical prediction [J. Ullmann et al., Nat. Commun. 8, 15484 (2017), 10.1038/ncomms15484; J. P. Karr, Nat. Phys. 13, 533 (2017), 10.1038/nphys4159]. We provide evidence that the discrepancy is caused by an inaccurate value of the tabulated nuclear magnetic moment (μI) of 209Bi. We perform relativistic density functional theory and relativistic coupled cluster calculations of the shielding constant that should be used to extract the value of μI(209Bi) and combine it with nuclear magnetic resonance measurements of Bi(NO3)3 in nitric acid solutions and of the hexafluoridobismuthate(V) BiF6- ion in acetonitrile. The result clearly reveals that μI(209Bi) is much smaller than the tabulated value used previously. Applying the new magnetic moment shifts the theoretical prediction into agreement with experiment and resolves the hyperfine puzzle. 2. A Hybrid alldifferent-Tabu Search Algorithm for Solving Sudoku Puzzles Directory of Open Access Journals (Sweden) Ricardo Soto 2015-01-01 Full Text Available The Sudoku problem is a well-known logic-based puzzle of combinatorial number-placement. It consists in filling an n²×n² grid, composed of n² columns, n² rows, and n² subgrids, each one containing distinct integers from 1 to n². Such a puzzle belongs to the NP-complete collection of problems, to which there exist diverse exact and approximate methods able to solve it. In this paper, we propose a new hybrid algorithm that smartly combines a classic tabu search procedure with the alldifferent global constraint from the constraint programming world. The alldifferent constraint is known to be efficient for domain filtering in the presence of constraints that must be pairwise different, which are exactly the kind of constraints that Sudokus have. This ability clearly alleviates the work of the tabu search, resulting in a faster and more robust approach for solving Sudokus. We illustrate interesting experimental results where our proposed algorithm outperforms the best results previously reported by hybrids and approximate methods.
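To make the role of the alldifferent constraint concrete, the following minimal sketch (an illustration, not the authors' implementation) shows the domain-filtering step on a 4×4 Sudoku: each empty cell keeps only the values that are pairwise different from those already placed in its row, column, and subgrid, and any cell whose domain shrinks to a single value is fixed. The tabu-search layer that would handle the remaining cells is omitted.

```python
# alldifferent-style pruning ("naked singles") for a 4x4 Sudoku (n = 2).
N = 2
SIZE = N * N

def candidates(grid, r, c):
    """Values 1..SIZE not already used in the row, column, or subgrid of (r, c)."""
    used = set(grid[r]) | {grid[i][c] for i in range(SIZE)}
    br, bc = (r // N) * N, (c // N) * N
    used |= {grid[br + i][bc + j] for i in range(N) for j in range(N)}
    return [v for v in range(1, SIZE + 1) if v not in used]

def propagate(grid):
    """Repeatedly fix cells whose alldifferent filtering leaves a single value."""
    changed = True
    while changed:
        changed = False
        for r in range(SIZE):
            for c in range(SIZE):
                if grid[r][c] == 0:
                    cand = candidates(grid, r, c)
                    if len(cand) == 1:
                        grid[r][c] = cand[0]
                        changed = True
    return grid

puzzle = [
    [1, 0, 0, 0],
    [0, 0, 3, 0],
    [0, 4, 0, 0],
    [0, 0, 0, 2],
]
print(propagate(puzzle))   # this small instance is solved by filtering alone
```

3.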
A resolution of the inclusive flavor-breaking τ |Vus| puzzle Science.gov (United States) Hudspith, Renwick J.; Lewis, Randy; Maltman, Kim; Zanotti, James 2018-06-01 We revisit the puzzle of |Vus| values obtained from the conventional implementation of hadronic-τ-decay-based flavor-breaking finite-energy sum rules lying > 3σ below the expectations of three-family unitarity. Significant unphysical dependences of |Vus| on the choice of weight, w, and upper limit, s0, of the experimental spectral integrals entering the analysis are confirmed, and a breakdown of assumptions made in estimating higher-dimension, D > 4, OPE contributions is identified as the main source of these problems. A combination of continuum and lattice results is shown to suggest a new implementation of the flavor-breaking sum rule approach in which not only |Vus|, but also D > 4 effective condensates, are fit to data. Lattice results are also used to clarify how to reliably treat the slowly converging D = 2 OPE series. The new sum rule implementation is shown to cure the problems of the unphysical w- and s0-dependence of |Vus| and to produce results ∼0.0020 higher than those of the conventional implementation employing the same data. With B-factory input, and using, in addition, dispersively constrained results for the Kπ branching fractions, we find |Vus| = 0.2231(27)_exp(4)_th, in excellent agreement with the result from Kℓ3, and compatible within errors with the expectations of three-family unitarity, thus resolving the long-standing inclusive τ |Vus| puzzle. 4. New Nuclear Magnetic Moment of 209Bi: Resolving the Bismuth Hyperfine Puzzle. Science.gov (United States) Skripnikov, Leonid V; Schmidt, Stefan; Ullmann, Johannes; Geppert, Christopher; Kraus, Florian; Kresse, Benjamin; Nörtershäuser, Wilfried; Privalov, Alexei F; Scheibe, Benjamin; Shabaev, Vladimir M; Vogel, Michael; Volotka, Andrey V 2018-03-02 A recent measurement of the hyperfine splitting in the ground state of Li-like 208Bi80+ has established a "hyperfine puzzle": the experimental result exhibits a 7σ deviation from the theoretical prediction [J. Ullmann et al., Nat. Commun. 8, 15484 (2017), 10.1038/ncomms15484; J. P. Karr, Nat. Phys. 13, 533 (2017), 10.1038/nphys4159]. We provide evidence that the discrepancy is caused by an inaccurate value of the tabulated nuclear magnetic moment (μI) of 209Bi. We perform relativistic density functional theory and relativistic coupled cluster calculations of the shielding constant that should be used to extract the value of μI(209Bi) and combine it with nuclear magnetic resonance measurements of Bi(NO3)3 in nitric acid solutions and of the hexafluoridobismuthate(V) BiF6- ion in acetonitrile. The result clearly reveals that μI(209Bi) is much smaller than the tabulated value used previously. Applying the new magnetic moment shifts the theoretical prediction into agreement with experiment and resolves the hyperfine puzzle. 5. A Computational/Experimental Platform for Investigating Three-Dimensional Puzzle Solving of Comminuted Articular Fractures Science.gov (United States) Thomas, Thaddeus P.; Anderson, Donald D.; Willis, Andrew R.; Liu, Pengcheng; Frank, Matthew C.; Marsh, J. Lawrence; Brown, Thomas D. 2011-01-01 Reconstructing highly comminuted articular fractures poses a difficult surgical challenge, akin to solving a complicated three-dimensional (3D) puzzle.
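The virtual reconstruction described in this entry (its methodology follows below) re-seats each fragment by rigidly aligning its intact, non-fractured surface to a pre-fracture template. A minimal sketch of one such rigid-alignment step, assuming already-corresponded synthetic 3D points in place of the study's laser-scan/CT surface data, is the classic Kabsch/SVD best-fit rotation and translation:

```python
import numpy as np

def rigid_align(fragment, template):
    """Best-fit rotation R and translation t mapping fragment points onto
    template points (rows are corresponding 3D points), via the Kabsch/SVD method."""
    cf, ct = fragment.mean(axis=0), template.mean(axis=0)
    H = (fragment - cf).T @ (template - ct)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = ct - R @ cf
    return R, t

# Toy check: recover a known displacement from synthetic "surface" points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(40, 3))                          # template points
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
frag = (pts - np.array([5.0, 0.0, 2.0])) @ R_true.T     # displaced fragment
R, t = rigid_align(frag, pts)
print(np.allclose(frag @ R.T + t, pts, atol=1e-8))      # True: fragment re-seated
```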
Pre-operative planning using CT is critically important, given the desirability of less invasive surgical approaches. The goal of this work is to advance 3D puzzle solving methods toward use as a pre-operative tool for reconstructing these complex fractures. Methodology for generating typical fragmentation/dispersal patterns was developed. Five identical replicas of human distal tibia anatomy were machined from blocks of high-density polyetherurethane foam (bone fragmentation surrogate), and were fractured using an instrumented drop tower. Pre- and post-fracture geometries were obtained using laser scans and CT. A semi-automatic virtual reconstruction computer program aligned fragment native (non-fracture) surfaces to a pre-fracture template. The tibias were precisely reconstructed with alignment accuracies ranging from 0.03-0.4 mm. This novel technology has the potential to significantly enhance surgical techniques for reconstructing comminuted intra-articular fractures, as illustrated for a representative clinical case. PMID:20924863 6. Puzzle of magnetic moments of Ni clusters revisited using quantum Monte Carlo method. Science.gov (United States) Lee, Hung-Wen; Chang, Chun-Ming; Hsing, Cheng-Rong 2017-02-28 The puzzle of the magnetic moments of small nickel clusters arises from the discrepancy between values predicted using density functional theory (DFT) and experimental measurements. Traditional DFT approaches underestimate the magnetic moments of nickel clusters. Two fundamental problems are associated with this puzzle, namely, calculating the exchange-correlation interaction accurately and determining the global minimum structures of the clusters. Theoretically, the two problems can be solved using quantum Monte Carlo (QMC) calculations and the ab initio random structure searching (AIRSS) method, respectively. Therefore, we combined the fixed-moment AIRSS and QMC methods to investigate the magnetic properties of Nin (n = 5-9) clusters. The spin moments of the diffusion Monte Carlo (DMC) ground states are higher than those of the Perdew-Burke-Ernzerhof ground states and, in the case of Ni8-9, two new ground-state structures have been discovered using the DMC calculations. The predicted results are closer to the experimental findings, unlike the results predicted in previous standard DFT studies. 7. Toward a solution to the RAA and v2 puzzle for heavy quarks Directory of Open Access Journals (Sweden) Santosh K. Das 2015-07-01 Full Text Available The heavy quarks constitute a unique probe of the quark-gluon plasma properties. A puzzling relation between the nuclear modification factor RAA(pT) and the elliptic flow v2(pT) has been observed both at RHIC and LHC energies. Predicting correctly both observables has been a challenge to all existing models, especially for D mesons. We discuss how the temperature dependence of the heavy quark drag coefficient is responsible for a large part of such a puzzle. In particular, we have considered four different models to evaluate the temperature dependence of the drag and diffusion coefficients of heavy quarks propagating through a quark-gluon plasma (QGP). All four models are set to reproduce the same RAA(pT) observed in experiments at RHIC and LHC energy. We point out that for the same RAA(pT) one can generate 2-3 times more v2 depending on the temperature dependence of the heavy quark drag coefficient. A non-decreasing drag coefficient as T→Tc is a major ingredient for a simultaneous description of RAA(pT) and v2(pT).
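To show schematically how a temperature-dependent drag coefficient enters such a transport calculation, here is a minimal, non-relativistic Langevin sketch. The two drag parametrizations are assumptions chosen purely for illustration (they are not the four models studied in the entry), and the momentum diffusion is tied to the drag by the simplest non-relativistic Einstein relation.

```python
import numpy as np

def langevin_momentum(p0, temps, dt, mass, drag):
    """Schematic 1D Langevin evolution of a heavy-quark momentum p through a
    cooling medium. drag(T) is the drag coefficient; the momentum-diffusion
    coefficient follows from the Einstein relation D = drag(T) * mass * T."""
    rng = np.random.default_rng(1)
    p = p0
    for T in temps:                           # medium temperature at each step
        gamma = drag(T)
        D = gamma * mass * T                  # fluctuation-dissipation (non-relativistic)
        p += -gamma * p * dt + np.sqrt(2.0 * D * dt) * rng.normal()
    return p

# Two illustrative parametrizations with different behavior toward Tc.
drag_flat   = lambda T: 0.2                   # fm^-1, temperature-independent
drag_rising = lambda T: 0.4 * (0.155 / T)     # grows as T approaches Tc

temps = np.linspace(0.40, 0.16, 200)          # GeV, cooling from the QGP toward Tc
dt = 0.05                                     # fm/c per step
charm_mass = 1.5                              # GeV
print(langevin_momentum(5.0, temps, dt, charm_mass, drag_flat))
print(langevin_momentum(5.0, temps, dt, charm_mass, drag_rising))
```

8.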
The puzzle of the 6Li quadrupole moment: steps toward the solution International Nuclear Information System (INIS) Blokhintsev, L.D.; Kukulin, V.I.; Pomerantsev, V.N. 2005-01-01 The problem of the origin of the ground-state 6Li quadrupole deformation has been investigated, taking into account the three-deuteron component of this nucleus' wave function. Two long-standing puzzles related to the tensor interaction in 6Li are known. The first one lies in the anomalously small value of the 6Li quadrupole moment which, being negative, is in absolute magnitude smaller by a factor of 5 than that of 6Li. The second puzzle consists in the anomalous behavior of the tensor analyzing power T2q in scattering of polarized 6Li nuclei from various targets. It is shown that the large (in absolute magnitude) negative contribution to the 6Li quadrupole moment resulting from the three-deuteron configuration cancels almost completely the direct positive contribution due to the folding αd potential. As a result, the total quadrupole moment turns out to be close to zero and highly sensitive to fine details of the tensor NN interaction and of the 4He wave function. 9. Heavy flavor puzzle at LHC: a serendipitous interplay of jet suppression and fragmentation. Science.gov (United States) Djordjevic, Magdalena 2014-01-31 Both charged hadrons and D mesons are considered to be excellent probes of QCD matter created in ultrarelativistic heavy ion collisions. Surprisingly, recent experimental observations at LHC show the same jet suppression for these two probes, which, contrary to pQCD expectations, may suggest similar energy losses for light quarks and gluons in the QCD medium. We here use our recently developed energy loss formalism in a finite-size dynamical QCD medium to analyze this phenomenon, which we denote as the "heavy flavor puzzle at LHC." We show that this puzzle is a consequence of an unusual combination of the suppression and fragmentation patterns and, in fact, does not require invoking the same energy loss for light partons. Furthermore, we show that this combination leads to a simple relationship between the suppressions of charged hadrons and D mesons and the corresponding bare quark suppressions. Consequently, a coincidental matching of jet suppression and fragmentation considerably simplifies the interpretation of the corresponding experimental data. 10. An Effective Method of Introducing the Periodic Table as a Crossword Puzzle at the High School Level Science.gov (United States) Joag, Sushama D. 2014-01-01 A simple method to introduce the modern periodic table of elements at the high school level as a game of solving a crossword puzzle is presented here. A survey to test the effectiveness of this new method relative to the conventional method, involving use of a wall-mounted chart of the periodic table, was conducted on a convenience sample. This… 11. Pengaruh Model Pembelajaran Kooperatif Tipe Time Token Berbantu Puzzle Terhadap Kemampuan Berpikir Kritis Peserta Didik Kelas X Pada Materi Gelombang (The Effect of the Time Token Cooperative Learning Model Assisted by Puzzles on the Critical Thinking Ability of Grade X Students on the Topic of Waves) Directory of Open Access Journals (Sweden) Sri Latifah 2015-04-01 The purpose of this research is to determine the influence of the time token cooperative learning model, assisted by puzzles, on the critical thinking ability of grade X students studying wave material at MA Al Hikmah Bandar Lampung in the 2014/2015 academic year. This is a quantitative study with a quasi-experimental design.
The research design used is a nonequivalent control group design, with a population consisting of all grade X students in the even semester at MA Al Hikmah, Bandar Lampung, in the 2014/2015 academic year. The sample comprises two classes, an experimental and a control class: the experimental class (XA) used the time token cooperative model assisted by puzzles, while the control class (XB) used a cooperative learning model with pictures as media. Data were collected through tests (pretest and posttest), observation, and documentation, and were analyzed using a normality test, a homogeneity test, and a t-test. Based on the results, it can be concluded that the application of the time token model with puzzles significantly influences the critical thinking ability of the students on the topic of waves at MA Al Hikmah Bandar Lampung in 2014/2015. 12. Future strategy and puzzles of heavy ion beam mediated technique in genetic improvement of biological bodies International Nuclear Information System (INIS) Huang Qunce 2007-01-01 Seven research puzzles in the genetic improvement of biological organisms by the ion-beam-mediated technique are worth noting. Technical ideas, covering one physical mediation technique, two significant subjects, three effective changes, mediation evidence from four aspects, and five biological characteristics, are put forward according to the current state of the field. The two significant subjects are the mechanism by which allogenic material enters the acceptor and the way it becomes recombined. The three effective changes are from studying morphology to genetic laws, from researching the M1 generation to subsequent generations, and from determining single characters to synthetic traits. The mediation evidence of four aspects comes from morphology, physiology and biochemistry, and molecular biology. The five biological characteristics are mainly reproduction, development, photosynthesis, stress resistance, and quality. (authors) 13. Well-Defined Cyclic Triblock Terpolymers: A Missing Piece of the Morphology Puzzle KAUST Repository Polymeropoulos, George 2016-10-27 Two well-defined cyclic triblock terpolymers, missing pieces of the terpolymer morphology puzzle, consisting of poly(isoprene), polystyrene, and poly(2-vinylpyridine), were synthesized by combining the Glaser coupling reaction with anionic polymerization. An α,ω-dihydroxy linear triblock terpolymer (OH-PI1,4-b-PS-b-P2VP-OH) was first synthesized, followed by transformation of the OH to alkyne groups by esterification with pentynoic acid and cyclization by Glaser coupling.
The size exclusion chromatography (SEC) trace of the linear terpolymer precursor was shifted to lower elution time after cyclization, indicating the successful synthesis of the cyclic terpolymer. Additionally, the SEC trace of the cyclic terpolymer produced, after cleavage of the ester groups, shifted again practically to the position corresponding to the linear precursor. The first exploratory results on morphology showed the tremendous influence of the cyclic structure on the morphology of terpolymers. © 2016 American Chemical Society. 14. Gold-nanoparticle-mediated jigsaw-puzzle-like assembly of supersized plasmonic DNA origami. Science.gov (United States) Yao, Guangbao; Li, Jiang; Chao, Jie; Pei, Hao; Liu, Huajie; Zhao, Yun; Shi, Jiye; Huang, Qing; Wang, Lianhui; Huang, Wei; Fan, Chunhai 2015-03-02 DNA origami has rapidly emerged as a powerful and programmable method to construct functional nanostructures. However, the size limitation of approximately 100 nm in classic DNA origami hampers its plasmonic applications. Herein, we report a jigsaw-puzzle-like assembly strategy mediated by gold nanoparticles (AuNPs) to break the size limitation of DNA origami. We demonstrated that oligonucleotide-functionalized AuNPs function as universal joint units for the one-pot assembly of parent DNA origami of triangular shape to form sub-microscale super-origami nanostructures. AuNPs anchored at predefined positions of the super-origami exhibited strong interparticle plasmonic coupling. This AuNP-mediated strategy offers new opportunities to drive macroscopic self-assembly and to fabricate well-defined nanophotonic materials and devices. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim. 15. Simons Puzzle: Heuristics in the Process of Making Political Choices Directory of Open Access Journals (Sweden) Mateusz Wajzer 2014-07-01 Full Text Available In this article we analyse one of the most fascinating paradoxes of mass politics. Based on the data from the studies of neurobiologists, neurologists, social psychology, cognitive and evolution studies we answer the question specified in literature as the Simon’s puzzle: How is it possible that citizens have their opinions about politics, if they know so little about it? We began our analysis from the criticism of the economic rationality approach. To do this, we referred to the Allais paradox, cognitive dissonance theory, Ellsberg paradox, the concept of bounded rationality, conjunction fallacy and prospect theory. Next, we described the evolutionary processes shaping the minds of Homo sapiens and characterised cognitive mechanisms, thanks to which people can make political choices, especially in view of the shortage of time and information. The following heuristics are referred to herein: affect, recognition, judgment and imitation. 16. A Theoretical Model of Jigsaw-Puzzle Pattern Formation by Plant Leaf Epidermal Cells. Science.gov (United States) Higaki, Takumi; Kutsuna, Natsumaro; Akita, Kae; Takigawa-Imamura, Hisako; Yoshimura, Kenji; Miura, Takashi 2016-04-01 Plant leaf epidermal cells exhibit a jigsaw puzzle-like pattern that is generated by interdigitation of the cell wall during leaf development. The contribution of two ROP GTPases, ROP2 and ROP6, to the cytoskeletal dynamics that regulate epidermal cell wall interdigitation has already been examined; however, how interactions between these molecules result in pattern formation remains to be elucidated. 
Here, we propose a simple interface equation model that incorporates both the cell wall remodeling activity of ROP GTPases and the diffusible signaling molecules by which they are regulated. This model successfully reproduces pattern formation observed in vivo, and explains the counterintuitive experimental results of decreased cellulose production and increased thickness. Our model also reproduces the dynamics of three-way cell wall junctions. Therefore, this model provides a possible mechanism for cell wall interdigitation formation in vivo. 17. Puzzle test: A tool for non-analytical clinical reasoning assessment. Science.gov (United States) Monajemi, Alireza; Yaghmaei, Minoo 2016-01-01 Most contemporary clinical reasoning tests typically assess non-automatic thinking. Therefore, a test is needed to measure automatic reasoning or pattern recognition, which has been largely neglected in clinical reasoning tests. The Puzzle Test (PT) is dedicated to assess automatic clinical reasoning in routine situations. This test has been introduced first in 2009 by Monajemi et al in the Olympiad for Medical Sciences Students.PT is an item format that has gained acceptance in medical education, but no detailed guidelines exist for this test's format, construction and scoring. In this article, a format is described and the steps to prepare and administer valid and reliable PTs are presented. PT examines a specific clinical reasoning task: Pattern recognition. PT does not replace other clinical reasoning assessment tools. However, it complements them in strategies for assessing comprehensive clinical reasoning. 18. Shared Environment Estimates for Educational Attainment: A Puzzle and Possible Solutions. Science.gov (United States) Freese, Jeremy; Jao, Yu-Han 2017-02-01 Classical behavioral genetics models for twin and other family designs decompose traits into heritability, shared environment, and nonshared environment components. Estimates of heritability of adult traits are pervasively observed to be far higher than those of shared environment, which has been used to make broad claims about the impotence of upbringing. However, the most commonly studied nondemographic variable in many areas of social science, educational attainment, exhibits robustly high estimates both for heritability and for shared environment. When previously noticed, the usual explanation has emphasized family resources, but evidence suggests this is unlikely to explain the anomalous high estimates for shared environment of educational attainment. We articulate eight potential complementary explanations and discuss evidence of their prospective contributions to resolving the puzzle. In so doing, we hope to further consideration of how behavioral genetics findings may advance studies of social stratification beyond the effort to articulate specific genetic influences. © 2015 Wiley Periodicals, Inc. 19. The deuteron-radius puzzle is alive: A new analysis of nuclear structure uncertainties Science.gov (United States) Hernandez, O. J.; Ekström, A.; Nevo Dinur, N.; Ji, C.; Bacca, S.; Barnea, N. 2018-03-01 To shed light on the deuteron radius puzzle we analyze the theoretical uncertainties of the nuclear structure corrections to the Lamb shift in muonic deuterium. We find that the discrepancy between the calculated two-photon exchange correction and the corresponding experimentally inferred value by Pohl et al. [1] remain. 
The present result is consistent with our previous estimate, although the discrepancy is reduced from 2.6 σ to about 2 σ. The error analysis includes statistic as well as systematic uncertainties stemming from the use of nucleon-nucleon interactions derived from chiral effective field theory at various orders. We therefore conclude that nuclear theory uncertainty is more likely not the source of the discrepancy. 20. Refraining from terror: the puzzle of non violence in Western Sahara Directory of Open Access Journals (Sweden) Matthew Porges 2018-02-01 Full Text Available In Western Sahara, the former Spanish colony occupied by Morocco since 1975, virtually no violent resistance has been mounted by the indigenous Sahrawi people since the end of the 1975-1991 war between Morocco and the pro-independence Polisario Front. This absence of violence is puzzling in the light of several factors: the widespread public support for independence; the social and economic disparities between Moroccan and Sahrawi inhabitants of the territory; and Morocco’s brutal repression of Sahrawi culture, resistance, and expressions of proindependence feeling. This article examines the logic of violence (and its absence and of resistance, and draws lessons from Western Sahara. As well as advancing theoretical development, the article makes a methodological contribution to the study of resistance, and improves our understanding of the Western Sahara conflict through fieldwork, including around 60 interviews with Sahrawi activists conducted in the summer of 2014. 1. From near to eternity: Spin-glass planting, tiling puzzles, and constraint-satisfaction problems Science.gov (United States) Hamze, Firas; Jacob, Darryl C.; Ochoa, Andrew J.; Perera, Dilina; Wang, Wenlong; Katzgraber, Helmut G. 2018-04-01 We present a methodology for generating Ising Hamiltonians of tunable complexity and with a priori known ground states based on a decomposition of the model graph into edge-disjoint subgraphs. The idea is illustrated with a spin-glass model defined on a cubic lattice, where subproblems, whose couplers are restricted to the two values {-1 ,+1 } , are specified on unit cubes and are parametrized by their local degeneracy. The construction is shown to be equivalent to a type of three-dimensional constraint-satisfaction problem known as the tiling puzzle. By varying the proportions of subproblem types, the Hamiltonian can span a dramatic range of typical computational complexity, from fairly easy to many orders of magnitude more difficult than prototypical bimodal and Gaussian spin glasses in three space dimensions. We corroborate this behavior via experiments with different algorithms and discuss generalizations and extensions to different types of graphs. 2. The puzzling assembly of the Milky Way halo – contributions from dwarf Spheroidals and globular clusters Directory of Open Access Journals (Sweden) Lépine S. 2012-02-01 Full Text Available While recent sky surveys have uncovered large numbers of ever fainter Milky Way satellites, their classification as star clusters, low-luminosity galaxies, or tidal overdensities remains often unclear. Likewise, their contributions to the build-up of the halo is yet debated. In this contribution we will discuss the current knowledge of the stellar populations and chemo-dynamics in these puzzling satellites, with a particular focus on dwarf spheroidal galaxies and the globular clusters in the outer Galactic halo. 
Also the question of whether some of the outermost halo objects are dynamically associated with the Milky Way halo at all is addressed in terms of proper motion measurements in the remote Leo I and II dwarf galaxies. 3. Flaxion: a minimal extension to solve puzzles in the standard model Energy Technology Data Exchange (ETDEWEB) Ema, Yohei [Department of Physics, The University of Tokyo, Tokyo 133-0033 (Japan); Hamaguchi, Koichi; Moroi, Takeo; Nakayama, Kazunori [Department of Physics, The University of Tokyo, Tokyo 133-0033 (Japan); Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), University of Tokyo, Kashiwa 277-8583 (Japan) 2017-01-23 We propose a minimal extension of the standard model which includes only one additional complex scalar field, a flavon, with a flavor-dependent global U(1) symmetry. It not only explains the hierarchical flavor structure in the quark and lepton sector (including the neutrino sector), but also solves the strong CP problem by identifying the CP-odd component of the flavon as the QCD axion, which we call the flaxion. Furthermore, the flaxion model solves the cosmological puzzles in the standard model, i.e., the origin of dark matter, the baryon asymmetry of the universe, and inflation. We show that the radial component of the flavon can play the role of the inflaton without isocurvature or domain wall problems. The dark matter abundance can be explained by the flaxion coherent oscillation, while the baryon asymmetry of the universe is generated through leptogenesis. 4. Towards a solution to the puzzle posed by superconducting SrTiO3 Science.gov (United States) Malik, G. P. 2015-09-01 Suitably doped SrTiO3 was found in 1964 to undergo a superconducting transition below 1 K with a dome-like Tc versus n (electron concentration) plot. The apex of the dome — a point of inflection — corresponds to the point (n ≈ 9 × 10¹⁹ cm⁻³, Tc ≈ 0.30 K). On either side of it, Tc goes down to ≈0.1 K for the extreme values between which n was varied. A single value of Tc is thus observed for two different values of n. The puzzle for the theory has been to explain this result. Treating the problem in all its generality, we present here three equations: the μ1-incorporated BCS equation for Tc, the μ0-incorporated equation for the T = 0 gap Δ0, where μ1 and μ0 are the chemical potentials at T = Tc and T = 0 respectively, and an equation that relates the interaction parameters λ1 and λ0 at these temperatures. Because there are five unknowns in the problem, we tackle these equations via an approximation scheme that includes setting μ1 = μ0 and λ1 = λ0. The latter of these is in fact a basic tenet of the BCS theory. Salient features of our findings are: (i) the solutions for Tc and Δ0 on the RHS (LHS) of the dome correspond to μ > kBθD (μ < kBθD); (ii) on the LHS the limits of the integrals in the equations need to be curtailed to obtain real solutions; and (iii) the point μ = kBθD is a point of inflection in the Tc versus μ plot. Since the puzzle has remained unsolved for a long time, we also offer here a purely mathematical model for λ(μ) — sans physical justification — which leads to a Tc versus μ plot qualitatively in agreement with experiment. 5. Box-Cox transformation for resolving Peelle's Pertinent Puzzle in curve fitting International Nuclear Information System (INIS) Oh, Soo-Youl 2003-01-01 Incorporating the Box-Cox transformation into a least-squares method is presented as one resolution of an anomaly known as Peelle's Pertinent Puzzle.
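The recipe detailed just below (transform the raw data with an optimized Box-Cox parameter, fit in the transformed space, then inverse-transform the result) can be sketched in a few lines. This minimal illustration uses scipy's boxcox/inv_boxcox on synthetic skewed data and a plain polynomial fit rather than the GMA generalized least-squares machinery or real 6Li(n,t) data:

```python
import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

# Synthetic, positively skewed "measurements" of a smooth curve.
rng = np.random.default_rng(2)
x = np.linspace(1.0, 10.0, 50)
y = np.exp(0.3 * x) * rng.lognormal(mean=0.0, sigma=0.1, size=x.size)

# 1) Transform the raw data with the likelihood-optimized Box-Cox parameter.
y_bc, lam = boxcox(y)

# 2) Fit in the transformed (approximately normal) space by ordinary least squares.
coeffs = np.polyfit(x, y_bc, deg=2)
fit_bc = np.polyval(coeffs, x)

# 3) Inverse-transform the fitted curve back to the original scale.
fit = inv_boxcox(fit_bc, lam)
print(lam, float(np.max(np.abs(fit - y) / y)))   # optimized lambda, relative misfit
```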
The transformation is a strategy to make non-normally distributed data resemble normal data. A procedure is proposed: transform the measured raw data with an optimized Box-Cox transformation parameter, fit the transformed data using a usual curve fitting method, then inverse-transform the fitted results to final estimates. The generalized least-squares method utilized in GMA is adopted as the curve fitting tool to test the proposed procedure. In the procedure, covariance matrices are correspondingly transformed and inverse-transformed with the aid of the error propagation law. In addition to a sensible answer to Peelle's problem itself, the procedure resulted in reasonable estimates of 6Li(n,t) cross sections in the energy region from several keV to 800 keV. Meanwhile, comparisons of the present procedure with that of Chiba and Smith show that both procedures yield estimates very close to each other for the sample evaluation of 6Li(n,t) above as well as for Peelle's problem. The two procedures, however, are conceptually very different, and further discussion would be needed for a consensus on this issue of resolving the Puzzle. It is also pointed out that the transformation is applicable not only to a least-squares method but also to other parameter estimation methods, such as a usual Bayesian approach formulated with an assumption of normality of the probability density function. (author) 6. Desain Pembelajaran Bangun Datar Menggunakan Fable "Dog Catches Cat" And Puzzle Tangram Di Kelas II SD (Instructional Design for Plane Figures Using the Fable "Dog Catches Cat" and Tangram Puzzles in Grade II of Primary School) Directory of Open Access Journals (Sweden) Lisnani Lisnani 2013-06-01 The aim of this research is to develop students' mathematical creative thinking ability in recognizing and classifying plane figures through the fable "dog catches cat", tangram puzzles, and origami creations. The method used is design research, consisting of three stages: preliminary, design experiment (pilot experiment and teaching experiment), and retrospective analysis. The study develops learning outcomes on plane figures through a series of activities, procedures, and strategies that help students discover creative thinking ability within the Indonesian Realistic Mathematics Education (PMRI) approach, using the tangram context introduced through the fable "dog catches cat". Tangram puzzles and origami creations become the starting point for the material on recognizing and classifying plane figures. The result of this research is a learning trajectory for each activity: (1) in Activity 1, students recognize various plane figures through the use of the fable; (2) in Activity 2, students are able to name and classify various plane figures using tangram puzzles; (3) in Activity 3, students form and classify plane figures, producing new creations such as a cat, a dog, and others.
7. Chromhome: a rich internet application for accessing comparative chromosome homology maps. Science.gov (United States) Nagarajan, Sridevi; Rens, Willem; Stalker, James; Cox, Tony; Ferguson-Smith, Malcolm A 2008-03-26 regions of the chromosomes of the species under study. Future releases of Chromhome will accommodate more species and their respective gene and BAC maps, in addition to chromosome painting data. The Chromhome application provides a single-page interface (SPI) with desktop style layout, delivering a better and richer user experience. 8. Chromhome: A rich internet application for accessing comparative chromosome homology maps Directory of Open Access Journals (Sweden) Cox Tony 2008-03-01 map entire genomes and helps focus only on relevant regions of the chromosomes of the species under study. Future releases of Chromhome will accommodate more species and their respective gene and BAC maps, in addition to chromosome painting data. The Chromhome application provides a single-page interface (SPI) with desktop style layout, delivering a better and richer user experience. 9. Cles: Etes-vous bon detective?; Enigmes grammaticales; Problemes policiers; Kidnapping (Keys: Are You a Good Detective?; Grammatical Puzzles; Detective Mysteries; Kidnapping). Science.gov (United States) Debyser, Francis; And Others 1984-01-01 Four sets of French classroom activities are presented: a mystery whose clues include two postcard messages; three puzzles with grammar-related clues; a mystery contained in three comic strip frames; and the solving of a kidnapping mystery. (MSE) 10. A solution to the rho-π puzzle: Spontaneously broken symmetries of the quark model International Nuclear Information System (INIS) Caldi, D.G.; Pagels, H. 1976-01-01 This article proposes a solution to the long-standing ρ-π puzzle: How can the ρ and π be members of a quark-model 36 of U(6) and the π be a Nambu-Goldstone boson satisfying partial conservation of the axial-vector current (PCAC)? Our solution to the puzzle requires a revision of conventional concepts regarding the vector mesons ρ, ω, K*, and φ. Just as the π is a Goldstone state, a collective excitation of the Nambu-Jona-Lasinio type, transforming as a member of the (3, 3̄) + (3̄, 3) representation of the chiral SU(3) × SU(3) group, so also the ρ transforms like (3, 3̄) + (3̄, 3) and is also a collective state, a "dormant" Goldstone boson that is a true Goldstone boson in the static chiral U(6) × U(6) limit. The static chiral U(6) × U(6) is to be spontaneously broken to static U(6) in the vacuum. Relativistic effects provide for U(6) breaking and a massive ρ. This viewpoint has many consequences. Vector-meson dominance is a consequence of spontaneously broken chiral symmetry: the mechanism that couples the axial-vector current to the π couples the vector current to the ρ. The transition rate is calculated as γ_ρ⁻¹ = f_π/m_ρ, in rough agreement with experiment. This picture requires soft ρ's to decouple. The chiral partner of the ρ is not the A1 but the B(1235). The experimental absence of the A1 is no longer a theoretical embarrassment in this scheme. As the analog of PCAC for the pion we establish a tensor-field identity for the ρ meson in which the ρ is interpreted as a dormant Goldstone boson. The decays δ → η + π, B → ω + π, ε → 2π are estimated and are found to be in agreement with the observed rates.
A static U(6) x U(6) generalization of the Σ model is presented with the π, rho, sigma, B in the (6, 6̄) + (6̄, 6) representation. The rho emerges as a dormant Goldstone boson in this model.
11. The B→πK puzzle and its relation to rare B and K decays International Nuclear Information System (INIS) Buras, A.J.; Recksiegel, S.; Fleischer, R.; Schwab, F. 2003-01-01 The standard-model interpretation of the ratios of charged and neutral B→πK rates, R_c and R_n, respectively, points towards a puzzling picture. Since these observables are affected significantly by colour-allowed electroweak (EW) penguins, this "B→πK puzzle" could be a manifestation of new physics in the EW penguin sector. Performing the analysis in the R_n–R_c plane, which is very suitable for monitoring various effects, we demonstrate that we may, in fact, move straightforwardly to the experimental region in this plane through an enhancement of the relevant EW penguin parameter q. We derive analytical bounds for q in terms of a quantity L, which measures the violation of the Lipkin sum rule, and point out that strong phases around 90° are favoured by the data, in contrast to QCD factorisation. The B→πK modes imply a correlation between q and the angle γ that, in the limit of negligible rescattering effects and colour-suppressed EW penguins, depends only on the value of L. Concentrating on a minimal flavour-violating new-physics scenario with enhanced Z⁰ penguins, we find that the current experimental values on B→X_s μ⁺μ⁻ require roughly L≤1.8. As the B→πK data give L = 5.7±2.4, L has either to move to smaller values once the B→πK data improve or new sources of flavour and CP violation are needed. In turn, the enhanced values of L seen in the B→πK data could be accompanied by enhanced branching ratios for the rare decays K⁺→π⁺νν̄, K_L→π⁰e⁺e⁻, B→X_s νν̄ and B_s,d→μ⁺μ⁻. Most interesting turns out to be the correlation between the B→πK modes and BR(K⁺→π⁺νν̄), with the latter depending approximately on a single "scaling" variable L̄ = L·(|V_ub/V_cb|/0.086)^2.3. (orig.)
12. Two approaches towards the flavour puzzle. Dynamical minimal flavour violation and warped extra dimensions Energy Technology Data Exchange (ETDEWEB) Albrecht, Michaela E. 2010-08-16 The minimal-flavour-violating (MFV) hypothesis considers the Standard Model (SM) Yukawa matrices as the only source of flavour violation. In this work, we promote their entries to dynamical scalar spurion fields, using an effective field theory approach, such that the maximal flavour symmetry (FS) of the SM gauge sector is formally restored at high energy scales. The non-vanishing vacuum expectation values of the spurions induce a sequence of FS breaking and generate the observed hierarchy in the SM quark masses and mixings. The fact that there exists no explanation for it in the SM is known as the flavour puzzle. Gauging the non-abelian subgroup of the spontaneously broken FS, we interpret the associated Goldstone bosons as the longitudinal degrees of freedom of the corresponding massive gauge bosons. Integrating out the heavy Higgs modes in the Yukawa spurions leads directly to flavour-changing neutral currents (FCNCs) at tree level. The coefficients of the effective four-quark operators, resulting from the exchange of heavy flavoured gauge bosons, strictly follow the MFV principle.
On the other hand, the Goldstone bosons associated with the global abelian symmetry group behave as weakly coupled axions which can be used to solve the strong CP problem within a modified Peccei-Quinn formalism. Models with a warped fifth dimension contain five-dimensional (5D) fermion bulk mass matrices in addition to their 5D Yukawa matrices, which thus represent an additional source of flavour violation beyond MFV. They can address the flavour puzzle since their eigenvalues allow for a different localisation of the fermion zero mode profiles along the extra dimension which leads to a hierarchy in the effective four-dimensional (4D) Yukawa matrices. At the same time, the fermion splitting introduces non-universal fermion couplings to Kaluza-Klein (KK) gauge boson modes, inducing tree-level FCNCs. Within a Randall-Sundrum model with custodial protection (RSc model) we carefully work
13. Two approaches towards the flavour puzzle. Dynamical minimal flavour violation and warped extra dimensions International Nuclear Information System (INIS) Albrecht, Michaela E. 2010-01-01
14. ¹⁵C–¹⁵F Charge Symmetry and the ¹⁴C(n,γ)¹⁵C Reaction Puzzle International Nuclear Information System (INIS) Timofeyuk, N.K.; Thompson, I.J.; Baye, D.; Descouvemont, P.; Kamouni, R. 2006-01-01 The low-energy reaction ¹⁴C(n,γ)¹⁵C provides a rare opportunity to test indirect methods for the determination of neutron capture cross sections by radioactive isotopes versus direct measurements. It is also important for various astrophysical scenarios.
Currently, puzzling disagreements exist between the ¹⁴C(n,γ)¹⁵C cross sections measured directly, determined indirectly, and calculated theoretically. To solve this puzzle, we offer a strong test based on a novel idea that the amplitudes for the virtual ¹⁵C→¹⁴C+n and the real ¹⁵F→¹⁴O+p decays are related. Our study of this relation, performed in a microscopic model, shows that existing direct and some indirect measurements strongly contradict charge symmetry in the ¹⁵C and ¹⁵F mirror pair. This brings into question the experimental determinations of the astrophysically important (n,γ) cross sections for short-lived radioactive targets.
15. Penerapan Model Aktive Learning dengan Metode Crossword Puzzle dalam Pembelajaran Ekonomi Kelas X pada Sman 10 Pontianak OpenAIRE Susiana, Elis 2017-01-01 The purpose of this study is to determine the effectiveness of the implementation of the Active Learning model with the Crossword Puzzle method in the teaching of class X economics at SMAN 10 Pontianak. The method used in this research is a quasi-experimental design with a time-series research design. The subjects in this study are the 39 students of class XE at SMAN 10 Pontianak. Data collection techniques used are direct observation tec...
16. Changes in food intake and abnormal behavior using a puzzle feeder in newly acquired sub-adult rhesus monkeys (Macaca mulatta): a short term study. Science.gov (United States) Lee, Jae-Il; Lee, Chi-Woo; Kwon, Hyouk-Sang; Kim, Young-Tae; Park, Chung-Gyu; Kim, Sang-Joon; Kang, Byeong-Cheol 2008-10-01 The majority of newly acquired nonhuman primates encounter serious problems adapting themselves to new environments or facilities. In particular, loss of appetite and abnormal behavior can occur in response to environmental stresses. These adaptation abnormalities can ultimately have an effect on the animal's growth and well-being. In this study, we evaluated the effects of a puzzle feeder on the food intake and abnormal behavior of newly acquired rhesus monkeys over a short period. The puzzle feeder was applied to 47- to 58-month-old animals that had never previously encountered one. We found that there was no difference in the change of food intake between the bucket condition and the puzzle feeder condition. In contrast, the time spent on consumption of food was three times longer in the puzzle feeder condition than in the bucket condition. Two monkeys initially exhibited stereotypic behavior. One showed a decreasing, and the other an increasing, pattern of abnormal behavior after introduction of the puzzle feeder. In conclusion, this result suggests that over a short period, the puzzle feeder can only affect the time for food consumption, since it failed to affect the food intake and did not consistently influence stereotypic behaviors in newly acquired rhesus monkeys.
17. Ratio of hadronic decay rates of J/ψ and ψ(2S) and the ρπ puzzle International Nuclear Information System (INIS) Gu, Y. F.; Li, X. H. 2001-01-01 The so-called ρπ puzzle of J/ψ and ψ(2S) decays is examined using the experimental data available to date. Two different approaches were taken to estimate the ratio of J/ψ and ψ(2S) hadronic decay rates.
While one of the estimates could not yield the exact ratio of ψ(2S) to J/ψ inclusive hadronic decay rates, the other, based on a computation of the inclusive ggg decay rate for ψ(2S)(J/ψ) by subtracting other decay rates from the total decay rate, differs by two standard deviations from the naive prediction of perturbative QCD, even though its central value is nearly twice as large as what was naively expected. A comparison between this ratio, upon making corrections for specific exclusive two-body decay modes, and the corresponding experimental data confirms the puzzles in J/ψ and ψ(2S) decays. We find from our analysis that the exclusively reconstructed hadronic decays of the ψ(2S) account for only a small fraction of its total decays, and a ratio exceeding the above estimate should be expected to occur for a considerable number of the remaining decay channels. We also show that the recent new results from the BES experiment provide crucial tests of various theoretical models proposed to explain the puzzle 18. Learning and Memory Processes Following Cochlear Implantation:The Missing Piece of the Puzzle Directory of Open Access Journals (Sweden) David B. Pisoni 2016-04-01 Full Text Available At the present time, there is no question that cochlear implants work and often work very well in quiet listening conditions for many profoundly deaf children and adults. The speech and language outcomes data published over the last two decades document quite extensively the clinically significant benefits of cochlear implants. Although there now is a large body of evidence supporting the efficacy of cochlear implants as a medical intervention for profound hearing loss in both children and adults, there still remain a number of challenging unresolved clinical and theoretical issues that deal with the effectiveness of cochlear implants in individual patients that have not yet been successfully resolved. In this paper, we review recent findings on learning and memory, two central topics in the field of cognition that have been seriously neglected in research on cochlear implants. Our research findings on sequence learning, memory and organization processes, and retrieval strategies used in verbal learning and memory of categorized word lists suggests that basic domain-general learning abilities may be the missing piece of the puzzle in terms of understanding the cognitive factors that underlie the enormous individual differences and variability routinely observed in speech and language outcomes following cochlear implantation. 19. The grand unified link between the Peccei-Quinn mechanism and the generation puzzle International Nuclear Information System (INIS) Davidson, A.; Wali, K.C. 1982-03-01 The essential ingredients of the Peccei-Quinn mechanism are shown to be dictated by a proper choice of a grand unification scheme. The presence of U(1)sub(PQ) gives rise to the possibility that the same physics which resolves the strong CP-violation problem may decode the generation puzzle with no extra cost. Multigenerational signatures of the invisible axion scenario, such as the canonical fermion mass matrix, are discussed. The uniqueness and the special values of the quantized PQ-assignments, namely 1,-3,5-7,... for successive generations, acquire an automatic explanation once the idea of ''horizontal compositeness'' is invoked. A characteristic feature then is that the muon appears to have a less complicated structure than the electron. 
Furthermore, U(1)sub(PQ) chooses SO(10) to be its only tenable gauge symmetry partner, and at the same time crucially restricts the associated Higgs system. All this finally results in a consistent fermion mass hierarchy with log m, to the crudest estimation, varying linearly with respect to the generation index. (author)
20. Antarctic Temperature Extremes from MODIS Land Surface Temperatures: New Processing Methods Reveal Data Quality Puzzles Science.gov (United States) Grant, G.; Gallaher, D. W. 2017-12-01 New methods for processing massive remotely sensed datasets are used to evaluate Antarctic land surface temperature (LST) extremes. Data from the MODIS/Terra sensor (Collection 6) provides a twice-daily look at Antarctic LSTs over a 17-year period, at a higher spatiotemporal resolution than past studies. Using a data condensation process that creates databases of anomalous values, our processes create statistical images of Antarctic LSTs. In general, the results find few significant trends in extremes; however, they do reveal a puzzling picture of inconsistent cloud detection and possible systemic errors, perhaps due to viewing geometry. Cloud discrimination shows a distinct jump in clear-sky detections starting in 2011, and LSTs around the South Pole exhibit a circular cooling pattern, which may also be related to cloud contamination. Possible root causes are discussed. Ongoing investigations seek to determine whether the results are a natural phenomenon or, as seems likely, the results of sensor degradation or processing artefacts. If the unusual LST patterns or cloud detection discontinuities are natural, they point to new, interesting processes on the Antarctic continent. If the data artefacts are artificial, MODIS LST users should be alerted to the potential issues.
1. Low-voltage puzzle-like fractal microelectromechanical system variable capacitor suppressing pull-in KAUST Repository Elshurafa, Amro M. 2012-10-01 This Letter introduces an electrostatically actuated fractal MEMS variable capacitor that, by utilising the substrate, extends the tuning range (TR) beyond the theoretical limit of 1.5 as dictated by the pull-in phenomenon. The backbone concept behind the fractal varactor is to create a suspended movable plate possessing a specific fractal geometry, and to simultaneously create a bottom fixed plate complementary in shape to the top plate. Thus, when the top plate is actuated, it moves towards the bottom plate and fills the void present within the bottom plate without touching it, akin to how puzzle pieces are assembled. Further, a reasonable horizontal separation is maintained between both plates to avoid shorting. The electrostatic forces come from the capacitance formed between the top plate and the bottom plate, and from the capacitance formed between the top plate and the doped substrate. The variable capacitor was fabricated in the PolyMUMPS process and provided a TR of 4.1 at 6 V, and its resonant frequency was in excess of 40 GHz.
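As context for the 1.5 figure quoted in this entry: for an ideal two-plate electrostatic varactor, the continuous tuning range is capped by the pull-in instability, which limits stable travel to one third of the initial gap. The short derivation below is a textbook-style sketch added here for orientation; it is not taken from the Letter itself, and the symbols (plate area A, permittivity ε, gap g) are generic.

```latex
% Parallel-plate varactor: C(x) = \varepsilon A / (g - x) at deflection x.
% The spring-electrostatic force balance loses stability at the pull-in
% point x = g/3, so the largest capacitance reachable continuously is C(g/3):
\[
  \mathrm{TR}_{\max}
  \;=\; \frac{C(g/3)}{C(0)}
  \;=\; \frac{\varepsilon A/(g - g/3)}{\varepsilon A/g}
  \;=\; \frac{3}{2}
  \;=\; 1.5 .
\]
```

Designs such as the one described above exceed this ceiling precisely by departing from the simple two-parallel-plate geometry, here through complementary plate shapes and an additional capacitance to the substrate.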
2. Exosome-Mediated Genetic Information Transfer, a Missing Piece of Osteoblast-Osteoclast Communication Puzzle. Science.gov (United States) Yin, Pengbin; Lv, Houchen; Li, Yi; Deng, Yuan; Zhang, Licheng; Tang, Peifu 2017-01-01 The skeletal system functions and maintains itself based on communication between cells of diverse origins, especially between osteoblasts (OBs) and osteoclasts (OCs), accounting for bone formation and resorption, respectively. Previously, protein-level information exchange has been the research focus, and this has been discussed in detail. The regulative effects of microRNAs (miRNAs) on OB and OC raise the question as to whether genetic information could be transferred between bone cells. Exosomes, extracellular membrane vesicles 30-100 nm in diameter, have recently been demonstrated to transfer functional proteins, mRNAs, and miRNAs, and serve as mediators of intercellular communication. By reviewing the distinguishing features of exosomes, a hypothesis was formulated and evaluated in this article that exosome-mediated genetic information transfer may represent a novel strategy for OB-OC communication. The exosomes may coordinately regulate these two cell types under certain physiological conditions by transferring genetic information. Further research into exosome-shuttled miRNAs in OB-OC communication may add a missing piece to the bone cell communication "puzzle."
3. The High-Density Lipoprotein Puzzle: Why Classic Epidemiology, Genetic Epidemiology, and Clinical Trials Conflict? Science.gov (United States) Rosenson, Robert S 2016-05-01 Classical epidemiology has established the incremental contribution of the high-density lipoprotein (HDL) cholesterol measure in the assessment of atherosclerotic cardiovascular disease risk; yet, genetic epidemiology does not support a causal relationship between HDL cholesterol and the future risk of myocardial infarction. Therapeutic interventions directed toward cholesterol loading of the HDL particle have been based on epidemiological studies that have established HDL cholesterol as a biomarker of atherosclerotic cardiovascular risk. However, therapeutic interventions such as niacin and cholesteryl ester transfer protein inhibitors increase HDL cholesterol in patients treated with statins, but have repeatedly failed to reduce cardiovascular events. Statin therapy interferes with ATP-binding cassette transporter-mediated macrophage cholesterol efflux via miR33 and thus may diminish certain HDL functional properties. Unraveling the HDL puzzle will require continued technical advances in the characterization and quantification of multiple HDL subclasses and their functional properties. Key mechanistic criteria for clinical outcomes trials with HDL-based therapies include formation of HDL subclasses that improve the efficiency of macrophage cholesterol efflux and compositional changes in the proteome and lipidome of the HDL particle that are associated with improved antioxidant and anti-inflammatory properties. These measures require validation in genetic studies and clinical trials of HDL-based therapies on the background of statins. © 2016 American Heart Association, Inc.
4. Orbital Wall Reconstruction with Two-Piece Puzzle 3D Printed Implants: Technical Note Science.gov (United States) Mommaerts, Maurice Y.; Büttner, Michael; Vercruysse, Herman; Wauters, Lauri; Beerens, Maikel 2015-01-01 The purpose of this article is to describe a technique for secondary reconstruction of traumatic orbital wall defects using titanium implants that act as three-dimensional (3D) puzzle pieces. We present three cases of large defect reconstruction using implants produced by Xilloc Medical B.V. (Maastricht, the Netherlands) with a 3D printer manufactured by LayerWise (3D Systems; Heverlee, Belgium), and designed using the biomedical engineering software programs ProPlan and 3-Matic (Materialise, Heverlee, Belgium).
The smaller size of the implants allowed sequential implantation for the reconstruction of extensive two-wall defects via a limited transconjunctival incision. The precise fit of the implants with regard to the surrounding ledges and each other was confirmed by intraoperative 3D imaging (Mobile C-arm Systems B.V. Pulsera, Philips Medical Systems, Eindhoven, the Netherlands). The patients showed near-complete restoration of orbital volume and ocular motility. However, challenges remain, including traumatic fat atrophy and fibrosis. PMID:26889349 5. Genome puzzle master (GPM): an integrated pipeline for building and editing pseudomolecules from fragmented sequences. Science.gov (United States) Zhang, Jianwei; Kudrna, Dave; Mu, Ting; Li, Weiming; Copetti, Dario; Yu, Yeisoo; Goicoechea, Jose Luis; Lei, Yang; Wing, Rod A 2016-10-15 Next generation sequencing technologies have revolutionized our ability to rapidly and affordably generate vast quantities of sequence data. Once generated, raw sequences are assembled into contigs or scaffolds. However, these assemblies are mostly fragmented and inaccurate at the whole genome scale, largely due to the inability to integrate additional informative datasets (e.g. physical, optical and genetic maps). To address this problem, we developed a semi-automated software tool-Genome Puzzle Master (GPM)-that enables the integration of additional genomic signposts to edit and build 'new-gen-assemblies' that result in high-quality 'annotation-ready' pseudomolecules. With GPM, loaded datasets can be connected to each other via their logical relationships which accomplishes tasks to 'group,' 'merge,' 'order and orient' sequences in a draft assembly. Manual editing can also be performed with a user-friendly graphical interface. Final pseudomolecules reflect a user's total data package and are available for long-term project management. GPM is a web-based pipeline and an important part of a Laboratory Information Management System (LIMS) which can be easily deployed on local servers for any genome research laboratory. The GPM (with LIMS) package is available at https://github.com/Jianwei-Zhang/LIMS CONTACTS: [email protected] or [email protected] information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. 6. Infectious agents and amyotrophic lateral sclerosis: another piece of the puzzle of motor neuron degeneration. Science.gov (United States) Castanedo-Vazquez, David; Bosque-Varela, Pilar; Sainz-Pelayo, Arancha; Riancho, Javier 2018-05-29 Amyotrophic lateral sclerosis (ALS) is the most common neurodegenerative disease affecting motor neurons (MN). This fatal disease is characterized by progressive muscle wasting and lacks an effective treatment. ALS pathogenesis has not been elucidated yet. In a small proportion of ALS patients, the disease has a familial origin, related to mutations in specific genes, which directly result in MN degeneration. By contrast, the vast majority of cases are though to be sporadic, in which genes and environment interact leading to disease in genetically predisposed individuals. Lately, the role of the environment has gained relevance in this field and an extensive list of environmental conditions have been postulated to be involved in ALS. Among them, infectious agents, particularly viruses, have been suggested to play an important role in the pathogenesis of the disease. 
These agents could act by interacting with some crucial pathways in MN degeneration, such as gene processing, oxidative stress or neuroinflammation. In this article, we will review the main studies about the involvement of microorganisms in ALS, subsequently discussing their potential pathogenic effect and integrating them as another piece in the puzzle of ALS pathogenesis. 7. The Compatibility Between Biomphalaria glabrata Snails and Schistosoma mansoni: An Increasingly Complex Puzzle. Science.gov (United States) Mitta, G; Gourbal, B; Grunau, C; Knight, M; Bridger, J M; Théron, A 2017-01-01 This review reexamines the results obtained in recent decades regarding the compatibility polymorphism between the snail, Biomphalaria glabrata, and the pathogen, Schistosoma mansoni, which is one of the agents responsible for human schistosomiasis. Some results point to the snail's resistance as explaining the incompatibility, while others support a "matching hypothesis" between the snail's immune receptors and the schistosome's antigens. We propose here that the two hypotheses are not exclusive, and that the compatible/incompatible status of a particular host/parasite couple probably reflects the balance of multiple molecular determinants that support one hypothesis or the other. Because these genes are involved in a coevolutionary arms race, we also propose that the underlying mechanisms can vary. Finally, some recent results show that environmental factors could influence compatibility. Together, these results make the compatibility between B. glabrata and S. mansoni an increasingly complex puzzle. We need to develop more integrative approaches in order to find targets that could potentially be manipulated to control the transmission of schistosomiasis. Copyright © 2017 Elsevier Ltd. All rights reserved. 8. Learning and Memory Processes Following Cochlear Implantation: The Missing Piece of the Puzzle. Science.gov (United States) Pisoni, David B; Kronenberger, William G; Chandramouli, Suyog H; Conway, Christopher M 2016-01-01 At the present time, there is no question that cochlear implants (CIs) work and often work very well in quiet listening conditions for many profoundly deaf children and adults. The speech and language outcomes data published over the last two decades document quite extensively the clinically significant benefits of CIs. Although there now is a large body of evidence supporting the "efficacy" of CIs as a medical intervention for profound hearing loss in both children and adults, there still remain a number of challenging unresolved clinical and theoretical issues that deal with the "effectiveness" of CIs in individual patients that have not yet been successfully resolved. In this paper, we review recent findings on learning and memory, two central topics in the field of cognition that have been seriously neglected in research on CIs. Our research findings on sequence learning, memory and organization processes, and retrieval strategies used in verbal learning and memory of categorized word lists suggests that basic domain-general learning abilities may be the missing piece of the puzzle in terms of understanding the cognitive factors that underlie the enormous individual differences and variability routinely observed in speech and language outcomes following cochlear implantation. 9. Low level radioactive waste management in New York State all the pieces of the puzzle, but International Nuclear Information System (INIS) Gerber, C.A. 
1995-01-01 Unlike many other states and compacts, New York not only has a volunteer community, but that community has over three thousand acres of state owned land that was intended for commercial nuclear use. A poll conducted by the Siting Commission indicates that the citizens of New York understand the need for a central monitored disposal facility for LLRW. Hundreds of widely dispersed storage sites are unacceptable to the majority of New Yorkers. New York State has a law requiring the siting and construction of a facility for the permanent disposal of LLRW. The regulations are in place and the Siting Commission has gone through the siting process forward, and should be about finished doing it backwards as the amended state law required. The State Health Department has authorized a \$800,000 contract to the National Academy of Science to review the process. The representatives of the Cortland County opponents went on record and already stated that the NAS Study is unacceptable. After fifty million dollars the results are catastrophic. The generators in the state will be forced to store their waste because of the failure by the state to obey it's own law. According to the law, the facility was to be in operation no later than January 1, 1993. While New York State has all the pieces of the LLRW management puzzle on the table, there are many people with political and social agendas ready to knock the pieces onto the floor 10. The mechanism of collagen cross-linking in diabetes: a puzzle nearing resolution. Science.gov (United States) Monnier, V M; Glomb, M; Elgawish, A; Sell, D R 1996-07-01 11. Puzzles in modern biology. IV. Neurodegeneration, localized origin and widespread decay [version 1; referees: 2 approved Directory of Open Access Journals (Sweden) Steven A. Frank 2016-10-01 Full Text Available The motor neuron disease amyotrophic lateral sclerosis (ALS typically begins with localized muscle weakness. Progressive, widespread paralysis often follows over a few years. Does the disease begin with local changes in a small piece of neural tissue and then spread? Or does neural decay happen independently across diverse spatial locations? The distinction matters, because local initiation may arise by local changes in a tissue microenvironment, by somatic mutation, or by various epigenetic or regulatory fluctuations in a few cells. A local trigger must be coupled with a mechanism for spread. By contrast, independent decay across spatial locations cannot begin by a local change, but must depend on some global predisposition or spatially distributed change that leads to approximately synchronous decay. This article outlines the conceptual frame by which one contrasts local triggers and spread versus parallel spatially distributed decay. Various neurodegenerative diseases differ in their mechanistic details, but all can usefully be understood as falling along a continuum of interacting local and global processes. Cancer provides an example of disease progression by local triggers and spatial spread, setting a conceptual basis for clarifying puzzles in neurodegeneration. Heart disease also has crucial interactions between global processes, such as circulating lipid levels, and local processes in the development of atherosclerotic plaques. The distinction between local and global processes helps to understand these various age-related diseases. 12. 
Geneva University - Measurement of the Lamb shift in muonic hydrogen: the proton radius puzzle CERN Multimedia 2010-01-01 GENEVA UNIVERSITY École de physique Département de physique nucléaire et corpusculaire 24, quai Ernest-Ansermet 1211 GENEVA 4 Tel: (022) 379 62 73 Fax: (022) 379 69 92 Wednesday 12 May 2010 PARTICLE PHYSICS SEMINAR at 17.00 hrs – Stückelberg Auditorium Measurement of the Lamb shift in muonic hydrogen: the proton radius puzzle Dr Aldo Antognini, CREMA Collaboration, Max Planck Institute, Germany At the Paul Scherrer Institut, Switzerland, we have measured several 2S-2P transition frequencies in muonic hydrogen (µp) and deuterium (µd) by means of laser spectroscopy. This results in an order of magnitude improvement on the rms charge radius values of the proton and the deuteron. Additionally, the Zemach radii and the deuteron polarizability are also inferred. The new proton radius value is deduced with a relative accuracy of 0.1% but strongly disagrees with CODATA. The origin of this discrepancy is not yet known. It may come from theo...
13. Learning and Memory Processes Following Cochlear Implantation: The Missing Piece of the Puzzle Science.gov (United States) Pisoni, David B.; Kronenberger, William G.; Chandramouli, Suyog H.; Conway, Christopher M. 2016-01-01 PMID:27092098
14. Puzzle of the ⁶Li Quadrupole Moment: Steps toward Solving It International Nuclear Information System (INIS) Blokhintsev, L.D.; Kukulin, V.I.; Pomerantsev, V.N. 2005-01-01 The problem of the origin of the quadrupole deformation in the ⁶Li ground state is investigated with allowance for the three-deuteron component of the ⁶Li wave function. Two long-standing puzzles related to the tensor interaction in the ⁶Li nucleus are known: that of an anomalous smallness of the ⁶Li quadrupole moment (being negative, it is smaller in magnitude than the ⁷Li quadrupole moment by a factor of 5) and that of an anomalous behavior of the tensor analyzing power T_2q in the scattering of polarized ⁶Li nuclei on various targets.
It is shown that a large (in magnitude) negative exchange contribution to the ⁶Li quadrupole moment from the three-deuteron configuration cancels almost completely the 'direct' positive contribution due to the αd folding potential. As a result, the total quadrupole moment proves to be close to zero and highly sensitive to fine details of the tensor nucleon-nucleon interaction in the ⁴He nucleus and of its wave function.
15. Gravity does not exist: a puzzle for the 21st century CERN Document Server Icke, Vincent 2014-01-01 Every scientific fact begins as an opinion about the unknown—a theory—that becomes fact as evidence piles up to support it. But what if two theories exist that correspond perfectly to observed phenomena and they cannot be reconciled with each other? Can theory become fact? Such is the dilemma in contemporary physics. In seeking to understand the mechanisms of the universe, physicists have arrived at two conflicting theories: one explains the mystery of gravity through a precise model of space and time, and the other explains the mystery of matter via the behavior of quantum particles. Each theory reigns in its own domain. But 13.8 billion years ago, when the universe first came into being, gravity and matter belonged to a single realm. Can these theories be united, and if so, what facts will be revealed? This, contends Vincent Icke, is the central puzzle facing physics in our century. Combining Icke’s expertise with a robust argument and intellectual playfulness, Gravity Does Not Exist makes a notorious...
16. Genetics in arterial calcification: pieces of a puzzle and cogs in a wheel. Science.gov (United States) Rutsch, Frank; Nitschke, Yvonne; Terkeltaub, Robert 2011-08-19 Artery calcification reflects an admixture of factors such as ectopic osteochondral differentiation with primary host pathological conditions. We review how genetic factors, as identified by human genome-wide association studies, and incomplete correlations with various mouse studies, including knockout and strain analyses, fit into "pieces of the puzzle" in intimal calcification in human atherosclerosis, and artery tunica media calcification in aging, diabetes mellitus, and chronic kidney disease. We also describe in sharp contrast how ENPP1, CD73, and ABCC6 serve as "cogs in a wheel" of arterial calcification. Specifically, each is a minor component in the function of a much larger network of factors that exert balanced effects to promote and suppress arterial calcification. For the network to normally suppress spontaneous arterial calcification, the "cogs" ENPP1, CD73, and ABCC6 must be present and in working order. Monogenic ENPP1, CD73, and ABCC6 deficiencies each drive a molecular pathophysiology of closely related but phenotypically different diseases (generalized arterial calcification of infancy (GACI), pseudoxanthoma elasticum (PXE) and arterial calcification caused by CD73 deficiency (ACDC)), in which premature onset arterial calcification is a prominent but not the sole feature.
17. Low-voltage puzzle-like fractal microelectromechanical system variable capacitor suppressing pull-in KAUST Repository Elshurafa, Amro M.; Ho, P.H.; Ouda, Mahmoud H.; Radwan, Ahmed Gomaa; Salama, Khaled N. 2012-01-01
18. The π⁺-emission puzzle in ⁴ΛHe decay International Nuclear Information System (INIS) Gibson, B.F.; Timmermans, R. 1997-10-01 The observed π⁺ emission from the weak decay of ⁴ΛHe has long been an intriguing puzzle. Experimentally, the π⁺ to π⁻ ratio for ⁴ΛHe decay is about 5%. Because mesonic decay modes of the free Λ (→ p + π⁻, n + π⁰) produce no π⁺s, more complicated mechanisms must be responsible for the π⁺ decay of ⁴ΛHe. Dalitz and von Hippel explored two-body decay processes of the type: (1) Λ → π⁰ + n decay followed by a π⁰ + p → π⁺ + n charge-exchange reaction, and (2) Σ⁺ → π⁺ + n decay following a Λ + p → Σ⁺ + n conversion. They concluded that neither process could account for even a 1% π⁺ rate; the identification of the π⁺ + n decay as a p-wave process ruled out the promising explanation coming from von Hippel's calculations, which had found that s-wave Σ⁺ decay might yield a sufficiently high rate. Cieply and Gal re-examined the charge-exchange contribution and concluded that, although up-to-date input parameters yield a 1.2% branching ratio, the charge-exchange mechanism cannot account for the experimental value of about 5%.
19. Natural killer cells: the journey from puzzles in biology to treatment of cancer. Science.gov (United States) Bodduluru, Lakshmi Narendra; Kasala, Eshvendar Reddy; Madhana, Rajaram Mohan Rao; Sriram, Chandra Shaker 2015-02-28 Natural Killer (NK) cells are innate immune effectors that are primarily involved in immunosurveillance to spontaneously eliminate malignantly transformed and virally infected cells without prior sensitization. NK cells trigger targeted attack through release of cytotoxic granules, and secrete various cytokines and chemokines to promote subsequent adaptive immune responses. NK cells selectively attack target cells with diminished major histocompatibility complex (MHC) class I expression. This "missing-self" recognition by NK cells at first puzzled researchers in the early 1990s, and the mystery was solved with the discovery of germ line encoded killer immunoglobulin receptors that recognize MHC-I molecules. This review summarizes the biology of NK cells, detailing the phenotypes, receptors and functions; interactions of NK cells with dendritic cells (DCs), macrophages and T cells. Further, we discuss the various strategies to modulate NK cell activity and the practice of NK cells in cancer immunotherapy employing NK cell lines, autologous, allogeneic and genetically engineered cell populations. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
20. Animal egg as evolutionary innovation: a solution to the "embryonic hourglass" puzzle.
Science.gov (United States) Newman, Stuart A 2011-11-15 mechanisms operate. Finally, I describe how this new perspective provides a resolution to the embryonic hourglass puzzle. Copyright © 2011 Wiley Periodicals, Inc.
1. Neural bases for basic processes in heuristic problem solving: Take solving Sudoku puzzles as an example. Science.gov (United States) Qin, Yulin; Xiang, Jie; Wang, Rifeng; Zhou, Haiyan; Li, Kuncheng; Zhong, Ning 2012-12-01 Newell and Simon postulated that the basic steps in human problem-solving involve iteratively applying operators to transform the state of the problem to eventually achieve a goal. To check the neural basis of this framework, the present study focused on the basic processes of human heuristic problem solving, in which the participants identified the current problem state and then recalled and applied the corresponding heuristic rules to change the problem state. A new paradigm, solving simplified Sudoku puzzles, was developed for an event-related functional magnetic resonance imaging (fMRI) study of problem solving. Regions of interest (ROIs), including the left prefrontal cortex, the bilateral posterior parietal cortex, the anterior cingulate cortex, the bilateral caudate nuclei, the bilateral fusiform, as well as the bilateral frontal eye fields, were found to be involved in the task. To obtain convergent evidence, in addition to traditional statistical analysis, we used a multivariate voxel classification method to check how accurately the task condition could be predicted from the blood-oxygen-level-dependent (BOLD) response of the ROIs, using a new classifier developed in this study for fMRI data. To reveal the roles that the ROIs play in problem solving, we developed an ACT-R computational model of the information processing in human problem solving, and tried to predict the BOLD response of the ROIs from the task. Advances in human problem-solving research after Newell and Simon are then briefly discussed. © 2012 The Institute of Psychology, Chinese Academy of Sciences and Blackwell Publishing Asia Pty Ltd.
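To make the identify-state / recall-rule / apply-operator cycle described in this entry concrete, the toy sketch below repeatedly applies a single heuristic rule ("if a cell has exactly one remaining candidate, fill it") to a simplified 4x4 Sudoku. This is only an illustration of the Newell-Simon framing referenced above; it is not the authors' ACT-R model or experimental paradigm, and the grid, rule, and function names are invented here.

```python
# Toy illustration of the state -> heuristic rule -> operator cycle (not the
# authors' ACT-R model; the 4x4 grid and the single rule are invented here).

def solve_simplified_sudoku(grid):
    """Repeatedly apply one heuristic rule ('naked single') to a 4x4 grid.

    grid: list of four lists of ints, where 0 marks an empty cell.
    """
    def candidates(state, r, c):
        # Identify the current problem state: values already used in the
        # row, column and 2x2 block constraining cell (r, c).
        used = set(state[r]) | {state[i][c] for i in range(4)}
        br, bc = 2 * (r // 2), 2 * (c // 2)
        used |= {state[i][j] for i in range(br, br + 2) for j in range(bc, bc + 2)}
        return {1, 2, 3, 4} - used

    progress = True
    while progress:
        progress = False
        for r in range(4):
            for c in range(4):
                if grid[r][c] == 0:
                    cand = candidates(grid, r, c)
                    if len(cand) == 1:           # the heuristic rule fires
                        grid[r][c] = cand.pop()  # the operator changes the state
                        progress = True
    return grid

puzzle = [[1, 0, 3, 0],
          [0, 4, 0, 2],
          [2, 0, 4, 0],
          [0, 3, 0, 1]]
print(solve_simplified_sudoku(puzzle))
# -> [[1, 2, 3, 4], [3, 4, 1, 2], [2, 1, 4, 3], [4, 3, 2, 1]]
```

Each pass inspects the current problem state, lets the rule fire where it matches, and applies the operator that transforms the state, mirroring the cycle the abstract describes.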
2. Pathophysiological understanding of HFpEF: microRNAs as part of the puzzle. Science.gov (United States) Rech, Monika; Barandiarán Aizpurua, Arantxa; van Empel, Vanessa; van Bilsen, Marc; Schroen, Blanche 2018-05-01 Half of all heart failure patients have preserved ejection fraction (HFpEF). Comorbidities associated with and contributing to HFpEF include obesity, diabetes and hypertension. Still, the underlying pathophysiological mechanisms of HFpEF are unknown. A preliminary consensus proposes that the multi-morbidity triggers a state of systemic, chronic low-grade inflammation, and microvascular dysfunction, causing reduced nitric oxide bioavailability to adjacent cardiomyocytes. As a result, the cardiomyocyte remodels its contractile elements and fails to relax properly, causing diastolic dysfunction, and eventually HFpEF. HFpEF is a complex syndrome for which currently no efficient therapies exist. This is notably due to the current one-size-fits-all therapy approach that ignores individual patient differences. MicroRNAs have been studied in relation to pathophysiological mechanisms and comorbidities underlying and contributing to HFpEF. As regulators of gene expression, microRNAs may contribute to the pathophysiology of HFpEF. In addition, secreted circulating microRNAs are potential biomarkers and as such, they could help stratify the HFpEF population and open new ways for individualized therapies. In this review, we provide an overview of the ever-expanding world of non-coding RNAs and their contribution to the molecular mechanisms underlying HFpEF. We propose prospects for microRNAs in stratifying the HFpEF population. MicroRNAs add a new level of complexity to the regulatory network controlling cardiac function and hence the understanding of gene regulation becomes a fundamental piece in solving the HFpEF puzzle.
3. Lifetime of (e+e-) puzzle's composite particle: No valid limits yet International Nuclear Information System (INIS) Griffin, J.J. 1993
4. The puzzling role of CXCR4 in human immunodeficiency virus infection. Science.gov (United States) Vicenzi, Elisa; Liò, Pietro; Poli, Guido 2013-01-01 The human immunodeficiency virus type-1 (HIV-1) is the etiological agent of the acquired immunodeficiency syndrome (AIDS), a disease highly lethal in the absence of combination antiretroviral therapy. HIV infects CD4(+) cells of the immune system (T cells, monocyte-macrophages and dendritic cells) via interaction with a universal primary receptor, the CD4 molecule, followed by a mandatory interaction with a second receptor (co-receptor) belonging to the chemokine receptor family. Apart from some rare cases, two chemokine receptors have been evolutionarily selected to accomplish this need for HIV-1: CCR5 and CXCR4. Yet, usage of these two receptors appears to be neither casual nor simply explained by their levels of cell surface expression. While CCR5 use is the universal rule at the start of every infection regardless of the transmission route (blood-related, sexual or mother to child), CXCR4 utilization emerges later in disease, coinciding with the immunologically deficient phase of infection. Moreover, in most instances CXCR4 use as viral entry co-receptor is associated with maintenance of CCR5 use. Since antiviral agents preventing CCR5 utilization by the virus are already in use, while others targeting either CCR5 or CXCR4 (or both) are under investigation, understanding the biological correlates of this "asymmetrical" utilization of HIV entry co-receptors bears relevance for the clinical choice of which therapeutics should be administered to infected individuals. We will here summarize the basic knowledge and the hypotheses underlying the puzzling and yet unequivocal role of CXCR4 in HIV-1 infection.
5. New data and an old puzzle: the negative association between schizophrenia and rheumatoid arthritis.
Science.gov (United States) Lee, S Hong; Byrne, Enda M; Hultman, Christina M; Kähler, Anna; Vinkhuyzen, Anna A E; Ripke, Stephan; Andreassen, Ole A; Frisell, Thomas; Gusev, Alexander; Hu, Xinli; Karlsson, Robert; Mantzioris, Vasilis X; McGrath, John J; Mehta, Divya; Stahl, Eli A; Zhao, Qiongyi; Kendler, Kenneth S; Sullivan, Patrick F; Price, Alkes L; O'Donovan, Michael; Okada, Yukinori; Mowry, Bryan J; Raychaudhuri, Soumya; Wray, Naomi R; Byerley, William; Cahn, Wiepke; Cantor, Rita M; Cichon, Sven; Cormican, Paul; Curtis, David; Djurovic, Srdjan; Escott-Price, Valentina; Gejman, Pablo V; Georgieva, Lyudmila; Giegling, Ina; Hansen, Thomas F; Ingason, Andrés; Kim, Yunjung; Konte, Bettina; Lee, Phil H; McIntosh, Andrew; McQuillin, Andrew; Morris, Derek W; Nöthen, Markus M; O'Dushlaine, Colm; Olincy, Ann; Olsen, Line; Pato, Carlos N; Pato, Michele T; Pickard, Benjamin S; Posthuma, Danielle; Rasmussen, Henrik B; Rietschel, Marcella; Rujescu, Dan; Schulze, Thomas G; Silverman, Jeremy M; Thirumalai, Srinivasa; Werge, Thomas; Agartz, Ingrid; Amin, Farooq; Azevedo, Maria H; Bass, Nicholas; Black, Donald W; Blackwood, Douglas H R; Bruggeman, Richard; Buccola, Nancy G; Choudhury, Khalid; Cloninger, Robert C; Corvin, Aiden; Craddock, Nicholas; Daly, Mark J; Datta, Susmita; Donohoe, Gary J; Duan, Jubao; Dudbridge, Frank; Fanous, Ayman; Freedman, Robert; Freimer, Nelson B; Friedl, Marion; Gill, Michael; Gurling, Hugh; De Haan, Lieuwe; Hamshere, Marian L; Hartmann, Annette M; Holmans, Peter A; Kahn, René S; Keller, Matthew C; Kenny, Elaine; Kirov, George K; Krabbendam, Lydia; Krasucki, Robert; Lawrence, Jacob; Lencz, Todd; Levinson, Douglas F; Lieberman, Jeffrey A; Lin, Dan-Yu; Linszen, Don H; Magnusson, Patrik K E; Maier, Wolfgang; Malhotra, Anil K; Mattheisen, Manuel; Mattingsdal, Morten; McCarroll, Steven A; Medeiros, Helena; Melle, Ingrid; Milanova, Vihra; Myin-Germeys, Inez; Neale, Benjamin M; Ophoff, Roel A; Owen, Michael J; Pimm, Jonathan; Purcell, Shaun M; Puri, Vinay; Quested, Digby J; Rossin, Lizzy; Ruderfer, Douglas; Sanders, Alan R; Shi, Jianxin; Sklar, Pamela; St Clair, David; Stroup, T Scott; Van Os, Jim; Visscher, Peter M; Wiersma, Durk; Zammit, Stanley; Bridges, S Louis; Choi, Hyon K; Coenen, Marieke J H; de Vries, Niek; Dieud, Philippe; Greenberg, Jeffrey D; Huizinga, Tom W J; Padyukov, Leonid; Siminovitch, Katherine A; Tak, Paul P; Worthington, Jane; De Jager, Philip L; Denny, Joshua C; Gregersen, Peter K; Klareskog, Lars; Mariette, Xavier; Plenge, Robert M; van Laar, Mart; van Riel, Piet 2015-10-01 A long-standing epidemiological puzzle is the reduced rate of rheumatoid arthritis (RA) in those with schizophrenia (SZ) and vice versa. Traditional epidemiological approaches to determine if this negative association is underpinned by genetic factors would test for reduced rates of one disorder in relatives of the other, but sufficiently powered data sets are difficult to achieve. The genomics era presents an alternative paradigm for investigating the genetic relationship between two uncommon disorders. We use genome-wide common single nucleotide polymorphism (SNP) data from independently collected SZ and RA case-control cohorts to estimate the SNP correlation between the disorders. We test a genotype X environment (GxE) hypothesis for SZ with environment defined as winter- vs summer-born. We estimate a small but significant negative SNP-genetic correlation between SZ and RA (-0.046, s.e. 0.026, P = 0.036). 
The negative correlation was stronger for the SNP set attributed to coding or regulatory regions (-0.174, s.e. 0.071, P = 0.0075). Our analyses led us to hypothesize a gene-environment interaction for SZ in the form of immune challenge. We used month of birth as a proxy for environmental immune challenge and estimated the genetic correlation between winter-born and non-winter-born SZ to be significantly less than 1 for coding/regulatory region SNPs (0.56, s.e. 0.14, P = 0.00090). Our results are consistent with epidemiological observations of a negative relationship between SZ and RA reflecting, at least in part, genetic factors. Results of the month of birth analysis are consistent with pleiotropic effects of genetic variants dependent on environmental context.
6. The puzzle of the ankle in the Ultrahigh Energy Cosmic Ray Spectrum, and composition indicators Science.gov (United States) Farrar, Glennys 2015-08-01 The sharp change in slope of the ultra-high energy cosmic ray spectrum around 10^18.6 eV (the ankle), combined with evidence of a light but extragalactic component near and below the ankle and intermediate composition above, has proved exceedingly challenging to understand theoretically. In this talk I discuss two possible solutions to the puzzle and how they can be (in)validated. First, I present a new mechanism whereby photo-disintegration of ultra-high energy nuclei in the region surrounding a UHECR accelerator naturally accounts for the observed spectrum and inferred composition (using LHC-tuned models extrapolated to UHE) at Earth. We discuss the conditions required to reproduce the spectrum above 10^17.5 eV and the composition, which -- in our model -- consists below the ankle of extragalactic protons and the high energy tail of Galactic Cosmic Rays, and above the ankle of surviving nuclei from the extended source. Predictions for the spectrum and flavors of neutrinos resulting from this process will be presented, and also implications for candidate sources. The other possible explanation is that in actuality UHECRs are entirely or almost entirely protons, and the cross-section for p-Air scattering increases more rapidly above a center-of-mass energy of 70 TeV (10 times the current LHC cm energy) than predicted in conventional models. This gives an equally good fit to the depth-of-shower maximum behavior observed by Auger, while being an intriguing sign of a new state in QCD at extremely high energy density.
7. New Measurement of the 1S–3S Transition Frequency of Hydrogen: Contribution to the Proton Charge Radius Puzzle Science.gov (United States) Fleurbaey, Hélène; Galtier, Sandrine; Thomas, Simon; Bonnaud, Marie; Julien, Lucile; Biraben, François; Nez, François; Abgrall, Michel; Guéna, Jocelyne 2018-05-01 We present a new measurement of the 1S–3S two-photon transition frequency of hydrogen, realized with a continuous-wave excitation laser at 205 nm on a room-temperature atomic beam, with a relative uncertainty of 9×10⁻¹³. The proton charge radius deduced from this measurement, r_p = 0.877(13) fm, is in very good agreement with the current CODATA-recommended value. This result contributes to the ongoing search to solve the proton charge radius puzzle, which arose from a discrepancy between the CODATA value and a more precise determination of r_p from muonic hydrogen spectroscopy.
8. Definition of a visuospatial dimension as a step forward in the diagnostic puzzle of nonverbal learning disability.
Science.gov (United States) Poletti, Michele 2017-01-01 Although clinically recognized for almost 50 years, the categorical distinction of specific learning disabilities due to an impairment of the nonverbal domain (nonverbal learning disability [NLD]) is still debated and controversial. Unsolved issues involve theoretical models, diagnostic criteria, rehabilitative interventions, and moderator factors. These issues are briefly overviewed to sustain the need for a shift toward dimensional approaches, as suggested by research domain criteria, as a step forward in the diagnostic puzzle of NLD. With this aim, a visuospatial dimension, or spectrum, is proposed, and then clinical conditions that may fit with its impaired side are systemized, while specifying in which conditions a visuospatial impairment may be considered an NLD. 9. On the reversibility of the Meissner effect and the angular momentum puzzle International Nuclear Information System (INIS) Hirsch, J.E. 2016-01-01 suppress Foucault currents, charge has to flow in direction perpendicular to the phase boundary. • The charge carriers have to be holes. • This solves also the angular momentum puzzle associated with the Meissner effect. 10. The Puzzle of HCN in Comets: Is it both a Product and a Primary Species? Science.gov (United States) Mumma, Michael J.; Bonev, Boncho P.; Charnley, Steven B.; Cordiner, Martin A.; DiSanti, Michael A.; Gibb, Erika L.; Magee-Sauer, Karen; Paganini, Lucas; Villanueva, Geronimo L. 2014-11-01 Hydrogen cyanide has long been regarded as a primary volatile in comets, stemming from its presence in dense molecular cloud cores and its supposed storage in the cometary nucleus. Here, we examine the observational evidence for and against that hypothesis, and argue that HCN may also result from near-nucleus chemical reactions in the coma. The distinction (product vs. primary species) is important for multiple reasons: 1. HCN is often used as a proxy for water when the dominant species (H2O) is not available for simultaneous measurement, as at radio wavelengths. 2. HCN is one of the few volatile carriers of nitrogen accessible to remote sensing. If HCN is mainly a product species, its precursor becomes the more important metric for compiling a taxonomic classification based on nitrogen chemistry. 3. The stereoisomer HNC is now confirmed as a product species. Could reaction of a primary precursor (X-CN) with a hydrocarbon co-produce both HNC and HCN? 4. The production rate for CN greatly exceeds that of HCN in some comets, demonstrating the presence of another (more important) precursor of CN. Several puzzling lines of evidence raise issues about the origin of HCN: a. The production rates of HCN measured through rotational (radio) and vibrational (infrared) spectroscopy agree in some comets - in others the infrared rate exceeds the radio rate substantially. b. With its strong dipole moment and H-bonding character, HCN should be linked more strongly in the nuclear ice to other molecules with similar properties (H2O, CH3OH), but instead its spatial release in some comets seems strongly coupled to volatiles that lack a dipole moment and thus do not form H-bonds (methane, ethane). c. The nucleus-centered rotational temperatures measured for H2O and other species (C2H6, CH3OH) usually agree within error, but those for HCN are often slightly smaller. d. In comet ISON, ALMA maps of HCN and the dust continuum show a slight displacement 80 km) in the centroids. We will 11. 
There and back again: Two views on the protein folding puzzle. Science.gov (United States) Finkelstein, Alexei V; Badretdin, Azat J; Galzitskaya, Oxana V; Ivankov, Dmitry N; Bogatyreva, Natalya S; Garbuzynskiy, Sergiy O 2017-07-01 The ability of protein chains to spontaneously form their spatial structures is a long-standing puzzle in molecular biology. Experimentally measured folding times of single-domain globular proteins range from microseconds to hours: the difference (10-11 orders of magnitude) is the same as that between the life span of a mosquito and the age of the universe. This review describes physical theories of rates of overcoming the free-energy barrier separating the natively folded (N) and unfolded (U) states of protein chains in both directions: "U-to-N" and "N-to-U". In the theory of protein folding rates a special role is played by the point of thermodynamic (and kinetic) equilibrium between the native and unfolded state of the chain; here, the theory obtains the simplest form. Paradoxically, a theoretical estimate of the folding time is easier to get from consideration of protein unfolding (the "N-to-U" transition) rather than folding, because it is easier to outline a good unfolding pathway of any structure than a good folding pathway that leads to the stable fold, which is yet unknown to the folding protein chain. And since the rates of direct and reverse reactions are equal at the equilibrium point (as follows from the physical "detailed balance" principle), the estimated folding time can be derived from the estimated unfolding time. Theoretical analysis of the "N-to-U" transition outlines the range of protein folding rates in a good agreement with experiment. Theoretical analysis of folding (the "U-to-N" transition), performed at the level of formation and assembly of protein secondary structures, outlines the upper limit of protein folding times (i.e., of the time of search for the most stable fold). Both theories come to essentially the same results; this is not a surprise, because they describe overcoming one and the same free-energy barrier, although the way to the top of this barrier from the side of the unfolded state is very different from the way from the 12. On the reversibility of the Meissner effect and the angular momentum puzzle Energy Technology Data Exchange (ETDEWEB) Hirsch, J.E., E-mail: [email protected] 2016-10-15 suppress Foucault currents, charge has to flow in direction perpendicular to the phase boundary. • The charge carriers have to be holes. • This solves also the angular momentum puzzle associated with the Meissner effect. 13. Juggling the life-puzzle with Geosciences: personal experience and strategies from a female leader Science.gov (United States) Arheimer, Berit 2017-04-01 recommendations from being a single-mother with scientific and international ambitions, working in an operational environment, on how to juggle the dynamic life puzzle. 14. William Wales and the 1769 transit of Venus: puzzle solving and the determination of the astronomical unit Science.gov (United States) Metz, Don 2009-05-01 According to Thomas Kuhn, a significant part of “normal science” is the fact gathering, empirical work which is intended to illustrate an existing paradigm. Some of this effort focuses on the determination of physical constants such as the astronomical unit (AU). For Kuhn, normal science is also what prepares students for membership in a particular scientific community and is embodied in some form in our science textbooks. 
However, neither Kuhn nor the textbook says much about the individuals who practice normal science, especially those who had been relegated to the “hack” duties of long and arduous measurement and calculation. In this paper, to provide a context for students of astronomy, I will outline the story of the determination of the AU and in particular the contribution of William Wales, an obscure British astronomer. Wales, toiling in the shadow of Halley (of Halley’s comet fame), Mason and Dixon (of Mason and Dixon line fame) and the infamous Captain Cook endured a brutal winter in northern Canada for a brief glimpse of the 1769 transit of Venus. In the end, Wales supplied one small piece of the puzzle in the determination of the AU and he exemplified the human spirit and persistence of a Kuhnian “puzzle solver”. 15. DNA is structured as a linear "jigsaw puzzle" in the genomes of Arabidopsis, rice, and budding yeast. Science.gov (United States) Liu, Yun-Hua; Zhang, Meiping; Wu, Chengcang; Huang, James J; Zhang, Hong-Bin 2014-01-01 Knowledge of how a genome is structured and organized from its constituent elements is crucial to understanding its biology and evolution. Here, we report the genome structuring and organization pattern as revealed by systems analysis of the sequences of three model species, Arabidopsis, rice and yeast, at the whole-genome and chromosome levels. We found that all fundamental function elements (FFE) constituting the genomes, including genes (GEN), DNA transposable elements (DTE), retrotransposable elements (RTE), simple sequence repeats (SSR), and (or) low complexity repeats (LCR), are structured in a nonrandom and correlative manner, thus leading to a hypothesis that the DNA of the species is structured as a linear "jigsaw puzzle". Furthermore, we showed that different FFE differ in their importance in the formation and evolution of the DNA jigsaw puzzle structure between species. DTE and RTE play more important roles than GEN, LCR, and SSR in Arabidopsis, whereas GEN and RTE play more important roles than LCR, SSR, and DTE in rice. The genes having multiple recognized functions play more important roles than those having single functions. These results provide useful knowledge necessary for better understanding genome biology and evolution of the species and for effective molecular breeding of rice. 16. Can the consumption-free nonexpected utility model solve the risk premium puzzle? An empirical study of the Japanese stock market OpenAIRE Kang, Myong-Il 2010-01-01 This paper investigates whether the consumption-free two-beta intertemporal capital asset-pricing model developed by Campbell and Vuolteenaho (2004) is able to solve the risk premium puzzle in the Japanese stock market over the period 1984-2002. Using the cash flow and discount rate betas as risk factors, the model is able to explain about half of the market returns by selection of suitable vector autoregression variables. On this basis, the model proposed solves the risk premium puzzle in Ja... 17. The concepts of asymmetric and symmetric power can help resolve the puzzle of altruistic and cooperative behaviour. Science.gov (United States) Phillips, Tim 2018-02-01 Evolutionary theory predicts competition in nature yet altruistic and cooperative behaviour appears to reduce the ability to compete in order to help others compete better. This evolutionary puzzle is usually explained by kin selection where close relatives perform altruistic and cooperative acts to help each other and by reciprocity theory (i.e. 
direct, indirect and generalized reciprocity) among non-kin. Here, it is proposed that the concepts of asymmetry and symmetry in power and dominance are critical if we are ever to resolve the puzzle of altruism and cooperation towards non-kin. Asymmetry in power and dominance is likely to emerge under competition in nature as individuals strive to gain greater access to the scarce resources needed to survive and reproduce successfully. Yet asymmetric power presents serious problems for reciprocity theory in that a dominant individual faces a temptation to cheat in interactions with subordinates that is likely to far outweigh any individual selective benefits gained through reciprocal mechanisms. Furthermore, action taken by subordinates to deter non-reciprocation by dominants is likely to prove prohibitively costly to their fitness, making successful enforcement of reciprocal mechanisms unlikely. It is also argued here that many apparently puzzling forms of cooperation observed in nature (e.g. cooperative breeding in which unrelated subordinates help dominants to breed) might be best explained by asymmetry in power and dominance. Once it is recognized that individuals in these cooperative interactions are subject to the constraints and opportunities imposed on them by asymmetric power then they can be seen as pursuing a 'least bad' strategy to promote individual fitness - one that is nevertheless consistent with evolutionary theory. The concept of symmetric power also provides important insights. It can inhibit reciprocal mechanisms in the sense that symmetric power makes it easier for a cheat to appropriate common 18. The puzzle box as a simple and efficient behavioral test for exploring impairments of general cognition and executive functions in mouse models of schizophrenia. Science.gov (United States) Ben Abdallah, Nada M-B; Fuss, Johannes; Trusel, Massimo; Galsworthy, Michael J; Bobsin, Kristin; Colacicco, Giovanni; Deacon, Robert M J; Riva, Marco A; Kellendonk, Christoph; Sprengel, Rolf; Lipp, Hans-Peter; Gass, Peter 2011-01-01 Deficits in executive functions are key features of schizophrenia. Rodent behavioral paradigms used so far to find animal correlates of such deficits require extensive effort and time. The puzzle box is a problem-solving test in which mice are required to complete escape tasks of increasing difficulty within a limited amount of time. Previous data have indicated that it is a quick but highly reliable test of higher-order cognitive functioning. We evaluated the use of the puzzle box to explore executive functioning in five different mouse models of schizophrenia: mice with prefrontal cortex and hippocampus lesions, mice treated sub-chronically with the NMDA-receptor antagonist MK-801, mice constitutively lacking the GluA1 subunit of AMPA-receptors, and mice over-expressing dopamine D2 receptors in the striatum. All mice displayed altered executive functions in the puzzle box, although the nature and extent of the deficits varied between the different models. Deficits were strongest in hippocampus-lesioned and GluA1 knockout mice, while more subtle deficits but specific to problem solving were found in the medial prefrontal-lesioned mice, MK-801-treated mice, and in mice with striatal overexpression of D2 receptors. Data from this study demonstrate the utility of the puzzle box as an effective screening tool for executive functions in general and for schizophrenia mouse models in particular. Published by Elsevier Inc. 19. 
A new theory of cryptogenic stroke and its relationship to patent foramen ovale; or, the puzzle of the missing extra risk. Science.gov (United States) Eggers, Arnold E 2006-01-01 Cryptogenic stroke (or stroke of undetermined cause) is a common cause of stroke and is statistically associated with patent foramen ovale (PFO). The largest study of cryptogenic stroke is the Homma study, which is a sub-study of the WARSS trial; it produced the following data: cryptogenic stroke patients with and without PFO, when treated with either aspirin or warfarin, all had identical recurrence rates. This is puzzling because it seems as though there ought to have been some extra risk in one of the two groups under one of the two treatments. How could everything come out the same? A review of the epidemiology of cryptogenic stroke shows that, compared to patients with stroke of determined cause, cryptogenic stroke patients are a little younger and have lower doses of the usual risk factors (hypertension and diabetes mellitus) but more PFO. Cryptogenic strokes appear to be embolic strokes from an unknown source. A previously published article setting forth a hypothetical theory of stress-induced stroke was used to analyze these data. It is suggested that stress can induce episodic systemic platelet activation and hypercoagulability, which causes transient thrombus formation and subsequent embolization on both the arterial and venous sides of the circulation; the latter requires a PFO to cause a stroke (paradoxical embolism). The sum of these two mechanisms explains cryptogenic stroke. The PFO subset of cryptogenic stroke includes patients with both early and late stage disease who have an aggregate risk approximately equal to that of patients without PFO. Cryptogenic stroke is part of the disease of stress-induced cerebrovascular disease. Aspirin and warfarin have already been shown to be equally effective in secondary prevention of ischemic stroke. 20. A note on the puzzling spindown behavior of the Galactic center magnetar SGR J1745–2900 International Nuclear Information System (INIS) Tong, Hao 2015-01-01 SGR J1745–2900 is a magnetar near the Galactic center. X-ray observations of this source found a decreasing X-ray luminosity accompanied by an enhanced spindown rate. This negative correlation between X-ray luminosity and spindown rate is hard to understand. The wind braking model of magnetars is employed to explain this puzzling spindown behavior. During the release of magnetic energy of magnetars, a system of particles may be generated. Some of these particles remain trapped in the magnetosphere and may contribute to the X-ray luminosity. The rest of the particles can flow out and take away the rotational energy of the central neutron star. A smaller polar cap angle will cause the decrease of X-ray luminosity and enhanced spindown rate of SGR J1745–2900. This magnetar is shortly expected to have a maximum spindown rate. (paper) 1. Double-Pionic Fusion of Nuclear Systems and the ''ABC'' Effect: Approaching a Puzzle by Exclusive and Kinematically Complete Measurements International Nuclear Information System (INIS) Bashkanov, M.; Clement, H.; Doroshkevich, E.; Khakimova, O.; Kren, F.; Meier, R.; Pricking, A.; Skorodko, T.; Wagner, G. J.; Bargholtz, C.; Geren, L.; Lindberg, K.; Tegner, P.-E.; Zartova, I.; Berlowski, M.; Stepaniak, J.; Bogoslawsky, D.; Ivanov, G.; Jiganov, E.; Morosov, B. 
2009-01-01 The ABC effect--a puzzling low-mass enhancement in the ππ invariant mass spectrum, first observed by Abashian, Booth, and Crowe--is well known from inclusive measurements of two-pion production in nuclear fusion reactions. Here we report on the first exclusive and kinematically complete measurements of the most basic double-pionic fusion reaction pn→dπ^0π^0 at beam energies of 1.03 and 1.35 GeV. The measurements, which have been carried out at CELSIUS-WASA, reveal the ABC effect to be a (ππ)_{I=L=0} channel phenomenon associated with both a resonancelike energy dependence in the integral cross section and the formation of a ΔΔ system in the intermediate state. A corresponding simple s-channel resonance ansatz provides a surprisingly good description of the data. 2. Collecting the Puzzle Pieces: Completing HST's UV+NIR Survey of the TRAPPIST-1 System ahead of JWST Science.gov (United States) de Wit, Julien 2017-08-01 Using the Spitzer Space Telescope, our team has discovered 7 Earth-sized planets around the nearby Ultra-cool dwarf star TRAPPIST-1. These planets are the first to be simultaneously Earth-sized, temperate, and amenable for in-depth atmospheric studies with space-based observatories (notably, JWST). TRAPPIST-1's system thus provides us with the first opportunity to probe the atmospheres of Earth-sized exoplanets and search for signs of habitability beyond our solar system, which will require spectral information from the UV to the IR to complete their atmospheric puzzles. We request 114 HST orbits to complete the UV+NIR survey of the 7 planets in preparation for their in-depth followup with JWST. The suggested low-density of the planets combined with their complex orbital resonance chain indicate that they migrated inward to their current positions and may harbor large water rich reservoir or leftover primordial H2 atmospheres. We have already ruled out the presence of clear H2 atmospheres for the 5 innermost planets using WFC3 and are requesting 16 WFC3 orbits to complete the TRAPPIST-1 NIR reconnaissance survey. Our primary request consists in 98 STIS orbits to complete the survey for extended H-exospheres around each of the planets. H-exospheres are the most accessible observables for volatile reservoirs, which have not been ruled out by our WFC3 observations. Exosphere detection is only amenable using HST unique capabilities in the UV and are pivotal to guide JWST's in-depth followup. The combined information from HST's UV and NIR observations will allow us to put the first critical pieces of the atmospheric puzzle in place for these temperate earth-sized worlds. 3. The role of helioseismology in the knowledge of the solar interior dynamics and in the solar neutrino puzzle International Nuclear Information System (INIS) Couvidat, Sebastien 2002-01-01 This dissertation focuses on the solar interior dynamics and the neutrino puzzle, using helioseismology and more specifically the SoHO/GOLF data as a tool to probe the radiative interior of the Sun. We show how helioseismology gives us a direct access to the deep-layer dynamics through the solar rotation profile. Our data favor a decrease of the rotation velocity near the nuclear core. This can be used to constrain the angular momentum distribution processes, and to set an upper bound on the intensity of the magnetic field in this part of the Sun. The search for gravity modes with an original method is another topic of this dissertation. Several candidates are detected that need now to be confirmed. 
Gravity modes will give us a precious insight into the solar core structure and dynamics. We also use the stellar evolution code CESAM. By combining seismic data and solar modelling, we produce solar seismic models. The neutrino flux predictions from these models are partly derived on an observational basis. The comparison of these fluxes with the SNO results gives the solution to the solar neutrino puzzle: neutrinos have masses and they oscillate between different lepton flavors. This explains the deficit of detections observed since the sixties. We also work on the internal magnetic fields that take part to the dynamic processes. In particular, we start to study the impact of these fields on the neutrino production and transport. Finally, we reach the limits of the 1D stellar codes: they cannot take into account the dynamic processes efficiently. This justifies the current development of 2D or 3D codes. (author) [fr 4. RED DWARF DYNAMO RAISES PUZZLE OVER INTERIORS OF LOWEST-MASS STARS Science.gov (United States) 2002-01-01 -years away in the constellation Aquila. Gliese 752A is a red dwarf that is one-third the mass of the Sun and slightly more than half its diameter. By contrast, VB10 is physically smaller than the planet Jupiter and only about nine percent the mass of our Sun. This very faint star is near the threshold of the lowest possible mass for a true star (.08 solar masses), below which nuclear fusion processes cannot take place according to current models. A team led by Linsky used Hubble's Goddard High Resolution Spectrograph (GHRS) to make a one-hour long exposure of VB10 on October 12, 1994. No detectable ultraviolet emission was seen until the last five minutes, when bright emission was detected in a flare. Though the star's normal surface temperature is 4,500 degrees Fahrenheit, Hubble's GHRS detected a sudden burst of 270,000 degrees Fahrenheit in the star's outer atmosphere. Linsky attributes this rapid heating to the presence of an intense, but unstable, magnetic field. THE INTERIOR WORKINGS OF A STELLAR DYNAMO Before the Hubble observation, astronomers thought magnetic fields in stars required the same dynamo process which creates magnetic fields on the Sun. In the classic solar model, heat generated by nuclear fusion reactions at the star's center escapes through a radiative zone just outside the core. The heat travels from the radiative core to the star's surface through a convection zone. In this region, heat bubbles to the surface by motions similar to boiling in a pot of water. Dynamos, which accelerate electrons to create magnetic forces, operate when the interior of a star rotates faster than the surface. Recent studies of the Sun indicate its convective zone rotates at nearly the same rate at all depths. This means the solar dynamo must operate in the more rapidly rotating radiative core just below the convective zone. The puzzle is that stars below 20 percent the mass of our Sun do not have radiative cores, but instead transport heat from their core through 5. Puzzling with online games (BAM-COG): reliability, validity, and feasibility of an online self-monitor for cognitive performance in aging adults. Science.gov (United States) Aalbers, Teun; Baars, Maria A E; Olde Rikkert, Marcel G M; Kessels, Roy P C 2013-12-03 Online interventions are aiming increasingly at cognitive outcome measures but so far no easy and fast self-monitors for cognition have been validated or proven reliable and feasible. 
This study examines a new instrument called the Brain Aging Monitor-Cognitive Assessment Battery (BAM-COG) for its alternate forms reliability, face and content validity, and convergent and divergent validity. Also, reference values are provided. The BAM-COG consists of four easily accessible, short, yet challenging puzzle games that have been developed to measure working memory ("Conveyer Belt"), visuospatial short-term memory ("Sunshine"), episodic recognition memory ("Viewpoint"), and planning ("Papyrinth"). A total of 641 participants were recruited for this study. Of these, 397 adults, 40 years and older (mean 54.9, SD 9.6), were eligible for analysis. Study participants played all games three times with 14 days in between sets. Face and content validity were based on expert opinion. Alternate forms reliability (AFR) was measured by comparing scores on different versions of the BAM-COG and expressed with an intraclass correlation (ICC: two-way mixed; consistency at 95%). Convergent validity (CV) was provided by comparing BAM-COG scores to gold-standard paper-and-pencil and computer-assisted cognitive assessment. Divergent validity (DV) was measured by comparing BAM-COG scores to the National Adult Reading Test IQ (NART-IQ) estimate. Both CV and DV are expressed as Spearman rho correlation coefficients. Three out of four games showed adequate results on AFR, CV, and DV measures. The games Conveyer Belt, Sunshine, and Papyrinth have AFR ICCs of .420, .426, and .645 respectively. Also, these games had good to very good CV correlations: rho=.577 (P=.001) and rho=.669 (P<.001). The game Viewpoint provided less desirable results with an AFR ICC of .167, CV rho=.202 (P=.15), and DV rho=-.162 (P=.21). This study provides evidence for the use of the BAM-COG test battery as a feasible, reliable, and 6. ESA's Rosetta mission and the puzzles that Hale-Bopp left behind Science.gov (United States) 1997-04-01
Yet basic questions about comets remain unanswered. Just as the Rosetta Stone was the key that unlocked the meaning of Egyptian hieroglyphs, so the Rosetta spacecraft is intended to decipher the meaning of comets and their role in the origin and history of the Solar System. Here are a few of the main puzzles. * What does a comet weigh? Guesses about the density of cometary material vary widely, and only an 7. The Annuity Puzzle Remains a Puzzle NARCIS (Netherlands) Peijnenburg, J.M.J.; Werker, Bas; Nijman, Theo We examine incomplete annuity menus and background risk as possible drivers of divergence from full annuitization. Contrary to what is often suggested in the literature, we find that full annuitization remains optimal if saving is possible after retirement. This holds irrespective of whether real or 8. Puzzling out the proton radius puzzle Directory of Open Access Journals (Sweden) Mihovilovič Miha 2014-01-01 Full Text Available The discrepancy between the proton charge radius extracted from the muonic hydrogen Lamb shift measurement and the best present value obtained from the elastic scattering experiments, remains unexplained and represents a burning problem of today’s nuclear physics: after more than 50 years of research the radius of a basic constituent of matter is still not understood. This paper presents a summary of the best existing proton radius measurements, followed by an overview of the possible explanations for the observed inconsistency between the hydrogen and the muonic-hydrogen data. In the last part the upcoming experiments, dedicated to remeasuring the proton radius, are described. 9. Puzzling out the proton radius puzzle Energy Technology Data Exchange (ETDEWEB) Mihovilovič, M.; Merkel, H.; Weber, A. [Institut für Kernphysik, Johannes Gutenberg-Universität Mainz, Johann-Joachim-Becher-Weg 45, 55128 Mainz (Germany) 2016-01-22 The discrepancy between the proton charge radius extracted from the muonic hydrogen Lamb shift measurement and the best present value obtained from the elastic scattering experiments, remains unexplained and represents a burning problem of today’s nuclear physics: after more than 50 years of research the radius of a basic constituent of matter is still not understood. This paper presents a summary of the best existing proton radius measurements, followed by an overview of the possible explanations for the observed inconsistency between the hydrogen and the muonic-hydrogen data. In the last part the upcoming experiments, dedicated to remeasuring the proton radius, are described. 10. Visual Puzzles, Figure Weights, and Cancellation: Some Preliminary Hypotheses on the Functional and Neural Substrates of These Three New WAIS-IV Subtests Science.gov (United States) McCrea, Simon M.; Robinson, Thomas P. 2011-01-01 In this study, five consecutive patients with focal strokes and/or cortical excisions were examined with the Wechsler Adult Intelligence Scale and Wechsler Memory Scale—Fourth Editions along with a comprehensive battery of other neuropsychological tasks. All five of the lesions were large and typically involved frontal, temporal, and/or parietal lobes and were lateralized to one hemisphere. The clinical case method was used to determine the cognitive neuropsychological correlates of mental rotation (Visual Puzzles), Piagetian balance beam (Figure Weights), and visual search (Cancellation) tasks. 
The pattern of results on Visual Puzzles and Figure Weights suggested that both subtests involve predominately right frontoparietal networks involved in visual working memory. It appeared that Visual Puzzles could also critically rely on the integrity of the left temporoparietal junction. The left temporoparietal junction could be involved in temporal ordering and integration of local elements into a nonverbal gestalt. In contrast, the Figure Weights task appears to critically involve the right temporoparietal junction involved in numerical magnitude estimation. Cancellation was sensitive to left frontotemporal lesions and not right posterior parietal lesions typical of other visual search tasks. In addition, the Cancellation subtest was sensitive to verbal search strategies and perhaps object-based attention demands, thereby constituting a unique task in comparison with previous visual search tasks. PMID:22389807 11. Recovering the Genetic Identity of an Extinct-in-the-Wild Species: The Puzzling Case of the Alagoas Curassow. Science.gov (United States) Costa, Mariellen C; Oliveira, Paulo R R; Davanço, Paulo V; Camargo, Crisley de; Laganaro, Natasha M; Azeredo, Roberto A; Simpson, James; Silveira, Luis F; Francisco, Mercival R 2017-01-01 The conservation of many endangered taxa relies on hybrid identification, and when hybrids become morphologically indistinguishable from the parental species, the use of molecular markers can assign individual admixture levels. Here, we present the puzzling case of the extinct in the wild Alagoas Curassow (Pauxi mitu), whose captive population descends from only three individuals. Hybridization with the Razor-billed Curassow (P. tuberosa) began more than eight generations ago, and admixture uncertainty affects the whole population. We applied an analysis framework that combined morphological diagnostic traits, Bayesian clustering analyses using 14 microsatellite loci, and mtDNA haplotypes to assess the ancestry of all individuals that were alive from 2008 to 2012. Simulated data revealed that our microsatellites could accurately assign an individual a hybrid origin until the second backcross generation, which permitted us to identify a pure group among the older, but still reproductive animals. No wild species has ever survived such a severe bottleneck, followed by hybridization, and studying the recovery capability of the selected pure Alagoas Curassow group might provide valuable insights into biological conservation theory. 12. Recovering the Genetic Identity of an Extinct-in-the-Wild Species: The Puzzling Case of the Alagoas Curassow. Directory of Open Access Journals (Sweden) Mariellen C Costa Full Text Available The conservation of many endangered taxa relies on hybrid identification, and when hybrids become morphologically indistinguishable from the parental species, the use of molecular markers can assign individual admixture levels. Here, we present the puzzling case of the extinct in the wild Alagoas Curassow (Pauxi mitu, whose captive population descends from only three individuals. Hybridization with the Razor-billed Curassow (P. tuberosa began more than eight generations ago, and admixture uncertainty affects the whole population. We applied an analysis framework that combined morphological diagnostic traits, Bayesian clustering analyses using 14 microsatellite loci, and mtDNA haplotypes to assess the ancestry of all individuals that were alive from 2008 to 2012. 
Simulated data revealed that our microsatellites could accurately assign an individual a hybrid origin until the second backcross generation, which permitted us to identify a pure group among the older, but still reproductive animals. No wild species has ever survived such a severe bottleneck, followed by hybridization, and studying the recovery capability of the selected pure Alagoas Curassow group might provide valuable insights into biological conservation theory. 13. Event-by-Event Hydrodynamics+Jet Energy Loss: A Solution to the R_{AA}⊗v_{2} Puzzle. Science.gov (United States) Noronha-Hostler, Jacquelyn; Betz, Barbara; Noronha, Jorge; Gyulassy, Miklos 2016-06-24 High p_{T}>10  GeV elliptic flow, which is experimentally measured via the correlation between soft and hard hadrons, receives competing contributions from event-by-event fluctuations of the low-p_{T} elliptic flow and event-plane angle fluctuations in the soft sector. In this Letter, a proper account of these event-by-event fluctuations in the soft sector, modeled via viscous hydrodynamics, is combined with a jet-energy-loss model to reveal that the positive contribution from low-p_{T} v_{2} fluctuations overwhelms the negative contributions from event-plane fluctuations. This leads to an enhancement of high-p_{T}>10  GeV elliptic flow in comparison to previous calculations and provides a natural solution to the decade-long high-p_{T} R_{AA}⊗v_{2} puzzle. We also present the first theoretical calculation of high-p_{T} v_{3}, which is shown to be compatible with current LHC data. Furthermore, we discuss how short-wavelength jet-medium physics can be deconvoluted from the physics of soft, bulk event-by-event flow observables using event-shape engineering techniques. 14. The puzzle of the 1996 Bárdarbunga, Iceland, earthquake: no volumetric component in the source mechanism Science.gov (United States) Tkalcic, Hrvoje; Dreger, Douglas S.; Foulger, Gillian R.; Julian, Bruce R. 2009-01-01 A volcanic earthquake with Mw 5.6 occurred beneath the Bárdarbunga caldera in Iceland on 29 September 1996. This earthquake is one of a decade-long sequence of  events at Bárdarbunga with non-double-couple mechanisms in the Global Centroid Moment Tensor catalog. Fortunately, it was recorded well by the regional-scale Iceland Hotspot Project seismic experiment. We investigated the event with a complete moment tensor inversion method using regional long-period seismic waveforms and a composite structural model. The moment tensor inversion using data from stations of the Iceland Hotspot Project yields a non-double-couple solution with a 67% vertically oriented compensated linear vector dipole component, a 32% double-couple component, and a statistically insignificant (2%) volumetric (isotropic) contraction. This indicates the absence of a net volumetric component, which is puzzling in the case of a large volcanic earthquake that apparently is not explained by shear slip on a planar fault. A possible volcanic mechanism that can produce an earthquake without a volumetric component involves two offset sources with similar but opposite volume changes. We show that although such a model cannot be ruled out, the circumstances under which it could happen are rare. 15. The puzzle of Italian rice origin and evolution: determining genetic divergence and affinity of rice germplasm from Italy and Asia. 
Directory of Open Access Journals (Sweden) Xingxing Cai Full Text Available The characterization of genetic divergence and relationships of a set of germplasm is essential for its efficient applications in crop breeding and understanding of the origin/evolution of crop varieties from a given geographical region. As the largest rice producing country in Europe, Italy holds rice germplasm with abundant genetic diversity. Although Italian rice varieties and the traditional ones in particular have played important roles in rice production and breeding, knowledge concerning the origin and evolution of Italian traditional varieties is still limited. To solve the puzzle of Italian rice origin, we characterized genetic divergence and relationships of 348 rice varieties from Italy and Asia based on the polymorphisms of microsatellite fingerprints. We also included common wild rice O. rufipogon as a reference in the characterization. Results indicated relatively rich genetic diversity (H(e) = 0.63-0.65) in Italian rice varieties. Further analyses revealed a close genetic relationship of the Italian traditional varieties with those from northern China, which provides strong genetic evidence for tracing the possible origin of early established rice varieties in Italy. These findings have significant implications for the rice breeding programs, in which appropriate germplasm can be selected from a given region and utilized for transferring unique genetic traits based on its genetic diversity and evolutionary relationships. 16. A novel edge based embedding in medical images based on unique key generated using sudoku puzzle design. Science.gov (United States) Santhi, B; Dheeptha, B 2016-01-01 The field of telemedicine has gained immense momentum, owing to the need for transmitting patients' information securely. This paper puts forth a unique method for embedding data in medical images. It is based on edge based embedding and XOR coding. The algorithm proposes a novel key generation technique by utilizing the design of a sudoku puzzle to enhance the security of the transmitted message. The edge blocks of the cover image alone, are utilized to embed the payloads. The least significant bit of the pixel values are changed by XOR coding depending on the data to be embedded and the key generated. Hence the distortion in the stego image is minimized and the information is retrieved accurately. Data is embedded in the RGB planes of the cover image, thus increasing its embedding capacity. Several measures including peak signal noise ratio (PSNR), mean square error (MSE), universal image quality index (UIQI) and correlation coefficient (R) are the image quality measures that have been used to analyze the quality of the stego image. It is evident from the results that the proposed technique outperforms the former methodologies. 17. Solving the productivity and impact puzzle: Do men outperform women, or are metrics biased? Science.gov (United States) Elissa Z. Cameron; Angela M. White; Meeghan E. Gray 2016-01-01 The attrition of women from science with increasing career stage continues, suggesting that current strategies are unsuccessful. Research evaluation using unbiased metrics could be important for the retention of women, because other factors such as implicit bias are unlikely to quickly change. We compare the publishing patterns of men and women within the... 18. Puzzle-solving in psychology : The neo-Galtonian vs. 
nomothetic research focuses NARCIS (Netherlands) Vautier, Stephane; Lacot, Emilie; Veldhuis, Michiel We compare the neo-Galtonian and nomothetic approaches of psychological research. While the former focuses on summarized statistics that depict average subjects, the latter focuses on general facts of form 'if conditions then restricted outcomes'. The nomothetic approach does not require 19. Gender Wage Inequality and Economic Growth: Is There Really a Puzzle?—A Comment OpenAIRE Schober, Thomas; Winter-Ebmer, Rudolf 2011-01-01 Summary Seguino (2000) shows that gender wage discrimination in export-oriented semi-industrialized countries might be fostering investment and growth in general. While the original analysis does not have internationally comparable wage discrimination data, we replicate the analysis using data from a meta-study on gender wage discrimination and do not find any evidence that more discrimination might further economic growth—on the contrary: if anything the impact of gender inequality is negati... 20. Gender wage inequality and economic growth: is there really a puzzle? OpenAIRE Schober, Thomas; Winter-Ebmer, Rudolf 2009-01-01 Seguino (2000) shows that gender wage discrimination in export-oriented semi-industrialized countries might be fostering investment and growth in general. While the original analysis does not have internationally comparable wage discrimination data, we replicate the analysis using data from a meta-study on gender wage discrimination and do not find any evidence that more discrimination might further economic growth – on the contrary: if anything the impact of gender inequality is negative for... 1. Finite pt contribution to relativistic Coulomb excitation: A possible explanation for the clean fission puzzle International Nuclear Information System (INIS) Galetti, D.; Kodama, T.; Nemes, M.C. 1986-10-01 The quantum relativistic Coulomb excitation process including recoil effects is studied in the plane wave Born approximation. Quantum and relativistic recoil effects allow for relatively large transverse momentum transfers, usually neglected. This specific feature is shown to modify the angular distribution of Coulomb induced fission fragmentation in an essential manner. In contrast with usual treatments it is found that these results compare favourably with recent data. (Authors) [pt 2. Emotional intelligence in anorexia nervosa: is anxiety a missing piece of the puzzle? Science.gov (United States) Hambrook, David; Brown, Gary; Tchanturia, Kate 2012-11-30 Problematic emotional processing has been implicated in the genesis and maintenance of anorexia nervosa (AN). This study built on existing research and explored performance-based emotional intelligence (EI) in people with AN. The Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) was administered to 32 women diagnosed with AN and 32 female healthy controls (HC). Compared to HC women, the AN group demonstrated significantly lower total EI scores and poorer ability to understand how emotions can progress and change over time. Despite scores within the broadly average range compared to published EI norms, there was a general pattern of poorer performance in the AN sample. Self-reported anxiety symptoms were the strongest predictor of EI, over and above a diagnosis of AN. 
This study adds to the literature documenting the socioemotional phenotype of AN, suggesting this group of individuals may find it relatively difficult to carry out accurate reasoning about emotions, and to use emotions and emotional knowledge to enhance thought. Anxiety was highlighted as a putative variable partially explaining why people with AN demonstrated lower EI compared to controls. Implications for further research are discussed, including the need to explore the specificity of EI difficulties in AN using larger samples and additional control groups. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved. 3. Myeloid Sarcoma Predicts Superior Outcome in Pediatric AML; Can Cytogenetics Solve the Puzzle? Science.gov (United States) Pramanik, Raja; Tyagi, Anudishi; Chopra, Anita; Kumar, Akash; Vishnubhatla, Sreenivas; Bakhshi, Sameer 2018-06-01 The purpose of our study was to evaluate the clinical, cytogenetic, and molecular features, and survival outcomes in patients with acute myeloid leukemia (AML) with myeloid sarcoma (MS) and compare them with patients with AML without MS. This was a retrospective analysis of de novo pediatric AML patients with or without MS diagnosed at our cancer center between June 2003 and June 2016. MS was present in 121 of 570 (21.2%), the most frequent site being the orbit. Patients with MS had a younger median age (6 years vs. 10 years) and presented with higher hemoglobin and platelet but lower white blood cell count compared with patients without MS. Further, t (8; 21) (P < .01), loss of Y chromosome (P < .01), and deletion 9q (P = .03) were significantly higher in patients with AML with MS. Event-free survival (EFS; P = .003) and overall survival (OS; P = .001) were better among patients with AML with MS (median EFS 21.0 months and median OS 37.1 months) compared with those with AML without MS (median EFS 11.2 months and median OS 16.2 months). The t (8; 21) was significantly associated with MS (odds ratio, 3.92). In a comparison of the 4 groups divided according to the presence or absence of MS and t (8; 21), the subgroup of patients having MS without concomitant t (8; 21) was the only group to have a significantly better OS (hazard ratio, 0.53; 95% confidence interval, 0.34-0.82; P = .005). Although t (8; 21) was more frequently associated with MS, it did not appear to be the reason for better outcome. Copyright © 2018 Elsevier Inc. All rights reserved. 4. Gender Wage Inequality and Economic Growth: Is There Really a Puzzle?-A Comment. Science.gov (United States) Schober, Thomas; Winter-Ebmer, Rudolf 2011-08-01 Seguino (2000) shows that gender wage discrimination in export-oriented semi-industrialized countries might be fostering investment and growth in general. While the original analysis does not have internationally comparable wage discrimination data, we replicate the analysis using data from a meta-study on gender wage discrimination and do not find any evidence that more discrimination might further economic growth-on the contrary: if anything the impact of gender inequality is negative for growth. Standing up for more gender equality-also in terms of wages-is good for equity considerations and at least not negative for growth. 5. 
Gender Wage Inequality and Economic Growth: Is There Really a Puzzle?—A Comment Science.gov (United States) Schober, Thomas; Winter-Ebmer, Rudolf 2011-01-01 Summary Seguino (2000) shows that gender wage discrimination in export-oriented semi-industrialized countries might be fostering investment and growth in general. While the original analysis does not have internationally comparable wage discrimination data, we replicate the analysis using data from a meta-study on gender wage discrimination and do not find any evidence that more discrimination might further economic growth—on the contrary: if anything the impact of gender inequality is negative for growth. Standing up for more gender equality—also in terms of wages—is good for equity considerations and at least not negative for growth. PMID:21857765 6. A deficit in optimizing task solution but robust and well-retained speed and accuracy gains in complex skill acquisition in Parkinson's disease: multi-session training on the Tower of Hanoi Puzzle. Science.gov (United States) Vakil, Eli; Hassin-Baer, Sharon; Karni, Avi 2014-05-01 There are inconsistent results in the research literature relating to whether a procedural memory dysfunction exists as a core deficit in Parkinson's disease (PD). To address this issue, we examined the acquisition and long-term retention of a cognitive skill in patients with moderately severe PD. To this end, we used a computerized version of the Tower of Hanoi Puzzle. Sixteen patients with PD (11 males, age 60.9±10.26 years, education 13.8±3.5 years, disease duration 8.6±4.7 years, UPDRS III "On" score 16±5.3) were compared with 20 healthy individuals matched for age, gender, education and MMSE scores. The patients were assessed while taking their anti-Parkinsonian medication. All participants underwent three consecutive practice sessions, 24-48h apart, and a retention-test session six months later. A computerized version of the Tower of Hanoi Puzzle, with four disks, was used for training. Participants completed the task 18 times in each session. Number of moves (Nom) to solution, and time per move (Tpm), were used as measures of acquisition and retention of the learned skill. Robust learning, a significant reduction in Nom and a concurrent decrease in Tpm, were found across all three training sessions, in both groups. Moreover, both patients and controls showed significant savings for both measures at six months post-training. However, while their Tpm was no slower than that of controls, patients with PD required more Nom (in 3rd and 4th sessions) and tended to stabilize on less-than-optimal solutions. The results do not support the notion of a core deficit in gaining speed (fluency) or generating procedural memory in PD. However, PD patients settled on less-than-optimal solutions of the task, i.e., less efficient task solving process. The results are consistent with animal studies of the effects of dopamine depletion on task exploration. Thus, patients with PD may have a problem in exploring for optimal task solution rather than in skill acquisition and 7. A 72-year-old Danish puzzle resolved--comparative analysis of phenotypes in families with different-sized HOXD13 polyalanine expansions DEFF Research Database (Denmark) Kjær, Klaus Wilbrandt; Hansen, Lars; Eiberg, Hans 2005-01-01 A phenotype-genotype correlation was previously described for carriers of different-sized polyalanine expansions in HOXD13. 
We report on a detailed comparison of 55 members (approximately 220 limbs) from 4 Danish families with duplications of 21 or 27 bp, expanding the polyalanine repeat from ... 8. THE PUZZLING MUTUAL ORBIT OF THE BINARY TROJAN ASTEROID (624) HEKTOR Energy Technology Data Exchange (ETDEWEB) Marchis, F.; Cuk, M. [Carl Sagan Center at the SETI Institute, Mountain View, CA 94043 (United States); Durech, J. [Astronomical Institute, Faculty of Mathematics and Physics, Charles University, Prague (Czech Republic); Castillo-Rogez, J. [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Vachier, F.; Berthier, J. [IMCCE-Obs de Paris, F-75014 Paris (France); Wong, M. H.; Kalas, P.; Duchene, G. [Department of Astronomy, University of California at Berkeley, Berkeley, CA 94720 (United States); Van Dam, M. A. [Flat Wavefronts, Christchurch 8140 (New Zealand); Hamanowa, H. [Hamanowa Observatory, Motomiya, Fukushima 969-1204 (Japan); Viikinkoski, M., E-mail: [email protected] [Tampere University of Technology, FI-33101 Tampere (Finland) 2014-03-10 Asteroids with satellites are natural laboratories to constrain the formation and evolution of our solar system. The binary Trojan asteroid (624) Hektor is the only known Trojan asteroid to possess a small satellite. Based on W. M. Keck adaptive optics observations, we found a unique and stable orbital solution, which is uncommon in comparison to the orbits of other large multiple asteroid systems studied so far. From lightcurve observations recorded since 1957, we showed that because the large Req = 125 km primary may be made of two joint lobes, the moon could be ejecta of the low-velocity encounter, which formed the system. The inferred density of Hektor's system is comparable to the L5 Trojan doublet (617) Patroclus but due to their difference in physical properties and in reflectance spectra, both captured Trojan asteroids could have a different composition and origin. 9. THE PUZZLING MUTUAL ORBIT OF THE BINARY TROJAN ASTEROID (624) HEKTOR International Nuclear Information System (INIS) Marchis, F.; Cuk, M.; Durech, J.; Castillo-Rogez, J.; Vachier, F.; Berthier, J.; Wong, M. H.; Kalas, P.; Duchene, G.; Van Dam, M. A.; Hamanowa, H.; Viikinkoski, M. 2014-01-01 Asteroids with satellites are natural laboratories to constrain the formation and evolution of our solar system. The binary Trojan asteroid (624) Hektor is the only known Trojan asteroid to possess a small satellite. Based on W. M. Keck adaptive optics observations, we found a unique and stable orbital solution, which is uncommon in comparison to the orbits of other large multiple asteroid systems studied so far. From lightcurve observations recorded since 1957, we showed that because the large Req = 125 km primary may be made of two joint lobes, the moon could be ejecta of the low-velocity encounter, which formed the system. The inferred density of Hektor's system is comparable to the L5 Trojan doublet (617) Patroclus but due to their difference in physical properties and in reflectance spectra, both captured Trojan asteroids could have a different composition and origin 10. 
Maternal Obesity and Impaired Fetal and Infant Survival-One More Piece Added to the Puzzle DEFF Research Database (Denmark) Nohr, Ellen A 2016-01-01 The association between maternal obesity and increased risks of stillbirth and infant mortality is well documented, but it has often been questioned whether the association is driven by obesity per se or by unmeasured factors such as insulin resistance or genes. In this issue of the Journal, Lindam...... et al. compared the body mass indices (weight (kg)/height (m)(2)) of women who had stillbirths and infant deaths with those of their sisters or of population controls. Significant excess risks of both outcomes were observed in obese women (body mass index ≥30), and associations were strongest when...... sister controls were used. Although this careful analysis adds to the existing evidence of a causal relationship between maternal obesity and impaired fetal and infant survival, a biological pathway has not yet been established. Additionally, we are in urgent need of effective tools to reduce obesity... 11. PENGEMBANGAN PENCEGAHAN SERANGAN DISTRIBUTED DENIAL OF SERVICE (DDOS) PADA SUMBER DAYA JARINGAN DENGAN INTEGRASI NETWORK BEHAVIOR ANALYSIS DAN CLIENT PUZZLE Directory of Open Access Journals (Sweden) Septian Geges 2015-01-01 Full Text Available Denial of Service (DoS) is a network security problem that continues to develop dynamically. The greater the computing power of an attacking machine, the more dangerous the DoS attacks it can generate. Such an attack can leave a server unable to serve legitimate service requests, so DoS attacks are very damaging and call for effective prevention. A further and even more dangerous threat is Distributed Denial of Service (DDoS), in which a large number of computers are used to launch DoS attacks against a server, web service, or other network resource. Given the great risk posed by DDoS attacks, many researchers have been motivated to design mechanisms for securing network resources. In this study, the authors focus specifically on securing web services. They propose a mechanism that secures a web service by filtering and validating the requests received for access to network resources. This filtering and validation is carried out with a combination of Network Behavior Analysis (NBA) and Client Puzzle (CP). The NBA method forms the first layer of defense, detecting whether a DDoS attack is in progress by measuring network density. From the NBA stage, the IP addresses that need to be validated by the CP method, the second layer of defense, are obtained. Only once a service request has passed this filtering and validation process is it served. Experimental results show that this method can detect DDoS attacks while guaranteeing that legitimate service requests receive the service they are entitled to, so that the server can continue to serve requests properly. 12. 
APRENDER JUGANDO CON "TEJIDOS PRECOLOMBINOS" MEDIANTE ROMPECABEZAS VIRTUALES LEARN BY PLAYING WITH "PRE-COLUMBIAN TEXTILES" THROUGH VIRTUAL PUZZLES Directory of Open Access Journals (Sweden) Diego Aracena Pizarro 2008-09-01 This paper presents a multimedia environment puzzle about Pre-Columbian textiles exhibiting ornamental complexities, found in the Archeological Museum San Miguel de Azapa, Arica-Chile. The software allow for an enjoyable and didactic interactive way of learning by playing, giving the users the opportunity of identifying intricate pre-Columbian symbols and signals, that otherwise would be unnoticed. The software was tested with schoolboys of ages 12 to 16 years old, from schools in Arica, with the purpose of complementing their studies in History courses containing Pre-hispanic topics. This educational software was implemented with Multimedia Flash Tool, so as to stimulate the creativity of the students, opening a world of complementary games with the philosophy of "learning by playing". 13. Sacubitril/valsartan: An important piece in the therapeutic puzzle of heart failure. Science.gov (United States) Marques da Silva, Pedro; Aguiar, Carlos 2017-09-01 14. Disproportionate entrance length in superfluid flows and the puzzle of counterflow instabilities Science.gov (United States) Bertolaccini, J.; Lévêque, E.; Roche, P.-E. 2017-12-01 Systematic simulations of the two-fluid model of superfluid helium (He-II) encompassing the Hall-Vinen-Bekharevich-Khalatnikov (HVBK) mutual coupling have been performed in two-dimensional pipe counterflows between 1.3 and 1.96 K. The numerical scheme relies on the lattice Boltzmann method. A Boussinesq-like hypothesis is introduced to omit temperature variations along the pipe. In return, the thermomechanical forcings of the normal and superfluid components are fueled by a pressure term related to their mass-density variations under an approximation of weak compressibility. This modeling framework reproduces the essential features of a thermally driven counterflow. A generalized definition of the entrance length is introduced to suitably compare entry effects (of different nature) at opposite ends of the pipe. This definition is related to the excess of pressure loss with respect to the developed Poiseuille-flow solution. At the heated end of the pipe, it is found that the entrance length for the normal fluid follows a classical law and increases linearly with the Reynolds number. 
At the cooled end, the entrance length for the superfluid is enhanced as compared to the normal fluid by up to one order of magnitude. At this end, the normal fluid flows into the cooling bath of He-II and produces large-scale superfluid vortical motions in the bath that partly re-enter the pipe along its sidewalls before being damped by mutual friction. In the superfluid entry region, the resulting frictional coupling in the superfluid boundary layer distorts the velocity profiles toward tail flattening for the normal fluid and tail raising for the superfluid. Eventually, a simple analytical model of entry effects allows us to re-examine the long-debated thresholds of T1 and T2 instabilities in superfluid counterflows. Inconsistencies in the T1 thresholds reported since the 1960s disappear if an aspect-ratio criterion based on our modeling is used to discard data sets with the... 15. CIMP status of interval colon cancers: another piece to the puzzle. Science.gov (United States) Arain, Mustafa A; Sawhney, Mandeep; Sheikh, Shehla; Anway, Ruth; Thyagarajan, Bharat; Bond, John H; Shaukat, Aasma 2010-05-01 Colon cancers diagnosed in the interval after a complete colonoscopy may occur due to limitations of colonoscopy or due to the development of new tumors, possibly reflecting molecular and environmental differences in tumorigenesis resulting in rapid tumor growth. In a previous study from our group, interval cancers (colon cancers diagnosed within 5 years of a complete colonoscopy) were almost four times more likely to demonstrate microsatellite instability (MSI) than non-interval cancers. In this study we extended our molecular analysis to compare the CpG island methylator phenotype (CIMP) status of interval and non-interval colorectal cancers and investigate the relationship between the CIMP and MSI pathways in the pathogenesis of interval cancers. We searched our institution's cancer registry for interval cancers, defined as colon cancers that developed within 5 years of a complete colonoscopy. These were frequency matched in a 1:2 ratio by age and sex to patients with non-interval cancers (defined as colon cancers diagnosed on a patient's first recorded colonoscopy). Archived cancer specimens for all subjects were retrieved and tested for CIMP gene markers. The MSI status of subjects identified between 1989 and 2004 was known from our previous study. Tissue specimens of newly identified cases and controls (between 2005 and 2006) were tested for MSI. There were 1,323 cases of colon cancer diagnosed over the 17-year study period, of which 63 were identified as having interval cancer and matched to 131 subjects with non-interval cancer. Study subjects were almost all Caucasian men. CIMP was present in 57% of interval cancers compared to 33% of non-interval cancers (P=0.004). As shown previously, interval cancers were more likely than non-interval cancers to occur in the proximal colon (63% vs. 39%; P=0.002) and to have MSI (29% vs. 11%, P=0.004). In a multivariable logistic regression model, proximal location (odds ratio (OR) 1.85; 95% confidence interval (CI) 1... 16. Observations of low mass stars in clusters: some constraints and puzzles for stellar evolution theory International Nuclear Information System (INIS) Cannon, R.D. 1984-01-01 The author attempts to: (i) discuss some of the data which are available for testing the theory of evolution of low mass stars; and (ii) point out some problem areas where observations and theory do not seem to agree very well.
He concentrates on one particular aspect, namely the study of star clusters and especially their colour-magnitude (CM) diagrams. Star clusters provide large samples of stars at the same distance and with the same age, and the CM diagram gives the easiest way of comparing theoretical predictions with observations, although crucial evidence is also provided by spectroscopic abundance analyses and studies of variable stars. Since this is primarily a review of observational data it is natural to divide it into two parts: (i) galactic globular clusters, and (ii) old and intermediate-age open clusters. Some additional evidence comes from Local Group galaxies, especially now that CM diagrams which reach the old main sequence are becoming available. For each class of cluster successive stages of evolution from the main sequence, up the hydrogen-burning red giant branch, and through the helium-burning giant phase are considered. (Auth.) 17. The conundrum of Greece and the Eurozone: Puzzles, paradoxes and contradictions Directory of Open Access Journals (Sweden) Kitromilides Yiannis 2016-01-01 Full Text Available This paper examines three questions regarding the controversial relationship between Greece and the eurozone during the current crisis. First, why was Greece “bailed-out” in 2010? Second, why the Greek economy collapsed despite the largest “bail-out” in global financial history? Third, was the electoral mandate of the Syriza government for ending austerity while remaining in the eurozone contradictory? There are conflicting answers to all three questions and the paper compares the answers of the so called “dominant narrative” to those provided by the “counter-narrative” of the eurozone crisis. The paper reaches the following conclusions. First, the primary motivation for the “bail-out” of Greece was the maintenance of European and global financial stability. Second, although programme implementation was less successful in Greece than in other “programme” countries the catastrophic collapse of the Greek economy had more to do with the programme itself than its implementation. Third, the meaning of democratic decision-making in the Euro-group needs re-appraisal and must go beyond seeing the Greek demand of a policy reversal in the eurozone as simply a clash of democratic mandates in a 19 member monetary union. Political unity will not only improve efficiency but also democracy and accountability in eurozone policymaking. 18. Stymied Mobility or Temporary Lull? The Puzzle of Lagging Hispanic College Degree Attainment. Science.gov (United States) Alon, Sigal; Domina, Thurston; Tienda, Marta 2010-06-01 We assess the intergenerational educational mobility of recent cohorts of high school graduates to consider whether Hispanics' lagging postsecondary attainment reflects a temporary lull due to immigration of low education parents or a more enduring pattern of unequal transmission of social status relative to whites. Using data from three national longitudinal studies, a recent longitudinal study of Texas high school seniors and a sample of students attending elite institutions, we track post-secondary enrollment and degree attainment patterns at institutions of differing selectivity. We find that group differences in parental education and nativity only partly explain the Hispanic-white gap in college enrollment, and not evenly over time. 
Both foreign- and native-born college-educated Hispanic parents are handicapped in their ability to transmit their educational advantages to their children compared with white parents. We conclude that both changing population composition and unequal ability to confer status advantages to offspring are responsible for the growing Hispanic-white degree attainment gap. 19. Solving the puzzle of an isolated high-Alpine drumlin: Hornkees, Austria Science.gov (United States) Lukas, Sven; Busfield, Marie 2017-04-01 Larger streamlined landforms, in particular drumlins, are frequently found in lowland environments where they attest to fast ice flow; they are comparatively rare in upland environments where smaller streamlined landforms (i.e. flutes) and erosional landforms (e.g. ice-moulded bedrock) are found much more prominent. We here report geomorphological and sedimentological field observations from a small drumlin formed during the last c. 200 years in the foreland of Hornkees, a small valley glacier in the Eastern Alps. This drumlin is located in the middle of the valley floor, upvalley of a bedrock obstacle, and consists of overridden and glaciotectonised outwash overlain by subglacial traction till of varying consistency. Using lithofacies analysis, clast fabric and clast shape data as well as structural measurements (e.g. of shear planes and fold axes) and in-situ soil penetrometer measurements we demonstrate that this drumlin is likely to represent one of the rare cases in upland environments where the primary mechanisms of fast flow and subglacial sediment deformation have been preserved and can thus be studied in detail. We present our dataset with the aim of generating discussion of these mechanisms and outline the significance of such rare cases as modern analogues not just for palaeo-studies, but also for our understanding of material properties from an engineering-geological standpoint. 20. Does ecosystem variability explain phytoplankton diversity? Solving an ecological puzzle with long-term data sets Science.gov (United States) Sarker, Subrata; Lemke, Peter; Wiltshire, Karen H. 2018-05-01 Explaining species diversity as a function of ecosystem variability is a long-term discussion in community-ecology research. Here, we aimed to establish a causal relationship between ecosystem variability and phytoplankton diversity in a shallow-sea ecosystem. We used long-term data on biotic and abiotic factors from Helgoland Roads, along with climate data to assess the effect of ecosystem variability on phytoplankton diversity. A point cumulative semi-variogram method was used to estimate the long-term ecosystem variability. A Markov chain model was used to estimate dynamical processes of species i.e. occurrence, absence and outcompete probability. We identified that the 1980s was a period of high ecosystem variability while the last two decades were comparatively less variable. Ecosystem variability was found as an important predictor of phytoplankton diversity at Helgoland Roads. High diversity was related to low ecosystem variability due to non-significant relationship between probability of a species occurrence and absence, significant negative relationship between probability of a species occurrence and probability of a species to be outcompeted by others, and high species occurrence at low ecosystem variability. Using an exceptional marine long-term data set, this study established a causal relationship between ecosystem variability and phytoplankton diversity. 1. 
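A minimal sketch of the Markov-chain bookkeeping described in the phytoplankton entry above (entry 20): per-species transition probabilities are estimated by counting transitions in a time series of states. The three state labels and the toy series are illustrative assumptions, not the Helgoland Roads data or the authors' actual model.

from collections import Counter

STATES = ("present", "absent", "outcompeted")  # hypothetical state labels

def transition_matrix(series):
    # Estimate P(next state | current state) from an observed state sequence
    # by counting consecutive pairs and normalizing each row.
    counts = Counter(zip(series, series[1:]))
    matrix = {}
    for s in STATES:
        row_total = sum(counts[(s, t)] for t in STATES)
        matrix[s] = {t: (counts[(s, t)] / row_total if row_total else 0.0)
                     for t in STATES}
    return matrix

# Toy series for one species across sampling dates (illustrative only).
series = ["present", "present", "outcompeted", "absent", "present", "present", "absent"]
print(transition_matrix(series)["present"])  # persistence vs. disappearance vs. being outcompeted

Comparing such matrices between periods of high and low ecosystem variability is one simple way to probe the kind of relationship the study reports.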
Hospital Compare Data.gov (United States) U.S. Department of Health & Human Services — Hospital Compare has information about the quality of care at over 4,000 Medicare-certified hospitals across the country. You can use Hospital Compare to find... 2. INVESTIGATION OF THE PUZZLING ABUNDANCE PATTERN IN THE STARS OF THE FORNAX DWARF SPHEROIDAL GALAXY Energy Technology Data Exchange (ETDEWEB) Li Hongjie; Cui Wenyuan; Zhang Bo, E-mail: [email protected] [Department of Physics, Hebei Normal University, No. 20 East of South 2nd Ring Road, Shijiazhuang 050024 (China) 2013-09-20 Many works have found unusual characteristics of elemental abundances in nearby dwarf galaxies. This implies that there is a key factor of galactic evolution that is different from that of the Milky Way (MW). The chemical abundances of the stars in the Fornax dwarf spheroidal galaxy (Fornax dSph) provide excellent information for setting constraints on the models of galactic chemical evolution. In this work, adopting the five-component approach, we fit the abundances of the Fornax dSph stars, including α elements, iron group elements, and neutron-capture elements. For most sample stars, the relative contributions from the various processes to the elemental abundances are not usually in the MW proportions. We find that the contributions from massive stars to the primary α elements and iron group elements increase monotonically with increasing [Fe/H]. This means that the effect of the galactic wind is not strong enough to halt star formation and the contributions from the massive stars to α elements did not halt for [Fe/H] ≲ -0.5. The average contribution ratios of various processes between the dSph stars and the MW stars monotonically decrease with increasing progenitor mass. This is important evidence of a bottom-heavy initial mass function (IMF) for the Fornax dSph, compared to the MW. Considering a bottom-heavy IMF for the dSph, the observed relations of [α/Fe] versus [Fe/H], [iron group/Fe] versus [Fe/H], and [neutron-capture/Fe] versus [Fe/H] for the dSph stars can be explained. 3. Arthropod Distribution in a Tropical Rainforest: Tackling a Four Dimensional Puzzle. Science.gov (United States) Basset, Yves; Cizek, Lukas; Cuénoud, Philippe; Didham, Raphael K; Novotny, Vojtech; Ødegaard, Frode; Roslin, Tomas; Tishechkin, Alexey K; Schmidl, Jürgen; Winchester, Neville N; Roubik, David W; Aberlenc, Henri-Pierre; Bail, Johannes; Barrios, Héctor; Bridle, Jonathan R; Castaño-Meneses, Gabriela; Corbara, Bruno; Curletti, Gianfranco; Duarte da Rocha, Wesley; De Bakker, Domir; Delabie, Jacques H C; Dejean, Alain; Fagan, Laura L; Floren, Andreas; Kitching, Roger L; Medianero, Enrique; Gama de Oliveira, Evandro; Orivel, Jérôme; Pollet, Marc; Rapp, Mathieu; Ribeiro, Sérvio P; Roisin, Yves; Schmidt, Jesper B; Sørensen, Line; Lewinsohn, Thomas M; Leponce, Maurice 2015-01-01 Quantifying the spatio-temporal distribution of arthropods in tropical rainforests represents a first step towards scrutinizing the global distribution of biodiversity on Earth. To date most studies have focused on narrow taxonomic groups or lack a design that allows partitioning of the components of diversity.
Here, we consider an exceptionally large dataset (113,952 individuals representing 5,858 species), obtained from the San Lorenzo forest in Panama, where the phylogenetic breadth of arthropod taxa was surveyed using 14 protocols targeting the soil, litter, understory, lower and upper canopy habitats, replicated across seasons in 2003 and 2004. This dataset is used to explore the relative influence of horizontal, vertical and seasonal drivers of arthropod distribution in this forest. We considered arthropod abundance, observed and estimated species richness, additive decomposition of species richness, multiplicative partitioning of species diversity, variation in species composition, species turnover and guild structure as components of diversity. At the scale of our study (2 km of distance, 40 m in height and 400 days), the effects related to the vertical and seasonal dimensions were most important. Most adult arthropods were collected from the soil/litter or the upper canopy and species richness was highest in the canopy. We compared the distribution of arthropods and trees within our study system. Effects related to the seasonal dimension were stronger for arthropods than for trees. We conclude that: (1) models of beta diversity developed for tropical trees are unlikely to be applicable to tropical arthropods; (2) it is imperative that estimates of global biodiversity derived from mass collecting of arthropods in tropical rainforests embrace the strong vertical and seasonal partitioning observed here; and (3) given the high species turnover observed between seasons, global climate change may have severe consequences for rainforest arthropods. 4. Culture, risk factors and mortality: can Switzerland add missing pieces to the European puzzle? Science.gov (United States) Faeh, D; Minder, C; Gutzwiller, F; Bopp, M 2009-08-01 The aim was to compare cause-specific mortality, self-rated health (SRH) and risk factors in the French and German part of Switzerland and to discuss to what extent variations between these regions reflect differences between France and Germany. Data were used from the general population of German and French Switzerland with 2.8 million individuals aged 45-74 years, contributing 176 782 deaths between 1990 and 2000. Adjusted mortality risks were calculated from the Swiss National Cohort, a longitudinal census-based record linkage study. Results were contrasted with cross-sectional analyses of SRH and risk factors (Swiss Health Survey 1992/3) and with cross-sectional national and international mortality rates for 1980, 1990 and 2000. Despite similar all-cause mortality, there were substantial differences in cause-specific mortality between Swiss regions. Deaths from circulatory disease were more common in German Switzerland, while causes related to alcohol consumption were more prevalent in French Switzerland. Many but not all of the mortality differences between the two regions could be explained by variations in risk factors. Similar patterns were found between Germany and France. Characteristic mortality and behavioural differentials between the German- and the French-speaking parts of Switzerland could also be found between Germany and France. However, some of the international variations in mortality were not in line with the Swiss regional comparison nor with differences in risk factors. These could relate to peculiarities in assignment of cause of death. 
With its cultural diversity, Switzerland offers the opportunity to examine cultural determinants of mortality without bias due to different statistical systems or national health policies. 5. Arthropod Distribution in a Tropical Rainforest: Tackling a Four Dimensional Puzzle Science.gov (United States) Basset, Yves; Cizek, Lukas; Cuénoud, Philippe; Didham, Raphael K.; Novotny, Vojtech; Ødegaard, Frode; Roslin, Tomas; Tishechkin, Alexey K.; Schmidl, Jürgen; Winchester, Neville N.; Roubik, David W.; Aberlenc, Henri-Pierre; Bail, Johannes; Barrios, Héctor; Bridle, Jonathan R.; Castaño-Meneses, Gabriela; Corbara, Bruno; Curletti, Gianfranco; Duarte da Rocha, Wesley; De Bakker, Domir; Delabie, Jacques H. C.; Dejean, Alain; Fagan, Laura L.; Floren, Andreas; Kitching, Roger L.; Medianero, Enrique; Gama de Oliveira, Evandro; Orivel, Jérôme; Pollet, Marc; Rapp, Mathieu; Ribeiro, Sérvio P.; Roisin, Yves; Schmidt, Jesper B.; Sørensen, Line; Lewinsohn, Thomas M.; Leponce, Maurice 2015-01-01 Quantifying the spatio-temporal distribution of arthropods in tropical rainforests represents a first step towards scrutinizing the global distribution of biodiversity on Earth. To date most studies have focused on narrow taxonomic groups or lack a design that allows partitioning of the components of diversity. Here, we consider an exceptionally large dataset (113,952 individuals representing 5,858 species), obtained from the San Lorenzo forest in Panama, where the phylogenetic breadth of arthropod taxa was surveyed using 14 protocols targeting the soil, litter, understory, lower and upper canopy habitats, replicated across seasons in 2003 and 2004. This dataset is used to explore the relative influence of horizontal, vertical and seasonal drivers of arthropod distribution in this forest. We considered arthropod abundance, observed and estimated species richness, additive decomposition of species richness, multiplicative partitioning of species diversity, variation in species composition, species turnover and guild structure as components of diversity. At the scale of our study (2km of distance, 40m in height and 400 days), the effects related to the vertical and seasonal dimensions were most important. Most adult arthropods were collected from the soil/litter or the upper canopy and species richness was highest in the canopy. We compared the distribution of arthropods and trees within our study system. Effects related to the seasonal dimension were stronger for arthropods than for trees. We conclude that: (1) models of beta diversity developed for tropical trees are unlikely to be applicable to tropical arthropods; (2) it is imperative that estimates of global biodiversity derived from mass collecting of arthropods in tropical rainforests embrace the strong vertical and seasonal partitioning observed here; and (3) given the high species turnover observed between seasons, global climate change may have severe consequences for rainforest arthropods. PMID:26633187 6. The puzzle of the CNO isotope ratios in asymptotic giant branch carbon stars Science.gov (United States) Abia, C.; Hedrosa, R. P.; Domínguez, I.; Straniero, O. 2017-03-01 Context. The abundance ratios of the main isotopes of carbon, nitrogen and oxygen are modified by the CNO-cycle in the stellar interiors. When the different dredge-up events mix the burning material with the envelope, valuable information on the nucleosynthesis and mixing processes can be extracted by measuring these isotope ratios. 
Aims: Previous determinations of the oxygen isotopic ratios in asymptotic giant branch (AGB) carbon stars were at odds with the existing theoretical predictions. We aim to redetermine the oxygen ratios in these stars using new spectral analysis tools and further develop discussions on the carbon and nitrogen isotopic ratios in order to elucidate this problem. Methods: Oxygen isotopic ratios were derived from spectra in the K-band in a sample of galactic AGB carbon stars of different spectral types and near solar metallicity. Synthetic spectra calculated in local thermodynamic equilibrium (LTE) with spherical carbon-rich atmosphere models and updated molecular line lists were used. The CNO isotope ratios, derived in a homogeneous way, were compared with theoretical predictions for low-mass (1.5-3 $M_\odot$) AGB stars computed with the FUNS code assuming extra mixing both during the RGB and AGB phases. Results: For most of the stars the $^{16}$O/$^{17}$O/$^{18}$O ratios derived are in good agreement with theoretical predictions, confirming that, for AGB stars, these ratios are established by the values reached after the first dredge-up (FDU) according to the initial stellar mass. This fact, as far as the oxygen isotopic ratios are concerned, leaves little space for the operation of any extra mixing mechanism during the AGB phase. Nevertheless, for a few stars with large $^{16}$O/$^{17}$O/$^{18}$O, the operation of such a mechanism might be required, although their observed $^{12}$C/$^{13}$C and $^{14}$N/$^{15}$N ratios would be difficult to reconcile within this scenario. Furthermore, J-type stars tend to have lower $^{16}$O/$^{17}$O ratios than the normal carbon stars, as already indicated in previous studies. 7. PENINGKATAN KEMAMPUAN MEMBACA PETA DUNIA MENGGUNAKAN MEDIA PUZZLE BAGI SISWA KELAS VI SDN PEDALANGAN 02 KOTA SEMARANG TAHUN PELAJARAN 2014/2015 [Improving the World-Map Reading Ability of Grade VI Students of SDN Pedalangan 02 Semarang in the 2014/2015 School Year Using Puzzle Media] Directory of Open Access Journals (Sweden) Turasmi Turasmi 2014-06-01 Full Text Available This classroom action research in grade VI of SDN Pedalangan 02 Semarang in the 2014/2015 school year used puzzle media with the aim of improving students' ability to read the world map. Data were collected through observation, questionnaires, documentation of student and teacher activities during the use of the game media, and students' work, with assessment/evaluation used to gauge student understanding. Before the intervention, only 11 of the 38 students (28.9%) had reached the minimum mastery criterion (KKM, score > 7), while the remaining 27 students (71.1%) had not. Student activity in learning was also very low: only 12 students paid good attention and 4 actively asked questions, while the others remained passive. In response, the research was carried out and showed improved results in reading blank world maps. Tests in cycle 1 showed that 22 students (57.9%) had reached mastery (> 7), while 16 students (42.1%) had not. Increased student activity was shown by 31 students (81.6%) actively asking questions and 34 students actively trying out or responding to the learning media. The first-cycle tests thus showed an improvement, but not yet a maximal one. Cycle 2 was carried out to improve on the learning process of the first cycle and showed maximal improvement: of the 38 students, 35 (92.1%) reached the KKM, while the remaining 3 students (7.9%) did not. All students (100%) tried out or responded to the media. The classroom action was therefore declared to have met the expected criteria in the second cycle.
It is suggested that, for educators, teaching and learning aids or media are very important for instilling concepts in students, and puzzle media are well suited for use with maps. 8. Patient-specific puzzle implant preformed with 3D-printed rapid prototype model for combined orbital floor and medial wall fracture. Science.gov (United States) Kim, Young Chul; Min, Kyung Hyun; Choi, Jong Woo; Koh, Kyung S; Oh, Tae Suk; Jeong, Woo Shik 2018-04-01 The management of combined orbital floor and medial wall fractures involving the inferomedial strut is challenging due to the absence of a stable cornerstone. In this article, we propose surgical strategies using a customized 3D puzzle implant preformed on a Rapid Prototype (RP) skull model. A retrospective review was done of 28 patients diagnosed with combined orbital floor and medial wall fracture. Using preoperative CT scans, original and mirror-imaged RP skull models for each patient were prepared and sterilized. In all patients, porous polyethylene-coated titanium mesh was premolded onto the RP skull model in two ways: a customized 3D jigsaw puzzle technique was used in 15 patients with a comminuted inferomedial strut, whereas an individual 3D implant technique was used for each fracture in 13 patients with an intact inferomedial strut. Outcomes including enophthalmos, visual acuity, and presence of diplopia were assessed, and orbital volume was measured using OsiriX software preoperatively and postoperatively. Satisfactory results were achieved in both groups in terms of clinical improvements. Of 10 patients with preoperative diplopia, 9 improved within 6 months; the one with a persistent symptom had sustained extraocular muscle rupture. The 18 patients who had moderate to severe enophthalmos preoperatively improved, and one remained with a mild degree. The orbital volume ratio, defined as the volumetric ratio between the affected and control orbit, decreased from 127.6% to 99.79% (p ...). The customized 3D jigsaw puzzle and individual reconstruction techniques provide accurate restoration of combined orbital floor and medial wall fractures. Copyright © 2017 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved. 9. The Infinity Puzzle - The story of the Higgs Boson: From QED to the LHC via Higgs and the Gang of Six CERN Multimedia CERN. Geneva 2013-01-01 Rutherford and Bohr discovered the nuclear atom 100 years ago. Roughly 50 years ago a theory of this basic structure of matter was inspired by the work of Peter Higgs and others. In July 2012 the discovery "beyond reasonable doubt" of Higgs's boson, and the experimental proof of the theory, was announced and speculations about Nobel prizes mushroomed. The Economist said of Frank Close's book, The Infinity Puzzle (OUP, 2012): "The Nobel Committee would be well advised to read Mr Close's book before making their decision." This pedagogic talk reviews the ideas and the history, and assesses how the credits should be shared. The conclusions may not be what you anticipate. 10. Structural analysis of group II chitinase (ChtII) catalysis completes the puzzle of chitin hydrolysis in insects. Science.gov (United States) Chen, Wei; Qu, Mingbo; Zhou, Yong; Yang, Qing 2018-02-23 Chitin is a linear homopolymer of N-acetyl-β-D-glucosamines and a major structural component of insect cuticles.
In insects, chitin hydrolysis is essential for the periodic shedding of the old cuticle (ecdysis) and proceeds via a pathway different from that in the well-studied bacterial chitinolytic system. Group II chitinase (ChtII) is a widespread chitinolytic enzyme in insects and contains the greatest number of catalytic domains and chitin-binding domains among chitinases. In Lepidopterans, ChtII and two other chitinases, ChtI and Chi-h, are essential for chitin hydrolysis. Although ChtI and Chi-h have been well studied, the role of ChtII remains elusive. Here, we investigated the structure and enzymology of OfChtII, a ChtII derived from the insect pest Ostrinia furnacalis. We present the crystal structures of two catalytically active domains of OfChtII, OfChtII-C1 and OfChtII-C2, both in unliganded form and complexed with chitooligosaccharide substrates. We found that OfChtII-C1 and OfChtII-C2 both possess long, deep substrate-binding clefts with endochitinase activities. OfChtII exhibited structural characteristics within the substrate-binding cleft similar to those in OfChi-h and OfChtI. However, OfChtII lacked structural elements favoring substrate binding beyond the active sites, including an extra wall structure present in OfChi-h. Nevertheless, the numerous domains in OfChtII may compensate for this difference; a truncation containing one catalytic domain and three chitin-binding modules (OfChtII-B4C1) displayed activity toward insoluble polymeric substrates that was higher than those of OfChi-h and OfChtI. Our observations provide the last piece of the puzzle of chitin hydrolysis in insects. © 2018 by The American Society for Biochemistry and Molecular Biology, Inc. 11. Hypernuclear weak decay puzzle International Nuclear Information System (INIS) Barbero, C.; Horvat, D.; Narancic, Z.; Krmpotic, F.; Kuo, T.T.S.; Tadic, D. 2002-01-01 A general shell model formalism for the nonmesonic weak decay of the hypernuclei has been developed. It involves a partial wave expansion of the emitted nucleon waves, preserves naturally the antisymmetrization between the escaping particles and the residual core, and contains as a particular case the weak Λ-core coupling formalism. The extreme particle-hole model and the quasiparticle Tamm-Dancoff approximation are explicitly worked out. It is shown that the nuclear structure manifests itself basically through the Pauli principle, and a very simple expression is derived for the neutron- and proton-induced decay rates $\Gamma_n$ and $\Gamma_p$, which does not involve the spectroscopic factors. We use the standard strangeness-changing weak $\Lambda N \to NN$ transition potential, which comprises the exchange of the complete pseudoscalar and vector meson octets ($\pi, \eta, K, \rho, \omega, K^*$), taking into account some important parity-violating transition operators that are systematically omitted in the literature. The interplay between different mesons in the decay of $^{12}_{\Lambda}$C is carefully analyzed. With the commonly used parametrization in the one-meson-exchange model (OMEM), the calculated rate $\Gamma_{NM} = \Gamma_n + \Gamma_p$ is of the order of the free $\Lambda$ decay rate $\Gamma_0$ ($\Gamma_{NM}^{\mathrm{th}} \cong \Gamma_0$) and is consistent with experiments. Yet the measurements of $\Gamma_{n/p} = \Gamma_n/\Gamma_p$ and of $\Gamma_p$ are not well accounted for by the theory ($\Gamma_{n/p}^{\mathrm{th}}$ falling below the measured values, with $\Gamma_p^{\mathrm{th}} \gtrsim 0.60\,\Gamma_0$). It is suggested that, unless additional degrees of freedom are incorporated, the OMEM parameters should be radically modified. 12.
The Format Puzzle DEFF Research Database (Denmark) Knudsen, Bo Nissen The first volume in the printed place-name series Danmarks Stednavne (Place-names of Denmark) was published in 1922 – 12 years after the establishment of Stednavneudvalget (the Place-Name Commission) in 1910. In 2013 volume 26 is due to be published, and still only about 2/3 of the area of Denmar... 13. The Puzzle of Coherence DEFF Research Database (Denmark) Andersen, Anne Bendix; Frederiksen, Kirsten; Beedholm, Kirsten 2016-01-01 Background During the past decade, politicians and healthcare providers have strived to create a coherent healthcare system across primary and secondary healthcare sectors in Denmark. Nevertheless, elderly patients with chronic diseases (EPCD) continue to report experiences of poor-quality care a... 14. Pebble Puzzle Solved Science.gov (United States) 2004-01-01 [figure removed for brevity, see original site] Figure 1 In the quest to determine if a pebble was jamming the rock abrasion tool on NASA's Mars Exploration Rover Opportunity, scientists and engineers examined this up-close, approximate true-color image of the tool. The picture was taken by the rover's panoramic camera, using filters centered at 601, 535, and 482 nanometers, at 12:47 local solar time on sol 200 (August 16, 2004). Colored spots have been drawn on this image corresponding to regions where panoramic camera reflectance spectra were acquired (see chart in Figure 1). Those regions are: the grinding wheel heads (yellow); the rock abrasion tool magnets (green); the supposed pebble (red); a sunlit portion of the aluminum rock abrasion tool housing (purple); and a shadowed portion of the rock abrasion tool housing (brown). These spectra demonstrated that the composition of the supposed pebble was clearly different from that of the sunlit and shadowed portions of the rock abrasion tool, while similar to that of the dust-coated rock abrasion tool magnets and grinding heads. This led the team to conclude that the object disabling the rock abrasion tool was indeed a martian pebble. 15. The puzzle of Chernobyl International Nuclear Information System (INIS) Fischetti, M.A. 1986-01-01 News of the event itself-the world's worst nuclear reactor accident-emerged in an agonizing trickle. Answers about the cause of the explosion at Chernobyl and what can be done to prevent similar catastrophes in the electric utility industry may be even slower in coming. But already top nuclear experts in the United States and Europe have put forth plausible hypotheses. The scenarios note that the accident occurred in an inherently hazardous type of reactor, little used in any country but the Soviet Union. Yet the incident has raised questions about safety measures in all reactors. As for the long-range health damage from radioactive iodine, cesium, and other products released by the Chernobyl meltdown, there is little precise knowledge. Some scientists have predicted an increase in the incidence of cancer and premature deaths in the Soviet Union and Eastern Europe in the years ahead, but they concede their estimates are only tentative. This article presents an analysis of the accident as known at the time of publication 16. The carbon market puzzle International Nuclear Information System (INIS) Perthuis, Ch. de 2008-01-01 The kyoto protocol forces the developed countries which ratify it to reduce their greenhouse effect gases emissions. The reductions cost is decreased by the clean development mechanisms: the carbon markets. 
That is why the protocol implementation will not have a major effect on the evolution of the greenhouse effect gases for 2012. The author presents the situation and discusses the economic tools of the Kyoto protocol, the european system of quotas, the clean development mechanisms and the impacts on a future and more ambitious climatic agreement. (A.L.B.) 17. The puzzle of homeopathy. Science.gov (United States) Reilly, D 2001-01-01 Homeopathy is a branch of Western medicine that has mostly been rejected by Western orthodoxy for the last 200 years because of conceptual and scientific clashes. Homeopathy uses microdoses of potential toxins to provoke defense and self-regulatory responses, rather than the more orthodox approach of blocking body reactions. This approach hints at its clinical scope: it can help, at times resolve, conditions that are intrinsically reversible rather than mechanical problems, deficiencies, or irreversible breakdowns in body functions where it is only palliative. In recent years, there has been a renaissance of interest. Public demand has soared, and with it professional interest. Approximately 20% of Scotland's general practitioners have completed basic training. This is partly occasioned by public interest in complementary medicine and a sympathy with the more mind-body approach of homeopathy, and partly by recent scientific evidence. Some homeopathic dilutions are so extreme they are dismissed by critics as only placebo. Yet trials and meta-analyses of controlled trials are pointing toward real effects, mechanism of action unknown. Clinical outcome studies suggest useful clinical impact and excellent safety. There seems to be a potential to enhance patient care by integrating the two systems. 18. Das DNA-Puzzle Science.gov (United States) Kirchner, Stefan Im Jahre 1953 wurde von James Watson und Francis Crick erstmalig der strukturelle Aufbau der sogenannten DNA (Desoxyribonukleinsäure) beschrieben, welche das Erbgut jedes Lebewesens enthält. Der wesentliche Teil des Erbguts wird dabei durch eine sehr lange Folge der vier Basen Adenin (A), Cytosin (C), Guanin (G) und Thymin (T) codiert. Seit einigen Jahren ist es möglich, die Folge der vier Basen zu einer gegebenen DNA zu bestimmen. Biologen bezeichnen diesen Vorgang als Sequenzierung. 19. Amino Acid Crossword Puzzle Science.gov (United States) Sims, Paul A. 2011-01-01 Learning the 20 standard amino acids is an essential component of an introductory course in biochemistry. Later in the course, the students study metabolism and learn about various catabolic and anabolic pathways involving amino acids. Learning new material or concepts often is easier if one can connect the new material to what one already knows;… 20. Spin puzzle in nucleon International Nuclear Information System (INIS) Ramachandran, R. 1994-09-01 The object of this brief review is to reconcile different points of view on how the spin of proton is made up from its constituents. On the basis of naive quark model with flavour symmetry such as isospin or SU(3) one finds a static description. On the contrary the local SU(3) colour symmetry gives a dynamical view. Both these views are contrasted and the role of U(1) axial anomaly and the ambiguity for the measurable spin content is discussed. (author). 16 refs, 1 fig 1. 
The Puzzle of Coherence DEFF Research Database (Denmark) Andersen, Anne Bendix; Frederiksen, Kirsten; Beedholm, Kirsten 2016-01-01 During the past decade, politicians and health care providers have strived to create a coherent health care system across primary and secondary health care systems in Denmark. Nevertheless, elderly patients with chronic diseases (EPCD) continue to report experiences of poor-quality care and lack... both nationally and internationally in preparation of health agreements, implementation of new collaboration forms among health care providers, and in improvement of delegation and transfer of information and assignments across sectors in health care... 2. The Birth Order Puzzle. Science.gov (United States) Zajonc, R. B.; And Others 1979-01-01 Discusses the controversy of the relationship between birth order and intellectual performance through a detailed evaluation of the confluence model, which assumes that the rate of intellectual growth is a function of the intellectual environment within the family and associated with the special circumstances of last children. (CM) 3. Physician Compare Data.gov (United States) U.S. Department of Health & Human Services — Physician Compare, which meets Affordable Care Act of 2010 requirements, helps you search for and select physicians and other healthcare professionals enrolled in... 4. Comparative Genomics - A Powerful New Tool in Biology. Anand K Bachhawat. General Article, Resonance – Journal of Science Education, Volume 11, Issue 8, August 2006, pp. 22-40. 5. DEFF Research Database (Denmark) Zhang, Jie; Jensen, Camilla 2007-01-01 ...that are typically explained from the supply-side variables, the comparative advantage of the exporting countries. A simple model is proposed and tested. The results render strong support for the relevance of supply-side factors such as natural endowments, technology, and infrastructure in explaining international... 6. Comparative perspectives African Journals Online (AJOL) IT Ideology, policy and implementation: Comparative perspectives from two ... how both political as well as particular language ideologies play a major role in influencing and ... attitudes as a field of research, many scholars still draw on the concept of ... The data for this study were collected through the use of questionnaires ... 7. Worry about racial discrimination: A missing piece of the puzzle of Black-White disparities in preterm birth? Directory of Open Access Journals (Sweden) Paula Braveman ...adjustment for chronic worry (PR 1.30, 95% CI 0.93-1.81); it appeared further attenuated after adding the covariates (PR 1.17, 95% CI 0.85-1.63). Chronic worry about racial discrimination may play an important role in Black-White disparities in PTB and may help explain the puzzling and repeatedly observed greater PTB disparities among more socioeconomically-advantaged women. Although the single measure of experiences of racial discrimination used in this study precluded examination of the role of other experiences of racial discrimination, such as overt incidents, it is likely that our findings reflect an association between one or more experiences of racial discrimination and PTB. Further research should examine a range of experiences of racial discrimination, including not only chronic worry but other psychological and emotional states and both subtle and overt incidents as well.
These dramatic results from a large statewide-representative study add to a growing-but not widely known-literature linking racism-related stress with physical health in general, and shed light on the links between racism-related stress and PTB specifically. Without being causally definitive, this study's findings should stimulate further research and heighten awareness of the potential role of unmeasured social variables, such as diverse experiences of racial discrimination, in racial disparities in health. 8. Worry about racial discrimination: A missing piece of the puzzle of Black-White disparities in preterm birth? Science.gov (United States) Braveman, Paula; Heck, Katherine; Egerter, Susan; Dominguez, Tyan Parker; Rinki, Christine; Marchi, Kristen S; Curtis, Michael 2017-01-01 chronic worry (PR 1.30, 95% CI 0.93-1.81); it appeared further attenuated after adding the covariates (PR 1.17, 95% CI 0.85-1.63). Chronic worry about racial discrimination may play an important role in Black-White disparities in PTB and may help explain the puzzling and repeatedly observed greater PTB disparities among more socioeconomically-advantaged women. Although the single measure of experiences of racial discrimination used in this study precluded examination of the role of other experiences of racial discrimination, such as overt incidents, it is likely that our findings reflect an association between one or more experiences of racial discrimination and PTB. Further research should examine a range of experiences of racial discrimination, including not only chronic worry but other psychological and emotional states and both subtle and overt incidents as well. These dramatic results from a large statewide-representative study add to a growing-but not widely known-literature linking racism-related stress with physical health in general, and shed light on the links between racism-related stress and PTB specifically. Without being causally definitive, this study's findings should stimulate further research and heighten awareness of the potential role of unmeasured social variables, such as diverse experiences of racial discrimination, in racial disparities in health. 9. Video Comparator International Nuclear Information System (INIS) Rose, R.P. 1978-01-01 The Video Comparator is a comparative gage that uses electronic images from two sources, a standard and an unknown. Two matched video cameras are used to obtain the electronic images. The video signals are mixed and displayed on a single video receiver (CRT). The video system is manufactured by ITP of Chatsworth, CA and is a Tele-Microscope II, Model 148. One of the cameras is mounted on a toolmaker's microscope stand and produces a 250X image of a cast. The other camera is mounted on a stand and produces an image of a 250X template. The two video images are mixed in a control box provided by ITP and displayed on a CRT. The template or the cast can be moved to align the desired features. Vertical reference lines are provided on the CRT, and a feature on the cast can be aligned with a line on the CRT screen. The stage containing the casts can be moved using a Boeckleler micrometer equipped with a digital readout, and a second feature aligned with the reference line and the distance moved obtained from the digital display 10. 
Putting the Whole Grain Puzzle Together: Health Benefits Associated with Whole Grains—Summary of American Society for Nutrition 2010 Satellite Symposium Science.gov (United States) Jonnalagadda, Satya S.; Harnack, Lisa; Hai Liu, Rui; McKeown, Nicola; Seal, Chris; Liu, Simin; Fahey, George C. 2011-01-01 The symposium "Putting the Whole Grain Puzzle Together: Health Benefits Associated with Whole Grains" sponsored by the ASN brought together researchers to review the evidence regarding the health benefits associated with whole grains. Current scientific evidence indicates that whole grains play an important role in lowering the risk of chronic diseases, such as coronary heart disease, diabetes, and cancer, and also contribute to body weight management and gastrointestinal health. The essential macro- and micronutrients, along with the phytonutrients present in whole grains, synergistically contribute to their beneficial effects. Current evidence lends credence to the recommendations to incorporate whole grain foods into a healthy diet and lifestyle program. The symposium also highlighted the need for further research to examine the role of whole grain foods in disease prevention and management to gain a better understanding of their mechanisms of action. PMID:21451131 11. Proceedings of High Energy Physics Workshop ''Scalar Mesons: An Interesting Puzzle for QCD'', held at SUNY Institute of Technology, May 16-18, 2003, published by the American Institute of Physics, AIP Conference Proceedings 688, Editor: Amir H. Fariborz International Nuclear Information System (INIS) Fariborz, Amir H. 2003-01-01 The proceedings of the workshop ''Scalar Mesons: An Interesting Puzzle for QCD'' contain papers that were presented at the workshop by a number of experts from around the world. They include three main categories of work: theoretical, computational, and experimental. The topics presented in these proceedings are of interest to senior and junior investigators in high energy physics, nuclear physics, and computational physics, and provide the most recent ideas, techniques, and directions for future research in these fields. 12. $\bar B_{d,s} \to D^{*}_{d,s}V$ and $\bar B^{*}_{d,s} \to D_{d,s}V$ decays in QCD factorization and possible puzzles International Nuclear Information System (INIS) Chang, Qin; Chen, Ling-Xin; Zhang, Yun-Yun; Sun, Jun-Feng; Yang, Yue-Ling 2016-01-01 Motivated by the rapid development of heavy-flavor experiments, phenomenological studies of nonleptonic $\bar B_{d,s} \to D^{*}_{d,s}V$ and $\bar B^{*}_{d,s} \to D_{d,s}V$ ($V = \rho, K^*$) decays are performed within the framework of QCD factorization. Relative to the previous work, the QCD corrections to the transverse amplitudes are evaluated at next-to-leading order. The theoretical predictions of the observables are updated. For the measured $\bar B_{d,s} \to D^{*}_{d,s}V$ decays, the tensions between theoretical results and experimental measurements, i.e. the ''$R^{V}_{ds}$ puzzle'' and the ''$D^{*}V$ (or $R_{V/\ell\bar\nu_\ell}$) puzzle'', are presented after detailed analyses. The $\bar B^{*}_{d,s} \to D_{d,s}V$ decays have relatively large branching fractions of order $\gtrsim \mathcal{O}(10^{-9})$ and are within the scope of the Belle-II and LHCb experiments. Moreover, they also provide a way to cross-check the possible puzzles mentioned above through the similar ratios $R'^{V}_{ds}$ and $R'_{V/\ell\bar\nu_\ell}$. More refined experimental measurements and theoretical efforts are required to confirm or refute these two anomalies. (orig.)
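For orientation on the ratios named in the preceding entry: they compare a $\bar B^{(*)} \to D^{(*)}_{d,s}V$ branching fraction to a reference channel so that common uncertainties largely cancel. The schematic form below is an assumption made for illustration, not a quotation of the paper's exact definitions,

$$ R_{V/\ell\bar\nu_\ell} \;\sim\; \frac{\mathcal{B}(\bar B \to D^{*} V)}{\mathcal{B}(\bar B \to D^{*} \ell\bar\nu_\ell)}, $$

the point being that form-factor and normalization uncertainties drop out to a large extent between numerator and denominator, which is what makes such ratios sensitive probes of the quoted tensions.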
13. Neutron-deuteron analyzing power data at $E_n = 21$ MeV and the energy dependence of the three-nucleon analyzing power puzzle Science.gov (United States) Weisel, G. J.; Tornow, W.; Esterline, J. H. 2015-08-01 We present measurements of n-d analyzing power, $A_y(\theta)$, at $E_n = 21.0$ MeV. The experiment produces neutrons via the $^2$H(d,n)$^3$He reaction and uses a deuterated liquid-scintillator center detector and six pairs of liquid-scintillator neutron side detectors. Elastic neutron scattering events are identified by using time-of-flight techniques and by setting a gate in the center-detector pulse-height spectrum. Beam polarization is monitored by using a high-pressure helium gas scintillator. The n-d $A_y(\theta)$ data at 21.0 MeV show a significant discrepancy with the results of rigorous three-body calculations and are consistent with data taken previously by us at 19.0 and 22.5 MeV. We review the overall energy dependence of the three-nucleon analyzing power puzzle in neutron-deuteron elastic scattering, using the best data available. We find that the relative difference between calculations and data is nearly constant at 25% up to $E_n = 22.5$ MeV. 14. The Jigsaw Puzzle of mRNA Translation Initiation in Eukaryotes: A Decade of Structures Unraveling the Mechanics of the Process. Science.gov (United States) Hashem, Yaser; Frank, Joachim 2018-03-01 Translation initiation in eukaryotes is a highly regulated and rate-limiting process. It results in the assembly and disassembly of numerous transient and intermediate complexes involving over a dozen eukaryotic initiation factors (eIFs). This process culminates in the accommodation of a start codon marking the beginning of an open reading frame at the appropriate ribosomal site. Although this process has been extensively studied by hundreds of groups for nearly half a century, it has been only recently, especially during the last decade, that we have gained deeper insight into the mechanics of the eukaryotic translation initiation process. This advance in knowledge is due in part to the contributions of structural biology, which have shed light on the molecular mechanics underlying the different functions of various eukaryotic initiation factors. In this review, we focus exclusively on the contribution of structural biology to the understanding of the eukaryotic initiation process, a long-standing jigsaw puzzle that is just starting to yield the bigger picture. Expected final online publication date for the Annual Review of Biophysics Volume 47 is May 20, 2018. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates. 15. Bridging Type 2 Diabetes and Alzheimer's Disease: Assembling the Puzzle Pieces in the Quest for the Molecules With Therapeutic and Preventive Potential. Science.gov (United States) de Matos, Ana Marta; de Macedo, Maria Paula; Rauter, Amélia Pilar 2018-01-01 Type 2 diabetes (T2D) and Alzheimer's disease (AD) are two age-related amyloid diseases that affect millions of people worldwide. Broadly supported by epidemiological data, the higher incidence of AD among type 2 diabetic patients led to the recognition of T2D as a tangible risk factor for the development of AD. Indeed, there is now growing evidence on brain structural and functional abnormalities arising from brain insulin resistance and deficiency, ultimately highlighting the need for new approaches capable of preventing the development of AD in type 2 diabetic patients.
This review provides an update on overlapping pathophysiological mechanisms and pathways in T2D and AD, such as amyloidogenic events, oxidative stress, endothelial dysfunction, aberrant enzymatic activity, and even shared genetic background. These events will be presented as puzzle pieces put together, thus establishing potential therapeutic targets for drug discovery and development against T2D and diabetes-induced cognitive decline, a heavyweight contributor to the increasing incidence of dementia in developed countries. Hoping to pave the way in this direction, we will present some of the most promising and well-studied drug leads with potential against both pathologies, including their respective bioactivity reports, mechanisms of action, and structure-activity relationships. © 2017 Wiley Periodicals, Inc. 16. Improved proton-deuteron phase-shift analysis above the deuteron breakup threshold and the three-nucleon analyzing-power puzzle International Nuclear Information System (INIS) Tornow, W.; Kievsky, A.; Witala, H. 2002-01-01 Using the existing high-accuracy data for proton-deuteron and deuteron-proton elastic scattering, a phase-shift analysis has been performed in the laboratory proton energy range from $E_p = 4$ to 10 MeV. The AV18-based proton-deuteron phase shifts were used as starting values in the phase-shift search procedure. The low partial-wave phase shifts, especially the $^4P_J$ phase shifts, have been determined very precisely, thus providing valuable guidance for theoretical approaches to tackle the quest for a successful description of three-nucleon bound-state and continuum observables in a more efficient and consistent way. Furthermore, it was found that the $^4P_{1/2}$ phase shift and the mixing parameter $\varepsilon_{3/2^-}$ determined in the present analysis cannot be generated by $^3P_j$ nucleon-nucleon interactions which are consistent with two-nucleon analyzing power data. Therefore, three-nucleon forces must play an essential role in resolving the long-standing three-nucleon analyzing-power puzzle. Refs. 44 (author) 17. The Puzzle of Adolescent Risk Taking: An Experimental-Longitudinal Investigation of Individual, Social and Cultural Influences OpenAIRE Defoe, I.N. 2016-01-01 Adolescents are known as stereotypical risk-takers, as they engage in disproportionate levels of risk-taking (e.g., binge drinking and delinquency). However, meta-analytic findings based on experimental studies using behavioral risky decision-making tasks revealed that adolescents do not always engage in heightened risk-taking compared to children and adults. Namely, although adolescents took more risks than adults on such tasks, overall adolescents took equal levels of risks as children. Mor... 18. Paleomagnetic contributions to the Klamath Mountains terrane puzzle-a new piece from the Ironside Mountain batholith, northern California Science.gov (United States) Mankinen, Edward A.; Gromme, C. Sherman; Irwin, W. Porter 2013-01-01 We obtained paleomagnetic samples from six sites within the Middle Jurassic Ironside Mountain batholith (~170 Ma), which constitutes the structurally lowest part of the Western Hayfork terrane, in the Klamath Mountains province of northern California and southern Oregon. Structural attitudes measured in the coeval Hayfork Bally Meta-andesite were used to correct paleomagnetic data from the batholith. Comparing the corrected paleomagnetic pole with a 170-Ma reference pole for North America indicates 73.5° ± 10.6° of clockwise rotation relative to the craton.
Nearly one-half of this rotation may have occurred before the terrane accreted to the composite Klamath province at ~168 Ma. No latitudinal displacement of the batholith was detected. 19. A pinning puzzle: two similar, non-superconducting chemical deposits in YBCO-one pins, the other does not Energy Technology Data Exchange (ETDEWEB) Sawh, Ravi-Persad; Weinstein, Roy; Gandini, Alberto; Skorpenske, Harley; Parks, Drew, E-mail: [email protected] [Beam Particle Dynamics Laboratories, University of Houston, Houston, TX 77204-5005 (United States); Department of Physics, University of Houston, Houston, TX 77204-5005 (United States); Texas Center for Superconductivity at UH, University of Houston, Houston, TX 77204-5002 (United States) 2009-09-15 The pinning effects of two kinds of U-rich deposits in YBCO (YBa$_2$Cu$_3$O$_{7-\delta}$) are compared. One is a five-element compound, (U$_{0.6}$Pt$_{0.4}$)YBa$_2$O$_6$, a paramagnetic double perovskite which forms as profuse stable nanosize deposits and pins very well. The other is a four-element compound, (U$_{0.4}$Y$_{0.6}$)BaO$_3$, a ferromagnetic single perovskite which forms as profuse stable nanosize deposits and pins very weakly or not at all. The pinning comparison is done with nearly equal deposit sizes and number of deposits per unit volume for the two compounds. Evidence for the pinning capability, chemical makeup, x-ray diffraction signature, and magnetic properties of the two compounds is reported. 20. The puzzling afterglow of GRB 050721: a rebrightening seen in the optical but not in the X-ray International Nuclear Information System (INIS) Antonelli, L. A.; Romano, P.; Testa, V.; D'Elia, V.; Guetta, D.; Torii, K.; Malesani, D. 2007-01-01 We present here the analysis of the early and late multiwavelength afterglow emission, as observed by Swift, a small robotic telescope, and the VLT. We compare early observations with late afterglow observations obtained with Swift and the VLT, and we observe an intense rebrightening in the optical band at about one day after the burst which is not present in the X-ray band. The lack of detection in the X-ray band of such a strong rebrightening at lower energies may be described with a variable external density profile. In such a scenario, the combined X-ray and optical observations allow us to derive that the matter density located at $\sim 10^{17}$ cm from the burst is about a factor of 10 higher than in the inner region. This is the first time in which a rebrightening has been observed in the optical afterglow of a GRB that is clearly absent in the X-ray afterglow. 1. Simultaneous explanation of the $R_K$ and $R_{D^{(*)}}$ puzzles: a model analysis Energy Technology Data Exchange (ETDEWEB) Bhattacharya, Bhubanjyoti [Physique des Particules, Université de Montréal, C.P. 6128, succ. centre-ville, Montréal, QC, H3C 3J7 (Canada); Department of Physics and Astronomy, Wayne State University, Detroit, MI 48201 (United States); Datta, Alakabha [Department of Physics and Astronomy, University of Mississippi, 108 Lewis Hall, Oxford, MS 38677-1848 (United States); Guévin, Jean-Pascal; London, David [Physique des Particules, Université de Montréal, C.P. 6128, succ. centre-ville, Montréal, QC, H3C 3J7 (Canada); Watanabe, Ryoutaro [Physique des Particules, Université de Montréal, C.P. 6128, succ.
centre-ville, Montréal, QC, H3C 3J7 (Canada); Center for Theoretical Physics of the Universe, Institute for Basic Science (IBS),Daejeon 305-811 (Korea, Republic of) 2017-01-04 R{sub K} and R{sub D{sup (}{sup ∗}{sup )}} are two B-decay measurements that presently exhibit discrepancies with the SM. Recently, using an effective field theory approach, it was demonstrated that a new-physics model can simultaneously explain both the R{sub K} and R{sub D{sup (}{sup ∗}{sup )}} puzzles. There are two UV completions that can give rise to the effective Lagrangian: (i) VB: a vector boson that transforms as an SU(2){sub L} triplet, as in the SM, (ii) U{sub 1}: an SU(2){sub L}-singlet vector leptoquark. In this paper, we examine these models individually. A key point is that VB contributes to B{sub s}{sup 0}-B̄{sub s}{sup 0} mixing and τ→3μ, while U{sub 1} does not. We show that, when constraints from these processes are taken into account, the VB model is just barely viable. It predicts B(τ{sup −}→μ{sup −}μ{sup +}μ{sup −})≃2.1×10{sup −8}. This is measurable at Belle II and LHCb, and therefore constitutes a smoking-gun signal of VB. For U{sub 1}, there are several observables that may point to this model. Perhaps the most interesting is the lepton-flavor-violating decay υ(3S)→μτ, which has previously been overlooked in the literature. U{sub 1} predicts B(υ(3S)→μτ)|{sub max}=8.0×10{sup −7}. Thus, if a large value of B(υ(3S)→μτ) is observed — and this should be measurable at Belle II — the U{sub 1} model would be indicated. 2. Protein folding: Over half a century lasting quest. Comment on "There and back again: Two views on the protein folding puzzle" by Alexei V. Finkelstein et al. Science.gov (United States) Krokhotin, Andrey; Dokholyan, Nikolay V. 2017-07-01 Most proteins fold into unique three-dimensional (3D) structures that determine their biological functions, such as catalytic activity or macromolecular binding. Misfolded proteins can pose a threat through aberrant interactions with other proteins leading to a number of diseases including Alzheimer's disease, Parkinson's disease, and amyotrophic lateral sclerosis [1,2]. What does determine 3D structure of proteins? The first clue to this question came more than fifty years ago when Anfinsen demonstrated that unfolded proteins can spontaneously fold to their native 3D structures [3,4]. Anfinsen's experiments lead to the conclusion that proteins fold to unique native structure corresponding to the stable and kinetically accessible free energy minimum, and protein native structure is solely determined by its amino acid sequence. The question of how exactly proteins find their free energy minimum proved to be a difficult problem. One of the puzzles, initially pointed out by Levinthal, was an inconsistency between observed protein folding times and theoretical estimates. A self-avoiding polymer model of a globular protein of 100-residues length on a cubic lattice can sample at least 1047 states. Based on the assumption that conformational sampling occurs at the highest vibrational mode of proteins (∼picoseconds), predicted folding time by searching among all the possible conformations leads to ∼1027 years (much larger than the age of the universe) [5]. In contrast, observed protein folding time range from microseconds to minutes. Due to tremendous theoretical progress in protein folding field that has been achieved in past decades, the source of this inconsistency is currently understood that is thoroughly described in the review by Finkelstein et al. 
[6]. 3. Puzzling Findings in Studying the Outcome of “Real World” Adolescent Mental Health Services: The TRAILS Study Science.gov (United States) Jörg, Frederike; Ormel, Johan; Reijneveld, Sijmen A.; Jansen, Daniëlle E. M. C.; Verhulst, Frank C.; Oldehinkel, Albertine J. 2012-01-01 Background The increased use and costs of specialist child and adolescent mental health services (MHS) urge us to assess the effectiveness of these services. The aim of this paper is to compare the course of emotional and behavioural problems in adolescents with and without MHS use in a naturalistic setting. Method and Findings Participants are 2230 (pre)adolescents that enrolled in a prospective cohort study, the TRacking Adolescents' Individual Lives Survey (TRAILS). Response rate was 76%, mean age at baseline 11.09 (SD 0.56), 50.8% girls. We used data from the first three assessment waves, covering a six year period. Multiple linear regression analysis, propensity score matching, and data validation were used to compare the course of emotional and behavioural problems of adolescents with and without MHS use. The association between MHS and follow-up problem score (β 0.20, SE 0.03, p-value<0.001) was not confounded by baseline severity, markers of adolescent vulnerability or resilience nor stressful life events. The propensity score matching strategy revealed that follow-up problem scores of non-MHS-users decreased while the problem scores of MHS users remained high. When taking into account future MHS (non)use, it appeared that problem scores decreased with limited MHS use, albeit not as much as without any MHS use, and that problem scores with continuous MHS use remained high. Data validation showed that using a different outcome measure, multiple assessment waves and multiple imputation of missing values did not alter the results. A limitation of the study is that, although we know what type of MHS participants used, and during which period, we lack information on the duration of the treatment. Conclusions The benefits of MHS are questionable. Replication studies should reveal whether a critical examination of everyday care is necessary or an artefact is responsible for these results. PMID:23028584 4. Combined Adiponectin Deficiency and Resistance in Obese Patients: Can It Solve Part of the Puzzle in Nonalcoholic Steatohepatitis. Science.gov (United States) Salman, Ahmed; Hegazy, Mona; AbdElfadl, Soheir 2015-06-15 5. Interviewing patients and practitioners working together in teams. A multi-layered puzzle: putting the pieces together. Science.gov (United States) 2010-08-01 This paper presents and evaluates a methodological approach aiming at analysing some of the complex interaction between patients and different health care practitioners working together in teams. Qualitative health care research describes the values, perceptions and conceptions of patients and practitioners. In modern clinical work patients and professional practitioners often work together on complex cases involving different kinds of knowledge and values, each of them representing different perspectives. We need studies designed to capture this complexity. The methodological approach presented here is exemplified with a study in rehabilitation medicine. In this part of the health care system the clinical work is organized in multi-professional clinical teams including patients, handling complex rehabilitation processes. 
In the presented approach data are collected in individual in-depth interviews to have thorough descriptions of each individual perspective. The interaction in the teams is analysed by comparing different descriptions of the same situations from the involved individuals. We may then discuss how these perceptions relate to each other and how the individuals in the team interact. Two examples from an empirical study are presented and discussed, illustrating how communication, differences in evaluations and the interpretation of incidents, arguments, emotions and interpersonal relations may be discussed. It is argued that this approach may give information which can supplement the methods commonly applied in qualitative health care research today. 6. A new piece of the Shigella Pathogenicity puzzle: spermidine accumulation by silencing of the speG gene [corrected]. Directory of Open Access Journals (Sweden) Marialuisa Barbagallo Full Text Available The genome of Shigella, a gram negative bacterium which is the causative agent of bacillary dysentery, shares strong homologies with that of its commensal ancestor, Escherichia coli. The acquisition, by lateral gene transfer, of a large plasmid carrying virulence determinants has been a crucial event in the evolution towards the pathogenic lifestyle and has been paralleled by the occurrence of mutations affecting genes, which negatively interfere with the expression of virulence factors. In this context, we have analysed to what extent the presence of the plasmid-encoded virF gene, the major activator of the Shigella regulon for invasive phenotype, has modified the transcriptional profile of E. coli. Combining results from transcriptome assays and comparative genome analyses we show that in E. coli VirF, besides being able to up-regulate several chromosomal genes, which potentially influence bacterial fitness within the host, also activates genes which have been lost by Shigella. We have focused our attention on the speG gene, which encodes spermidine acetyltransferase, an enzyme catalysing the conversion of spermidine into the physiologically inert acetylspermidine, since recent evidence stresses the involvement of polyamines in microbial pathogenesis. Through identification of diverse mutations, which prevent expression of a functional SpeG protein, we show that the speG gene has been silenced by convergent evolution and that its inactivation causes the marked increase of intracellular spermidine in all Shigella spp. This enhances the survival of Shigella under oxidative stress and allows it to better face the adverse conditions it encounters inside macrophage. This is supported by the outcome of infection assays performed in mouse peritoneal macrophages and of a competitive-infection assay on J774 macrophage cell culture. Our observations fully support the pathoadaptive nature of speG inactivation in Shigella and reveal that the accumulation 7. Theoretical paper: exploring overlooked natural mitochondria-rejuvenative intervention: the puzzle of bowhead whales and naked mole rats. Science.gov (United States) 2007-12-01 There is an imperative need for exploring and implementing mitochondria-rejuvenative interventions that can bridge the current gap toward the step-by step realization of strategies for engineered negligible senescence (SENS) agenda. 
Recently discovered in mammals, the natural mechanism of mitoptosis, a selective "suicide" of mutated mitochondria, can facilitate continuous purification of an organism's mitochondrial pool from the most reactive oxygen species (ROS)-producing mitochondria. Mitoptosis, which is considered to be the first stage of ROS-induced apoptosis, underlies follicular atresia (a "quality control" mechanism in female germline cells that eliminates most germinal follicles in female embryos). Mitoptosis can also be activated in adult postmitotic somatic cells by evolutionarily conserved phenotypic adaptations to intermittent oxygen restriction (IOR) and synergistically acting intermittent caloric restriction (ICR). IOR and ICR are common in mammals and seem to underlie the extraordinary longevity and augmented cancer resistance of bowhead whales (Balaena mysticetus) and naked mole rats (Heterocephalus glaber). Furthermore, in mammals IOR can facilitate continuous stromal stem cell-dependent tissue repair. A comparative analysis of IOR and ICR mechanisms in both mammals, in conjunction with the experience of decades of biomedical and clinical research on an emerging preventative, therapeutic, and rehabilitative modality, intermittent hypoxic training/therapy (IHT), indicates that the notable clinical efficiency of IHT is based on universal adaptational mechanisms that are common in mammals. Further exploration of natural mitochondria-preserving and -rejuvenating strategies can help refine IOR- and ICR-based synergistic protocols of value in clinical human rejuvenation. 8. The missing piece in the puzzle: Prediction of aggregation via the protein-protein interaction parameter $A_2^*$. Science.gov (United States) Koepf, Ellen; Schroeder, Rudolf; Brezesinski, Gerald; Friess, Wolfgang 2018-07-01 The tendency of protein pharmaceuticals to form aggregates is a major challenge during formulation development, as aggregation affects quality and safety of the product. In particular, the formation of large native-like particles in the context of liquid-air interfacial stress is a well-known but not fully understood problem. Focusing on the two most fundamental criteria of protein formulation affecting protein-protein interaction, the impact of pH and ionic strength on the interaction parameter $A_2^*$ and its link to aggregation upon mechanical stress was investigated. $A_2^*$ of two monoclonal antibodies (mAbs) and a polyclonal IgG was determined using dynamic light scattering and was correlated to the number of particles formed upon shaking in vials, analyzed by visual inspection, turbidity analysis, light obscuration and micro-flow imaging. A good correlation between aggregation induced by interfacial stress and formulation pH was observed. It could be shown that $A_2^*$ was highest for mAb 1 and lowest for IgG, which was in good accordance with the number of particles formed. Shaking of IgG resulted in overall higher numbers of particles compared to the two mAbs. $A_2^*$ decreased and particle numbers increased with increasing pH. In contrast to pH, ionic strength only slightly affected $A_2^*$. Nevertheless, at high ionic strength (100 mM) the samples exhibited more pronounced particle formation, particularly of large particles >25 µm, which was most pronounced at high pH. Protein solutions were found to form continuous films with an inhomogeneous protein distribution at the liquid-air interface. These areas of agglomerated, native-like protein material can be transferred into the bulk solution by compression-decompression of the interface. Whether those clusters lead to the appearance of large protein aggregates or fall apart depends on the attractive or repulsive forces between protein molecules. Thus, protein aggregation due to interfacial 9. $\bar{B}_{d,s} \to D^*_{d,s}V$ and $\bar{B}^*_{d,s} \to D_{d,s}V$ decays in QCD factorization and possible puzzles Energy Technology Data Exchange (ETDEWEB) Chang, Qin [Henan Normal University, Institute of Particle and Nuclear Physics, Henan (China); Central China Normal University, Institute of Particle Physics, Wuhan (China); Chen, Ling-Xin; Zhang, Yun-Yun; Sun, Jun-Feng; Yang, Yue-Ling [Henan Normal University, Institute of Particle and Nuclear Physics, Henan (China) 2016-10-15 Motivated by the rapid development of heavy-flavor experiments, phenomenological studies of nonleptonic $\bar{B}_{d,s} \to D^*_{d,s}V$ and $\bar{B}^*_{d,s} \to D_{d,s}V$ ($V = \rho, K^*$) decays are performed within the framework of QCD factorization. Relative to the previous work, the QCD corrections to the transverse amplitudes are evaluated at next-to-leading order. The theoretical predictions of the observables are updated. For the measured $\bar{B}_{d,s} \to D^*_{d,s}V$ decays, the tensions between theoretical results and experimental measurements, i.e. the "$R_{ds}^V$ puzzle" and "$D^*V$ (or $R_{V/l\bar{\nu}_l}$) puzzle", are presented after detailed analyses. For the $\bar{B}^*_{d,s} \to D_{d,s}V$ decays, they have relatively large branching fractions of order $\gtrsim O(10^{-9})$ and are within the scope of the Belle-II and LHCb experiments. Moreover, they also provide a way to crosscheck the possible puzzles mentioned above through the similar ratios $R_{ds}^{\prime V}$ and $R^{\prime}_{V/l\bar{\nu}_l}$. More refined experimental measurements and theoretical efforts are required to confirm or refute these two anomalies. (orig.) 10. Plaguicidas en la dieta: aportando piezas al rompecabezas [Pesticides in the diet: adding pieces to the puzzle] Directory of Open Access Journals (Sweden) Ángel Vicente 2004-12-01 11. Hydralazine-induced anti-neutrophil cytoplasmic antibody-positive renal vasculitis presenting with a vasculitic syndrome, acute nephritis and a puzzling skin rash: a case report Directory of Open Access Journals (Sweden) Keasberry Justin 2013-01-01 Full Text Available Abstract Introduction Anti-neutrophil cytoplasmic antibody-associated vasculitis has been associated with many drugs and it is a relatively rare side effect of the antihypertensive drug hydralazine. The diagnosis and management of patients who have anti-neutrophil cytoplasmic antibody-associated vasculitis may be challenging because of its relative infrequency, variability of clinical expression and changing nomenclature. The spectrum of anti-neutrophil cytoplasmic antibody-associated vasculitis is wide and can be fatal. This case documents a 62-year-old woman who presented with hydralazine-induced anti-neutrophil cytoplasmic antibody-positive renal vasculitis with a puzzling cutaneous rash. Case presentation We report a rare case of hydralazine-induced anti-neutrophil cytoplasmic antibody-associated vasculitis in a 62-year-old Caucasian woman who presented with a vasculitic syndrome with a sore throat, mouth ulcers and otalgia after several months of constitutional symptoms.
She then proceeded to develop a rash over her right lower limb. Clinically, the rash had features to suggest Sweet's syndrome, but also had some appearances consistent with embolic phenomena and did not have the appearance of palpable purpura usually associated with cutaneous vasculitis. Differential diagnoses were hydralazine-associated Sweet's syndrome, streptococcal-induced cutaneous eruption or an unrelated contact dermatitis. A midstream urine sample detected glomerular blood cells in the setting of anti-neutrophil cytoplasmic antibody-positive renal vasculitis and Streptococcus pyogenes bacteremia. A renal biopsy revealed a pauci-immune, focally necrotizing glomerulonephritis with small crescents. Her skin biopsy revealed a heavy neutrophil infiltrate involving the full thickness of the dermis with no evidence of a leucocytoclastic vasculitis, but was non-specific. She was initially commenced on intravenous lincomycin for her bloodstream infection and subsequently 12. The Middle East population puzzle. Science.gov (United States) Omran, A R; Roudi, F 1993-07-01 An overview is provided of Middle Eastern countries on the following topics: population change, epidemiological transition theory and 4 patterns of transition in the Middle East, transition in causes of death, infant mortality declines, war mortality, fertility, family planning, age and sex composition, ethnicity, educational status, urbanization, labor force, international labor migration, refugees, Jewish immigration, families, marriage patterns, and future growth. The Middle East is geographically defined as Bahrain, Egypt, Iraq, Jordan, Kuwait, Lebanon, Oman, Qatar, Saudi Arabia, Syria, United Arab Emirates, Yemen, Gaza and the West Bank, Iran, Turkey, and Israel. The Middle East's population grew very little until 1900, when the population was 43 million. The population had about doubled by the mid-1950s, to 80 million. Rapid growth occurred after 1950 with declines in mortality due to widespread disease control and sanitation efforts. Countries are grouped in the following ways: persistent high fertility and declining mortality with low to medium socioeconomic conditions (Jordan, Oman, Syria, Yemen, and the West Bank and Gaza), declining fertility and mortality in intermediate socioeconomic development (Egypt, Lebanon, Turkey, and Iran), high fertility and declining mortality in high socioeconomic conditions (Bahrain, Iraq, Kuwait, Qatar, Saudi Arabia, and the United Arab Emirates), and low fertility and mortality in average socioeconomic conditions (Israel). As birth and death rates decline, there is an accompanying shift from communicable diseases to degenerative diseases and increases in life expectancy; this pattern is reflected in the available data from Egypt, Kuwait, and Israel. High infant and child mortality tends to remain a problem throughout the Middle East, with the exception of Israel and the Gulf States. War casualties are undetermined, yet have not impeded the fastest population growth rate in the world. The average fertility is 5 births/woman by the age of 45. Muslim countries tend to have larger families. Contraceptive use is low in the region, with the exception of Turkey and Egypt and among urban and educated populations. More than 40% of the population is under 15 years of age. The region is about 50% Arabic (140 million). Educational status has increased, particularly for men; the lowest literacy rates for women are in Yemen and Egypt. The largest countries are Iran, Turkey, and Egypt. 13.
Oki-Doku: Number Puzzles Science.gov (United States) Gomez, Cristina; Novak, Dani 2014-01-01 The Common Core State Standards for Mathematics (CCSSM) (CCSSI 2010) emphasize the Standards for Mathematical Practice (SMP) that describe processes and proficiencies included in the NCTM Process Standards (NCTM 2000) and in the Strands for Mathematical Proficiency (NRC 2001). The development of these mathematical practices should happen in… 14. Horse trichinellosis, an unresolved puzzle Directory of Open Access Journals (Sweden) Pozio E. 2001-06-01 In spite of routine controls to detect Trichinella larvae in horse-meat, human infections due to horse-meat consumption continue to occur in France and Italy. The epidemiology of horse trichinellosis since its discovery in 1975 is outlined, addressing the possible modes of natural transmission to horses, the need to develop more sensitive methods for detecting Trichinella larvae in horses, and the economic impact of horse trichinellosis. Investigations of human outbreaks due to horse-meat consumption have implicated single cases of inadequate veterinary controls on horses imported from non-European Union countries. In particular, most cases of human infection have been attributed to horses imported from Eastern Europe, where pig trichinellosis is re-emerging and is the main source of infection in horses. 15. Pedagogy Corner: The Year Puzzle Science.gov (United States) Lovitt, Charles 2017-01-01 As a self-described lesson collector, author Charles Lovitt enjoys gathering "interesting" lessons and teasing them apart to find out what makes them "tick", particularly the pedagogy. He often wonders what decisions the teacher made that generated such an interesting and successful learning environment. Here he describes a… 16. Chlorophyll d: the puzzle resolved DEFF Research Database (Denmark) Larkum, Anthony W D; Kühl, Michael 2005-01-01 Chlorophyll a (Chl a) has always been regarded as the sole chlorophyll with a role in photochemical conversion in oxygen-evolving phototrophs, whereas chlorophyll d (Chl d), discovered in small quantities in red algae in 1943, was often regarded as an artefact of isolation. Now, as a result of discoveries over the past year, it has become clear that Chl d is the major chlorophyll of a free-living and widely distributed cyanobacterium that lives in light environments depleted in visible light and enhanced in infrared radiation. Moreover, Chl d not only has a light-harvesting role but might also replace Chl a in the special pair of chlorophylls in both reaction centers of photosynthesis. Publication date: 2005-Aug. 17. The Evolutionary Puzzle of Suicide Directory of Open Access Journals (Sweden) Henri-Jean Aubin 2013-12-01 Full Text Available Mechanisms of self-destruction are difficult to reconcile with evolution's first rule of thumb: survive and reproduce. However, evolutionary success ultimately depends on inclusive fitness. The altruistic suicide hypothesis posits that the presence of low reproductive potential and burdensomeness toward kin can increase the inclusive fitness payoff of self-removal. The bargaining hypothesis assumes that suicide attempts could function as an honest signal of need. The payoff may be positive if the suicidal person has a low reproductive potential. The parasite manipulation hypothesis is founded on the rodent-Toxoplasma gondii host-parasite model, in which the parasite induces a "suicidal" feline attraction that allows the parasite to complete its life cycle.
Interestingly, latent infection by T. gondii has been shown to cause behavioral alterations in humans, including increased suicide attempts. Finally, we discuss how suicide risk factors can be understood as nonadaptive byproducts of evolved mechanisms that malfunction. Although most of the mechanisms proposed in this article are largely speculative, the hypotheses that we raise accept self-destructive behavior within the framework of evolutionary theory. 18. Yet Another Puzzle of Ground NARCIS (Netherlands) Korbmacher, J. 2015-01-01 We show that any predicational theory of partial ground that extends a standard theory of syntax and that proves some commonly accepted principles for partial ground is inconsistent. We suggest a way to obtain a consistent predicational theory of ground. 19. A Cat Bond Premium Puzzle? OpenAIRE Vivek J. Bantwal; Howard C. Kunreuther 1999-01-01 Catastrophe Bonds, whose payoffs are tied to the occurrence of natural disasters, offer insurers the ability to hedge event risk through the capital markets that could otherwise leave them insolvent if concentrated solely on their own balance sheets. At the same time, they offer investors a unique opportunity to enhance their portfolios with an asset that provides an attractive return that is uncorrelated with typical financial securities. Despite its attractiveness, spreads in this market remai... 20. Solar neutrinos: a scientific puzzle International Nuclear Information System (INIS) Bahcall, J.N.; Davis, R. 1975-01-01 An experiment designed to capture neutrinos produced by solar thermonuclear reactions is a crucial one for the theory of stellar evolution. The conventional wisdom regarding nuclear fusion as the energy source for main sequence stars like the sun is briefly outlined. It is assumed that the sun shines because of fusion reactions similar to those envisioned for terrestrial fusion reactors. The basic solar process is the fusion of four protons to form an alpha particle, two positrons ($e^+$), and two neutrinos ($\nu_e$), i.e., $4p \to \alpha + 2e^+ + 2\nu_e$. The principal reactions are shown and the percentage of each reaction is given. Several experiments carried out toward this aim are discussed. (B.G.) 1. Addressing the Puzzle of Race Science.gov (United States) Coleman, Samuel 2011-01-01 Although racial discrimination is a devastating instrument of oppression, social work texts lack a clear and consistent definition of "race". The solution lies in according race the status of an "actor version" concept, while exploring the origins and variations of race ideas using "scientific observer version" explanations. This distinction… 2. Patents: Recent Trends and Puzzles OpenAIRE Zvi Griliches 1989-01-01 This paper reviews the historical data on patenting in the United States with special reference to the last 20 years and their potential relation, if any, to the recent productivity slowdown. Two points are made: Patents are not a "constant-yardstick" indicator of either inventive input or output. Moreover, they are "produced" by a governmental agency which goes through its own budgetary and inefficiency cycles. The paper shows that the appearance of an absolute decline in patenting in the 19... 3. AirCompare Data.gov (United States) U.S. Environmental Protection Agency — AirCompare contains air quality information that allows a user to compare conditions in different localities over time and compare conditions in the same location at... 4.
Reviews Equipment: BioLite Camp Stove Game: Burnout Paradise Equipment: 850 Universal interface and Capstone software Equipment: xllogger Book: Science Magic Tricks and Puzzles Equipment: Spinthariscope Equipment: DC Power Supply HY5002 Web Watch Science.gov (United States) 2013-05-01 WE RECOMMEND BioLite CampStove Robust and multifaceted stove illuminates physics concepts 850 Universal interface and Capstone software Powerful data-acquisition system offers many options for student experiments and demonstrations xllogger Obtaining results is far from an uphill struggle with this easy-to-use datalogger Science Magic Tricks and Puzzles Small but perfectly formed and inexpensive book packed with 'magic-of-science' demonstrations Spinthariscope Kit for older students to have the memorable experience of 'seeing' radioactivity WORTH A LOOK DC Power Supply HY5002 Solid and effective, but noisy and lacks portability HANDLE WITH CARE Burnout Paradise Car computer game may be quick off the mark, but goes nowhere fast when it comes to lab use WEB WATCH 'Live' tube map and free apps would be a useful addition to school physics, but a maths-questions website is of no more use than a textbook 5. Comparative Test Case Specification DEFF Research Database (Denmark) Kalyanova, Olena; Heiselberg, Per This document includes the specification for the IEA task of evaluating building energy simulation computer programs for Double Skin Facade (DSF) constructions. There are two approaches involved in this procedure: one is the comparative approach and the other is the empirical one. In the comparative approach the outcomes of different software tools are compared, while in the empirical approach the modelling results are compared with the results of experimental test cases. The comparative test cases include: ventilation, shading and geometry. 6. Dialysis Facility Compare Data.gov (United States) U.S. Department of Health & Human Services — Dialysis Facility Compare helps you find detailed information about Medicare-certified dialysis facilities. You can compare the services and the quality of care that... 7. Approaching comparative company law OpenAIRE Donald, David C. 2008-01-01 This paper identifies some common errors that occur in comparative law, offers some guidelines to help avoid such errors, and provides a framework for entering into studies of the company laws of three major jurisdictions. The first section illustrates why a conscious approach to comparative company law is useful. Part I discusses some of the problems that can arise in comparative law and offers a few points of caution that can be useful for practical, theoretical and legislative comparative ... 8. Semantic paradox. A comparative analysis of scholastic and analytic views Czech Academy of Sciences Publication Activity Database Hanke, Miroslav 2014-01-01 Vol. 91, No. 3 (2014), pp. 367-386 ISSN 2168-9105 R&D Projects: GA ČR(CZ) GP13-08389P Institutional support: RVO:67985955 Keywords: semantic paradoxes * scholastic logic * groundlessness * circularity * semantic pathology * two-line puzzles Subject RIV: AA - Philosophy; Religion 9. Comparative Test Case Specification DEFF Research Database (Denmark) Kalyanova, Olena; Heiselberg, Per This document includes a definition of the comparative test cases DSF200_3 and DSF200_4, which were previously described in the comparative test case specification for the test cases DSF100_3 and DSF200_3 [Ref.1]. 10. Text File Comparator Science.gov (United States) Kotler, R. S. 1983-01-01 The file comparator program IFCOMP is a text file comparator for IBM OS/VS-compatible systems. IFCOMP accepts as input two text files and produces a listing of differences in pseudo-update form. IFCOMP is very useful in monitoring changes made to software at the source code level. 11. Hospital Compare Data Data.gov (United States) U.S. Department of Health & Human Services — These are the official datasets used on the Medicare.gov Hospital Compare Website provided by the Centers for Medicare and Medicaid Services. These data allow you to... DEFF Research Database (Denmark) Jensen, Merete Storgaard "Globalization is the imitation and adaptation of knowledge solutions or innovations, as they are diffused from one country to another" (Peter Jarvis 2007). Conducting comparative, educational research of school leadership that affects student achievement in an international perspective is of scientific value in qualifying the international and national knowledge base on effective school leadership. In a methodological perspective, comparative analysis in an international setting creates specifically a scientific demand of comparability and a theory-based leadership framework to guide the empirical, qualitative research of effective leadership... 13. Comparing Demonstratives in Kwa African Journals Online (AJOL) This paper is a comparative study of demonstrative forms in three Kwa languages, ... relative distance from the deictic centre, such as English this and that, here and there. Mostly, the referents of demonstratives are 'activated' or at least. 14. Home Health Compare Data.gov (United States) U.S. Department of Health & Human Services — Home Health Compare has information about the quality of care provided by Medicare-certified home health agencies throughout the nation. Medicare-certified means the... 15. Nursing Home Compare Data Data.gov (United States) U.S. Department of Health & Human Services — These are the official datasets used on the Medicare.gov Nursing Home Compare Website provided by the Centers for Medicare and Medicaid Services. These data allow... 16. Hospital Compare - Archived Data Data.gov (United States) U.S. Department of Health & Human Services — Hospital Compare is a consumer-oriented website that provides information on how well hospitals provide recommended care to their patients. This information can help... 17. Home Health Compare Data Data.gov (United States) U.S. Department of Health & Human Services — These are the official datasets used on the Medicare.gov Home Health Compare Website provided by the Centers for Medicare and Medicaid Services. These data allow you... 18. Nursing Home Compare Data.gov (United States) U.S. Department of Health & Human Services — The data that is used by the Nursing Home Compare tool can be downloaded for public use. This functionality is primarily used by health policy researchers and the... 19.
Comparative Climatic Data Data.gov (United States) National Oceanic and Atmospheric Administration, Department of Commerce — Comparative Climatic Data is a publication containing data tables of meteorological elements; the publication outlines the climatic conditions at major weather... 20. Dialysis Facility Compare Data Data.gov (United States) U.S. Department of Health & Human Services — These are the official datasets used on the Medicare.gov Dialysis Facility Compare Website provided by the Centers for Medicare and Medicaid Services. These data...
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6552828550338745, "perplexity": 5193.391777625688}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589404.50/warc/CC-MAIN-20180716154548-20180716174548-00280.warc.gz"}
https://www.zbmath.org/authors/?q=ai%3Apaulino.glaucio-h
× # zbMATH — the first resource for mathematics ## Paulino, Glaucio H. Compute Distance To: Author ID: paulino.glaucio-h Published as: H. Paulino, Glaucio; Paulino, G. H.; Paulino, Glaucio H. Documents Indexed: 101 Publications since 1993, including 1 Book all top 5 #### Co-Authors 0 single-authored 11 Talischi, Cameron 9 Gray, Leonard J. 9 Menezes, Ivan F. M. 9 Mukherjee, Subrata 8 Sutradhar, Alok 7 Chan, Youn-Sha 7 Kim, Jeongho 6 Silva, Emílio Carlos Nelli 5 Celes, Waldemar 5 Fannjiang, Albert C. 4 Gain, Arun L. 4 Park, Kyoungsoo 3 Buttlar, William G. 3 Chi, Heng 3 Dodds, Robert H. jun. 3 Espinha, Rodrigo 3 Gattass, Marcelo 3 Le, Chau H. 3 Lopez-Pamies, Oscar 3 Nguyen, Tam H. 3 Song, Junho 2 Abel, John F. 2 Almeida, Sylvia R. M. 2 Chati, Mandar K. 2 Choi, Hyung Jip 2 de Sturler, Eric 2 Hsieh, Shang-Hsien 2 Liu, Yijun 2 Liu, Yong 2 Menon, Govind K. 2 Pereira Anderson 2 Pereira, Anderson 2 Phan, Anh-Vu 2 Ye, Wenjing 2 Yin, Hui-Min 2 Zhang, Zhengyu 1 Albino, Juan C. R. 1 Alfano, Marco 1 Alhadeff, A. 1 Almeida, Carlos A. S. 1 Aluru, Narayana R. 1 Carbonari, Ronny C. 1 Carroll, Jay 1 Cavalcante-Neto, Joaquim B. 1 Chaves, Ricardo A. P. 1 Duarte, C. Armando 1 Duarte, Leonardo S. 1 Dumont, Ney Augusto 1 Feng, Baofeng 1 Filipov, E. T. 1 Furgiuele, Franco 1 Gortaire C., J. C. 1 Huang, Young-Ye 1 Jin, Zhenghong 1 Jin, Zhihong 1 Jin, Zunhe 1 Kaplan, Theodore 1 Kaplan, Todd R. 1 Lambros, John 1 Le Chau H. 1 Leon, S. E. 1 Leonardi, Alessandro 1 Li, Gang 1 Liang, Lihua 1 Liu, Cheng 1 Liu, Kepan 1 Maletta, Carmine 1 Martha, Luiz Fernando 1 Mello, Luís Augusto Motta 1 Mosalam, Khalid M. 1 Pereira, Jeronymo P. 1 Ramesh, Palghat S. 1 Ravichandran, Guruswami 1 Richardson, J. D. 1 Saif, Muhammed T. A. 1 Shen, Bin 1 Shi, Fan 1 Shim, Do-Jun 1 Silva Emilio C. N. 1 Song, Seong Hyeok 1 Spring, D. W. 1 Stanciulescu, Ilinca 1 Stump, Fernando V. 1 Sun, Lizhi 1 Sun, Lizhu 1 Tachi, Tomohiro 1 Tan, Henry 1 Theotokoglou, Efstathios E. 1 Vatanabe, S. L. 1 Walters, Matthew C. 1 Wang, Shun 1 Yin, Huiming 1 Zarikian, Vrej 1 Zegard, Tomás 1 Zhang, Zhengyu Jenny all top 5 #### Serials 23 International Journal for Numerical Methods in Engineering 12 Computer Methods in Applied Mechanics and Engineering 10 Structural and Multidisciplinary Optimization 9 Journal of Applied Mechanics 7 International Journal of Fracture 6 International Journal of Solids and Structures 6 Engineering Analysis with Boundary Elements 3 Mechanics Research Communications 3 Computational Mechanics 2 Journal of the Mechanics and Physics of Solids 2 SIAM Journal on Applied Mathematics 2 Communications in Numerical Methods in Engineering 2 Proceedings of the Royal Society of London. Series A. Mathematical, Physical and Engineering Sciences 2 Proceedings of the Royal Society of London. A. Mathematical, Physical and Engineering Sciences 1 Acta Mechanica 1 Computers and Structures 1 International Journal of Engineering Science 1 International Journal of Heat and Mass Transfer 1 International Journal of Plasticity 1 Finite Elements in Analysis and Design 1 M$^3$AS. Mathematical Models & Methods in Applied Sciences 1 Science in China. 
Series E 1 Inverse Problems in Science and Engineering all top 5 #### Fields 87 Mechanics of deformable solids (74-XX) 26 Numerical analysis (65-XX) 6 Classical thermodynamics, heat transfer (80-XX) 4 Computer science (68-XX) 3 Operations research, mathematical programming (90-XX) 2 History and biography (01-XX) 2 Potential theory (31-XX) 2 Partial differential equations (35-XX) 2 Integral equations (45-XX) 2 Calculus of variations and optimal control; optimization (49-XX) 1 Combinatorics (05-XX) 1 Linear and multilinear algebra; matrix theory (15-XX) 1 Convex and discrete geometry (52-XX) 1 Fluid mechanics (76-XX) 1 Optics, electromagnetic theory (78-XX) 1 Biology and other natural sciences (92-XX)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4179713726043701, "perplexity": 29341.21836984782}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514574018.53/warc/CC-MAIN-20190920113425-20190920135425-00358.warc.gz"}
https://www.computer.org/csdl/trans/tp/2008/12/ttp2008122098-abs.html
The Community for Technology Leaders Issue No. 12 - December (2008 vol. 30) ISSN: 0162-8828 pp: 2098-2108 ABSTRACT Image registration consists in estimating geometric and photometric transformations that align two images as best as possible. The direct approach consists in minimizing the discrepancy in the intensity or color of the pixels. The inverse compositional algorithm has been recently proposed by Baker et al. for the direct estimation of groupwise geometric transformations. It is efficient in that it performs several computationally expensive calculations at a pre-computation phase. Photometric transformations act on the value of the pixels. They account for effects such as lighting change. Jointly estimating geometric and photometric transformations is thus important for many tasks such as image mosaicing. We propose an algorithm to jointly estimate groupwise geometric and photometric transformations while preserving the efficient pre-computation based design of the original inverse compositional algorithm. It is called the dual inverse compositional algorithm. It uses different approximations than the simultaneous inverse compositional algorithm and handles groupwise geometric and global photometric transformations. Its name stems from the fact that it uses an inverse compositional update rule for both the geometric and the photometric transformations. We demonstrate the proposed algorithm and compare it to previous ones on simulated and real data. This shows clear improvements in computational efficiency and in terms of convergence. INDEX TERMS Computer vision, Intensity, color, photometry, and thresholding CITATION Adrien Bartoli, "Groupwise Geometric and Photometric Direct Image Registration", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 30, no. , pp. 2098-2108, December 2008, doi:10.1109/TPAMI.2008.22
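To make the notion of a "direct" cost concrete, here is a rough, self-contained sketch in Python. It is not the paper's dual inverse compositional algorithm; the affine warp, the gain/bias photometric model, and all function names below are illustrative assumptions, meant only to show the sum-of-squared-differences objective that direct geometric-plus-photometric registration minimizes.

```python
import numpy as np

def warp_affine(image, A, t, out_shape):
    """Sample `image` at affine-transformed coordinates x' = A @ x + t
    (nearest-neighbour sampling; out-of-bounds pixels map to 0)."""
    H, W = out_shape
    ys, xs = np.mgrid[0:H, 0:W]                      # template pixel grid
    coords = np.stack([xs.ravel(), ys.ravel()])      # 2 x N, (x, y) order
    src = A @ coords + t[:, None]                    # warped source coordinates
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < image.shape[1]) & (sy >= 0) & (sy < image.shape[0])
    out = np.zeros(H * W)
    out[valid] = image[sy[valid], sx[valid]]
    return out.reshape(H, W)

def direct_cost(params, image, template):
    """SSD cost for a joint geometric (affine) and photometric (gain/bias) model:
    sum_x [ alpha * I(W(x; p)) + beta - T(x) ]^2."""
    a11, a12, a21, a22, tx, ty, alpha, beta = params
    A = np.array([[a11, a12], [a21, a22]])
    t = np.array([tx, ty])
    warped = warp_affine(image, A, t, template.shape)
    residual = alpha * warped + beta - template
    return float(np.sum(residual ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    template = rng.random((40, 40))
    # Synthetic "image": brightened/dimmed copy of the template, identity geometry.
    image = 0.8 * template + 0.1
    identity = [1, 0, 0, 1, 0, 0, 1.0, 0.0]
    gain_bias = [1, 0, 0, 1, 0, 0, 1.25, -0.125]     # undoes the photometric change
    print(direct_cost(identity, image, template))    # nonzero: photometric mismatch
    print(direct_cost(gain_bias, image, template))   # ~0: model explains the image
```

An inverse compositional scheme minimizes a cost of this general shape but parameterizes the incremental update on the template side, so that the Jacobian and Gauss-Newton Hessian can be precomputed once; the dual variant described in the abstract applies such an update rule to the photometric parameters as well as the geometric ones.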
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9138222336769104, "perplexity": 1331.5266539467646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542009.32/warc/CC-MAIN-20161202170902-00465-ip-10-31-129-80.ec2.internal.warc.gz"}
http://www.aimsciences.org/article/doi/10.3934/cpaa.2010.9.1161
American Institute of Mathematical Sciences 2010, 9(5): 1161-1188. doi: 10.3934/cpaa.2010.9.1161 Kirchhoff systems with nonlinear source and boundary damping terms 1 Dipartimento di Matematica e Informatica, Università degli Studi di Perugia, Via Vanvitelli 1, I-06123 Perugia, Italy Received August 2009 Revised November 2009 Published May 2010 In this paper we treat the question of the non-existence of global solutions, or their long time behavior, of nonlinear hyperbolic Kirchhoff systems. The main $p$-Kirchhoff operator may be affected by a perturbation which behaves like $|u|^{p-2} u$ and the systems also involve an external force $f$ and a nonlinear boundary damping $Q$. When $p=2$, we consider some problems involving a higher order dissipation term, under dynamic boundary conditions. For them we give criteria in order that $|| u(t,\cdot) ||_q\to\infty$ as $t \to\infty$ along any global solution $u=u(t,x)$, where $q$ is a parameter related to the growth of $f$ in $u$. Special subcases of $f$ and $Q$, interesting in applications, are presented in Sections 4, 5 and 6. Citation: Giuseppina Autuori, Patrizia Pucci. Kirchhoff systems with nonlinear source and boundary damping terms. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1161-1188. doi: 10.3934/cpaa.2010.9.1161
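For orientation, and as an assumption on our part rather than a statement of the paper's exact setting, a prototypical scalar $p$-Kirchhoff problem containing the ingredients named in the abstract (Kirchhoff term $M$, perturbation $\mu|u|^{p-2}u$, source $f$, boundary damping $Q$) can be written as
$$ u_{tt} - M\!\left(\int_\Omega |\nabla u|^p\,dx\right)\Delta_p u + \mu |u|^{p-2}u = f(t,x,u) \quad \text{in } \mathbb{R}^+_0 \times \Omega, $$
$$ M\!\left(\int_\Omega |\nabla u|^p\,dx\right)|\nabla u|^{p-2}\,\partial_\nu u = -Q(t,x,u,u_t) \quad \text{on } \mathbb{R}^+_0 \times \partial\Omega, $$
where $\Delta_p u = \mathrm{div}(|\nabla u|^{p-2}\nabla u)$ is the $p$-Laplacian and $\partial_\nu$ the outer normal derivative. The paper itself treats vector-valued systems and, for $p=2$, variants with a higher order dissipation term and dynamic boundary conditions, so the display above is only a template for how $M$, $\mu|u|^{p-2}u$, $f$ and $Q$ enter.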
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7129865288734436, "perplexity": 3654.894011183692}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742937.37/warc/CC-MAIN-20181115203132-20181115225132-00400.warc.gz"}
http://medical-dictionary.thefreedictionary.com/insulin
# insulin Also found in: Dictionary, Thesaurus, Acronyms, Encyclopedia, Wikipedia. Related to insulin: glucagon, diabetes, insulin injection, insulin resistance ## insulin [in´su-lin] 1. the major fuel-regulating hormone of the body, a double-chain protein formed from proinsulin in the beta cells of the islets of Langerhans in the pancreas. Insulin promotes the storage of glucose and the uptake of amino acids, increases protein and lipid synthesis, and inhibits lipolysis and gluconeogenesis. Secretion of insulin is a response of the beta cells to a stimulus; the primary stimulus is glucose, and others are amino acids and hormones such as secretin, pancreozymin, and gastrin. These chemicals play an important role in maintaining normal blood glucose levels by triggering insulin release after a meal. After insulin is released from the beta cells, it enters the blood stream and is transported to cells throughout the body. The cell membranes have insulin receptors to which the hormone becomes bonded or “fixed.” An interaction between the insulin and its receptors leads to biochemical processes that include (1) the transport of glucose, amino acids, and certain ions across the membrane and into the cell body; (2) the storage of glycogen in liver and muscle cells; (3) the synthesis of triglycerides and storage of fat; (4) the synthesis of protein, RNA, and DNA, and (5) inhibition of gluconeogenesis, degradation of glycogen and protein, and lipolysis. Although insulin increases the transport of glucose across the cell membrane of most cells, in the brain glucose enters the cells by simple diffusion through the blood--brain barrier. 2. a preparation of the hormone, first discovered in 1921, used in treatment of diabetes mellitus; it may be bovine or porcine in origin (prepared from the pancreas of the animals) or a recombinant human type, although insulin of bovine origin is no longer available in the United States. Recombinant human types may duplicate exactly the human insulin protein sequence, or may be analogues with small differences in sequence. Commercially prepared insulin is available in various types that differ in the speed with which they act and in the duration of their effectiveness. There are several different types of insulin, usually classified by their onset and duration of action. (See table.) Patients with diabetes react differently in the rate at which they absorb and utilize exogenous insulin; therefore, the duration of action varies from person to person. Moreover, the site of injection, volume of injection, and the condition of the tissues into which the insulin is injected can alter its rate of absorption and peak action times, and exercising the limb which has been injected immediately after injection can increase the speed of absorption. Insulin is measured in units. Problems of Insulin Therapy. The problem of either too much or too little insulin is always a potential hazard for the person on insulin therapy. The causes, symptoms, and treatment of hypoglycemic or insulin reaction and hyperglycemia are discussed under diabetes mellitus. Other problems of insulin therapy include insulin allergy, insulin resistance, insulin rebound due to the somogyi effect, and lipodystrophies or other localized tissue changes at injection sites. Lipodystrophies are localized manifestations of disordered fat metabolism at the sites of insulin injection. 
Tissue hypertrophy can be seen as a mass of fibrous scar tissue and is sometimes called “insulin tumor.” atrophy of the tissues at the injection site appears as dimpling and pitting of the skin and underlying tissues. These problems are more common in adult females and in children. Atrophy of the tissues is relatively harmless, but hypertrophy can cause malabsorption of the insulin and a possible misdiagnosis of insulin resistance. Measures that can help prevent lipodystrophies include (1) systematic rotation of injection sites, (2) warming insulin to room temperature before injection, (3) pinching the skin when injecting the insulin so that it is deposited between fat and muscle tissue, and (4) use of human insulin. insulin allergy a hypersensitivity reaction to insulin, usually a reaction to its protein components. More purified insulins have now been developed that are less likely to cause an allergic reaction and other complications. Human insulin, prepared by recombinant genetic engineering, eliminates many problems associated with repeated insulin injections, because of reduced antibody concentrations. insulin pump a device consisting of a syringe filled with a predetermined amount of short-acting insulin, a plastic cannula and a needle, and a pump that periodically delivers the desired amount of insulin. The basal rate of insulin delivery usually is one pulse every 8 minutes, but the pump can deliver as many as 60 pulses at a time. Before each meal or snack the patient manually administers a bolus of insulin by adjusting the pump setting to the desired one-time dose. Some insulin pumps will automatically reset themselves to the basal rate of infusion after each bolus. Research is ongoing regarding implantable pumps that release insulin in response to the pump's glucose sensor. This method could potentially administer insulin in a manner resembling the normal absorption from the pancreas. Insulin pumps are worn externally and connected to an indwelling subcutaneous needle, usually inserted in the abdomen. From Black and Matassarin-Jacobs, 2001. insulin rebound extreme fluctuations in blood sugar levels owing to overreaction of the body's homeostatic feedback mechanisms for control of glucose metabolism. When exogenous insulin is given, the hypoglycemia triggers an outpouring of glucagon and epinephrine, both of which raise the blood sugar concentration markedly. Although the patient may actually have periods of hypoglycemia, urine and blood glucose tests will show hyperglycemia. Treatment is aimed at modifying the extremes by gradually lowering the insulin dosage so as to reduce stimulation of the feedback system of glucose regulation. The patient may need to take smaller doses of insulin or take it at more frequent intervals and at different times during the day insulin resistance impairment of the normal biologic response to insulin, which may result from abnormalities in the B-cell products, binding of insulin to antagonists such as anti-insulin antibodies, defects in or reduced numbers of receptors, and defects in the insulin action cascade in the target cell. Diabetic persons with this problem require more than 100 units daily, and some may need as much as 500 or 1000 units daily. Besides diabetes, the condition has also been associated with diseases such as obesity, acromegaly, uremia, and certain rare, possibly genetic, autoimmune diseases. insulin sensitivity test a test used to differentiate diabetes mellitus from pituitary and adrenal diabetes. 
A test dose of exogenous insulin will produce a rapid and marked decrease in blood glucose if the pancreas is not secreting sufficient quantities of insulin. A much less dramatic response is produced if hyperglycemia is due to excessive secretion of either pituitary or adrenocortical hormones rather than insufficient insulin production. ## in·su·lin (in'sŭ-lin), [MIM*176730] A polypeptide hormone, secreted by β cells in the islets of Langerhans, which promotes glucose use, protein synthesis, and the formation and storage of neutral lipids; available in various preparations including genetically engineered human insulin, which is currently favored. Insulin is used parenterally in the treatment of diabetes mellitus. [L. insula, island, + -in] ## insulin /in·su·lin/ (in´sdbobr-lin) 1. a protein hormone formed from proinsulin in the beta cells of the pancreatic islets of Langerhans. The major fuel-regulating hormone, it is secreted into the blood in response to a rise in concentration of blood glucose or amino acids. Insulin promotes the storage of glucose and the uptake of amino acids, increases protein and lipid synthesis, and inhibits lipolysis and gluconeogenesis. Insulin. The precursor proinsulin is cleaved internally at two sides (arrows) to yield insulin and C peptide. 2. a preparation of insulin, either of porcine or bovine origin or a recombinant form with sequence the same as or similar to that in humans, used in the treatment of diabetes mellitus; classified as rapid-acting, intermediate-acting, or long-acting on the basis of speed of onset and duration of activity. 3. regular insulin; a rapid-acting, unmodified form of insulin prepared from crystalline bovine or porcine insulin. insulin aspart  a rapid-acting analogue of human insulin created by recombinant DNA technology. buffered insulin human  insulin human buffered with phosphate; used particularly in continuous infusion pumps. extended insulin zinc suspension  a long-acting insulin consisting of porcine or human insulin in the form of large zinc-insulin crystals. insulin glargine  an analogue of human insulin produced by recombinant DNA technology, having a slow, steady release over 24 hours. insulin human  a protein corresponding to insulin elaborated in the human pancreas, derived from pork insulin by enzymatic action or produced synthetically by recombinant DNA techniques; sometimes used specifically to denote a rapid-acting regular insulin preparation of this protein. isophane insulin suspension  an intermediate-acting insulin consisting of porcine or human insulin reacted with zinc chloride and protamine sulfate. Lente insulin  insulin zinc suspension. insulin lispro  a rapid-acting analogue of human insulin synthesized by means of recombinant DNA technology. NPH insulin  isophane i. suspension. prompt insulin zinc suspension  a rapid-acting insulin consisting of porcine insulin with zinc chloride added to produce a suspension of amorphous insulin. regular insulin  insulin (3). Semilente insulin  prompt insulin zinc suspension. Ultralente insulin  extended insulin zinc suspension. insulin zinc suspension  an intermediate-acting insulin consisting of porcine or human insulin with a zinc salt added such that the solid phase of the suspension contains a 7:3 ratio of crystalline to amorphous insulin. ## insulin (ĭn′sə-lĭn) n. 1. 
A polypeptide hormone that is secreted by the beta cells of the islets of Langerhans in the pancreas and functions in the regulation of carbohydrate and fat metabolism, especially the conversion of glucose to glycogen, which lowers the blood glucose level. It consists of two linked polypeptide chains called A and B. 2. Any of various pharmaceutical preparations containing this hormone or a close chemical analog, derived from the pancreas of certain animals or produced through genetic engineering and used in the medical treatment and management of type 1 and type 2 diabetes. ## insulin [in′səlin] Etymology: L, insula, island 1 a naturally occurring polypeptide hormone secreted by the beta cells of the islets of Langerhans in the pancreas in response to increased levels of glucose in the blood as well as to the parasympathetic nervous system and other stimuli. The hormone acts to regulate the metabolism of glucose and the processes necessary for the intermediary metabolism of fats, carbohydrates, and proteins. Insulin lowers the blood glucose level and promotes transport of glucose into the muscle cells and other tissues. Inadequate secretion of insulin causes elevated blood glucose and triglyceride levels and ketonemia, as well as the characteristic signs of diabetes mellitus, including increased desire to eat, excessive thirst, increased urination, and eventually lethargy and weight loss. Uncorrected severe deficiency of insulin is incompatible with life. Normal findings of insulin assay in adults are levels of 5 to 24 mmU/mL. 2 a pharmacological preparation of the hormone administered in treating diabetes mellitus. The various preparations of insulin available for prescription vary in onset, intensity, and duration of action. Animal source insulins, pork and beef, have been discontinued in the U.S. market. Human insulin is derived by recombinant DNA technology and is termed quick acting, intermediate acting, or long acting. Most replacement insulin is given by subcutaneous injection in individualized dosage schedules and insulin pumps, but insulin also can be replaced intravenously. Adverse reactions include hypoglycemia and insulin shock that result from excess dosage and hyperglycemia and diabetic ketoacidosis from inadequate dosage. Fever, stress, infection, pregnancy, surgery, and hyperthyroidism may significantly increase insulin requirements; liver disease, hypothyroidism, vomiting, and renal disease may decrease them. Blood tests for glucose and ketones are performed to determine the need for adjustment of the dosage or of the schedule of administration. See also human insulin. ## insulin Physiology A disulfide-linked polypeptide hormone produced by the beta cells of the pancreatic islets, which controls serum glucose and anabolism of carbohydrates, fat, protein. See Biphasic insulin, PEPCK, Proinsulin, rDNA insulin. ## in·su·lin (in'sŭ-lin) A polypeptide hormone, secreted by beta cells in the islets of Langerhans, which promotes glucose use, protein synthesis, and the formation and storage of neutral lipids; available in a variety of preparations including genetically engineered human insulin, which is currently favored, insulin is used parenterally in the treatment of diabetes mellitus. Compare: bioregulator [L. insula, island, + -in] ## insulin (in'su-lin) [L. insula, island + -in] INSULIN AND GLUCAGON FUNCTIONS A hormone secreted by the beta cells of the pancreas. As a drug, insulin is used principally to control diabetes mellitus. 
Insulin therapy is required in the management of type 1 diabetes mellitus because patients with this illness do not make enough insulin on their own to survive. The drug also is used in the care of patients with gestational diabetes to prevent fetal complications caused by maternal hyperglycemia (insulin itself does not cross the placenta or enter breast milk). In type 2 diabetes mellitus, its use typically is reserved for those patients who have failed to control their blood sugars with diet, exercise, and oral drugs. See: illustration; diabetes mellitus Insulin preparations differ with respect to the speed with which they act and their duration and potency following subcutaneous injection. See: table In the past, insulin for injection was obtained from beef or swine pancreas. These peptides differed from human insulin by a few amino acids, causing some immune reactions and drug resistance. Most insulin now in use is made by recombinant DNA technology and from an immunological perspective is equivalent to human insulin. ### Physiology In health, the pancreas secretes insulin in response to elevations of blood glucose, such as occur after meals. It stimulates cells, esp. in muscular tissue, to take up sugar from the bloodstream. It also facilitates the storage of excess glucose as glycogen in the liver and prevents the breakdown of stored fats. In type 1 diabetes mellitus, failure of the beta cells to produce insulin results in hyperglycemia and ketoacidosis. ### Dosage The insulin dosage should always be expressed in units. There is no average dose of insulin for diabetics; each patient must be assessed and treated individually Doses are titrated gradually to achieve near normal glucose levels, about 90–125 mg/dl. ### Storage The FDA requires that all preparations of insulin contain instructions to keep in a cold place and to avoid freezing. ### CAUTION! Those who use insulin should wear an easily seen bracelet or necklace stating that they have diabetes and use the drug. This helps to ensure that patients with hypoglycemic reactions will be diagnosed and treated promptly. See: analog ### insulin aspart A rapidly acting insulin administered subcutaneously, with action similar to that of insulin lispro. Aspartic acid replaces proline at a crucial position in the insulin molecule. ### biphasic insulin An insulin preparation that includes two components, typically a rapidly acting insulin, e.g., regular insulin, and an insulin that has a longer duration of action, e.g., NPH insulin. ### insulin glargine A form of insulin that provides basal insulin coverage throughout the day, with little variation in drug levels. It is typically administered as a single injection (often at bedtime) and is usually part of a regimen that includes multiple injections of short-acting insulins or multiple doses of metformin at meal time. It is made by changing the glycine and arginine content of the insulin polypeptide. ### human insulin Insulin prepared by recombinant DNA technology utilizing strains of Escherichia coli. In its effect it is similar to insulins secreted by the human pancreas. Trade names are Humulin and Novolin. Synonym: Novolin 70/30 See: Humulin 50/50; Humulin 70/30; insulin for table ### inhaled insulin Insulin given by inspiration, with the use of an inhaler. It may be composed of liquid droplets or a dry powder. One inhaled insulin product was removed from use in 2008 because of its adverse effects on the lungs. 
See: site ### insulin isophane suspension Intermediate-acting insulin with onset in 1/2 to 1 hr and a duration of 18 to 28 hr. See: insulin for table ### insulin lipodystrophy See: lipodystrophy ### insulin lispro A synthetic insulin with a very rapid onset and short duration of action. Diabetic patients typically use it immediately before meals to prevent postprandial hyperglycemia. Its absorption is more rapid than regular insulin. It is made by reversing the amino acids lysine and proline in the beta chain of the insulin polypeptide (hence its name lispro). ### monocomponent insulin Single-component insulin. ### insulin protamine zinc suspension Long-acting insulin with onset in 6 to 8 hr and a duration of 30 to 36 hr. See: insulin for table See: pump ### insulin shock Hypoglycemic shock. ### single-component insulin Highly purified insulin that contains less than 10 parts per million of proinsulin, which is capable of inducing formation of anti-insulin antibodies. Synonym: monocomponent insulin ### synthetic insulin Insulin made by the use of recombinant DNA technology. ### insulin zinc extended suspension Long-acting insulin with onset in 5 to 8 hr and duration of more than 36 hr. ### insulin zinc prompt suspension Fast-acting insulin with onset less than 1 hr and a duration of 12 to 16 hr.

* These times are estimates and may vary in individual patients.
** Contain NPH plus a rapid-acting insulin (Aspart, Lispro, or Regular); Novolog 70/30 contains 70% NPH, 30% Novolog

| Type of Insulin | Generic (Trade Names) | Onset (hr) | Maximum (hr) | Duration (hr) |
|---|---|---|---|---|
| Very rapid | Aspart (NovoLog) | 0.2–0.5 | 1–3 | 3–5 |
| Very rapid | Lispro (Humalog) | 0.2–0.5 | 0.5–2.5 | 3–5 |
| Very rapid | Glulisine (Apidra) | 0.2–0.5 | 1.6–2.8 | 3–4 |
| Rapid | Regular | 0.5–1.0 | 2.5–5 | 4–6 |
| Intermediate-acting | NPH (Humulin N, Novolin N) | 2–4 | 4–12 | 10–18 |
| Fixed-dose combination insulins ** | 70/30, 50/50, etc. | Variable, depending on mixture used | | |
| Very long-acting | Lantus (Glargine) | 2–4 | none | 11–32 |
| Very long-acting | Detemir (Levemir) | 3–4 | 3–9 | 6–23 |
| Dose dependent | U 500 regular very concentrated (5 X U100) | 0.5–1.0 | 2.5–5 | up to 24 hr |
The prefix ‘Human’ was deleted from insulin products in mid-2003. ## insulin the hormone controlling the amount of blood sugar, which is secreted by the beta cells of the ISLETS OF LANGERHANS in the pancreas. Insulin has three targets: the liver, the muscles, and adipose tissue, where its action helps to reduce the blood sugar level in the following ways: 1. it stimulates the absorption of more glucose from the blood into respiring cells, by altering cell-membrane permeability; 2. it stimulates the conversion of glucose into GLYCOGEN in the liver and muscles, reducing the supply of free glucose; 3. it promotes the conversion of glucose into fats in the liver and adipose cells (LIPOGENESIS); 4. it inhibits GLUCONEOGENESIS; 5. it promotes GLYCOLYSIS of glucose in all cells. Underproduction of insulin causes diabetes mellitus , resulting in an increase in blood sugar (hyperglycaemia) and sugar appearing in the urine (see GLYCOSURIA). The condition can be fatal if untreated, treatment being by injection of insulin into the blood stream. The hormone cannot be taken orally as, being a protein, it would be digested. Insulin was discovered by BANTING and BEST in 1921. The control of blood sugar, where a change in its level automatically brings about the opposite effect, is a good example of a negative FEEDBACK MECHANISM. ## Insulin A hormone secreted by the pancreas in response to high blood sugar levels that induces hypoglycemia. Insulin regulates the body's use of glucose and the levels of glucose in the blood by acting to open the cells so that they can intake glucose. ## insulin a polypeptide hormone produced by the beta cells of the islets of Langerhans in the pancreas, associated mainly with regulation of blood glucose, in which it exerts an opposite effect to that of glucagon. Involved also in distribution, utilization and storage of protein and fat, as well as of carbohydrate, and in interconversion among them. Insulin secretion is stimulated by a rising blood glucose concentration and by the parasympathetic nervous system. It lowers blood glucose by promoting its transport into cells (notably muscle and fat cells) and diminishing its output from the liver, and it promotes formation of glycogen in liver and muscle. An absolute or relative lack of insulin results in hyperglycaemia (high blood glucose) and presence of glucose in the urine (glycosuria), along with decreased utilization of carbohydrate and increased breakdown of fat and protein: the condition of diabetes mellitus. Sporting activity by diabetics tends to reduce blood glucose, so good diabetic control with frequent blood sugar testing and adjustment of insulin dosage is important. See also diabetes. ## insulin pancreatic hormone promoting glucose utilization, protein synthesis and formation and storage of neutral lipids; acts by binding to tissue cell membrane insulin receptors, triggering membrane transport processes that move glucose, amino acids and electrolytes in and out of cells; synthesized by beta cells of islets of Langerhans and released into circulation at 1 unit per hour (food intake increases insulin release 5-10-fold); normal total daily secretion = 40 units ## insulin (inˑ·s·lin), n hormone produced by the pancreas that regulates blood glucose levels by stimulating the absorption of sugars into the cells. Insulin injection sites. 
## in·su·lin (in'sŭ-lin) [MIM*176730] Polypeptide hormone, secreted by β cells in islets of Langerhans, which promotes glucose use, protein synthesis, and formation and storage of neutral lipids; available in various preparations including genetically engineered human insulin, which is currently favored; used parenterally to treat diabetes mellitus. [L. insula, island, + -in] ## insulin (antidiabetic hormone) (in´-səlin´ an´tēdī´əbet´ik), n a hormone produced by the beta cells of the islets of Langerhans in the pancreas. It promotes a decrease in blood sugar. Its action may be influenced by the pituitary growth hormone, adrenocorticotropic hormone; hormones of the adrenal cortex; epinephrine; glucagon; and thyroid hormone. ## insulin (obtained from beef or pork, or human recombinant technology), n brand names: Velosulin, Humulin R, Novolin R, Lente Insulin; drug class: exogenous insulin, antidiabetic; action: decreases blood glucose; important in regulation of fat and protein metabolism; uses: ketoacidosis; type 1 and type 2 diabetes mellitus; hyperkalemia; hyperalimentation. insulin, exogenous n a type that comes from a source external to a diabetic patient's body, taken to offset the patient's natural deficiency of insulin. insulin, intermediate-acting, n a type that is a medium between rapid-acting and long-acting insulins; the onset is not as fast as rapid-acting insulin, but it reaches its peak action over a 4- to 12-hour period. insulin, Lente n.pr an intermediate-acting type that reaches its peak action over a 4- to 12-hour period. insulin, Lispro, n.pr a rapid-acting type that reaches its peak action in 30 to 90 minutes. insulin, long-acting, n a type that has a slow onset but reaches its peak action from 12 to 16 hours after administration. insulin, NPH, n a synthetic type used to treat diabetes. Classified as intermediate acting; peak action occurs 4 to 10 hours after administering. insulin, rapid-acting, n a synthetic type of insulin used to treat diabetes. Reaches peak action 30 to 90 minutes after administering. insulin, regular, n a synthetic type used to treat diabetes. Classified as short acting; peak action occurs 2 to 3 hours after administering. insulin resistance, n a complication of diabetes mellitus characterized by a need for more than 200 units of insulin per day to control hyperglycemia and ketosis. The cause is associated with insulin binding by high levels of antibody. insulin shock, insulin, short-acting, n a synthetic type used to treat diabetes. Reaches peak action 2 to 3 hours after administering. Also called regular insulin. insulin, ultralente n a synthetic type used to treat diabetes. Classified as long acting, with peak action occurring 12 to 16 hours after administering. ## insulin a double-chain peptide hormone formed from proinsulin in the beta cells of the pancreatic islets of Langerhans. Insulin promotes the storage of glucose and the uptake of amino acids, increases protein and lipid synthesis, and inhibits lipolysis and gluconeogenesis. The secretion of endogenous insulin is a response of the beta cells to a stimulus. The primary stimulus is glucose; others are amino acids, particularly leucine, and the 'gut hormones', such as secretin, pancreozymin and gastrin. These chemicals play an important role in maintaining normal blood glucose levels by triggering the release of insulin after ingestion of a meal. 
Commercially prepared insulin is available in various types, which differ in the speed with which they act and in the duration of their effectiveness. There are three main groups: rapid acting (regular or semilente), intermediate acting (isophane suspension or NPH, zinc suspension or lente), and long acting (protamine zinc suspension or PZI, or ultralente). Mixtures are also marketed. insulin deficiency diabetes mellitus. insulin-dextrose therapy a combination used in emergencies to lower blood potassium levels in acute hypoadrenocorticism. insulin:glucagon ratio ratio of insulin to glucagon; thought to determine the predominance of the action of one hormone over the other. insulin:glucose ratio a comparison of simultaneously obtained blood levels of immunoreactive insulin and plasma glucose. An increased ratio suggests an insulin-secreting tumor of the pancreas. A modification is the amended insulin:glucose ratio, based on the calculation: $$\frac{\text{serum insulin}\ (\mu\text{U/ml}) \times 100}{\text{plasma glucose (mg/dl)} - 30}$$ immunoreactive insulin radioimmunoassay methods are used in determining blood levels of insulin. Increased levels are found with hypoglycemia caused by functional islet cell tumors. insulin pump a device consisting of a syringe filled with a predetermined amount of short-acting insulin, a plastic cannula and a needle, and a pump that periodically delivers the desired amount of insulin. Sometimes used in humans, but of limited application in animals. insulin sensitivity test, insulin response test used to differentiate diabetes mellitus from pituitary and adrenal diabetes. A test dose of exogenous insulin will produce a rapid and marked decrease in blood glucose if the pancreas is not secreting sufficient insulin. A much less dramatic response is produced if hyperglycemia is due to excessive secretion of either pituitary or adrenocortical hormones rather than insufficient insulin production. insulin syringe disposable syringe with a capacity of 1 ml or less and a fine gauge needle (27-29G) attached, and graduation markings corresponding to insulin units in standard preparations. Needles may also be treated to minimize pain on injection. Q. what does an insulin shot do? and what is it good for? A. Insulin is a hormone (substance that controls the activity of the body) that enables muscles and fat to use the glucose (sugar) we get from the diet as a source of energy for activity or for storage as fat. Thus, it lowers the concentration of glucose in the blood. It's produced and secreted from the pancreas, a gland located in the back of the abdomen. When people don't have insulin, or if the body doesn't respond to insulin (essentially diabetes mellitus type 1 and 2, respectively), therapy with insulin helps the body maintain a normal level of glucose. Excessive concentration of glucose in the blood is termed "hyperglycemia" and is deleterious in the long term. You may read more here: http://en.wikipedia.org/wiki/Insulin Q. Why is insulin injected and not taken as a pill? A. so if that's the case, why can't you use a patch (like a nicotine patch)? wouldn't that do the same trick? Q. is there an alternative for the Insulin shots? something less painful but yet effective as the old way? A. Here is a good site on alternative insulin delivery: http://www.diabetes.org/for-parents-and-kids/diabetes-care/alternative-insulin.jsp Hope this helps.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31052231788635254, "perplexity": 10473.320625404138}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689490.64/warc/CC-MAIN-20170923052100-20170923072100-00422.warc.gz"}
https://orinanobworld.blogspot.com/2015/07/
## Tuesday, July 28, 2015

### ORiginals - Videos About Research

ORiginals is a YouTube channel co-hosted by Dr. Banafsheh Behzad (@banafsheh_b) of CSU Long Beach and my colleague Dr. David Morrison (@drmorr0). They present short (five or six minute) videos featuring researchers describing their research to a general (non-expert) audience. Their tag line is "Outstanding research in everyday language", and I think the first two installments have lived up to that mantra. The first two videos, by Dr. Behzad and the net-biquitous Dr. Laura McLay (@lauramclay) of the University of Wisconsin, fall into the category of operations research. The aim of the channel, however, is more general. Quoting Dr. Behzad: The goal of ORiginals is to promote science and engineering topics among the general public, using everyday language. We are featuring a diverse selection of scientists doing cutting-edge research. This is the first season of ORiginals and even though we aren't specifically OR/MS-focused, we'll have a slight bias in that direction with our guest selection, as David and I are both OR people. If you're interested in seeing quality research explained in lay terms, I highly recommend subscribing to the channel. If you're doing scientific/engineering research that has measurable impact (or the potential for measurable impact) in the real world (sorry, boson-chasers), and you'd like to spread the gospel, I suggest you contact one of the co-hosts. (They're millennials, so a DM on Twitter is probably more effective than an email message.)

## Saturday, July 25, 2015

### Shiny Hack: Vertical Scrollbar

I bumped into a scrolling issue while writing a web-based application in Shiny, using the shinydashboard package. Actually, there were two separate problems.

1. The browser apparently cannot discern page height. In Firefox and Chrome, this resulted in vertical scrollbars that could scroll well beyond the bottom of a page. That's mildly odd, but not a problem as far as I'm concerned. In Internet Exploder, however, the page height was underestimated, and as a result in some cases it was not possible to reach the bottom of the page (at least not with the vertical scrollbar).
2. In Internet Exploder only, the viewport scrollbar, on the right side of the window, behaves intermittently. If I click on the "elevator car" (handle) while it is at the top of the bar, it jumps to the bottom of the track, and the spot where I clicked gains a duplicate copy of the up arrow icon that appears just above the handle. If the handle is at the bottom of the bar, it behaves symmetrically. The down arrow icon on the vertical scrollbar lets you scroll downward, but not fully to the bottom of the page.

I have only seen the second problem on one machine, so I don't know if it is specific to a particular version of IE, but the first problem was reported by two different users (and I saw it myself). As a kludge to get around the first problem, which in my app is triggered by extensive help text (and some input controls) in the sidebar that makes the sidebar taller than the main body, I decided to introduce a separate vertical scrollbar in the sidebar. That turned out to be tricky, or at least I could not find an easy, documented method. I thought I would share the code that ultimately did the job for me. It goes in the ui.R file.

dashboardSidebar(
  tags$head(
    tags$style(HTML("
      .sidebar {
        height: 90vh;
        overflow-y: auto;
      }
    "))
  ),
  ...

The height: 90vh style attribute sets the height of the sidebar at 90% of the viewport height, so that it adjusts automatically if the user maximizes or resizes the window, opens or closes a tool bar, etc. You need to pick a percentage that works for your particular application. Make it too large and the inability to scroll to the bottom of the sidebar will persist. Make it too small and the sidebar will be noticeably shorter than the main body, leaving a gap at the bottom of the sidebar (and introducing a vertical scrollbar when the entire sidebar is already visible). Three last notes on the scrolling issue:

• In my application, the scrolling problem only appeared on pages where the sidebar was taller than the main body (as far as I know).
• Although the vertical scrollbar in IE is balky, scrolling via the mouse wheel (if you have one) or the arrow keys seems to work fine.
• This is as yet untested on Safari.

## Thursday, July 23, 2015

### Autocorrupt in R

You know that "autocomplete" feature on your smart phone or tablet that occasionally (or, in my case, frequently) turns into an "autocorrupt" feature? I just ran into it in an R script. I wrote a web-based application for a colleague that lets students upload data, run a regression, ponder various outputs and, if they wish, export (download) selected results. In the server script, I created an empty list named "export". As users generated various outputs, they would be added to the list for possible download (to avoid having to regenerate them at download time). For instance, if the user generated a histogram of the residuals, then the plot would be stored in export$hist. Similarly, if the user looked at the adjusted R-squared, it would be parked in export$adjr2. All was well until, in beta testing, I bumped into a bug involving the p-value for the F test of overall fit (you know, the test where failure to reject the null hypothesis would signal that your model contended for the worst regression model in the history of statistics). Rather than getting a single number between 0 and 1, in one test it printed out as a vector of numbers well outside that range. Huh??? I beat my head against an assortment of flat surfaces before I found the bug. The following chunk of demonstration code sums it up.

export <- list()                 # create an empty export list
print(export$f)                  # result: NULL
export$fitted <- c(2, 3, 1, 7)   # (simulated) fitted values
print(export$f)                  # result: [1] 2 3 1 7

Created by Pretty R at inside-R.org

The intent was to store the p-value of the test of overall fit in export$f, and the fitted values in export$fitted. If the user never checked the F test, I wanted export$f to be null, which would signal the export subroutine to skip it. Instead, the export subroutine autocompleted export$f (which did not exist) to export$fitted (which did exist) and spat out the mystery vector. There are multiple ways to avoid the bug, the simplest being to rename export$f to something like export$fprob, where "fprob" is not a substring of the name of any other entry of export. I do my R coding inside RStudio, which provides autocompletion suggestions. Somewhere along the line, I think I came across the fact that the R interpreter autocompletes some things. It never occurred to me that this would happen when a script ran. When running commands interactively, I suppose the autocomplete feature saves some keystrokes. That's not generally an issue when running scripts, so I don't know why autocomplete is not turned off when "sourcing" a script.
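As an aside, here is a small sketch (plain base R, nothing specific to the app described above) of two ways to guard against this partial-matching trap: index the list with [[ ]], which matches names exactly by default, or turn on the warnPartialMatchDollar option so that $ at least warns when it falls back to a partial match.

export <- list()
export$fitted <- c(2, 3, 1, 7)    # simulated fitted values, as in the demo above

export$f                          # partial match: silently returns export$fitted
# [1] 2 3 1 7

export[["f"]]                     # [[ ]] uses exact matching, so a missing entry stays NULL
# NULL

options(warnPartialMatchDollar = TRUE)
export$f                          # still matches, but now with a warning
# [1] 2 3 1 7
# Warning message: partial match of 'f' to 'fitted'

Either guard would have surfaced the bug immediately; renaming the entry, as described above, removes the ambiguity altogether.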
At any rate, letting the betting commence on how long it will take me to forget this (and trip over it again).

## Thursday, July 2, 2015

### Tabulating Prediction Intervals in R

I just wrapped up (knock on wood!) a coding project using R and Shiny. (Shiny, while way cool, is incidental to this post.) It was a favor for a friend, something she intends to use teaching an online course. Two of the tasks, while fairly mundane, generated code that was just barely obscure enough to be possibly worth sharing. It's straight R code, so you need not use (or have installed) Shiny to use it. The first task was to output, in tabular form, the coefficients of a linear regression model, along with their respective confidence intervals. The second task was to output, again in tabular form, the fitted values, confidence intervals and prediction intervals for the same model. Here is the function I wrote to do the first task (with Roxygen comments):

#'
#' Summarize a fitted linear model, displaying both coefficient significance
#' and confidence intervals.
#'
#' @param model an instance of class lm
#' @param level the confidence level (default 0.95)
#'
#' @return a matrix combining the coefficient summary and confidence intervals
#'
model.ctable <- function(model, level = 0.95) {
  cbind(summary(model)$coefficients, confint(model, level = level))
}

To demonstrate its operation, I'll generate a small sample with random data and run a linear regression on it.

x <- rnorm(20)
y <- rnorm(20)
z <- 6 + 3 * x - 5 * y + rnorm(20)
m <- lm(z ~ x + y)

Created by Pretty R at inside-R.org

I'll generate the coefficient table using confidence level 0.9, rather than the default 0.95, for the coefficients.

model.ctable(m, level = 0.9)

The output is as follows:

             Estimate Std. Error   t value     Pr(>|t|)       5 %      95 %
(Intercept)  6.039951  0.2285568  26.42648 3.022477e-15  5.642352  6.437550
x            3.615331  0.2532292  14.27691 6.763279e-11  3.174812  4.055850
y           -5.442428  0.3072587 -17.71285 2.156161e-12 -5.976937 -4.907918

The code for the second table (fits, confidence intervals and prediction intervals) is a bit longer:

#'
#' Compute a table of fitted values, confidence intervals and
#' prediction intervals from a regression model.
#'
#' @param model a fitted regression model
#' @param level the desired confidence level (default 0.95)
#' @param names the names to assign to the columns (after
#' resequencing if necessary)
#' @param order the order in which to list the columns
#' (1 = fitted, 2 = lower c.i. limit, 3 = upper c.i. limit,
#' 4 = lower p.i. limit, 5 = upper p.i. limit)
#'
#' @return a matrix with one row per observation and five
#' columns (fitted value, lower/upper c.i. bounds, lower/upper
#' p.i. bounds) in the order specified by the user
#'
intervals <- function(model, level = 0.95,
                      names = c("Fitted", "CI Low", "CI High", "PI Low", "PI High"),
                      order = c(4, 2, 1, 3, 5)) {
  # generate fits and confidence intervals
  temp <- predict(model, interval = "confidence", level = level)
  # generate fits and prediction intervals (suppressing
  # the warning about predicting past values)
  temp2 <- suppressWarnings(
    predict(model, interval = "prediction", level = level)
  )
  # drop the redundant fit column
  temp2 <- temp2[, 2:3]
  # merge the tables and reorder the columns
  temp <- cbind(temp, temp2)[, order]
  # rename the columns
  colnames(temp) <- names[order]
  temp
}

Here is the call with default arguments (using head() to limit the amount of output):

head(intervals(m))

The output is this:

      PI Low      CI Low     Fitted   CI High   PI High
1 -0.7928115  0.65769280  1.5196870  2.381681  3.832185
2  7.9056270  9.40123642 10.1928094 10.984382 12.479992
3  4.9125024  6.61897662  7.1149000  7.610823  9.317298
4  7.3386447  8.66123993  9.7406923 10.820145 12.142740
5 -1.4295587  0.05464529  0.8637503  1.672855  3.157059
6  4.1962493  5.84893725  6.4156619  6.982387  8.635074

Finally, I'll run it again, changing the confidence level to 0.9, tweaking the column headings a bit, and reordering them:

head(intervals(m, level = 0.9,
               names = c("Fit", "CI_l", "CI_u", "PI_l", "PI_u"),
               order = 1:5))

The output is:

         Fit      CI_l      CI_u       PI_l      PI_u
1  1.5196870 0.8089467  2.230427 -0.3870379  3.426412
2 10.1928094 9.5401335 10.845485  8.3069584 12.078660
3  7.1149000 6.7059962  7.523804  5.2989566  8.930843
4  9.7406923 8.8506512 10.630733  7.7601314 11.721253
5  0.8637503 0.1966188  1.530882 -1.0271523  2.754653
6  6.4156619 5.9483803  6.882943  4.5856891  8.245635

By the way, all syntax highlighting was Created by Pretty R at inside-R.org.
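For readers who want to see what the table actually contains, here is a small sketch (using the same model m as above) that rebuilds the first row by hand from the standard normal-theory quantities that predict.lm reports: the confidence interval uses the standard error of the fitted mean, while the prediction interval adds the residual variance.

lvl   <- 0.95
pr    <- predict(m, se.fit = TRUE)
tcrit <- qt(1 - (1 - lvl) / 2, df = df.residual(m))
fit1  <- pr$fit[1]
se.ci <- pr$se.fit[1]                                  # s.e. of the fitted mean
se.pi <- sqrt(pr$se.fit[1]^2 + pr$residual.scale^2)    # add the residual variance for prediction
c(Fitted  = fit1,
  CI.Low  = fit1 - tcrit * se.ci, CI.High = fit1 + tcrit * se.ci,
  PI.Low  = fit1 - tcrit * se.pi, PI.High = fit1 + tcrit * se.pi)
# These numbers should match the first row of intervals(m) up to rounding.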
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4194698929786682, "perplexity": 2929.783647970426}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00415.warc.gz"}
https://export.arxiv.org/abs/2005.01839
econ.TH # Title: Equilibria of nonatomic anonymous games Abstract: We add here another layer to the literature on nonatomic anonymous games started with the 1973 paper by Schmeidler. More specifically, we define a new notion of equilibrium which we call $\varepsilon$-estimated equilibrium and prove its existence for any positive $\varepsilon$. This notion encompasses and brings to nonatomic games recent concepts of equilibrium such as self-confirming, peer-confirming, and Berk--Nash. This augmented scope is our main motivation. At the same time, our approach also resolves some conceptual problems present in Schmeidler (1973), pointed out by Shapley. In that paper the existence of pure-strategy Nash equilibria has been proved for any nonatomic game with a continuum of players, endowed with an atomless countably additive probability. But, requiring Borel measurability of strategy profiles may impose some limitation on players' choices and introduce an exogenous dependence among players' actions, which clashes with the nature of noncooperative game theory. Our suggested solution is to consider every subset of players as measurable. This leads to a nontrivial purely finitely additive component which might prevent the existence of equilibria and requires a novel mathematical approach to prove the existence of $\varepsilon$-equilibria. Subjects: Theoretical Economics (econ.TH); Computer Science and Game Theory (cs.GT) Cite as: arXiv:2005.01839 [econ.TH] (or arXiv:2005.01839v1 [econ.TH] for this version) ## Submission history From: Fabio Angelo Maccheroni [view email] [v1] Mon, 4 May 2020 20:45:24 GMT (34kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5774707794189453, "perplexity": 2497.147366173775}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989820.78/warc/CC-MAIN-20210518033148-20210518063148-00094.warc.gz"}
https://www.mersenneforum.org/showpost.php?s=6611e5a07a0787c83e681835aa86f410&p=567603&postcount=2
2020-12-29, 04:43   #2 CRGreathouse Aug 2006 1761₁₆ Posts

Quote: Originally Posted by ONeil Although I really wanted to find a Mersenne Prime number with 2^109947391-1, the facts are facts and it has factors other than 1 and itself. I spent a couple of weeks messing around with Pythonic code tweaking it to see if I could reveal factors. Well 2^109947391-1 starts its factors low with the number 13 and produces a monster cofactor, the cofactor I cannot put in the spoiler, because its to large.

Code:
> Mod(2,13)^109947391-1
%1 = Mod(10, 13)

Sorry, try again next time. You might want to read up on the special form of Mersenne divisors.
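To spell out the hint (a sketch of my own, not from the thread): if the exponent p = 109947391 is prime, as a Mersenne-prime candidate exponent would be, then every prime divisor q of 2^p - 1 must have the form q = 2kp + 1, so nothing as small as 13 can possibly divide it. A direct remainder computation, written here in R rather than PARI, reproduces the Mod(10, 13) shown above.

# modular exponentiation by repeated squaring; the modulus is tiny, so plain doubles suffice
powmod <- function(base, expo, m) {
  r <- 1
  base <- base %% m
  while (expo > 0) {
    if (expo %% 2 == 1) r <- (r * base) %% m
    base <- (base * base) %% m
    expo <- expo %/% 2
  }
  r
}

powmod(2, 109947391, 13) - 1   # remainder of 2^109947391 - 1 modulo 13
# [1] 10                       # nonzero, so 13 is not a factor

2 * 109947391 + 1              # smallest value a prime factor 2*k*p + 1 could take (k = 1)
# [1] 219894783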
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.23613803088665009, "perplexity": 1673.7772566541585}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039398307.76/warc/CC-MAIN-20210420122023-20210420152023-00254.warc.gz"}
https://discuss.tlapl.us/msg03314.html
# Re: [tlaplus] Re: How to understand the concept "step simulation" hi Stephan, thanks for the informative example. I am familiar with inductive invariants, my sticking point is that the formula in question does not require Inv to be initially true. Here's how it appears So as it stands, it seems to me we can establish the 2nd conjunct by setting Inv to False. Or am I still missing something? thanks On Thursday, December 5, 2019 at 1:44:46 AM UTC-8, Stephan Merz wrote: Let me try another attempt for explaining the issue. To make things simpler, let's leave aside the issue of refinement mapping, that is, consider just the identity refinement mapping. We want to prove (0) InitL /\ [][NextL]_varsL => InitH /\ [][NextH]_varsH Obviously, this can be reduced to proving (1) InitL => InitH (2) NextL \/ varsL' = varsL => NextH \/ varsH' = varsH However, the obligation (2) is very likely going to be unprovable because really, we only have to prove that implication for all reachable states (i.e., reachable in runs of the low-level spec) rather than for completely arbitrary states, and that's where the invariant comes in. Let's look at a stupid example: our high-level specification has one variable initialized to zero and that keeps growing while remaining an even number: Even == {n \in Int : n % 2 = 0} InitH == x = 0 NextH == x' > x /\ x' \in Even Our low-level specification has two variables x and y that evolve as follows: InitL == x = 0 /\ y = 0 NextL == x' = x+y /\ y' = y+2 Condition (2) now requires us to show (x' = x+y /\ y'=y+2) \/ (x'=x /\ y'=y) => (x' > x /\ x' \in Even) \/ x' = x but this implication is not true and therefore cannot be proved: we have no information about the "types" of x and y, so they could be strings for example. Even if they are integers, we could have x=42 and y=7, x' = 49 and y'=9, and the right-hand side will be false. And indeed, such a state cannot be reached because the low-level spec ensures that x and y are always even. Therefore we define Inv == x \in Even /\ y \in Even and relax our proof obligations to be (1') InitL => InitH /\ Inv (2') Inv /\ (NextL \/ varsL' = varsL) => Inv' /\ (NextH \/ varsH' = varsH) which still implies our high-level goal (0). I leave proving (1') and (2') for our example as an exercise to you. Indeed, Inv can be an arbitrary state predicate and has to be "invented" by the system designer / verifier. But it must be an invariant of the low-level specification, and FALSE is unlikely to be one. You may also want to read section 6.8 of the Hyperbook. Hope this helps, Stephan On 5 Dec 2019, at 09:58, ss.ne...@xxxxxxxxx wrote: I have to admit I'm a bit confused by the explanation in the text too and don't quite see what Inv is representing. For example, can I set Inv to anything that allows me to prove the formula, what if I set it to False? Thanks On Sunday, March 10, 2019 at 12:52:43 PM UTC-7, Leslie Lamport wrote: (1) If we remove the Inv from the formula Inv /\ Next => ..., it would assert that a step starting in any state that satisfies Next satisfies "..." -- for example a state in which memQ is a sequence of imaginary numbers. I have no idea if that assertion is true for such a starting state. However, it suffices to prove the assertion for steps starting in a reachable state.  Conjoining the invariant Inv allows you to prove the assertion only for reachable states.  You have to choose Inv so it asserts what is true about reachable states that makes the implication true.  
To do this, you have to understand why the theorem you're trying to prove is true. (2) That mapping isn't derived; you have to invent it.  The sentence beginning "Intuitively" that starts on line 9 of page 63 tells you what condition that substitution must satisfy.  To be able to choose the necessary mapping, you need to understand why the theorem you're trying to prove is true. Leslie On Wednesday, March 6, 2019 at 10:53:35 PM UTC-8, Oliver Yang wrote: Hi All, In Section 5.8 of book "Specifying Systems", the "Proving Impl" is introduced. I have a rough understanding of refinement mapping, which essentially maps states of Spec A to the states of Spec B. However, I have a hard time understanding "step simulation". 1) What's the purpose of introducing the invariant Inv in Formula 5.3? What are we trying to achieve here? 2) How do we derive the mapping: omem = vmem, octl = ..., obuf = buf? It looks like we jumped to the conclusion without showing any proof? Thanks, Oliver
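[A worked note appended to the archived thread, not part of the original exchange.] For Stephan's toy example, obligation (1') can be checked directly, and it is the conjunct that rules out the choice Inv == FALSE raised earlier:

$$\mathit{InitL} \;\equiv\; (x = 0 \land y = 0) \;\Rightarrow\; \underbrace{x = 0}_{\mathit{InitH}} \;\land\; \underbrace{x \in \mathit{Even} \land y \in \mathit{Even}}_{\mathit{Inv}},$$

which holds because 0 is even. With Inv == FALSE, obligation (2') does become vacuously true, but (1') reduces to

$$\mathit{InitL} \;\Rightarrow\; \mathit{InitH} \land \mathrm{FALSE},$$

which is unprovable for any satisfiable InitL, so the two conjuncts have to be discharged together. In the book's formulation, as I read Sections 5.8 and 6.8, the same role is played by the separate requirement that Inv be an invariant of the low-level specification, which FALSE never is.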
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8115874528884888, "perplexity": 1894.9610581143427}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151972.40/warc/CC-MAIN-20210726000859-20210726030859-00653.warc.gz"}
https://mikepawliuk.ca/2017/05/23/euclidean-ramsey-theory-2-ramsey-doccourse-prague-2016/
# Euclidean Ramsey Theory 2 – Ramsey DocCourse Prague 2016 The following notes are from the Ramsey DocCourse in Prague 2016. The notes are taken by me and I have edited them. In the process I may have introduced some errors; email me or comment below and I will happily fix them. Title: Euclidean Ramsey Theory 2 (of 3). Lecturer: David Conlon. Date: November 25, 2016. Main Topics: Ramsey implies spherical, an algebraic condition for spherical, partition regular equations, an analogous result for edge Ramsey. Definitions: Spherical, partition regular. Lecture 1 – Lecture 2 – Lecture 3 Ramsey DocCourse Prague 2016 Index of lectures. ## Introduction In the first lecture we defined the relevant terms and then established that all (non-degenerate) triangles are Ramsey. In this lecture we will compare the property of being spherical with being Ramsey. In this lecture we will show that Ramsey implies spherical (or more precisely, that non spherical sets cannot be Ramsey). Definition. A set $X$ is spherical if there is an $n$ such that $X \subseteq S^n$. Typically $S$ will be finite, but this is not formally required. The proofs are those of Erdos et Al, and go by establishing a tight algebraic condition for a set being spherical. 1. Show that three points in a line are not Ramsey. 2. Define a partition regular equation. 3. Prove two colouring lemmas about partition regular equations. 4. Relate spherical sets to a tight algebraic condition. 5. Put everything together to prove that Ramsey implies spherical. ## (Evenly spaced) lines aren’t Ramsey Let $L = \{x,y,z\}$ where $d(x,y) = d(y,z) = 1$ and $d(x,z) = 2$; it is a line segment with three points equally spaced. Theorem. The line segment $L$ is not Ramsey. “The reason is you can take a spherical shell’ colouring.” These shell colourings are very important. This doesn’t work for cube colourings’ (i.e. using a different norm) since by Dvoretsky’s Theorem, hyperplane slices of cubes basically look spherical. Proof. Fix $n$. Define the colouring $\chi : \mathbb{R}^n \rightarrow \{0,1,2,3\}$ by $\chi(x) = \lfloor x \cdot x \rfloor$. (You’re taking spherical shells of radii $\sqrt{n}$.) [Picture] By the Cosine rule we get $a^2 = b^2 + 1 - 2b\cos(\theta)$ and $c^2 = b^2 + 1 + 2b\cos(\theta)$. So we get $a^2 + c^2 = 2b^2 +2$. Suppose that $x,y,z$ have the same colour. This means that there is an $i \in \{0,1,2,3\}$ such that $a^2 = 4k_1 + i + \epsilon_1$ and $b^2 = 4k_2 + i + \epsilon_2$ and $c^2 = 4k_3 + i + \epsilon_3$, where each $0 \leq \epsilon_j < 1$. Putting this into our cosine law info gives $\displaystyle 4(k_1 + k_3 - 2k_2) -2 = 2\epsilon_2 - \epsilon_1 - \epsilon_3,$ which is a contradiction since the left is $2 \mod 4$ but the right is strictly between $-2$ and $2$. ## Partition regular equations Eventually we will relate the condition of a set being spherical with a tight algebraic condition. With this in mind, we examine when algebraic conditions can yield Ramsey witnesses. We start with a general discussion of partition regular equations. Definition. An equation is partition regular if every finite colouring of $\mathbb{R}^n$ contains a monochromatic solution to the equation. For example, 1. Schur. $x + y = z$. 2. Van der Waerden. $x + y = 2z$. 3. Rado. A simple equation $\sum_{i=1}^k c_i x_i = 0$ is partition regular if and only if there is a non empty $I$ such that $\sum_{i \in I} c_i = 0$. Exercise. If the equation is translation invariant then you get a corresponding density result. 
Use this to show that you always get a non-trivial solution. ## Are there inhomogeneous equations that are partition regular? Two lemmas. First an example. Example. $x + y = z + 1$. We can homogenize this equation by replacing the variables. Use $x = x^\prime+1, y = y^\prime +1$ and $z = z^\prime+1$. This gives the equation $x^\prime + y^\prime = z^\prime$. Basically, these are the only types of partition regular equations. Lemma 1. There is a $2n$ colouring $\chi$ of $\mathbb{R}$ with no solution of $\displaystyle \sum_{i=1}^n (x_i - x^\prime_i) = 1$ with $\chi(x_i) = \chi(x^\prime_i)$ for all $i$. The number of colours is equal to the number of variables. This is a strong result of the equation not being partition regular. You can’t have a monochromatic solution, you can’t even have all the paired variables agree! The idea is to colour whether you are in a certain interval. Proof. Fix $n$. Colour $x \in \mathbb{R}$ with $j$ if $x \in [2m + \frac{j}{n}, 2m + \frac{j+1}{n}]$ for some integer $m$. If $\chi(x_i) = \chi(x^\prime_i)$, then $x_i - x^\prime_i = 2m_i + \epsilon_i$ where $\vert \epsilon_i \vert < \frac{1}{n}$. So $\displaystyle 1 = \sum_{i=1}^n (x_i - x^\prime_i) = \sum_{i=1}^n 2m_i + \sum_{i=1}^n \epsilon_i.$ Here the first sum is an even number, and the second is $< 1$, a contradiction. Now we increase the number of colours to deal with a more general equation. Lemma 2. There is a $(2n)^n$ colouring $\chi$ of $\mathbb{R}$ with no solution of $\displaystyle \sum_{i=1}^n c_i (x_i - x^\prime_i) = b \neq 0$ with $\chi(x_i) = \chi(x^\prime_i)$ for all $i$. Proof. Fix $n$. By dividing by $b$ it suffices to consider $b = 1$. Let $\chi$ be the ($2n$) colouring from Lemma 1. Define $\chi^\prime(x) = (\chi(c_1 x), \chi(c_2 x), \ldots, \chi(c_n x))$. Now if $\chi^\prime(x_i) = \chi^\prime(x^\prime_i)$, then $\chi(c_i x_i) = \chi(c_i x^\prime_i)$. So $c_i(x_i - x_i^\prime) = 2m_i + \epsilon_i$ where $\vert \epsilon_i \vert < \frac{1}{n}$. If this happens for all $i$, then we have a contradiction identical to the one in Lemma 1. In the original paper there was a similar lemma but it had a worse bound on the number of colours. This improvement was observed by Strauss a little later. Note that these equations are not susceptible to the “translation trick” since $(y_i + 1) - (y_i^\prime + 1) = y_i - y_i^\prime$. ## Characterizing spherical in terms of an algebraic condition The following is the main technical lemma. The proof is purely algebraic. Theorem. A set $X = \{\vec{x}_0, \ldots, \vec{x}_t\} \subset \mathbb{R}^n$ is not spherical if and only if there are constants $c_i$, not all $0$, such that $\displaystyle \sum_{i=1}^t c_i (\vec{x}_i - \vec{x}_0) = 0$ and $\displaystyle \sum_{i=1}^t c_i (\vec{x}_i^2 - \vec{x}_0^2) = \vec{b}.$ For readability, we will write $x$ instead of $\vec{x}$. We will make use of the following useful fact: Useful identity. $\displaystyle a^2 - b^2 = (a - c)^2 - (b - c)^2 + 2a \cdot c - 2 b \cdot c.$ Using $c=b$ yields $\displaystyle a^2 - b^2 = (a - b)^2 + 2b(a - b).$ Proof of $\Leftarrow$. Assume that $X$ is spherical and satisfies the first equation. We will show the second equality fails. Say $X$ has centre $w (\in \mathbb{R}^n)$ and radius $r$. For each $i$ we have: $r^2$ • $= (x_i - w) \cdot (x_i - w)$ • $= ((x_i -x_0) + (x_0 - w)) \cdot ((x_i -x_0) + (x_0 - w))$ • $= (x_i -x_0)^2 + (x_0 - w)^2 + 2(x_i - x_0)(x_0-w)$. Here the second term is $r^2$. So we must have $(x_i -x_0)^2 = -2(x_i - x_0)(x_0-w)$ for each $i$. 
So by multiplying by $c_i$ and adding up we get $\displaystyle \sum_{i=1}^t c_i (x_i - x_0)^2 = -2(x_0-w)\sum_{i=1}^t c_i (x_i-x_0) = 0.$ By using the special case of the useful identity, we get: $\displaystyle \sum_{i=1}^t c_i (x_i^2 - x_0^2) = \sum_{i=1}^t(x_i-x_0)^2 - 2x_0 \sum_{i=1}^t c_i (x_0 - x_i).$ We know the first sum is $0$ by our above calculations, and by assumption we know $\displaystyle 2x_0 \cdot \sum_{i=1}^t c_i (x_i - x_0) = 0,$ Proof of $\Rightarrow$. Assume $X$ is not spherical, and moreover that it is minimal (in the sense that removing any one point makes it spherical). In particular, $X$ is not a non-degenerate simplex. So there is a linear relation $\displaystyle \sum_{i=1}^t c_i (x_i - x_0).$ Assume that $c_t \neq 0$. By minimality, $\{x_0, \ldots, x_{t-1}\}$ is spherical, and is on a sphere with centre $w$ and radius $r$. Thus $\displaystyle x_i^2 - x_0^2 = (x_i - w)^2 - (x_0 - w)^2 + 2x_i \cdot w - 2 x_0 \cdot w.$ So $\displaystyle \sum c_i (x_i^2 - x_0^2) = \sum c_i ((x_i - w)^2 - (x_0 - w)^2) + 2w \cdot \sum c_i (x_i - x_0),$ here the second sum is $0$, and the first, by minimality, is $\displaystyle c_t ((x_t - w)^2 - (x_0 - w)^2) \neq 0,$ which isn’t $0$ since the distances of $x_t$ and $x_0$ to $w$ are different. ## Ramsey implies spherical We are now in a position to put everything together. Theorem. All Ramsey sets are spherical. Proof. Assume $X$ is not spherical. So there are constants $c_1, \ldots, c_t$ and a vector $\vec{b} \neq \vec{0}$ such that $\displaystyle \sum c_i (\vec{x}_i - \vec{x}_0) = 0$ and $\displaystyle \sum c_i (\vec{x}_i^2 - \vec{x}_0^2) = \vec{b}.$ Technical exercise. Any congruent copy of $X$ satisfies the same equations. (Use the fact that congruence is formed by rotations and translations. The translations will spit out terms like $\star$.) In every non-zero coordinate of $\vec{b}$ use the colouring $\chi$ from Lemma 2, and set $\chi^\prime(x) = \chi(x^2)$. This will give no monochromatic solution to $\displaystyle \sum c_i (\vec{x}_i^2 - \vec{x}_0^2) = \vec{b}.$ This is the end of this lecture’s material on point-Ramsey. We shift gears a little now. ## Edge Ramsey Instead of colouring points, we can colour pairs of points. This leads to the notion of edge Ramsey. We mention two results in this area. Theorem. If the edge set $X$ is not vertex spherical and not bipartite, it is not edge Ramsey. Proof. Suppose the vertex set is not spherical. Colour the points, using $\chi$, so that no copy of $X$ has a monochromatic vertex set. Now colour the edge $(x,y)$ with $\chi^\prime (x,y) = (\chi(x), \chi(y))$. Each edge has the same colour and must contain two distinct vertex colours. So the edge set is bi-partite. This gives us an analogous theorem to the theorem that Ramsey implies spherical. Theorem. If $X$ is edge Ramsey then the points lie on two concentric spheres. The proof is a variation on what we’ve seen. ## References See lecture 1 for references.
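As an appended sanity check (not part of the lecture notes), the algebraic characterization above is easy to test numerically; the points below are illustrative choices of my own, and the code is R.

sq <- function(v) sum(v * v)   # squared Euclidean norm

# (a) A spherical set: four "compass" points of the circle of radius 2 centred at (3, -1).
x0 <- c(5, -1); x1 <- c(3, 1); x2 <- c(1, -1); x3 <- c(3, -3)
cc <- c(1, -1, 1)              # chosen so that c_1(x1-x0) + c_2(x2-x0) + c_3(x3-x0) = 0
cc[1] * (x1 - x0) + cc[2] * (x2 - x0) + cc[3] * (x3 - x0)
# [1] 0 0                      # first equation holds
cc[1] * (sq(x1) - sq(x0)) + cc[2] * (sq(x2) - sq(x0)) + cc[3] * (sq(x3) - sq(x0))
# [1] 0                        # second sum vanishes, as the theorem predicts for spherical sets

# (b) A non-spherical set: three equally spaced collinear points y0, y0 + u, y0 + 2u.
y0 <- c(1, 2); u <- c(0.6, -0.8)   # |u| = 1
y1 <- y0 + u; y2 <- y0 + 2 * u
dd <- c(2, -1)                     # 2(y1 - y0) - (y2 - y0) = 0
dd[1] * (y1 - y0) + dd[2] * (y2 - y0)
# [1] 0 0                          # first equation holds
dd[1] * (sq(y1) - sq(y0)) + dd[2] * (sq(y2) - sq(y0))
# [1] -2                           # equals -2|u|^2, the nonzero b of the theorem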
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 130, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9576830863952637, "perplexity": 423.19012158347357}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141164142.1/warc/CC-MAIN-20201123182720-20201123212720-00462.warc.gz"}
https://www.physicsforums.com/threads/positronium-from-vacuum-fluctuations.750170/
# Positronium from vacuum fluctuations ? 1. Apr 22, 2014 ### xortdsc Hi, I wondered if it is theoretically possible that the vacuum energy produces an electron/positron pair which then bonds into positronium instead of directly annihilating again. And if it is theoretically possible has this ever been observed ? Thanks and cheers. 2. Apr 22, 2014 ### The_Duck No. This would violate energy conservation. 3. Apr 23, 2014 ### xortdsc I see. So how could I visualize this ? The electron/positron pair which can be spontanously produced by vacuum fluctuations (which should be possible, causing the casimir effect) do not separate far enough to escape each other (to produce a "real" electron/positron pair) nor to separate enough to create a positronium system ? Is that right ? 4. Apr 23, 2014 ### craigi It's a really new paper. http://arxiv.org/pdf/1404.5243v1.pdf Submitted on 21 Apr 2014 Abstract: Positron scattering and annihilation on noble gas atoms below the positronium formation threshold is studied ab initio using many-body theory methods. The many-body theory provides a near-complete understanding of the positron-noble-gas-atom system at these energies and yields accurate numerical results. It accounts for positron-atom and electron-positron correlations, e.g., polarization of the atom by the incident positron and the non-perturbative process of virtual positronium formation. These correlations have a large effect on the scattering dynamics and result in a strong enhancement of the annihilation rates compared to the independent-particle mean-field description. Computed elastic scattering cross sections are found to be in good agreement with recent experimental results and Kohn variational and convergent close-coupling calculations. The calculated values of the annihilation rate parameter Zeff (effective number of electrons participating in annihilation) rise steeply along the sequence of noble gas atoms due to the increasing strength of the correlation effects, and agree well with experimental data. Last edited: Apr 23, 2014 5. Apr 28, 2014 ### xortdsc ah thank you. so if i understand it correctly this article seems to suggest that spontaneous creation of virtual positronium is indeed a possibility. Similar Discussions: Positronium from vacuum fluctuations ?
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8567666411399841, "perplexity": 1778.8783685352562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934804666.54/warc/CC-MAIN-20171118055757-20171118075757-00671.warc.gz"}
http://www.haskell.org/pipermail/haskell-cafe/2008-December/051574.html
Hans Aberg haberg at math.su.se Sun Dec 7 05:48:25 EST 2008 On 7 Dec 2008, at 11:34, Luke Palmer wrote: > On Sun, Dec 7, 2008 at 3:05 AM, Hans Aberg <haberg at math.su.se> wrote: > One can define operators > a ^ b := b(a) -- Application in inverse. > (a * b)(x) := b(a(x)) -- Function composition in inverse. > (a + b)(x) := a(x) * b(x) > O(x) := I -- Constant function returning identity. > I(x) := x -- Identity. > and use them to define lambda calculus (suffices with the first > four; Church reverses the order of "*"). > > The simple elegance of writing this encoding just increased my > > a .^ b = b a > (a .* b) x = b (a x) > (a .+ b) x = a x .* b x > o x = i > i x = x > > toNat x = x (+1) 0 > fromNat n = foldr (.) id . replicate n I have some more notes on this that you might translate, if possible (see below). If one implements integers this way, time complexity of the operators will be of high order, but it is in fact due to representing n in effect as 1+...+1. If one represents them, using these operators, in a positional notation system, that should be fixed, though there is a Hans Associativity: a*(b*c) = (a*b)*c, a+(b+c) = (a+b)+c RHS Relations: a^O = I, a^I = a a^(b * c) = (a^b)^c a^(b + c) = a^b * a^c a*(b + c) = a*b + a*c LHS Relations: I * a = a, O + a = a, O * a = I ^ a c functor (i.e., c(a*b) = c(a)*c(b), c(I) = I) => (a*b)^c = a^c * b^c (a+b)*c = a*c + b*c I^c = I If n in Natural, f: A -> A an endo-function, define f^n := I, if n = 0 f * ... * f, if n > 1 |-n times-| The natural number functionals, corresponding to Church's number functionals, are then defined by \bar n(f) := f^k If S(x) := x + 1 (regular integer addition), then \bar n(S)(0) = n Also write (following Hancock) log_x b := \lambda x b Then log_x I = O log_x x = I log_x(a * b) = log_x a + log_x b log_x(a ^ b) = (log_x a) * b, x not free in b.
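A rough Python transcription of the operators above (my own sketch, not part of the original message) may help readers experiment without a Haskell toolchain; it checks the Church-numeral reading, i.e. that \bar n(S)(0) = n, and that O and I play the roles of 0 and 1.

```python
def power(a, b):      # a ^ b := b(a)              (application in reverse)
    return b(a)

def mul(a, b):        # (a * b)(x) := b(a(x))      (composition in reverse)
    return lambda x: b(a(x))

def add(a, b):        # (a + b)(x) := a(x) * b(x)
    return lambda x: mul(a(x), b(x))

def i(x):             # I(x) := x
    return x

def o(x):             # O(x) := I
    return i

def from_nat(n):      # \bar n : f |-> f^n  (n-fold iteration of f), cf. fromNat
    def numeral(f):
        def iterate(x):
            for _ in range(n):
                x = f(x)
            return x
        return iterate
    return numeral

def to_nat(num):      # \bar n(S)(0) = n  with S(x) = x + 1, cf. toNat
    return num(lambda x: x + 1)(0)

two, three = from_nat(2), from_nat(3)
print(to_nat(o), to_nat(i))          # 0 1   (O and I behave as 0 and 1)
print(to_nat(add(two, three)))       # 5
print(to_nat(mul(two, three)))       # 6
print(to_nat(power(two, three)))     # 8, i.e. 2^3
```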
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9026936888694763, "perplexity": 10628.914556733465}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657127503.54/warc/CC-MAIN-20140914011207-00092-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"}
https://www.physicsforums.com/threads/row-reduction-help.266881/
# Row reduction help 1. Oct 25, 2008 ### rock.freak667 1. The problem statement, all variables and given/known data Find the condition of k such that the set of equations x+y-z=1, x+2y+kz=-1, x+ky-z=1, has a unique soltuion,infinite sol'n or no solution. 2. Relevant equations 3. The attempt at a solution In the augemented matrix form [1 1 -1 1] [1 2 k -1] [1 k -1 -1] R2-R1,R3-R1 [1 1 -1 1] [0 1 (k+1) -2] [0 (k-2) -(k+1) -2] R3-(k-2)R2 [1 1 -1 1] [0 1 (k+1) -2] [0 0 -(k+1)(k+3) (2k-6)] For a unique solution. $-(k+1)(k+3) \neq 0$ so that $k \neq 1,3$ For infinite soltutions -(k+1)(k+3)=0 AND 2k-6=0 so that k=-1,-3 AND k=3 This doesn't make sense to me, as k can only be on value at a time, and if k=3, there will be no solution as the ranks of the augmented matrix and the initial matrix won't be the same. SO where in my row reduction did I go wrong? Last edited: Oct 25, 2008 2. Oct 25, 2008 ### Staff: Mentor You should have this: [1 1 -1 1] [0 1 (k+1) -2] [0 (k-1) 0 -2] To get the new 3rd row, you added (-1) times R1 to R3. I think you misread the entries in the 3rd row of your first matrix as 1 (k -1) ?? 1, when they actually are 1 k (-1) 1. 3. Oct 25, 2008 ### rock.freak667 I got the 2nd matrix you put, and then interchanged row2 and row3. Then did (k-1)R3-R2 to get [1 1 -1 |-1] [0 (k-1) 0 |-2] [0 0 (k+1)(k-1)| -2(k-1)+1] which would make no sense to me when I try to give the set of infinite solutions with paramters as it would mean that k should be either 1 or -1 AND -2(k-1)+1=0 at the same time,which can't occur. 4. Oct 26, 2008 ### Staff: Mentor So, clearly there's something going on if k = 1 or if k = -1. If k = 1, the original system is: x + y - z = 1 x + 2y + z = -1 x + y - z = 1 Notice that the 1st and 3rd equations are identical. The augmented matrix is: [1 1 -1 | 1] [1 2 1 | -1] [1 1 -1 | 1] This row-reduces to [1 0 -3 | 3] [0 1 0 | -2] [0 0 0 | 0] Infinite number of solutions. Geometrically the two planes intersect in a line. What went wrong on your row-reduction is that when you multiplied R3 by (k - 1), you were multiplying by 0. If k = -1, the original system looks like this: x + y - z = 1 x + 2y - z = -1 x - y - z = 1 And the augmented matrix is like so: [1 1 -1 | 1] [1 2 -1 | -1] [1 -1 -1 | 1] This reduces to [1 1 -1 | 1] [0 1 0 | -2] [0 0 0 | -4] From the 3rd row, you can see that 0x + 0y + 0z = -4, which is impossible, so there are no solutions. Geometrically, the three planes don't intersect. Finally, if k is any value other than 1 or -1, you get a unique solution for (x, y, z), with a different set of values for each value of k. Geometrically, for each value of k other than 1 or -1, the three planes intersect at a single point. 5. Oct 26, 2008 ### HallsofIvy Staff Emeritus If there is no one value of k that makes all numbers in the last row 0, then there is no value of k that will give infinite solutions. Values of k that make all except the last number in the last row 0 give no solution. Values of k that make the next to last number in the last rwo non-zero give a unique solution. Similar Discussions: Row reduction help
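If it helps to double-check the case analysis above, here is a short SymPy sketch (mine, not from the thread) that compares the ranks of the coefficient matrix and the augmented matrix for a few values of k:

```python
import sympy as sp

def classify(k):
    # Augmented matrix of  x+y-z=1,  x+2y+kz=-1,  x+ky-z=1
    M = sp.Matrix([[1, 1, -1,  1],
                   [1, 2,  k, -1],
                   [1, k, -1,  1]])
    rank_A, rank_M = M[:, :3].rank(), M.rank()
    if rank_A == rank_M == 3:
        return "unique solution"
    if rank_A == rank_M:
        return "infinitely many solutions"
    return "no solution"

for k in (1, -1, 2):
    print(k, classify(k))
# 1  -> infinitely many solutions (1st and 3rd equations coincide)
# -1 -> no solution
# 2  -> unique solution, as for any k other than 1 and -1
```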
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7256492972373962, "perplexity": 851.7962025764505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122619.71/warc/CC-MAIN-20170423031202-00076-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.controlbooth.com/forums/general-advice/page-50
General tips, tricks, and rules that every technician should know.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9529361724853516, "perplexity": 14689.61785908292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00008.warc.gz"}
https://combostrap.com/styling/background-styling-add-any-background-to-your-combostrap-component-yvz4upjy
# ComboStrap Styling - Background The background element defines and adds a background. It's mostly used with slide component to create compelling landing page. You can create cover or tile background with: The background area is defined as being the content plus the padding ## Syntax <background fill="cover|tile" color="value" pattern="name" pattern-color="colorValue" opacity="0 to 1" scroll="local|fixed" >{{image}}</background> where: • image is to use an image (raster or svg) as background • fill defines how the image takes the space of its container. • cover (default) - The image is scaled within its container and cropped, if necessary (ie responsive background) • tile: The image is repeated until the container is filled. • color: defines a background color - if you have set an image, the color value is seen while the image is loading. • pattern and pattern-color are used to define the css pattern name and color respectively. • opacity brings depth level on color. Value goes from 1 (no opacity, default) to 0, completely opaque, not visible). By layer, below are typical values. • First: 0.15 • Second: 0.3 • Third: 0.45 • Fourth: 0.6 • Fifth: 0.75 • Sixth: 0.9 • scroll: the scroll effect - how the background is moving while scrolling • local: the image is moving with its container • fixed: the image stays at the same position on the screen • If you set more than one background, the first background in the list is the farthest on the stack. • The background area is the content plus the padding but not the margin. ## Color You can set a background color with: • the color attribute of the background component • via the background-color attribute on all component All color value (name and hex) described in the color page are supported. ### Uniform Color <note info> <background color="light"></background> ==== Background Color ==== A info note with a bootstrap light color as background </note> ### Background Color A info note with a bootstrap light color as background The background-color attribute is just a short cut. <note info background-color="#6FE2E9"> ==== Direct Background Color ==== A info note with the background-color attribute </note> ### Direct Background Color A info note with the background-color attribute ComboStrap implements also a gradient color naming scheme in order to show a linear color gradient in the background. If you want to have a gradient on your color, just add the gradient- prefix to a color value. Example with a light gradient background. <slide background-color="gradient-light"> === A gradient light slide === </slide> #### A gradient light slide If you want to have full gradient customization, you can add a gradient css style in the userstyle Because the gradient generated a image, you can't use a gradient color and an image at the same time in a background element. You just need to add another background. ## Raster Image This sections shows you how to use a raster image as background. ### Raster Cover <slide color="steelblue" text-align="center" > <background opacity="0.3">{{:docs:block:stock_image_surfer_in_the_see.png|}}</background> <title 2>A blue slide</title> **A blue slide with a little bit of opacity on the background image to increase contrast with the heading. \\ A complete solution to building great landing pages.** </typo> </slide> ## A blue slide A blue slide with a little bit of opacity on the background image to increase contrast with the heading. A complete solution to building great landing pages. 
You will find plenty of background images on image banks such as: ### Raster Tile Create a tile background based on a raster image that represents a pattern. <slide text-align="center" height="20vh"> <background fill="tile" opacity="0.5">{{:docs:styling:patternico.png|}}</background> <title 3>**A tile background based on icons**</title> </slide> ### A tile background based on icons <slide text-align="center" height="25vh"> <background fill="tile" opacity="0.1">{{:docs:styling:subway-lines.png}}</background> <title 2 color="#3669b3">A tile background based on subway lines</title> </slide> ## A tile background based on subway lines You will find plenty of tile pattern generators on the web such as ## Svg ### Svg Cover An svg can be used as a background cover. Its dimensions are constrained only by its parent. bgjar can generate a lot of different types of svg background covers. Example with a stacked wave where the colors were changed to more attractive ones. <slide> <background>{{:docs:styling:stacked_wave.svg|}}</background> <title 1>A svg background cover</title> </slide> # A svg background cover ### Svg Tile An svg can also be used as a background tile. It gets a default size of 192px. Below is an example of how to use an svg as a tile background with the heropatterns jigsaw tile pattern, where • the foreground color is just the color of the svg (ie color=#9C92AC) • the background color is given by the color attribute (ie color="#DFDBE5") • the opacity is just the background opacity • the size was adjusted to 50px <slide text-align="center"> <background fill="tile" color="#DFDBE5" opacity="0.3">{{:docs:styling:jigsaw.svg?50&color=#9C92AC|}}</background> <title 1>An Svg Tile Based on Hero Pattern</title> </slide> # An Svg Tile Based on Hero Pattern ## CSS Pattern CSS patterns are pre-created backgrounds created directly via CSS. ComboStrap supports the css pattern library with the following syntax. <background pattern="name[-size]" color="colorValue" pattern-color="colorValue" /> where: • color is the background color • pattern-color is the color of the pattern • pattern is the pattern name with its optional size. The possible values are listed below. Name: checks, grid, dots, cross-dots, diagonal-lines, horizontal-lines, vertical-lines, diagonal-stripes, horizontal-stripes, vertical-stripes, triangles, zigzag. Size: sm (small), md (medium, default), lg (large), xl (extra-large). Example: <slide> <background pattern="vertical-lines" color="#FACB0E" pattern-color="#FDE482" /> <title 1>A slide with vertical line Css Pattern</title> </slide>
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2591252326965332, "perplexity": 9550.383062475392}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500041.18/warc/CC-MAIN-20230202200542-20230202230542-00261.warc.gz"}
https://www.varsitytutors.com/psat_math-help/how-to-multiply-square-roots
# PSAT Math : How to multiply square roots ## Example Questions ### Example Question #11 : Square Roots And Operations Multiply and simplify. Assuming all integers are positive real numbers. Explanation: Multiply the coefficients outside of the radicals. Then multiply the radicands. Simplify by checking for a perfect square. ### Example Question #1 : How To Multiply Square Roots Multiply and simplify. Assume all integers are positive real numbers. Explanation: Order of operations, first distributing the  to all terms inside the parentheses. ### Example Question #1 : How To Multiply Square Roots The square root(s) of 36 is/are ________. 6 and -6 None of these answers are correct. 6 -6 6, -6, and 0 6 and -6 Explanation: To square a number is to multiply that number by itself. Because 6 x 6 = 36 AND -6 x -6 = 36, both 6 and -6 are square roots of 36. ### Example Question #12 : Basic Squaring / Square Roots Simplify: Explanation: Multiplication of square roots is easy! You just have to multiply their contents by each other. Just don't forget to put the result "under" a square root! Therefore: becomes Now, you need to simplify this: You can "pull out" two s.  (Note that it would be even easier to do this problem if you factor immediately instead of finding out that .) After pulling out the s, you get:
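The radicands in the worked examples above were lost in extraction, so here is a generic SymPy sketch (the particular numbers are my own) of the technique the explanations describe: multiply any outside coefficients, multiply the radicands, then pull perfect squares back out.

```python
import sympy as sp

x = sp.Symbol('x')

print(sp.sqrt(6 * 3))                         # sqrt(18) simplifies to 3*sqrt(2)
print(sp.simplify(sp.sqrt(6) * sp.sqrt(3)))   # same result, multiplying the radicals
print(sp.sqrt(8 * 2))                         # sqrt(16) = 4, a perfect square
print(sp.solve(sp.Eq(x**2, 36), x))           # [-6, 6]: both square roots of 36
```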
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9634414315223694, "perplexity": 1595.4040377215767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590901.10/warc/CC-MAIN-20180719125339-20180719145339-00323.warc.gz"}
http://mathoverflow.net/questions/122605/do-the-solutions-of-the-maurer-cartan-equation-form-a-simplicial-set/123128
# Do the solutions of the Maurer--Cartan equation form a simplicial set? The Maurer--Cartan equation is the equation: $$d\gamma+\frac 12[\gamma,\gamma]=0$$ where $\gamma$ represents a degree one element in a differential graded Lie algebra $\mathfrak g^\ast$. Let's denote the set of solutions by $MC(\mathfrak g^\ast)$. I need to have some notion of (higher) homotopies between elements of the set $MC(\mathfrak g^\ast)$. One way of doing this would be to define a simplicial set $\mathsf{MC}(\mathfrak g^\ast)$ whose zero simplices are the set $MC(\mathfrak g^\ast)$. I have indeed seen the phrase "simplicial set of solutions to the Maurer--Cartan equation" in papers. Is there a standard construction of this simplicial set? If so, how should I think about it, and what are some good references? In fact, it seems that the $n$-simplices in $\mathsf{MC}(\mathfrak g^\ast)$ should be $MC(\mathfrak g^\ast\otimes\Omega^\ast(\Delta^n))$ where $\Omega^\ast(\Delta^n)$ is the differential graded algebra of differential forms on the standard simplex $\Delta^n$. Can I use something smaller (hopefully finite dimensional) instead of $\Omega^\ast(\Delta^n)$? Perhaps just the simplicial cochain complex $C^\ast(\Delta^n)$? - I think there's an Annals paper by Getzler on this topic. –  Fernando Muro Feb 22 '13 at 7:10 By the way, one cannot use simplicial cochains (at least in an obvious way) since that algebra is not commutative, but you can work with polynomial differential forms. –  Fernando Muro Feb 22 '13 at 10:46 I wrote a paper on that: arxiv.org/abs/math.AT/0603563 –  André Henriques Mar 10 '13 at 13:47 I thought I'd expand on Fernando's comments and add a little bit. Your instincts are correct, there is in fact a finite dimensional model of $\Omega^{\ast}( \Delta_{n} )$, which are called the polynomial differential forms: $$k[ \Delta_{n} ] = k [ t_0, \ldots , t_n, dt_0, \ldots, dt_n ] / \left( (\sum t_i) - 1, \sum dt_i \right)$$ where $|t_i| = 0, d(t_i) = dt_i.$ Note that we take the free graded commutative algebra, so $dt_i^{2} = 0$ for degree reasons. Each $k[\Delta_n]$ is a differential graded commutative algebra, and the assignment $$n \mapsto k[\Delta_n]$$ is a simplicial object in the category of differential graded commutative algebras. In their book "On PL DeRham Theory and Rational Homotopy Type" (Memoirs of the AMS, Number 179), Bousfield and Gugenheim demonstrate that the simplicial sets $cdgA(R, L \otimes k[\Delta_{\bullet}] )$ give a simplicial enrichment of the model category structure on commutative differential graded algebras which behave like a simplicial model category. In "Homological Algebra of Homotopy Algebras," Hinich shows that over a field of characteristic zero, this is true for the model category structure on $O$-algebras for any operad $O$, ie, $dgOA(R, L \otimes k[\Delta_{\bullet}])$ is a simplicial enrichment of the category $dgOA$ which behaves like a simplicial model category. Now, I don't know how to answer your question for the set $MC(\mathfrak{g}^{\ast})$. However, when you consider $MC(\mathfrak{g}^{\ast} \otimes R)$ for some finite dimensional, nilpotent commutative dgA $R$, we have the identification: $$MC( \mathfrak{g} \otimes R ) = dgLie ( \Omega(R^{\vee}), \mathfrak{g}^{\ast}).$$ where $R^{\vee}$ is the hom-dual commutative differential graded coalgebra to $R$, and $\Omega$ is the cobar construction which carries commutative differential graded coalgebra to quasifree dg Lie algebras. 
In particular, the right-hand side is precisely the points in the simplicial set $$dgLie(\Omega(R^{\vee}), \mathfrak{g}^{\ast} \otimes k[\Delta^{\bullet}]),$$ and the simplicial set is a Kan-complex because every dg Lie algebra is fibrant, and $\Omega$ takes values in cofibrant dg-Lie algebras.
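For concreteness (my own summary, not from the thread, with signs depending on conventions): unwinding the definition for $n = 1$, a $1$-simplex of $\mathsf{MC}(\mathfrak{g}^\ast)$ is a degree-one element $\gamma(t) + \eta(t)\,dt$ of $\mathfrak{g}^\ast \otimes k[\Delta_1]$, with $\gamma(t) \in \mathfrak{g}^1 \otimes k[t]$ and $\eta(t) \in \mathfrak{g}^0 \otimes k[t]$, and the Maurer--Cartan equation splits by form degree into

```latex
% alpha = gamma(t) + eta(t) dt;  split  d(alpha) + (1/2)[alpha, alpha] = 0  by form degree:
\begin{aligned}
  d\gamma(t) + \tfrac{1}{2}\,[\gamma(t),\gamma(t)] &= 0,
    && \text{a Maurer--Cartan element for each value of } t,\\
  \frac{d}{dt}\,\gamma(t) &= \pm\bigl(d\eta(t) + [\gamma(t),\eta(t)]\bigr),
    && \text{a gauge/homotopy flow joining } \gamma(0) \text{ and } \gamma(1).
\end{aligned}
```

So the $1$-simplices recover the usual gauge equivalences (homotopies) of Maurer--Cartan elements, and the higher simplices supply the higher homotopies asked about in the question.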
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9400426745414734, "perplexity": 199.86756848436605}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737940789.96/warc/CC-MAIN-20151001221900-00086-ip-10-137-6-227.ec2.internal.warc.gz"}
https://arxiv.org/abs/1801.08798
cond-mat.soft (what is this?) # Title:Edge-induced shear banding in entangled polymeric fluids Abstract: Despite decades of research, the question of whether solutions and melts of highly entangled polymers exhibit shear banding as their steady state response to a steadily imposed shear flow remains controversial. From a theoretical viewpoint, an important unanswered question is whether the underlying constitutive curve of shear stress $\sigma$ as a function of shear rate $\dot{\gamma}$ (for states of homogeneous shear) is monotonic, or has a region of negative slope, $d\sigma/d\dot{\gamma}<0$, which would trigger banding. Attempts to settle the question experimentally via velocimetry of the flow field inside the fluid are often confounded by an instability of the free surface where the sample meets the outside air, known as "edge fracture". Here we show by numerical simulation that in fact even only very modest edge disturbances - which are the precursor of full edge fracture but might well, in themselves, go unnoticed experimentally - can cause strong secondary flows in the form of shear bands that invade deep into the fluid bulk. Crucially, this is true even when the underlying constitutive curve is monotonically increasing, precluding true bulk shear banding in the absence of edge effects. Comments: 5 pages, 4 figures; v2: updated to post-referee version Subjects: Soft Condensed Matter (cond-mat.soft); Fluid Dynamics (physics.flu-dyn) Journal reference: Phys. Rev. Lett. 120, 138002 (2018) DOI: 10.1103/PhysRevLett.120.138002 Cite as: arXiv:1801.08798 [cond-mat.soft] (or arXiv:1801.08798v2 [cond-mat.soft] for this version) ## Submission history From: Ewan Hemingway [view email] [v1] Fri, 26 Jan 2018 13:23:18 UTC (1,180 KB) [v2] Tue, 3 Apr 2018 18:21:38 UTC (1,181 KB)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5361104011535645, "perplexity": 3452.1147404454528}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578602767.67/warc/CC-MAIN-20190423114901-20190423135810-00074.warc.gz"}
http://www.mzan.com/article/19465049-changing-api-level-android-studio.shtml
Home Changing API level Android Studio I want to change the minimum SDK version in Android Studio from API 12 to API 14. I have tried changing it in the manifest file, i.e., and rebuilding the project, but I still get the Android Studio IDE throwing up some errors. I presume I have to set the min SDK in 'project properties' or something similar so the IDE recognizes the change, but I can't find where this is done in Android Studio.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.16283012926578522, "perplexity": 2272.6705720691075}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864953.36/warc/CC-MAIN-20180623074142-20180623094142-00048.warc.gz"}
https://www.zora.uzh.ch/id/eprint/51017/
# Evaluation of different downscaling techniques for hydrological climate-change impact studies at the catchment scale Teutschbein, C; Wetterhall, F; Seibert, Jan (2011). Evaluation of different downscaling techniques for hydrological climate-change impact studies at the catchment scale. Climate Dynamics, 37(9-10):2087-2105. ## Abstract Hydrological modeling for climate-change impact assessment implies using meteorological variables simulated by global climate models (GCMs). Due to mismatching scales, coarse-resolution GCM output cannot be used directly for hydrological impact studies but rather needs to be downscaled. In this study, we investigated the variability of seasonal streamflow and flood-peak projections caused by the use of three statistical approaches to downscale precipitation from two GCMs for a meso-scale catchment in southeastern Sweden: (1) an analog method (AM), (2) a multi-objective fuzzy-rule-based classification (MOFRBC) and (3) the Statistical DownScaling Model (SDSM). The obtained higher-resolution precipitation values were then used to simulate daily streamflow for a control period (1961–1990) and for two future emission scenarios (2071–2100) with the precipitation-streamflow model HBV. The choice of downscaled precipitation time series had a major impact on the streamflow simulations, which was directly related to the ability of the downscaling approaches to reproduce observed precipitation. Although SDSM was considered to be most suitable for downscaling precipitation in the studied river basin, we highlighted the importance of an ensemble approach. The climate and streamflow change signals indicated that the current flow regime with a snowmelt-driven spring flood in April will likely change to a flow regime that is rather dominated by large winter streamflows. Spring flood events are expected to decrease considerably and occur earlier, whereas autumn flood peaks are projected to increase slightly. The simulations demonstrated that projections of future streamflow regimes are highly variable and can even partly point towards different directions. ## Statistics ### Citations Dimensions.ai Metrics 137 citations in Web of Science® 147 citations in Scopus®
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8872652649879456, "perplexity": 6312.814397793903}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499949.24/warc/CC-MAIN-20230201180036-20230201210036-00836.warc.gz"}
https://math.stackexchange.com/questions/28332/is-lagranges-theorem-the-most-basic-result-in-finite-group-theory
# Is Lagrange's theorem the most basic result in finite group theory? Motivated by this question, can one prove that the order of an element in a finite group divides the order of the group without using Lagrange's theorem? (Or, equivalently, that the order of the group is an exponent for every element in the group?) The simplest proof I can think of uses the coset proof of Lagrange's theorem in disguise and goes like this: take $a \in G$ and consider the map $f\colon G \to G$ given by $f(x)=ax$. Consider now the orbits of $f$, that is, the sets $\mathcal{O}(x)=\{ x, f(x), f(f(x)), \dots \}$. Now all orbits have the same number of elements and $|\mathcal{O}(e)| = o(a)$. Hence $o(a)$ divides $|G|$. This proof has perhaps some pedagogical value in introductory courses because it can be generalized in a natural way to non-cyclic subgroups by introducing cosets, leading to the canonical proof of Lagrange's theorem. Has anyone seen a different approach to this result that avoids using Lagrange's theorem? Or is Lagrange's theorem really the most basic result in finite group theory? • How do you define "the most basic result"? Certainly, uniqueness of inverses (et cetera) is a basic result, even though it is a triviality. Anyhow - your question is a good one: I am surprised how often I apply Lagrange's theorem. (even though I hardly think of it under that name. It's just a fact of life) – Fredrik Meyer Mar 25 '11 at 6:19 • @Fredrik: I mean, non-trivial result. – lhf Mar 25 '11 at 11:34 • For abelian groups the proof is pretty simple, just multiply all the elements in the group and in the image of $f$. – N. S. Apr 9 '11 at 20:04 • @lhf: I just saw elsewhere the link you provided to a paper of Pengelley which reproduces Cayley's first paper on group theory with insightful footnotes. In particular (as you know...) Cayley states the theorem in question and says only "it can be shown": neither cosets nor Lagrange's Theorem are anywhere in sight. I think this link would make a nice addition to your question. – Pete L. Clark Aug 6 '11 at 22:39 • @lhf: Also, I like the proof you give above using orbits. – Pete L. Clark Aug 6 '11 at 23:00 Consider the representation of $\langle a \rangle$ on the free vector space on $G$ induced by left multiplication. Its character is $|G|$ at the identity and $0$ everywhere else. Thus it contains $|G|/|\langle a \rangle|$ copies of the trivial representation. Since this must be an integer, $|\langle a \rangle|$ divides $|G|$. Developing character theory without using Lagrange's theorem is left as an exercise to the reader. • Wasn't there a book that said "... is left as an exercise for the masochistic reader"? – Arturo Magidin May 10 '11 at 20:42 • thanks, though it's hardly in the elementary nature I'm looking for. – lhf May 10 '11 at 21:03 • Hmm -- I'm wondering whether the humourous aspect of this answer was apparent enough -- perhaps the last sentence should have been followed by a smiley :-) – joriki May 11 '11 at 7:23 • I wonder what Linderholm would say (Mathematics made difficult) – Bill Dubuque May 12 '11 at 3:44 • While I am aware this remark comes many years later - Spivak in his Comprehensive Introduction to Differential Geometry vol 1 makes this remark in an appendix to the chapter on tangent bundles where he is proving their uniqueness. The argument proceeds in two main steps, and after carrying out the first, and many pages of diagram chasing, he makes the remark Arturo mentions. – Alfred Yerger Jan 3 '18 at 1:52 I am late... 
Here is a proposal, probably not far from lhf's answer. For $$a \in G$$ of order $$p$$, define the binary relation $$x\,{\cal R}\,y$$ : $$\exists k\in \mathbb{N},\ k<p,$$ such that $$y=a^kx$$. Then $$\cal R$$ is an equivalence relation on $$G$$ and sets up a partition of $$G$$. A class is defined by $$C_x=\left\{a^kx \mid k=0,1,\ldots,p-1 \right\}.$$ All the classes have $$p$$ elements, so $$n=|G|$$ is a multiple of $$p$$, say $$n=pm$$, and $$a^n=a^{pm}=e$$. • This is exactly the coset proof for the cyclic group generated by $a$. – lhf Aug 2 '19 at 10:20
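As a concrete illustration of the orbit argument from the question (a small Python check on $S_3$, not a substitute for a proof): every orbit of $x \mapsto ax$ has exactly $o(a)$ elements, and $o(a)$ divides $|G|$.

```python
from itertools import permutations

G = list(permutations(range(3)))           # S_3 as tuples of images of 0, 1, 2
e = tuple(range(3))

def compose(p, q):                          # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def order(a):
    k, x = 1, a
    while x != e:
        x, k = compose(a, x), k + 1
    return k

def orbit(a, x):                            # {x, ax, a^2 x, ...} under y -> a*y
    ys, y = [x], compose(a, x)
    while y != x:
        ys.append(y)
        y = compose(a, y)
    return ys

for a in G:
    assert {len(orbit(a, x)) for x in G} == {order(a)}
    assert len(G) % order(a) == 0
print("all orbits of y -> a*y have size o(a), which divides |G| =", len(G))
```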
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 14, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8451353907585144, "perplexity": 449.125803895452}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107896778.71/warc/CC-MAIN-20201028044037-20201028074037-00377.warc.gz"}
http://math.eretrandre.org/tetrationforum/showthread.php?tid=1399
• 0 Vote(s) - 0 Average • 1 • 2 • 3 • 4 • 5 The Different Fixed Points of Exponentials Catullus Fellow Posts: 205 Threads: 46 Joined: Jun 2022   06/07/2022, 02:42 AM (This post was last modified: 07/28/2022, 11:22 PM by Catullus.) Is there a pattern to the positions of different fixed points of exponentials? Like is there a pattern to the positions of different fixed points of $e\uparrow x$? What about for the fixed points of $\eta\uparrow x$? How are they arranged in the complex plane? Are there more fixed points of exponentials in the quaternions? Or what about in the Octonions? ฅ(ミ⚈ ﻌ ⚈ミ)ฅ Please remember to stay hydrated. Sincerely: Catullus tommy1729 Ultimate Fellow Posts: 1,676 Threads: 368 Joined: Feb 2009 06/08/2022, 12:25 PM Compare the norm of exp iterated n times with the identity Catullus Fellow Posts: 205 Threads: 46 Joined: Jun 2022 07/07/2022, 09:09 AM (This post was last modified: 07/18/2022, 08:41 AM by Catullus.) There does seem to be a pattern to the positions, in the complex plane of the different fixed points of exp(z). There used to be a graph of $F(z)=exp(z)-z$, from -32 to 32 in both directions attached to this post, but I removed it try to replace it with a better one. ฅ(ミ⚈ ﻌ ⚈ミ)ฅ Please remember to stay hydrated. Sincerely: Catullus MphLee Long Time Fellow Posts: 321 Threads: 25 Joined: May 2013 07/07/2022, 08:29 PM (This post was last modified: 07/07/2022, 08:29 PM by MphLee.) I apologize for my fractal "n00bness"... but is there available a diagram of the complex plane showing in addition to the fixed points also the shapes and positions of their respective basin of attraction? I'm interested in visualizing the sets $$A_p=\{z\in\mathbb C\, :\, \lim_{n\to \infty} \exp_b ^n(z)=p\}$$. What is the knowledge about the set $$A=\bigcup_{p\in {\rm fix}(\exp_b)}A_p$$ and the set $$\mathbb C\setminus A$$. Are they open? Closed? Connected by arcs? Are they punctured/have holes? MSE MphLee Mother Law $$(\sigma+1)0=\sigma (\sigma+1)$$ S Law $$\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)$$ JmsNxn Ultimate Fellow Posts: 940 Threads: 112 Joined: Dec 2010 07/08/2022, 02:18 AM (This post was last modified: 07/08/2022, 09:49 AM by JmsNxn.) (07/07/2022, 08:29 PM)MphLee Wrote: I apologize for my fractal "n00bness"... but is there available a diagram of the complex plane showing in addition to the fixed points also the shapes and positions of their respective basin of attraction? I'm interested in visualizing the sets $$A_p=\{z\in\mathbb C\, :\, \lim_{n\to \infty} \exp_b ^n(z)=p\}$$. What is the knowledge about the set $$A=\bigcup_{p\in {\rm fix}(\exp_b)}A_p$$ and the set $$\mathbb C\setminus A$$. Are they open? Closed? Connected by arcs? Are they punctured/have holes? Hey, Mphlee I believe your question isn't the question you intended to ask. So first I'll answer the question you did ask, and then extrapolate what I think you meant to ask. First of all, the only attracting fixed points are in the Shell-Thron region. So you are asking for what values z are in the basin of the fixed points. The fixed points come in a set that looks like $$|\log(p)| < 1$$, and then $$b = e^{\log(p)/p}$$. The basin of these fixed points are very sensitive, and nonsensical. There is not much literature that I've found describing them. An important fact, is when $$b \in (1,\eta)$$, that $$\Re(z) < 0$$ is in the basin. 
As you move in the complex plane, this half plane deforms, I believe there's always a half plane within $$A_p$$ for all such $$p$$--not sure how to describe it. But don't quote me on that, I'm not certain. Then when you ask for your set $$A$$, it can be written more clearly as: $$A = \{z \in \mathbb{C}\,|\, \exists p\, \text{such that}\,|\log(p)| < 1,\,\lim_{n\to\infty}\exp_{p^{1/p}}^{\circ n}(z) = p\}\\$$ This set isn't all that interesting, at least I don't think so. It is definitely open, just by looking at this definition. It is probably simply connected, as it's the union of simply connected domains that are deformations of each other. This means there shouldn't be any holes. The more interesting chaos, and the much crazier idea, would be to invert your definition of $$A_p$$. Lets call it $$B_p$$: $$B_p = \{z \in \mathbb{C}\,|\, \lim_{n\to\infty} \log^{\circ n}_b(z) = p\}\\$$ This would produce MANY MANY more fixed points. This would be a much more interesting beast, as it'd deal with fixed point pairs of the exponential, and the ridiculous amount of fixedpoints that the exponential has (depending on how you choose the signature of each log in the iteration). The thing is, pretty much all the fixed points are repelling. The only attracting ones are in the shell-thron region. This means, the fixed points are attractive in $$\log$$ though. And choosing each branch of $$\log$$ as you iterate, produces different fixed points. And these are a much more interesting version of your set. It's kinda like the compliment you brought up. Instead of defining $$A_p$$ through attracting fixed points of $$\exp_b$$, which are well understood. Look at $$B_p$$ defined the exact same manner off of repelling fixed points. Could I ask what you want to know about this set--what your goals are? JmsNxn Ultimate Fellow Posts: 940 Threads: 112 Joined: Dec 2010 07/08/2022, 03:16 AM Also, to Catullus' question, there is a pattern to the fixed points. In Devaney's book "An intro to chaotic dynamical systems" He has a large section on iterating: $$\lambda e^z\,\,\text{for}\,\,0 < \lambda\\$$ He describes the structure of the Julia set/fatou sets using symbolic dynamics. It's far too involved to describe here, but it's a standard modeling technique in dynamics. Devaney describes how different parts of the complex plane maps to other parts under iterations, and on intersections of these domains we have the fixedpoints/periodic points. It's absolutely fascinating. I'm not that well versed in it, but to start you off, search for the symbolic dynamics of the exponential functions. Or pick up Devaney's book MphLee Long Time Fellow Posts: 321 Threads: 25 Joined: May 2013 07/09/2022, 09:54 PM (This post was last modified: 07/09/2022, 10:20 PM by MphLee.) I totally forgot Devaney's book! It's amazing. It was on my todo list since ages. Now I'm eating it like bread... many things are super clear now... I think that after it I can begin something more juicy and than straight to Milnor... maybe before september If I manage to read carefully I can parse 60% of this forum's posts. I'll be back on this post when I've finished Devaney... I hope I'll be able to find time for some exercises too Off topic, reading Devaney on symbolic dynamics I just remember that Lawere proposed an algebraic definition of chaos. Let $${\bf X}=(X,f)$$ be a dynamical system. Let $$A^\mathbb N=(A^{\mathbb N},\sigma)$$ be the shift space over the alphabet $$A$$. 
Given an observable $$\phi:X\to A$$ that assigns to every state a measurement, there always exists a canonical dynamical systems map $${\bar \phi}:{\bf X}\to A^\mathbb N$$ that assigns what Devaney calls the "itinerary" (of observations): $$x \mapsto (\phi(x),\phi(f^1(x)),\phi(f^2(x)),..., \phi(f^n(x)),...)$$ The observable $$\phi$$ is said to be chaotic if $$\bar \phi$$ is surjective. It means that the observable is so weak that we can observe every possible behavior... we call this chaotic behavior. The reason I asked the question is because I wanted to have a mental picture of the dynamics of exponentiation in terms of graphs/phase portraits. MSE MphLee Mother Law $$(\sigma+1)0=\sigma (\sigma+1)$$ S Law $$\bigcirc_f^{\lambda}\square_f^{\lambda^+}(g)=\square_g^{\lambda}\bigcirc_g^{\lambda^+}(f)$$ JmsNxn Ultimate Fellow Posts: 940 Threads: 112 Joined: Dec 2010 07/10/2022, 05:44 AM (This post was last modified: 07/10/2022, 05:46 AM by JmsNxn.) Oh yes! Thank you Mphlee for reading Devaney first. Oddly enough it was a harder book for me compared to Milnor, but that's because I'm analytic calculus (trigonometric reasoning). Devaney was hard for me because of the algebraic stuff; kinda made my head spin. Makes sense that Devaney would be easier for you, lmao. Also, do not be intimidated by Milnor. He gives great break downs of everything in plain english. If you miss some of the proofs, that's okay. You always know that these proofs are 100%. And Milnor is literally like the Shakespeare of mathematics. Guy writes math so clearly and explanatory. You can skip parts if you don't get it, not a problem with Milnor. He reads like Hamlet, where really all it is is a bunch of soliloquy, and they're all stand alone. There's the old joke that John A. Milnor Complex Dynamics is the modern bible. And anyone studying stuff like this needs to read it. Who cares if you don't actually get how he got that bound!! Just watch how he uses it. And the logical structure. The shape of the arguments. So you didn't get why this limit argument works--you can trust that it's true because it's Milnor. But watch how he uses it... Devaney is great, but he's no Milnor. Milnor fucking fucked shit up with that book, Mphlee. We can all hope to be so successful Catullus Fellow Posts: 205 Threads: 46 Joined: Jun 2022 07/18/2022, 08:52 AM (This post was last modified: 07/18/2022, 08:55 AM by Catullus.) I removed the attachment from #3, because it had some issues. I might add a better attachment to #3. ฅ(ミ⚈ ﻌ ⚈ミ)ฅ Please remember to stay hydrated. Sincerely: Catullus Gottfried Ultimate Fellow Posts: 854 Threads: 126 Joined: Aug 2007 07/18/2022, 09:11 AM Hmm, I'm not going into this matter deeply at the moment. I had looked at the question of location of fixpoints and a possible interpolation-line once in 2008 but had not developed this much. At least there is a picture showing the attempt: Gottfried Helms, Kassel
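On the opening question about the pattern of the fixed points of $e\uparrow x$: they can be listed explicitly with the Lambert W function, since $z = e^z$ is equivalent to $(-z)e^{-z} = -1$, i.e. $z = -W_k(-1)$ on the $k$-th branch. A short mpmath sketch (mine, not from the thread):

```python
import mpmath as mp

# Fixed points of exp:  z = e^z  <=>  (-z) e^(-z) = -1  <=>  z = -W_k(-1).
# None of them are real; for large positive k the k-th one sits approximately
# at log(2*pi*k) + i*(2*pi*k + pi/2), with conjugates for negative k.
for k in range(-3, 4):
    z = -mp.lambertw(-1, k)
    print(k, mp.nstr(z, 8), "residual", mp.nstr(abs(mp.exp(z) - z), 2))
```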
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 3, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6791298985481262, "perplexity": 1802.2296959162315}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573699.52/warc/CC-MAIN-20220819131019-20220819161019-00634.warc.gz"}
http://mathhelpforum.com/geometry/156028-co-ordinates-parallelogram.html
# Math Help - co-ordinates of a parallelogram 1. ## co-ordinates of a parallelogram ABCD is a parallelogram where A is (4,2), B is (-6,1), and D is (-3,-4). Find the co-ordinates of C. 2. If you have a sketch, that should be simpler. The properties of a parallelogram is such that the sides AB and CD are parallel, meaning that AB and CD have the same gradient. So: 1. Find the gradient of AB. (this is also gradient of CD) 3. Find equation of line through D having gradient AB. 5. Solve simultaneously for those two lines. The intersection is where point C is. There is a shorter, visual way using vectors, but this makes a diagram a must, if you cannot picture the points in your head. Vector AB = (10, 1) So, C + (10, 1) = (-3, -4) So, C = (-3 -10, -4 -1) = (-13, -5) 3. Hello, euclid2! If you make a sketch, you can "walk" your way to the answer. $ABCD\text{ is a parallelogram with vertices: }A(4,2),\;B(\text{-}6,1),\;D(\text{-}3,\text{-}4)$ $\text{Find the coordinates of }C.$ Code: A B o(4,2) (-6,1)o : : : : : -6 : : : D -7 : C : o - - - + o - - - + (-3,-4) We see that vertex $\,C$ is at the lower-left. Going from $\,A$ to $\,D$, we move: 6 units down and 7 units left. Since $BC \parallel AD$, going from $\,B$ to $\,C$, we do the same. Starting at $B(\text{-}6,1)$, move 6 units down a 7 units left. Therefore, vertex $\,C$ is $(\text{-}13,\text{-}5)$ 4. Originally Posted by euclid2 ABCD is a parallelogram where A is (4,2), B is (-6,1), and D is (-3,-4). Find the co-ordinates of C. B(-6 , 1) , A(4 , 2) C(a , b) , D(-3 , -4) 4 - (-6) = -3 - a 10 = -3 - a ===> a = -13 2 - 1 = -4 - b 1 = -4 -b ===> b = -5 therefore C(-13 , -5) 5. ## Re: co-ordinates of a parallelogram is it just me or is -13, -5 making the parallelogram's C point jut far out compared to the other points when graphing, that can't be right. 7, -3 sounds more accurate but I'm also having trouble figuring out how to arrive at that answer without just testing it on a graph. 6. ## Re: co-ordinates of a parallelogram Figured it out, You need to solve for AB = CD. AB you previously calculated its distance was equal to (-10,-1). so you set x-10 and y-1 and plug in the D point (-3,-4). Now you have x-10=-3 and y-1=-4. Once you've calculated that you'll find C = (7,-3). If you want to confirm this, go ahead and graph the points, A,B,C,D on GeoGebra and you'll see it makes a perfect parallelogram. 7. ## Re: co-ordinates of a parallelogram Well geezzz, much easier if you change D to C !!
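A quick way to see why the thread ends up with two different answers (a small Python check, not from the original posts): when the vertices are labelled in order, the diagonals of parallelogram ABCD are AC and BD and they bisect each other, so A + C = B + D and C = B + D − A; labelling the unknown vertex opposite B instead gives A + D − B.

```python
A, B, D = (4, 2), (-6, 1), (-3, -4)

C_opposite_A = tuple(b + d - a for a, b, d in zip(A, B, D))   # ABCD labelled in order
C_opposite_B = tuple(a + d - b for a, b, d in zip(A, B, D))   # the other labelling

print(C_opposite_A)   # (-13, -5)
print(C_opposite_B)   # (7, -3)
```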
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7910387516021729, "perplexity": 1619.9028511083832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098849.37/warc/CC-MAIN-20150627031818-00292-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.lmfdb.org/L/rational/12/405%5E6/1.1/c3e6-0
| Label | $\alpha$ | $A$ | $d$ | $N$ | $\chi$ | $\mu$ | $\nu$ | $w$ | prim | $\epsilon$ | $r$ | First zero | Origin |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 12-405e6-1.1-c3e6-0-0 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $0$ | $0.0789296$ | Modular form 405.4.e.u |
| 12-405e6-1.1-c3e6-0-1 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $0$ | $0.136424$ | Modular form 405.4.e.q |
| 12-405e6-1.1-c3e6-0-2 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $0$ | $0.202865$ | Modular form 405.4.e.v |
| 12-405e6-1.1-c3e6-0-3 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $0$ | $0.209179$ | Modular form 405.4.e.t |
| 12-405e6-1.1-c3e6-0-4 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $0$ | $0.210937$ | Modular form 405.4.e.s |
| 12-405e6-1.1-c3e6-0-5 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $0$ | $0.216864$ | Modular form 405.4.e.r |
| 12-405e6-1.1-c3e6-0-6 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $0$ | $0.675198$ | Modular form 405.4.a.l |
| 12-405e6-1.1-c3e6-0-7 | $4.88$ | $1.86\times 10^{8}$ | $12$ | $3^{24} \cdot 5^{6}$ | 1.1 | | $[3.0]^{6}$ | $3$ | | $1$ | $6$ | $1.31545$ | Modular form 405.4.a.k |
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9884182810783386, "perplexity": 307.73828600681327}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00513.warc.gz"}
https://www.lessonplanet.com/teachers/problem-solving-application-use-formulas-practice
Problem-Solving Application: Use Formulas: Practice For this formula worksheet, students use formulas to solve 5 word problems, showing their work. Houghton Mifflin text is referenced.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9590350389480591, "perplexity": 12713.549036584183}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867055.20/warc/CC-MAIN-20180525082822-20180525102822-00256.warc.gz"}
http://mathhelpforum.com/calculus/16842-help-w-trig-integrals-print.html
# Help w/ Trig Integrals

• July 14th 2007, 02:03 AM
Help w/ Trig Integrals
$\int(\sin{x})^3(\cos{x})^3dx$
Am I going to use a trigonometric identity or integration by parts? Help.
• July 14th 2007, 04:34 AM
Soroban
I have to ask . . . Are those really $x^3$ ?
We can integrate: $\sin^3\!x\cos^3\!x$ . . . but not $\sin(x^3)\cos(x^3)$
• July 14th 2007, 04:35 AM
Oops, it's $(\sin{x})^3 (\cos{x})^3$
• July 14th 2007, 04:40 AM
topsquark
$\int \sin^3(x) \cos^3(x) dx$
Let $y = \sin(x)$, then $dy = \cos(x) dx$
$\int \sin^3(x) \cos^3(x) dx = \int \sin^3(x) \cos^2(x) \cos(x) dx$
$= \int \sin^3(x) (1 - \sin^2(x)) \cos(x) dx = \int y^3 (1 - y^2) dy$
$= \int (y^3 - y^5)\,dy$
I'm sure you can take it from here.
-Dan
• July 14th 2007, 04:47 AM
I'm sure that either sin(x) or cos(x) is integrable, but the Integrator has a different answer. Maybe use integration by parts?
• July 14th 2007, 06:37 AM
Jhevon
Quote: Originally Posted by Soroban
We can integrate: $\sin^3\!x\cos^3\!x$ . . . but not $\sin(x^3)\cos(x^3)$
Can someone tell me once and for all how you can tell if you can't integrate something, or that something is not integrable analytically using elementary functions or whatever...wait, did I ask this question before? I know we can't integrate $e^{x^2}$ and, according to Soroban, we can't integrate $\sin \left( x^3 \right) \cos \left( x^3 \right)$, but how do we know that for sure? What's the proof that we can't integrate those functions by hand?
• July 14th 2007, 07:13 AM
Plato
This is totally my own opinion: Your confusion is understandable, and it comes from the very sad conflating of the words integral and antiderivative. They are not the same. An integral is a number, quite often gotten by way of an antiderivative using the fundamental theorem of integral calculus. An antiderivative is just what it says it is. Of course the antiderivative of $\sin(x^3)$ does exist, but we would need the series representation for $\sin(x)$ to get it. Therefore, it is proper to say that no elementary representation of the antiderivative of $\sin(x^3)$ exists in the Calculus II sense of the term.
• July 14th 2007, 07:23 AM
topsquark
Quote: Originally Posted by topsquark
$\int \sin^3(x) \cos^3(x) dx$
Let $y = \sin(x)$, then $dy = \cos(x) dx$
$\int \sin^3(x) \cos^3(x) dx = \int \sin^3(x) \cos^2(x) \cos(x) dx$
$= \int \sin^3(x) (1 - \sin^2(x)) \cos(x) dx = \int y^3 (1 - y^2) dy$
Now,
$= \int (y^3 - y^5)\,dy$
I'm sure you can take it from here.
-Dan
To continue
$= \frac{1}{4}y^4 - \frac{1}{6}y^6 + C$
$= \frac{1}{4}\sin^4(x) - \frac{1}{6}\sin^6(x) + C$
Now, my TI-92 comes up with:
$-\frac{\sin^2(x) \cos^4(x)}{6} - \frac{\cos^4(x)}{12}$
and the Integrator comes up with
$\frac{1}{192} ( \cos(6x) - 9 \cos(2x))$
All of these solutions are correct, despite how it might look. The point is that we are doing indefinite integration, so solutions that differ from one another only by a constant are all correct. If you spend the time (or just plug it through on your calculator) you will find that all three solutions differ from each other by some constant. (Neither the TI-92 nor the Integrator remind you to add the arbitrary constant on the end.)
-Dan
• July 14th 2007, 07:27 AM
thanks topsquark
ohh ok
• July 14th 2007, 07:43 AM
Krizalid
You can use the fact
$\int\sin^mx\cos^nx~dx={\color{blue}\frac{\sin^{m+1}(x)\cos^{n-1}(x)}{m+n}+\frac{n-1}{m+n}\int\sin^mx\cos^{n-2}(x)~dx},~m\ne-n$
:D:D
• July 14th 2007, 09:12 AM
galactus
Quote: Originally Posted by Jhevon
Can someone tell me once and for all how you can tell if you can't integrate something, or that something is not integrable analytically using elementary functions or whatever...wait, did I ask this question before? I know we can't integrate $e^{x^2}$ and, according to Soroban, we can't integrate $\sin \left( x^3 \right) \cos \left( x^3 \right)$, but how do we know that for sure? What's the proof that we can't integrate those functions by hand?
I believe sin(x^3) is done using what is known as a Lommel integral. Don't know much about it, though. Just as sin(x^2) is a Fresnel. I couldn't find reference to Lommel in wiki. Perhaps that would be a good MathHelpWiki for someone to take on? One should be able to use topics from advanced calc to prove sin(x^3) is not integrable by elementary means. Maybe Dirichlet test or something. It is continuous and differentiable. I may have to delve into it some more.
• July 14th 2007, 10:13 AM
DivideBy0
As far as I know,
$\int {\sin x^3 ~dx} = - \frac{1}{2}i\left( {\frac{{x\Gamma\left( {\displaystyle\frac{1}{3},ix^3 } \right)}}{{3\sqrt[3]{{ix^3 }}}} - \frac{{x\Gamma\left( {\displaystyle\frac{1}{3}, - ix^3 } \right)}}{{3\sqrt[3]{{ - ix^3 }}}}} \right) + k$
• July 14th 2007, 02:46 PM
galactus
Yeah. I ran it through Maple and it gave me a horrendous result with LommelSi. It may be equivalent to your result, though. Just a different animal.
• July 14th 2007, 04:03 PM
Krizalid
Quote: Originally Posted by DivideBy0
As far as I know,
$\int {\sin x^3 ~dx} = - \frac{1}{2}i\left( {\frac{{x\Gamma\left( {\displaystyle\frac{1}{3},ix^3 } \right)}}{{3\sqrt[3]{{ix^3 }}}} - \frac{{x\Gamma\left( {\displaystyle\frac{1}{3}, - ix^3 } \right)}}{{3\sqrt[3]{{ - ix^3 }}}}} \right) + k$
I know the forum where you got that :D:D
• July 14th 2007, 05:42 PM
ThePerfectHacker
I can find,
$\int_0^{\infty} \sin x^3 \cos x^3 dx$ :cool:
And,
$\int_0^{\infty} \sin x^3 dx$
And,
$\int_0^{\infty} \cos x^3 dx$
Even though these functions are not elementary. (I sometimes love Complex Analysis.)
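topsquark's point — that the hand calculation, the TI-92 answer, and the Integrator answer differ only by additive constants — is easy to confirm with a computer algebra system. A short SymPy sketch (my own addition, not part of the thread):

```python
# Check (my own addition) that the three antiderivatives from the thread all
# differentiate back to sin(x)^3 cos(x)^3 and differ only by constants.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)**3 * sp.cos(x)**3

F1 = sp.sin(x)**4/4 - sp.sin(x)**6/6                 # hand calculation
F2 = -sp.sin(x)**2*sp.cos(x)**4/6 - sp.cos(x)**4/12  # TI-92
F3 = (sp.cos(6*x) - 9*sp.cos(2*x))/192               # the Integrator

for F in (F1, F2, F3):
    print(sp.simplify(sp.diff(F, x) - f))   # 0 each time: all are antiderivatives

# The pairwise differences should simplify to the constants 1/12 and 1/24.
print(sp.simplify(F1 - F2), sp.simplify(F1 - F3))
```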
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 36, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954366147518158, "perplexity": 1407.3183758393873}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394010491371/warc/CC-MAIN-20140305090811-00029-ip-10-183-142-35.ec2.internal.warc.gz"}
https://brilliant.org/discussions/thread/february-newsletter/
× ### Hello Every $$\dfrac{\frac{1}{n} sin x}{6}$$ This is the very first edition of the moderator newsletter. Due to lack of time and sources, and due to it's being first ever, this edition is a bit short. If you have any jokes, quotes, puzzles, or any interesting thing you think will be liked in the newsletter, or any feedback, you can do so by mailing at [email protected]$$. Brilliant ### New and Exciting Features Over the past couple of months or so, we've had many changes in Brilliant. I'd like to throw light on some of them. The most important and probably the best of them was the introduction of Wikis. Brilliant was only for problems? No more! With the Wikis' getting better and better day-by-day, they're becoming a really nice way to learn. Forget Books! We've also had the introduction of a very important feature that has been wanted for a long time - the notifications feature! Now you can know at a click if someone followed you, or your problem got reshared. ### Latest Challenging Problems As always, we've had some amazing problems here, which were both challenging, and amazing. I would like to feature some of them here, if you somehow didn't face them- Easy Geometry, Would you go on calculating this?, Honey Bottle, Math Flag were some of the easy problems that made Krishna Ar think. Inspired by Mehul Chaturvedi, Not Always Adorable, Minimizing Can Be Difficult!, An algebra problem by Sompong Chuisurichy were some problems that were challenging to Agnishom. ### New and Active Members $$\large W$$e have seen a lot of new people over the past few month, who have been really active and deserve recognition and a word of thanks. We wish that they will continue helping the community and will get lots of love from the same. Here are their names: Jack Rawlin, Robert Haywood, Math Philic, Vishal S, Anna Anant, Sudoku Subbu, Azhaghu Roopesh M, andDanish Ahmed are some of the new members who have been very active in the very recent past, and are definitely worth following. ### Now That's Interesting! interesting $$\large N$$othing on Brilliant can ever be boring. Daniel Liu recently found a very interesting Algebraic Identity: $xyz+(x+y)(y+z)(z+x)=(x+y+z)(xy+yz+zx)$ And Kishlaya Jaiswal found an amazing new identity, which was an over-head bouncer to me. Well, it states: $\sum_{n=1}^\infty \sum_{m=1}^\infty \frac{x^n}{m{n+m \choose m}} = \log \left(\frac{1}{1-x}\right)$ $$\large S$$ome new things going on, and some facts are as follows- $$\bullet$$ Omkar Kulkarni made a series of problems in Trigonometry, and is about to make a century! Oh Us! $$\bullet$$ The name of Sir Lin's cat is Grey, and so is "his" colour. $$\bullet$$ If you want to know about your Heart $$\bullet$$ You can take part in the on-going Valentines Day Themed Contest and get a chance to get to the top 5. $$\bullet$$ Visit for some cool Mathematical Jokes ### WhoToFollow Here's the most awaited WhoToFollow list. The members who have made to this list have been very active and deserve to be on the list. They are a sure WhoToFollow, and are always there to help the community. Here are the names, in no particular order: $$\bullet$$ Michael Mendrin $$\bullet$$ Sandeep Bhardwaj $$\bullet$$ Brian Charlesworth $$\bullet$$ Julian Poon $$\bullet$$ Mursalin Habib $$\bullet$$ Deepanshu Gupta A hearty congratulations to all those who could make to the list. 
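The two identities quoted in this newsletter are easy to check by machine. The sketch below is my own addition, not part of the newsletter: it verifies Daniel Liu's algebraic identity exactly and tests Kishlaya Jaiswal's double sum numerically at the sample value x = 1/2.

```python
# Symbolic / numerical check (my own addition, not from the newsletter) of the
# two identities quoted above.
import sympy as sp
from math import comb, log

x, y, z = sp.symbols('x y z')
lhs = x*y*z + (x + y)*(y + z)*(z + x)
rhs = (x + y + z)*(x*y + y*z + z*x)
print(sp.expand(lhs - rhs))        # prints 0: the algebraic identity holds

# Partial double sum at x = 1/2; it should approach log(1/(1 - 1/2)) = log 2.
xv = 0.5
partial = sum(xv**n / (m * comb(n + m, m))
              for n in range(1, 60) for m in range(1, 2000))
print(partial, log(1 / (1 - xv)))  # both are approximately 0.693
```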
In the end, our Quotist, Jeremy Bansil, has a quote for us-

Changing the world into a better place takes a long time to finish, but teaching everyone to change the world into a better place can only take a day.

That's all for this edition. Please let us know if you liked it, by liking it and resharing it. Again, I would love any feedback, jokes/riddles/quotes/etc for the next edition by mail at [email protected]. Cheers!

Note by Satvik Golechha 2 years, 10 months ago

Happy Birthday Satvik Golechha bday - 2 years, 10 months ago

O.F.Y.I $$\dfrac{1}{n}$$ $$\dfrac{sin(x)}{6}$$ $$sin(x)=six(n)$$ $$\implies$$ $$\dfrac{sin(x)}{6} = \dfrac{six(n)}{6} = n$$ $$\implies$$ $$\dfrac{n}{n} = 1$$ - 2 years, 10 months ago

Thanks For Including Me In Your List . I'am feeling Honoured ! And Truly Speak I just Want To Thank To you For Your So much Contribution to our community . You Have Very Sharp Problem Solving Skill . And Also I know that you are an all rounder , I have Read your Besty " Krishna Ar' s " comment That You are a good Poet Also . Hatt's Off Man , Personally I feel that along with genius Brain , You are fun loving guy and a Good Person Too. I realise This Many Times on Brilliant , while studying note's , comment's etc. So In actuall You are the person , to whom every person on Brilliant.org should Follow ! You will Definetly Go to Very Hight's in ur upcoming Life . Gud Luck For ur Bright Future ! $$\ddot\smile$$ - 2 years, 9 months ago

They are definitely the amazing people who manage to solve thousands of questions in a short time period.I hope that I might be the one someday but yeah,I joined this when I was 13 but I wasn't able to understand a single question in this website and I gave up but now I rejoin to face this challenge but it wasn't a challenge after I've been through thousands of obstacles,learning through mistakes makes us smarter and knowledgeable :) Happy birthday Satvik Golecha :) May God bless you :) - 2 years, 9 months ago

LOL Why have you Capitalised all false words? - 2 years, 9 months ago

Great work Satvik! Well, I should say that make a edit in your note and add your name too in the WhoToFollow list. Yes, you really deserve it!!! And yeah, wishing you a great $$5^{th}$$ Bell Number Happy Birthday :) - 2 years, 10 months ago

Hi, a great initiative from you Satvik and that too on your B'day !!! Frankly I didn't expect a newsletter here on Brilliant but now that it's here , I'm quite happy . Hope you get a lot of appreciation for it !!! P.S. Thanks for noticing me . Many more Happy Returns of the Day $$\ddot\smile$$ !!!
- 2 years, 10 months ago Hi....can u tell me where I stand among the whole brilliant of my age members....please I want to know since u r the moderator - 2 years, 10 months ago You can obtain that information on the stats page, where you can see your improvement over time and your percentile rank! Staff - 2 years, 10 months ago • Moderator's do not have the access to your percentile score so far. You may wish to request @Calvin Lin in his messageboard for that information. • You can post interesting problems, notes, wikis, solutions, help other people, and otherwise participate in the community. (another moderator here) - 2 years, 10 months ago The stand means my rank or percentile persons behind me.... - 2 years, 10 months ago And also what l have to do have my name in that list of who to follow???? - 2 years, 10 months ago Hi @sarvesh dubey , in my opinion, if your goal is just to be up on the list of WhoToFollow, you'll be facing a hard time trying to post more problems, solutions and wiki. Just be yourself, share what you think is good to the community and without you yourself noticing, we'll quietly put you up there. Good luck problem solving! - 2 years, 10 months ago Hi @sarvesh dubey Being on the list of WhoToFollow is really hard work. You need to be very active and helpful to the community, as they have been! :D - 2 years, 10 months ago - 2 years, 9 months ago Well Happy Birthday @Satvik Golechha and keep up the good work of creating this newsletter and adding the names of wothwhile people and also continue ur work in the future - 2 years, 9 months ago @Satvik Golechha @Deepanshu Gupta @Calvin Lin All of moderators could you please edit my problem as don't know much of latex. Click here . Reply when done. - 2 years, 9 months ago I noticed its done. - 2 years, 9 months ago Thanks for informing. Have you tried it ? - 2 years, 9 months ago No.. Not really, busy with boards. - 2 years, 9 months ago The. Greatest. Honour. Ever. :D - 2 years, 10 months ago Yes it needs your name too satvik. I think Agnishom's name should also be there. - 2 years, 10 months ago Thanks for thinking me to be worthy - 2 years, 10 months ago Nice initiative and happy birthday! - 2 years, 10 months ago Comment deleted Feb 09, 2015 Comment deleted Feb 09, 2015 Comment deleted Feb 09, 2015 Comment deleted Feb 09, 2015 Comment deleted Feb 09, 2015 I have delete this thread as it is not relevant. Staff - 2 years, 10 months ago
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.74644935131073, "perplexity": 4416.03045406049}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515311.25/warc/CC-MAIN-20171212075935-20171212095935-00570.warc.gz"}
https://www.math10.com/forum/viewtopic.php?f=42&t=8289&amp
# Find all 6-digit multiples of 22 of the form 5d5,22e

### Find all 6-digit multiples of 22 of the form 5d5,22e

Find all 6-digit multiples of 22 of the form 5d5,22e where d and e are digits. What is the maximum value of d?

CORBELLA
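A brute-force check (my own addition, not part of the original post) settles the question directly; the form 5d5,22e means the six digits are 5, d, 5, 2, 2, e.

```python
# Brute-force search over all candidates of the form 5 d 5 2 2 e
# (my own illustration, not code from the forum thread).
solutions = []
for d in range(10):
    for e in range(10):
        n = int(f"5{d}522{e}")
        if n % 22 == 0:
            solutions.append((n, d, e))

for n, d, e in solutions:
    print(n, "d =", d, "e =", e)
print("maximum d:", max(d for _, d, _ in solutions))
# Expected output: 525228, 545226, 565224, 585222 -> maximum d is 8
```

The same answer follows by hand from the divisibility rules: e must be even (divisibility by 2) and the alternating digit sum 10 − d − e must be a multiple of 11, so d + e = 10, giving d = 8 at most.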
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.970186173915863, "perplexity": 1056.301396034046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998581.65/warc/CC-MAIN-20190617223249-20190618005249-00316.warc.gz"}
http://mathhelpforum.com/advanced-algebra/44236-another-algebra-question.html
# Math Help - Another Algebra Question

1. ## Another Algebra Question

For n ≥ 5, is the alternating group A_n (the group of even permutations of n elements) a simple group?

2. Originally Posted by mathemanyak
For n ≥ 5, is the alternating group A_n (the group of even permutations of n elements) a simple group?
Yes, it is a simple group. This is related.
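Since the question is about every n ≥ 5, a full answer needs the standard group-theory proof, but the smallest case n = 5 can at least be checked by machine. The sketch below is my own addition, not from the thread: it computes the conjugacy classes of A_5 and verifies that no union of classes containing the identity has size properly dividing 60, which rules out any nontrivial proper normal subgroup.

```python
# Computational illustration for n = 5 (my own sketch, not from the thread).
# A normal subgroup is a union of conjugacy classes containing the identity
# whose total size divides |A_5| = 60; we check only the trivial unions work.
from itertools import permutations, combinations

def parity(p):
    """Parity of a permutation given as a tuple (0 = even, 1 = odd)."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return inv % 2

def compose(p, q):              # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

A5 = [p for p in permutations(range(5)) if parity(p) == 0]
assert len(A5) == 60

remaining, classes = set(A5), []
while remaining:                # conjugacy classes under conjugation by A_5
    g = next(iter(remaining))
    cls = {compose(compose(h, g), inverse(h)) for h in A5}
    classes.append(cls)
    remaining -= cls

print("class sizes:", sorted(len(c) for c in classes))   # [1, 12, 12, 15, 20]

identity_class = next(c for c in classes if tuple(range(5)) in c)
others = [c for c in classes if c is not identity_class]
candidates = []
for r in range(len(others) + 1):
    for combo in combinations(others, r):
        total = 1 + sum(len(c) for c in combo)
        if 60 % total == 0:
            candidates.append(total)
print("possible normal-subgroup orders:", candidates)    # [1, 60] only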
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9758514165878296, "perplexity": 1775.3420259215416}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
https://se.mathworks.com/help/stats/classificationpartitionedlinearecoc.kfoldmargin.html
# kfoldMargin

Classification margins for observations not used in training

## Syntax

`m = kfoldMargin(CVMdl)`
`m = kfoldMargin(CVMdl,Name,Value)`

## Description

`m = kfoldMargin(CVMdl)` returns the cross-validated classification margins obtained by `CVMdl`, which is a cross-validated, error-correcting output codes (ECOC) model composed of linear classification models. That is, for every fold, `kfoldMargin` estimates the classification margins for observations that it holds out when it trains using all other observations. `m` contains classification margins for each regularization strength in the linear classification models that comprise `CVMdl`.

`m = kfoldMargin(CVMdl,Name,Value)` uses additional options specified by one or more `Name,Value` pair arguments. For example, specify a decoding scheme or verbosity level.

## Input Arguments

Cross-validated, ECOC model composed of linear classification models, specified as a `ClassificationPartitionedLinearECOC` model object. You can create a `ClassificationPartitionedLinearECOC` model using `fitcecoc` and by:

1. Specifying any one of the cross-validation name-value pair arguments, for example, `CrossVal`
2. Setting the name-value pair argument `Learners` to `'linear'` or a linear classification model template returned by `templateLinear`

To obtain estimates, `kfoldMargin` applies the same data used to cross-validate the ECOC model (`X` and `Y`).

### Name-Value Arguments

Specify optional comma-separated pairs of `Name,Value` arguments. `Name` is the argument name and `Value` is the corresponding value. `Name` must appear inside quotes. You can specify several name and value pair arguments in any order as `Name1,Value1,...,NameN,ValueN`.

Binary learner loss function, specified as the comma-separated pair consisting of `'BinaryLoss'` and a built-in loss-function name or function handle.

• This table contains names and descriptions of the built-in functions, where yj is a class label for a particular binary learner (in the set {-1,1,0}), sj is the score for observation j, and g(yj,sj) is the binary loss formula.

| Value | Description | Score Domain | g(yj,sj) |
|---|---|---|---|
| `'binodeviance'` | Binomial deviance | (–∞,∞) | log[1 + exp(–2 yj sj)]/[2 log(2)] |
| `'exponential'` | Exponential | (–∞,∞) | exp(–yj sj)/2 |
| `'hamming'` | Hamming | [0,1] or (–∞,∞) | [1 – sign(yj sj)]/2 |
| `'hinge'` | Hinge | (–∞,∞) | max(0, 1 – yj sj)/2 |
| `'linear'` | Linear | (–∞,∞) | (1 – yj sj)/2 |
| `'logit'` | Logistic | (–∞,∞) | log[1 + exp(–yj sj)]/[2 log(2)] |
| `'quadratic'` | Quadratic | [0,1] | [1 – yj (2 sj – 1)]^2/2 |

The software normalizes the binary losses such that the loss is 0.5 when yj = 0. Also, the software calculates the mean binary loss for each class.

• For a custom binary loss function, e.g., `customFunction`, specify its function handle `'BinaryLoss',@customFunction`. `customFunction` should have this form:

`bLoss = customFunction(M,s)`

where:

• `M` is the K-by-L coding matrix stored in `Mdl.CodingMatrix`.
• `s` is the 1-by-L row vector of classification scores.
• `bLoss` is the classification loss. This scalar aggregates the binary losses for every learner in a particular class. For example, you can use the mean binary loss to aggregate the loss over the learners for each class.
• K is the number of classes.
• L is the number of binary learners.

For an example of passing a custom binary loss function, see Predict Test-Sample Labels of ECOC Model Using Custom Binary Loss Function.
By default, if all binary learners are linear classification models using: • SVM, then `BinaryLoss` is `'hinge'` • Logistic regression, then `BinaryLoss` is `'quadratic'` Example: `'BinaryLoss','binodeviance'` Data Types: `char` | `string` | `function_handle` Decoding scheme that aggregates the binary losses, specified as the comma-separated pair consisting of `'Decoding'` and `'lossweighted'` or `'lossbased'`. For more information, see Binary Loss. Example: `'Decoding','lossbased'` Estimation options, specified as the comma-separated pair consisting of `'Options'` and a structure array returned by `statset`. To invoke parallel computing: • You need a Parallel Computing Toolbox™ license. • Specify `'Options',statset('UseParallel',true)`. Verbosity level, specified as the comma-separated pair consisting of `'Verbose'` and `0` or `1`. `Verbose` controls the number of diagnostic messages that the software displays in the Command Window. If `Verbose` is `0`, then the software does not display diagnostic messages. Otherwise, the software displays diagnostic messages. Example: `'Verbose',1` Data Types: `single` | `double` ## Output Arguments expand all Cross-validated classification margins, returned as a numeric vector or matrix. `m` is n-by-L, where n is the number of observations in `X` and L is the number of regularization strengths in `Mdl` (that is, `numel(Mdl.Lambda)`). `m(i,j)` is the cross-validated classification margin of observation i using the ECOC model, composed of linear classification models, that has regularization strength `Mdl.Lambda(j)`. ## Examples expand all Load the NLP data set. `load nlpdata` `X` is a sparse matrix of predictor data, and `Y` is a categorical vector of class labels. For simplicity, use the label 'others' for all observations in `Y` that are not `'simulink'`, `'dsp'`, or `'comm'`. `Y(~(ismember(Y,{'simulink','dsp','comm'}))) = 'others';` Cross-validate a multiclass, linear classification model. ```rng(1); % For reproducibility CVMdl = fitcecoc(X,Y,'Learner','linear','CrossVal','on');``` `CVMdl` is a `ClassificationPartitionedLinearECOC` model. By default, the software implements 10-fold cross validation. You can alter the number of folds using the `'KFold'` name-value pair argument. Estimate the k-fold margins. ```m = kfoldMargin(CVMdl); size(m)``` ```ans = 1×2 31572 1 ``` `m` is a 31572-by-1 vector. `m(j)` is the average of the out-of-fold margins for observation `j`. Plot the k-fold margins using box plots. ```figure; boxplot(m); h = gca; h.YLim = [-5 5]; title('Distribution of Cross-Validated Margins')``` One way to perform feature selection is to compare k-fold margins from multiple models. Based solely on this criterion, the classifier with the larger margins is the better classifier. Load the NLP data set. Preprocess the data as in Estimate k-Fold Cross-Validation Margins, and orient the predictor data so that observations correspond to columns. ```load nlpdata Y(~(ismember(Y,{'simulink','dsp','comm'}))) = 'others'; X = X';``` Create these two data sets: • `fullX` contains all predictors. • `partX` contains 1/2 of the predictors chosen at random. ```rng(1); % For reproducibility p = size(X,1); % Number of predictors halfPredIdx = randsample(p,ceil(0.5*p)); fullX = X; partX = X(halfPredIdx,:);``` Create a linear classification model template that specifies optimizing the objective function using SpaRSA. 
`t = templateLinear('Solver','sparsa');` Cross-validate two ECOC models composed of binary, linear classification models: one that uses the all of the predictors and one that uses half of the predictors. Indicate that observations correspond to columns. ```CVMdl = fitcecoc(fullX,Y,'Learners',t,'CrossVal','on',... 'ObservationsIn','columns'); PCVMdl = fitcecoc(partX,Y,'Learners',t,'CrossVal','on',... 'ObservationsIn','columns');``` `CVMdl` and `PCVMdl` are `ClassificationPartitionedLinearECOC` models. Estimate the k-fold margins for each classifier. Plot the distribution of the k-fold margins sets using box plots. ```fullMargins = kfoldMargin(CVMdl); partMargins = kfoldMargin(PCVMdl); figure; boxplot([fullMargins partMargins],'Labels',... {'All Predictors','Half of the Predictors'}); h = gca; h.YLim = [-1 1]; title('Distribution of Cross-Validated Margins')``` The distributions of the k-fold margins of the two classifiers are similar. To determine a good lasso-penalty strength for a linear classification model that uses a logistic regression learner, compare distributions of k-fold margins. Load the NLP data set. Preprocess the data as in Feature Selection Using k-fold Margins. ```load nlpdata Y(~(ismember(Y,{'simulink','dsp','comm'}))) = 'others'; X = X';``` Create a set of 11 logarithmically-spaced regularization strengths from $1{0}^{-8}$ through $1{0}^{1}$. `Lambda = logspace(-8,1,11);` Create a linear classification model template that specifies using logistic regression with a lasso penalty, using each of the regularization strengths, optimizing the objective function using SpaRSA, and reducing the tolerance on the gradient of the objective function to `1e-8`. ```t = templateLinear('Learner','logistic','Solver','sparsa',... 'Regularization','lasso','Lambda',Lambda,'GradientTolerance',1e-8);``` Cross-validate an ECOC model composed of binary, linear classification models using 5-fold cross-validation and that ```rng(10); % For reproducibility CVMdl = fitcecoc(X,Y,'Learners',t,'ObservationsIn','columns','KFold',5)``` ```CVMdl = ClassificationPartitionedLinearECOC CrossValidatedModel: 'LinearECOC' ResponseName: 'Y' NumObservations: 31572 KFold: 5 Partition: [1x1 cvpartition] ClassNames: [comm dsp simulink others] ScoreTransform: 'none' Properties, Methods ``` `CVMdl` is a `ClassificationPartitionedLinearECOC` model. Estimate the k-fold margins for each regularization strength. The scores for logistic regression are in [0,1]. Apply the quadratic binary loss. ```m = kfoldMargin(CVMdl,'BinaryLoss','quadratic'); size(m)``` ```ans = 1×2 31572 11 ``` `m` is a 31572-by-11 matrix of cross-validated margins for each observation. The columns correspond to the regularization strengths. Plot the k-fold margins for each regularization strength. ```figure; boxplot(m) ylabel('Cross-validated margins') xlabel('Lambda indices')``` Several values of `Lambda` yield similarly high margin distribution centers with low spreads. Higher values of `Lambda` lead to predictor variable sparsity, which is a good quality of a classifier. Choose the regularization strength that occurs just before the margin distribution center starts decreasing and spread starts increasing. `LambdaFinal = Lambda(5);` Train an ECOC model composed of linear classification model using the entire data set and specify the regularization strength `LambdaFinal`. ```t = templateLinear('Learner','logistic','Solver','sparsa',... 
'Regularization','lasso','Lambda',Lambda(5),'GradientTolerance',1e-8); MdlFinal = fitcecoc(X,Y,'Learners',t,'ObservationsIn','columns');```

To estimate labels for new observations, pass `MdlFinal` and the new data to `predict`.

## References

[1] Allwein, E., R. Schapire, and Y. Singer. “Reducing multiclass to binary: A unifying approach for margin classifiers.” Journal of Machine Learning Research. Vol. 1, 2000, pp. 113–141.

[2] Escalera, S., O. Pujol, and P. Radeva. “On the decoding process in ternary error-correcting output codes.” IEEE Transactions on Pattern Analysis and Machine Intelligence. Vol. 32, Issue 7, 2010, pp. 120–134.

[3] Escalera, S., O. Pujol, and P. Radeva. “Separability of ternary codes for sparse designs of error-correcting output codes.” Pattern Recognition. Vol. 30, Issue 3, 2009, pp. 285–297.

Introduced in R2016a
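For readers outside MATLAB, the same idea described above — out-of-fold (k-fold) classification margins as a model-comparison criterion — can be imitated in other toolkits. The Python/scikit-learn snippet below is my own rough analogue, not MathWorks code: it uses a one-vs-rest linear SVM rather than a genuine ECOC ensemble of linear learners, and it defines the margin of an observation as its true-class score minus the best competing class score, computed from out-of-fold predictions.

```python
# Rough k-fold margin analogue in scikit-learn (my own sketch, not the
# MathWorks implementation): margin_i = score of the true class minus the
# largest score among the other classes, from out-of-fold predictions.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.svm import LinearSVC

X, y = load_iris(return_X_y=True)
clf = LinearSVC()                      # one-vs-rest linear classifier

# Out-of-fold decision scores, shape (n_samples, n_classes)
scores = cross_val_predict(clf, X, y, cv=5, method="decision_function")

true_scores = scores[np.arange(len(y)), y]
masked = scores.copy()
masked[np.arange(len(y)), y] = -np.inf
margins = true_scores - masked.max(axis=1)

print("median k-fold margin:", np.median(margins))
# As in the feature-selection example above, a classifier whose margin
# distribution sits higher (with less spread) is the better-separating one.
```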
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8326311707496643, "perplexity": 4212.835025233213}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00131.warc.gz"}
http://mathhelpforum.com/discrete-math/143223-hamiltonian-circuits-print.html
# Hamiltonian circuits

• May 5th 2010, 11:19 AM
Elvorn
Hamiltonian circuits
Hi guys. I'm trying to solve this problem with Hamiltonian circuits: "Show that all simple, complete graphs with at least three vertices have a Hamiltonian circuit." My problem is that I'm not sure how to "show" this. It is easy to draw a graph with, say, 7 vertices and point out a circuit, but that only shows that this particular graph has a circuit. Any thoughts?
• May 5th 2010, 11:38 AM
Plato
Think DIRAC'S. Is it true that if $n\ge 3$ then $n-1\ge \frac{n}{2}?$
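Spelling out the hint (this completion is my own addition, not part of the thread): in a simple complete graph $K_n$ every vertex is adjacent to all the others, so

```latex
% Completing Plato's hint (added; not part of the original thread).
\deg(v) \;=\; n-1 \qquad \text{for every vertex } v \text{ of } K_n,
\qquad\text{and}\qquad
n \ge 3 \;\Longrightarrow\; n-1 \;\ge\; \frac{n}{2}
\quad\bigl(\text{since } 2(n-1) \ge n \iff n \ge 2\bigr).
```

Dirac's theorem (every simple graph on $n \ge 3$ vertices with minimum degree at least $n/2$ is Hamiltonian) then gives the circuit for every such graph at once, not just for one drawn example. Alternatively, because every pair of vertices of $K_n$ is joined by an edge, the cycle $v_1 v_2 \cdots v_n v_1$ through the vertices in any order is already a Hamiltonian circuit.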
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 2, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4222100079059601, "perplexity": 884.6531645381974}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920849.16/warc/CC-MAIN-20140909054027-00478-ip-10-180-136-8.ec2.internal.warc.gz"}
https://motls.blogspot.tw/2013/08/
## Saturday, August 31, 2013 ... ///// ### Argumentation about de Broglie-Bohm pilot wave theory Guest blog by Ilja Schmelzer, a right-wing anarchist and independent scientist A nice summary of standard arguments against de Broglie-Bohm theory can be found at R. F. Streater's "Lost Causes in Theoretical Physics" website. Ulrich Mohrhoff [broken link, sorry] also combines the presentation of his position with an interesting rejection of pilot wave theory. These arguments I consider in a different file. Here, I consider the arguments proposed in several articles of Luboš Motl's blog "The reference frame": David Bohm born 90 years ago and Bohmists & segregation of primitive and contextual observables, Anti-quantum zeal and in off-topic responses of "Nonsense of the day: click the ball to change its color". Below, we refer to Luboš Motl simply as lumo (his nick in his blog). ## Friday, August 30, 2013 ... ///// ### Pacific waters as an excuse for the warming hiatus Most of the mainstream media offered us a bizarre "story" in the recent two or three days. The absence of global warming in recent years – well, it's really 17 years now – has been "explained" by the Pacific waters. Problem solved, the belief in the global warming ideology may continue unchallenged, we're de facto told. PDO: warm and cool phase The claims are based on the paper by Kosaka and Xie in Nature, Recent global-warming hiatus tied to equatorial Pacific surface cooling, which is bad enough but I will mostly focus on the journalists' added spin which is even worse. The Guardian's Fiona Harvey will be used as my sample but the comments below are applicable almost universally. ### One can't background-independently localize field operators in QG ...because the "basis" of coherent states is overcomplete... Let me begin with something simple. John Preskill asked you "What's inside a black hole?" and offered you four options: 1. An unlimited amount of stuff. 2. Nothing at all. 3. A huge but finite amount of stuff, which is also outside the black hole. 4. None of the above. Well, the option (D) may have been at the beginning and an obvious suboption of (D), "The black hole interior is a region just like any other region and independent from others", should have been offered as a special choice (E). A surprising result is that (E) is almost certainly wrong. Instead, (C) is right – at least if we omit the very highly curved region near the singularity that could justify (A) in a complicated way and if we allow the definition of a black hole to cover its rare microstates – if we only allowed the most generic black hole microstates, the answer would be (B): the interior has to be empty. Well, (B) may also be interpreted as a claim allowing a firewall, in which case it's wrong in general (the firewall isn't necessary or generic) but of course that there are rare black hole microstates that contain something that burns you near the horizon much like there are rare black hole microstates with a bunny in the interior. This point is simple but often misunderstood. A black hole is defined by its event horizon but it doesn't follow that the interior has to be empty. There can be a bunny in it. However, among microstates of localized matter, a black hole with a bunny is an exponentially rarer class of microstates. 
Most of the mass $M\pm \delta M/2$ black hole microstates look empty – that's why the entropy-increasing evolution converges towards these states as the black hole keeps on devouring the surrounding matter to clean its interior (and vicinity). But don't make a mistake about it: a bunny in a black hole (or a nonzero occupation number of freely falling field operator modes) is unlikely yet possible. But let me switch to a more complicated question. ## Thursday, August 29, 2013 ... ///// ### Two-sigmaish CMS multilepton excesses with a $\tau$ A possible hint of third-generation superpartners Matt Strassler mentioned an interesting anomaly reported by CMS at a SUSY conference this week: A Discrepancy to Keep an Eye On (Prof Dr RNDr Matt Strassler PhD CSc DrSc Dot COM) It's small enough so that you may assume that it's just another example of a fluctuation that will go away with more data. But it's large enough for many of us to gain the right to be intrigued. ;-) The excesses have something to do with multileptons. If you search this blog for multileptons, you find many articles, mostly from late year 2011 and early year 2012. The words "year" were inserted for you to notice that there were many hyperlinks in the previous sentence. It's plausible that those flukes have gone way during the 1.5-2.0 years. What are the overrepresented events this time? ## Wednesday, August 28, 2013 ... ///// ### Imagine that the Universe is not expanding Wetterich's cosmon claimed to be an alternative to the Big Bang singularity, inflation, and the recent apparent expansion Image: NASA/JPL–Caltech... Most papers trying to replace the usual cosmological concepts such as dark matter and dark energy by something entirely different may be shown to be wrong within minutes. As I learned from a Czech server called osel.cz ("osel" is a horse-like animal known as an ass: I don't know of a shorter way to explain that it's not the other ass), a rather achieved cosmologist Christof Wetterich posted an unusual clever yet apparently equally provoking preprint to the astro-ph arXiv at the beginning of this month: Variable gravity Universe Be ready for a wild ride: the proposed model claims to explain all the known observations, eliminate the Big Bang singularity, account for the patterns we attribute to inflation, the radiation-dominated era, and the matter-dominated era. And Wetterich also wants to boast that his construction "produces" the arrow of time – as if cosmology were needed for that (but that didn't make me stop reading). A single scalar field – the cosmon – may do all these wonderful things, the gospel say. It's weird if not exciting, isn't it? ;-) ### 11-year-old quantum physicist enters a Texas college Mr Carson Huey-You is a 11 years old boy who plays the piano and speaks Mandarin fluently. On 9/11/2001, the 75-pound, 4-foot-7 boy who finds calculus relaxing (basketball is OK, too) wasn't born yet but he wants to become a quantum physicist. And as Fox News, TCU 360, Statesman, and others reveal, he just made a non-trivial step in order to become a quantum physicist. He was just accepted to the Texas Christian University as a freshman physics major. ### Light dark matter in NMSSM and non-diagonalization of BH evolution matrices I want to mention two new papers today. 
First, Jonathan Kozaczuk and Stefano Profumo of Santa Cruz discuss the possibility to embed the very light, sub-$10\GeV$ dark matter particle (indicated by some of the direct search experiments) to the Next to Minimal Supersymmetric Standard Model (NMSSM: it's the MSSM in which the Higgs bilinear coefficient $\mu$ is promoted a chiral superfield $S$ which is, according to many criteria and physicists, more natural than the MSSM itself): Light NMSSM Neutralino Dark Matter in the Wake of CDMS II and a $126\GeV$ Higgs They find out that there are regions in the parameter space of NMSSM that are able to produce this very light higgsino-singlino-mixed LSP dark matter candidate with a huge, spin-dependent cross section coupling it to the nucleons. The Higgs mass may be achieved sort of naturally, other "negative" constraints may also be satisfied, and the scenario produces some automatic predictions, e.g. a large invisible branching fraction of the Higgs decays. ## Tuesday, August 27, 2013 ... ///// ### Getting ready for a war against Syria ...robust Czechoslovak weapons unlikely to resist for too long... The civil war in Syria sucks, like most civil wars. The rebels aren't saints (and Russia accused them of using sarin gas a month ago) but it's the Syrian government that may be expected to behave more responsibly. Given the strong indications that chemical weapons have been used, it's not surprising that the U.S. forces and allies are thinking about an attack that may begin as early as on Thursday. Photo from syrianhistory.com Such a reaction of the West is understandable but needless to say, the West may be playing with fire. And with Czechoslovak weapons, too. ### Thiel-Kasparov debates If you have 52 minutes, you may want to watch this video full of intelligent enough debates between the renowned chess player Garry Kasparov (a chronic world ex-champion and a sort of a political activist in Russia) and the renowned venture capitalist Peter Thiel (the founder of PayPal, the first major Facebook investor, a libertarian, and the supporter of world-changing projects, especially by college dropouts): Video posted via Kasparov's YouTube channel The topics include Google and its vision for the world, the replacement of humans by machines, the bad consequences of any looming nuclear war, politics in Russia, simultaneous chess games (in which Kasparov hasn't lost since 2001; Thiel is a very good player as well, he was surely strong enough in Nice a few years ago to beat your humble correspondent pretty much "reliably" back in Nice – but my chess scalp is surely not a source of pride for anyone who actually plays the game regularly). ## Monday, August 26, 2013 ... ///// ### Lindzen's talk at DDP meeting Alarming Global Warming: What Happens to Science in the Public Square Willie Soon sent us the newly posted video at the bottom, a 56-minute-long talk by Richard Lindzen at the (July) 2012 DDP meeting (Long Island Hotel And Conference Center). DDP stands for "Doctors for Disaster Preparedness" that sounds silly, the talk is 1 year old, and there has already been a 2013 DDP meeting. ### RSS: a negative temperature trend in 16.67 years Before you open this blog entry, you should make sure that all the nuclear power plants in your country are operational. You may need them because the text below contains a really long table (with 415 lines or so) which will be processed as $\rm\LaTeX$ using MathJax. 
;-) About two years ago, Kevin Trenberth and others promoted a paper (Ben Santer and 16 co-authors, 2011) that claimed that one needs 17 years – what a precision – to determine the existence of a global warming trend. The purpose of the paper was to inject some patience to the minds of the alarmists and the undecided – 15 years of "no warming" isn't enough to notice the absence of any warming because you need 17 and not 15 years. Your humble correspondent wrote a tirade explaining that people like Santer and Trenberth were numerologists because there can obviously be nothing special about the 17-year-long interval. The whole continuum of the frequencies contributes to the temperature change and all the confidence levels etc. are depending on the duration continuously. There's no sharp "magic deadline" after which a hypothetical trend "must" show up. At any rate, my preferred temperature record – the satellite-based RSS AMSU dataset – has approached a point in which the global warming trend in the recent 17 years is statistically insignificant and negligible. In fact, if you include the latest 200 months i.e. 16 years and 8 months (from December 1996 through July 2013 included) into your calculation of linear regression, you get a negative warming trend! ### Promising class of heterotic $\ZZ_8$ orbifolds Edward Witten is 62 today, congratulations! After many twists and turns and the birth of roughly five competing classes of stringy compactifications, I still consider the heterotic strings to be the most realistic category of the stringy vacua we know. Heterosis is a powerful tool. Most of the time we mention the heterotic strings, we usually think of a compactification on a smooth Calabi-Yau manifold. The world may be described by one of the 10,000 or so known topologies of six-real-dimensional Calabi-Yau three-folds (each of which has several continuous parameters, the moduli). Those require interacting (not free) world sheet theories and the experts who study them quantitatively usually have to be hardcore mathematicians – very good geometers whose home is a higher-dimensional space full of bundles and sheaves. Until recently, they would find it almost impossibly hard to find some allowed and realistic vector bundles but with the help of line bundles, they've made lots of progress in this technical hurdle during the recent years. But the world may be simpler than that. The spacetime coordinates describing the Calabi-Yau directions may be fermionized which leads us to a large subclass of free fermionic heterotic models. And even if these degrees of freedom remain bosonized, we may obtain free world sheet theories because the Calabi-Yau geometry may be based on $T^6$, the six-dimensional torus. We actually need to consider orbifolds to break the unrealistic gang of 16 supercharges to the phenomenologically viable 4 supercharges. ## Sunday, August 25, 2013 ... ///// ### Insiders and outsiders debate: fuzz or fire? There is a KITP rapid response 10-day workshop on the black hole information puzzle in Santa Barbara: Complementarity, Fuzz, Or Fire? Red fire or fuzz? asks a mad professor before she connects the wires and fires some red fuzz. The speakers (click for all the talks in various formats!) 
include a great part of the most well-known researchers in the area plus some folks who are close enough to them: Marolf, Bousso, Polchinski; van Raamsdonk, Susskind, Maldacena, Sanford; Mathur, Turton, Bena; Harlow, Aaronson (intelligent outsider); Preskill, Oppenheim; Hawking (remotely), Unruh, Wald, Jacobson; Papadodimas, Raju, Nomura, Verlinde, Verlinde; Lowe, Silverstein; Giddings, Banks. It was easier to retype the full list of speakers instead of thinking how to pick, how to order the picked ones, and how to justify the choices. ;-) Juan Maldacena's talk is the only one that I have watched in its entirety so far. He makes lots of jokes – like the comment that the more mathematically precise parts of their work with Lenny were already explained by Susskind (the audience explodes in laughter because Juan is among the most rigorous folks in the field while Susskind is one of the most representative hand-wavers of a sort, but this is not meant as a criticism!). At the beginning, Juan says that paradoxes are normally resolved by realizing that we have been thinking incorrectly about some principles. For example, Loschmidt's paradox (the surprise that reversible fundamental laws are compatible with the irreversible emergent laws in thermodynamics) has been resolved because the irreversibility enters once we consider statistical propositions (Juan says the same thing about these arrow-of-time matters as your humble correspondent). Similarly, strange aspects of dualities were not resolved by abandoning dualities; instead, we figured out how to understand them better. Juan clearly means the usual principles of quantum gravity as well as AdS/CFT that isn't inaccurate or incomplete – as Joe Polchinski is trying to suggest. ## Saturday, August 24, 2013 ... ///// ### Imagining 10 dimensions Peter F. sent me a link to this video, Imagining 10 dimensions. It has 104 minutes but based on various hints and a quick selection, I believe it must be a pretty good one for an apparent amateur creator! Correct me if I am wrong but I hope you won't! ;-) ## Friday, August 23, 2013 ... ///// ### Boddy, Carroll: trying to save physics by sacrificing the Universe ...but no saviors are needed: their irrational Boltzmann Brain alarmism misunderstands what a hypothesis includes... External discussion: Jacques Distler will write a few critical sentences about the Boddy-Carroll paper tomorrow. I completely agree with Distler – as he will reproduce some ideas from the text below (and others). It's not possible for hypothetical future events to influence the present; and there is no particular framework of probability theory into which sentences of the kind "we're likely Boltzmann Brains" may be justifiably embedded. Each of these two bugs is enough to identify the paper as crap and the authors as nuts. On his blog, the Preposterous Universe, Sean Carroll promoted a paper by himself and Kimberly Boddy: The Higgs Boson vs. Boltzmann Brains (his blog) Can the Higgs Boson Save Us From the Menace of the Boltzmann Brains? (arXiv) Last week, I was giving a popular physics talk in a planetarium in Northern Bohemia. It clearly turned out to be too complicated for the bulk of the audience (philosophers are sometimes annoying but a group of philosophers is more ready to listen to some almost real physics than a selected 1/1,000 fraction of the general public in a medium-size town!) but we've had some fun, anyway. 
One of the longest discussions was dedicated to the phase transition that may destroy the Universe; the Higgs field instability is the most ordinary example of such a scenario. In a "seed of doom", the Higgs field (or, more generally, another usually scalar field) may penetrate to a new, lower energy state that is incompatible with life. This "seed of the new lifeless Universe" starts to expand almost by the speed of light and devour everything. You won't feel the pain because your nerves are slower than the inflating nothingness. I wanted to calm the public. The Universe won't collapse anytime soon. At the end, however, I just couldn't tell them anything else than the truth. And the truth is that empirically, we only know that the approximate lifetime of the Universe after which the "seed of doom" starts to grow somewhere is unlikely to be much shorter than the current age of the Universe, 13.8 billion years. It may be comparable, it may be a bit shorter but it may also be much longer and infinite. If it is finite, it sounds sort of unlikely that it would be comparable to the current age of the Universe which means that it's probably much longer. Don't worry. But there's really no "solid" argument that would prove that the Universe won't start to disappear in the next 1 billion years. You may find the "Higgs decay" scenario frightening. The Universe may die long before the Sun runs out of fuel in 7.5 billion AD and goes red giant. What a waste! It may be tomorrow. We're not able to present any solid enough proof that it won't happen. However, Boddy and Carroll are scared of something else: that the Universe won't die soon. So they claim that the unstable Higgs field is our savior from the genuine threat: the Boltzmann Brains. This fear is utterly irrational because the Boltzmann Brains aren't endangering us. They aren't endangering physics, either. The won't ever appear on the Earth (much like Category 6 hurricanes which are nothing else than another proof that Al Gore is a liar without any scruples). There's no reason to sacrifice the world (or billions of dollars). Similar explanations have repeatedly occurred on this blog but here we go again. ### Tohoku, JP wins the International Linear Collider The project may still be cancelled... In June 2013, I discussed the contest between the Sefuri mountains and the Kitakami mountains who will build the International Linear Collider if any collider will be built. Recall that the former offered a sexy 4-minute musical video; the latter offered a somewhat boring, 21-minute-long educational video. The "boring" video guys won! ;-) Congratulations to Hitoshi Murayama et al. Tohoku pitched for ¥1 trillion [$10 billion] collider (JP Times) Miyagi, Iwate prefecture mountains picked as possible site for int'l particle accelerator (The Mainichi) The Japan Times tell us that 50% of the cost should be paid for by the host country and there seems to be some degree of skepticism in the newspaper and in the ministry of education etc. 83% of the overall expenses are construction costs; the rest is paid for land acquisition, salaries, and the production of the equipment. ### Aspects of Al Gore's lies on category 6 hurricanes Two days ago, Al Gore gave an interview to the Washington Post Al Gore explains why he’s optimistic about stopping global warming in which he pretended to be optimistic that unhinged alarmists like himself may still morph from the hopeless dirty losers such as himself to winners. With his characteristic diplomacy, the former future U.S. 
president compared climate realists to champions of apartheid, to homophobes, and to an alcoholic father who explodes whenever the elephant in the room (more precisely, ethanol in the bottle) is being mentioned, which is why his relatives prefer to be silent. But it was his scientific contributions that led to the most widespread reactions: Al Gore: ...Would there be hurricanes and floods and droughts without man-made global warming? Of course. But they’re stronger now. The extreme events are more extreme. The hurricane scale used to be 1-5 and now they’re adding a 6. The fingerprint of man-made global warming is all over these storms and extreme weather events. Remarkably enough, pretty much all the climate writers on all sides of the conflict agreed – while using various words for the same proposition – that Gore is a shameful lying mongrel who should splash himself into a toilet. Aside from expected critics such as Anthony Watts, Marc Morano, the Wall Street Journal, the Hill, Newsmax, Politico, The National Review, and many others, even the Washington Post's Capital Weather Gang, The Union of Concerned Scientists and Climate Skeptics' Dogs, and their likes pointed out that Gore's claim was a science fiction that isn't backed by any credible enough experts. ## Thursday, August 22, 2013 ... ///// ### Chinese medicine is carcinogenic The following insights look particularly ironic and make me think about the prescientific way of analyzing Nature. Among the most omnipresent chemical compounds in Chinese herbal medicine are aristolochic acids (AA), which are found in wild ginger, pipevines, and other long tubular flowers. AA I is almost omnipresent in Aristolochia species. This European Asarum edition of wild ginger almost looks like a tumor itself. The three hexagons plus a pentagon in the chemical formula don't look particularly healthy to me. ;-) But just to be sure, not all polycyclic compounds such as AAs are aromatic; aromatic compounds are only those whose ground state ket vector is a linear superposition of the formulae with several distributions of the single and double C-C/C=C bonds so that it's more accurate to indicate the alternating bonds by a circle inside the polygons. This allows the molecule to strengthen the binding relative to the non-aromatic case and this also tends to make the compounds unhealthy. It's sort of remarkable that this class of organic compounds with an interesting two-state quantum mechanical effect was already named by August Wilhelm Hofmann in 1855. Some people are able to get very far in quantum mechanics without even knowing the symbol $\ket\psi$. ;-) For quite some time, the compound has been known to be carcinogenic – I didn't know about it – and probably more efficient in causing the disease than UV radiation and smoking. As such, it's been banned in many Western countries. The tumors appear in the urinary tract, kidneys, and liver. The compound is probably behind the Balkan endemic nephropathy (nephro- means "related to kidneys"). A new paper Genome-Wide Mutational Signatures of Aristolochic Acid and Its Application as a Screening Tool (Science) Scientists Find Link Between Aristolochic Acid And Liver Cancer (Asian Scientist, review) by Song Ling Poon and dozens of Taiwanese and Singapore-based co-authors was apparently able to identify some genetic fingerprints of the mutations caused by the compound which may turn out to be very useful. ## Tuesday, August 20, 2013 ... 
///// ### Promoting HEP physics in the U.S.: a poll Listen now: I disabled the Odiogo's "listen now" buttons because the company found out that there's insufficient demand for audio ads on the web and the service has to be a paid one, from$129 a year (since September 1st), which seems like too much to me. It's plausible that this blog could be free as a "personal" one but I don't want to investigate. Joseph S. sent me a link to the Symmetry Magazine, Why Particle Physics Matters, that offers you four 1-minute-or-so videos explaining the Americans why HEP physics is worth their money. You may vote for your favorite. I decided not to hide my preferences. This guy from Mississippi is my winner. His passion for learning and his particle physics built on Columbus' shoulders sound appropriate to me. ### Three insightful BH information papers ...I mean papers on entanglement in quantum gravity theories... Yesterday I discussed a paper on the black hole interior that I considered bad but today there's some better news, namely three papers that are interesting and not self-evidently wrong. Let me begin with Black Holes or Firewalls: A Theory of Horizons by Nomura, Varela, and Weinberg, three physicists who were previously pointing out that the black hole firewall arguments were flawed because they didn't treat the superpositions of macroscopically distinct states of black holes correctly, among related "interpretational" flaws. Today, they present an explicit qualitative model of the black hole microstates that is compatible with the unitarity, the locality at long distances, and the equivalence principle. The firewalls are absent and a smooth horizon is present at all times with the probability 100%. An important component of their construction is a tensor doubling of the Hilbert space to account for the interior modes. In that respect, the new paper is close to Papadodimas' and Raju's paper that is being cited as [28] and described, not too prominently, as a similar construction of the black hole interior operators. They also use the eternal black hole and the doubling may resemble the second black hole from the Maldacena-Susskind ER-EPR correspondence as well. However, no paper by Maldacena is being cited (neither the eternal black hole, nor his recent paper with Susskind) which seems bizarre. Perhaps to compensate this surprising absence of Maldacena's papers in the list of references, Juan is the only person thanked for conversations in the acknowledgements. ;-) Note that this is a sign of the typical Maldacena übermodesty. He probably saw the drafts of the paper but he wouldn't mention that it's strange that none of his papers are being referred to. I don't know too many well-known physicists who are this modest. ;-) ## Monday, August 19, 2013 ... ///// ### Bousso's pseudoarguments against $ER=EPR$, black hole complementarity ...the vacuum can't be excited only if one assumes that it can't be excited... When I was living my days in the "physics establishment", it was pretty much true that there was a connected theoretical high-energy theoretical physics community including professors, postdocs, and students that worked hard to learn everything it should learn, that cared about the important new findings, and that cared whether the papers they write are correct ones. You could have taken the arXiv papers from that community pretty seriously and when a paper was wrong, chances were that it would be corrected or withdrawn. 
A serious enough blunder would be found, especially if the paper were sold as an important one, and experts would quickly learn about it and reduced the attention given to the authors of the wrong paper appropriately. You could have said that the people around loop quantum gravities and similar "approaches" didn't belong because they have never respected any quality standards worth mentioning. Everything was clear but the "pure status" of the community began to be blurred with the arrival of the anthropic papers after the year 2000 that suddenly made it legitimate to write down some very lousy, unsubstantiated, non-quantitative claims, often contradicting some hard knowledge. I tended to think that this decrease of the quality expectations and the propagation of philosophically preconceived and otherwise irrational papers was a temporary fluke connected with the anthropic philosophy – because it's so "philosophically sensitive". However, it ain't the case. When one looks at the literature about the black hole information issues, i.e. a big topic that made a tremendous progress in the 1990s, a very large portion of the literature that is completely wrong began to develop. Raphael Bousso just released his 4-page preprint Frozen Vacuum and it's just so incredibly bad – and so far from the first preprint written by a similarly well-known name that is just awful. ### In defense of five standard deviations Originally posted on August 12th. The second part was added at the end. The third part. Last, fourth part. Five standard deviations are cute. However, Tommaso Dorigo wrote the first part of his two-part "tirade against the five sigma", Demistifying The Five-Sigma Criterion I mostly disagree with his views. The disagreement begins with the first word of the title ;-) that I would personally write as "demystifying" because what we're removing is mystery rather than mist (although the two are related words for "fog"). He "regrets" that the popular science writers tried to explain the five-sigma criterion to the public – I think they should be praised for this particular thing because the very idea that the experimental data are uncertain and scientists must work hard and quantitatively to find out when the certainty is really sufficient is one of the most universal insights that people should know about the real-world science. ## Sunday, August 18, 2013 ... ///// ### LIGO: improving sensitivity by squeezed states Gravitational waves could become visible next year On Friday, SciTechDaily wrote about an interesting recent article in Nature: Improvements to LIGO Detector Will Allow Scientists to ‘Listen’ to Black Holes Forming (SciTechDaily, Daily Galaxy) Enhanced sensitivity of the LIGO gravitational wave detector by using squeezed states of light by J. Aasi and 24 co-authors (Nature Photonics: full PDF paper here) LIGO.org press release LIGO, the Laser Interferometer Gravitational-Wave Observatory, a large L-shaped instrument to detect the gravitational waves, hasn't seen anything yet but it may change soon and dramatically. The authors of the new Nature paper – the whole LIGO collaboration – is sending special packets of light, the squeezed states, to one of the LIGO detectors and this modification is improving the sensitivity. ## Saturday, August 17, 2013 ... 
///// ### 95 percent confidence: in HEP vs IPCC When I saw some reports about the IPCC's 95 percent "certainty" that the global warming is mostly man-made, I couldn't avoid thinking about the huge difference between hard sciences (such as particle physics) and soft sciences (such as the contemporary climatology). Reuters saw documents saying something like that: Drafts seen by Reuters of the study by the U.N. panel of experts, due to be published next month, say it is at least 95 percent likely that human activities - chiefly the burning of fossil fuels - are the main cause of warming since the 1950s. That is up from at least 90 percent in the last report in 2007, 66 percent in 2001, and just over 50 in 1995, steadily squeezing out the arguments by a small minority of scientists that natural variations in the climate might be to blame. This figure was discussed by Watts Up With That and The Hockey Schtick (via Climate Depot). I am stunned how underwhelming such statements are and my being stunned has several levels. ## Friday, August 16, 2013 ... ///// ### Krauss-Dent small C.C. from a Higgs seesaw This idea is known to most physicists but it's not a full solution to the C.C. problem Nude Socialist (via Joseph S.) published an article called Dark energy could be the offspring of the Higgs boson which mainly discusses a June 2013 preprint by Lawrence Krauss and James Dent, A Higgs-Saw Mechanism as a Source for Dark Energy. Funny and not terribly serious. And no, no followups to the paper appeared in the first two months. The story quotes Frank Wilczek – without even mentioning his remotely related recent paper on Multiversality (which is somewhat more substantial). We learn that Lawrence Krauss was actually "a Higgs sceptic until the very end". What a poetic way to say "a stubborn moron". At any rate, now Mr Krauss has apparently kindly accepted the belief that there exists a Higgs boson – something he should have learned and understood as an undergrad – so he and James Dent became convinced that they may solve all big problems of physics, too. ## Thursday, August 15, 2013 ... ///// ### Arnold Schwarzenegger orders gas chambers for some conservatives ...with his thick Austrian accent... The Terminator's comments at the National Clean Energy Summit were... unfortunate. According to the Huffington Post (via Junk Science and Climate Depot), he has proposed a final solution to some disagreements with his opponents that seems to be highly popular among famous Austrian-born expats (let me not mention a former German chancellor by his name), perhaps even more popular than their irrational opposition to peaceful nuclear energy: Speaking of greenhouse gas deniers: "Strap some conservative-thinking people to a tailpipe for an hour and then they will agree it's a pollutant!" An interesting but not original (as visitors of a camp in Poland know) technique. But will it achieve what the ex-governor believes that it will achieve? Will the conservative-thinking people agree that CO2 is a pollutant? I don't think so. It seems more likely that they will agree with nothing because they will be dead and dead people can't agree with anything. Instead, Arnold Schwarzenegger will be tried for crimes against humanity – and I guess that the judges would agree that this method to terminate the lives of some conservative-thinking people is a crime against humanity. He will be shown that the Terminator's being human is a film propaganda and his abilities to escape justice indefinitely are movie fantasies. 
Moreover, their death won't have anything to do with CO2. Why? ## Wednesday, August 14, 2013 ... ///// Carroll's QM, NYT's firewalls, Jester's whining on scales Sean Carroll has unlocked the quantum chapter from his "Eternity" book, which I found better than expected despite its misleading comments about the "collapse", "its" relationships with the "arrow of time", the meaning of the "Copenhagen Interpretation", the "many worlds" as the "leading alternative contender", and many other things (he omits Bohm etc.). Carroll's text is flawed in different ways than e.g. Brian Greene's musings about the interpretation of quantum mechanics but I wouldn't say it's "more flawed". I am still not aware of any popular presentation of the foundations of quantum mechanics that is done right. ### Discussion about old and new theoretical physics forums Update: Physics Overflow is live! I am not taking any positions about these matters – and about the Stack Exchange forums, their contents, and their moderators, among related topics – but this blog entry was written with the only purpose: to allow the exchange of information and opinions between users who are interested in the debate about the sufficiency of the existing forums and about the possibilities to create and sustain new ones (and about their desirability and role). ## Tuesday, August 13, 2013 ... ///// ### Steve Pinker is right to defend "scientism" A week ago, Harvard's top evolutionary psychologist Steven Pinker wrote an essay for The New Republic, Science Is Not Your Enemy: An impassioned plea to neglected novelists, embattled professors, and tenure-less historians, that defends the application of the scientific method to various fields, including those that used to be monopolized by the tools of humanities and other methods and non-methods. I think that both Pinker and your humble correspondent think that the would-be expletive "scientism" is being mostly used for the idea that scientific reasoning shouldn't be confined just to the traditional places but it should be extended to new realms. If that's so, count me as a scientist! Or what's the word for the champion of scientism? ;-) To be sure, I have met people who were applying naive, science-inspired models to very complex systems and they would deserve to be criticized or told why they were wrong. But in my experience, these were not the primary recipients of the label "scientism". Pinker starts by saying that the great folks of the Enlightenment were scientists, science has improved our lives in many ways, the understanding itself is extremely valuable (in contrast with a despicable statement in the 2006-2007 Harvard general education requirement that offended me as much as it offended Pinker). ### Erwin Schrödinger and his cat in Google Doodle Erwin Rudolf Josef Alexander Schrödinger was born on August 12th, 1887, i.e. 126 years and 1 day ago, in our then capital, namely Vienna, Austria-Hungary (where he also died in 1961), to a German-speaking botanist and a mostly British daughter of a chemist. He was their only child. This background may explain some of this physicist's deep interest in the foundations of biology. However, he had some more unusual interests related to Eastern religions and pantheism – religious symbols often appeared in his work. In my opinion, this fact boils down to his family background, too. He was brought up in a Lutheran family but called himself an atheist. 
What he really meant by the word "atheist" was a "heretic" which is why his generalized "atheism" also allows the Eastern religions and similar things. ## Monday, August 12, 2013 ... ///// ### Both neutralino, sbottom may weigh less than $20\GeV$ Dark matter searches and LHC rumors may converge in a light sbottom-photino point I decided that the most exciting hep-ph preprint today is Supersymmetry with Light Dark Matter confronting the recent CDMS and LHC Results by Alexandre Arbey, Marco Battaglia, and Farvah Mahmoudi. An interesting detail about the list of the authors is that all of them are partially affiliated with the CERN theory division. Why is it interesting? Because the LHC top squark rumor from February 2012 was later rumored to have arrived from the CERN theory division so these three physicists might know much more about the superpartners accessible to the LHC than the rest of us. They are inspired by the "positive side" of the dark matter wars and investigate whether the MSSM (Minimal Supersymmetric Standard Model) may predict the LSPs (Lightest Superpartners: the supersymmetric theories' candidates for the WIMP dark matter particle, in most cases) that are as light as $10\GeV$ or so. Note that the most accurate figure suggested by the "coalition of the willing" is $8.6\GeV$ by CDMS II Silicon. ## Saturday, August 10, 2013 ... ///// ### Enrico Betti: an anniversary Enrico Betti was an Italian mathematician and politician who was born in October 1823, i.e. 190 years ago, and died on August 11th, 1892 – we will have an anniversary tomorrow. He is most famous for a 1871 paper on topology that explained the Betti numbers – a term that was later coined by Henri Poincaré – which I used in my fairy-tale about the Euler characteristic. While Betti was a one-hit wonder of a sort, his life was pretty interesting. ### Detonation of the Sun A frequent source of links has sent me the coordinates of a page Explosion of Sun introducing a paper by Alexander Bolonkin and Joseph Friedlander urging all the physicists to think about the possibility that a malicious regime will send a thermonuclear weapon into the Sun and speed up the reactions inside the Sun – effectively converting all of our beloved star to a giant H-bomb long before our main source of useful energy is expected to go red giant around the year 7.5 billion AD. This picture contains just a real-world eruption! Via IO9. In the authors' opinion, physicists and others have a moral duty to either exclude the possibility, or look for security measures that would protect us against such a rogue regime, or prove that such a protection is impossible. First of all, is such an explosion possible? ## Friday, August 09, 2013 ... ///// ### Skyrmions could make hard disks 100 times smaller Remotely related: sci-fi gets real: tech junkies should look at 27 science-fiction concepts that morphed into reality in 2012. Nature's Ron Cowen reviewed a technical paper in Nature that is one month old, Writing and Deleting Single Magnetic Skyrmions (Niklas Romming and 7 co-authors from Hamburg). Skyrmions, some topologically non-trivial solutions of non-linear sigma-models first described by Tony Skyrme in the 1960s, may be thought of as tiny vortices of atoms. Because in this very recent breakthrough, Romming et al. became able to create and destroy them at will, it's plausible that they may be used in future magnetic information storage technologies. I've been in love with skyrmions decades before I knew their name. ## Thursday, August 08, 2013 ... 
///// ### SUSY, a scapegoat: different kinds of belief ...much of the anti-SUSY propaganda is unbelievable... One week ago, I argued that it is totally inappropriate to use the adjective "speculative" for theoretical frameworks such as supersymmetry. It's a topic that's being discussed at many places which is why now, one week later, I will reopen these issues. Two days ago, Alok Jha of the Guardian wrote his text One year on from the Higgs boson find, has physics hit the buffers? and it was discussed at a leading HEP crackpots' website where Giotis, Urs Schreiber, and others kindly debate some nasty and stupid physics haters. Incidentally, the subtitle in Jha's article calls SUSY "the elusive followup theory to the Higgs mechanism". First, it isn't the only or most accurate way to describe SUSY which is mostly independent of the Higgs issues. Second, it isn't "the" only followup theory to the Higgs mechanism. Third, if it were "the [right] followup", it shouldn't be shocking if we need more than 1 year after the discovery of the Higgs boson to discover SUSY. One year is a short time in the history of physics. But the basic point that Giotis began to emphasize to these demagogues is that the existence of SUSY in Nature and the discovery of SUSY at the LHC are two completely different questions. The LHC is an accelerator that allows us to reach energies that are one order of magnitude greater than the energies accessible at the previous top collider, the Tevatron. But even at the logarithmic scale, you would need to make about 15 analogous steps to get close to the fundamental scale (or the string scale). The LHC may look expensive to some people but it's just far from a tool allowing us to directly test the most fundamental questions about Nature. Whether this increase of the log(energy) by 1/15 of what we would like is enough to find groundbreaking discoveries isn't and couldn't be clear. The LHC has found the Higgs boson and it is not "impossible" that this is it. ### Some hysteria about Czech politics in the media The Czech media hype some articles, especially in the German-speaking media such as Die Presse, as evidence that everyone thinks that we have become a banana republic. The president has become a king, we're becoming a Putin-style democracy, and so on. A new Bloomberg article talks about a paralysis and other dramatic words. I just don't understand what they're talking about. What happened? ## Wednesday, August 07, 2013 ... ///// ### Ostragene: realtime evolution in a dirty city Ostrava, an industrial hub in the Northeast of the Czech Republic, is the country's third largest city (300,000). It's full of coal mines and steel mills. ArcelorMittal is the world's largest steel producer and bought a major facility there. The air contains products of a chemical plant and some junk blown from the nearby Poland, too. The history of hardcore pollution in the region goes back to the 19th century. Just to be sure, we're talking about real toxins, not bogus pollution like CO2. The air often contains things like benzo(a)pyrene, a carcinogenic polycyclic aromatic hydrocarbon, in concentrations severely (e.g. by 700%) exceeding the allowed maxima. Yesterday I learned something I should have heard about in late 2011 but I had to miss it. But let me get to the point. One should expect that with this much benzo(a)pyrene, people in Ostrava will get many more tumors than those in Prague, for example. But they didn't. 
The life expectancy seems to be a bit lower in Ostrava but this difference seems to be associated with cardiovascular diseases. A paradox. ## Tuesday, August 06, 2013 ... ///// ### CATO: against all public funding of science Terence Kealey, a clinical biochemist at a private university (of Buckingham), wrote the lead essay for the new issue of the libertarian CATO Institute's magazine, The Case against Public Science (CATO Unbound) His text is deeply provocative yet insightful but ultimately wrong at many levels. While I have enough libertarian DNA in my blood so that I can imagine a better, more efficient world in which there is no public funding of science, I am also conservative enough to appreciate that the complete abolition of the public funding of science would represent a dramatic revolution and I am against such revolutions unless their positive impact is supported by really solid arguments. In his interesting essay, Kealey overlooks many important things and makes implausible statements about others. So let's start to ask: Why is the taxpayer paying for the science? ## Saturday, August 03, 2013 ... ///// ### A video on loop quantum gravity Sabine Hossenfelder watched a 43-minute video on loop quantum gravity that was posted two days ago. She wasn't too impressed. Well, I am always impressed by low-budget or no-budget teams that manage to shoot a semiprofessional video of this size but it's hard to avoid some criticisms. Most of them are really criticisms of the topic they chose to cover, loop quantum gravity, so they shouldn't be used against the creators of the video. And I will avoid detailed criticisms of the imperfect sound quality (noise filters were used too much at some places), the subtitles (and whether it makes sense to have a video if there exists a written form of it), and the speakers' limited rhetorical abilities. 00:00 It's strange that loop quantum gravity is being connected with the birth of stars etc. because it isn't really capable of explaining the particle spectrum and other parts of physics that are crucial for our understanding of the early Universe. ## Friday, August 02, 2013 ... ///// ### Ex-HEP climate scientist urged to get arrested, hesitates This article in the Guardian offers us quite an amusing combination of climate science and particle physics: Climate scientists must not advocate particular policies That's the main message we hear from Tamsin Edwards, a climate scientist in Bristol. She reminds us of something you've heard many times on this blog: science cannot answer moral questions. It can't even tell you whether you should have a carbon tax or fight for a wetter atmosphere, among many other things. Scientists who violate this rule inevitably reduce the credibility of science in general, especially if and when there are sensible concerns that the political considerations and goals could have determined the scientist's manipulation with the data. Right. A scientist is also a human with her human rights so she can think and say whatever she wants about many political and other issues – at least, in the genuinely free world, she can – but she just shouldn't sell her political opinions as conclusions of scientific research (or as "scientific consensus" as these political statements are often called). This interpretation is an abuse of science. 
If you were ever denying that the climate scientists are being politically pressured, well, she reminds us that she and her colleagues are repeatedly urged to be persuasive, be brave, and get arrested ;-), whenever necessary. She apparently doesn't want to get arrested. By the way, you may learn several other embarrassing things about the climate pressure groups and their pathological interactions with the climatological community from her essay. So far, researchers such as herself aren't being collected in special AGW Kamikaze units. ## Thursday, August 01, 2013 ... ///// ### Is supersymmetry a "speculative idea"? Matt Strassler wrote a mostly sensible text A Couple of Rare Events on the media's reaction to the LHCb and CMS' measurement of the decay rate of the rare process $B_{(s)}^0\to \mu^+\mu^-$. For the first time, the decay rate was measured to be nonzero and it agrees with the Standard Model within something like a 30% error margin. Matt correctly says that some media sell it as a breathtaking success of the Standard Model that nearly kills all the competitors. And he correctly points out that the media apparently think that the supersymmetry is the only competitor of the Standard Model. Related: The Huffington Post wrote a story about the 4.5-sigma LHCb anomaly (TRF) pointed out by Descotes-Genon, Matias, Virto I agree with much of what he says. In reality, there are other theories beyond the Standard Model; the precision with which the rare decay was measured isn't too great; some values of parameters of beyond-the-Standard-Model theories have been excluded while others remain perfectly fine, in contradiction with the main message of the media. Well, I have some understanding for the media's approach: supersymmetry is the #1 well-motivated theory for beyond-the-Standard-Model physics which is why supersymmetry is sometimes sloppily used as a shortcut for the whole set. In fact, the relative likelihood that SUSY is the first new physics that will be discovered is getting larger which means that this approximation of "Beyond the Standard Model physics" by "supersymmetry" is arguably becoming increasingly accurate. However, there's one detail in Matt's text that I simply can't swallow. He uses the word "speculative" a whopping eight times for supersymmetry and all other ideas for beyond-the-Standard-Model physics. I think that this adjective – something that Matt has clearly adopted as a major part of his idiosyncratic language – is totally inappropriate for these theories. Why? ### 1,700 U.S. cities partially underwater by 2100 Some climate alarmists were celebrating a transmutation they have never seen in their lives: melting of ice. The ice around the North Pole melted and created a small lake. So wonderful! Mother Nature abruptly stopped the celebrations when it did something else that the climate alarmists couldn't have possibly expected: it refroze the water and the lake disappeared again. As soon as Alexander Ač sees this picture, he will be re-energized and will run a long story on his blog about a U.S. city that was finally devoured by the ocean. Shockingly enough, the Earth, the Solar System, the Milky Way, and the Universe survived this melting episode. No doubt, the water near the poles has done a similar flip-flop millions of times in the past. The end of the "polar lake" celebrations doesn't mean that the climate alarmists stopped producing gigatons of insane fantasies. Quite on the contrary. We were just told that 1,700 U.S. 
Cities Could Be Partially Underwater by 2100 Due to Climate Change Climate Central, an organization whose mission is to answer every question by "climate change will destroy it", wrote a "study" – apparently taken seriously by the Pentagon – claiming that Boston, New York, Miami, and 1,700+ other U.S. cities will be at least 25% underwater by 2100, if measured by the percentage of the current population.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4285418391227722, "perplexity": 2177.72769842132}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886116921.70/warc/CC-MAIN-20170823000718-20170823020718-00719.warc.gz"}
https://www.physicsforums.com/threads/magnetic-flux-emf.228967/
# Magnetic Flux/EMF.

1. Apr 14, 2008 ### jcpwn2004

1. The problem statement, all variables and given/known data
A patient's breathing is monitored by wrapping a 200 turn flexible metal belt around the patient's chest. When the patient inhales, the area encircled by the coil increases by .0039 m^2. The magnitude of the Earth's magnetic field is 50x10^-6 T and it makes an angle of 50 degrees with the direction perpendicular to the coil. If the patient requires 1.5 s to inhale, find the average emf induced in the coil.

2. Relevant equations
Magnetic flux = BA. EMF = N x change in (MagFlux)/change in time.

3. The attempt at a solution
I don't understand why I'm not getting this problem; it seems pretty straightforward. I do Magnetic Flux = BA so (50x10^-6)(.0039)(cos40) which equals 1.5x10^-7. Then I use the 2nd equation and emf = 200(1.5x10^-7)/1.5 and I get 2x10^-5 V. The answer is supposed to be 16.7x10^-6 V though.

2. Apr 14, 2008 ### mysqlpress

I think it is the problem with "makes an angle of 50 degrees with the direction perpendicular to the coil", and magnetic flux is defined as BA, not B delta A. The equation should be e.m.f = -N dBA/dt = -NB dA/dt where dA is the area changed.

3. Apr 15, 2008 ### jcpwn2004

I don't see how that makes a difference? Don't you get the same thing?

4. Apr 15, 2008 ### alphysicist

Hi jcpwn2004,

If the magnetic field is constant, the magnetic flux through a loop is B A cos(theta), where theta is the angle between the direction of the field and the perpendicular to the loop. So that angle in your calculation needs to be 50 degrees, not 40 degrees.

5. Apr 15, 2008 ### mysqlpress

"Perpendicular to the coil area", I would say. And you don't actually know the magnetic flux since the area is not given, but a change in area is given.

6. Apr 15, 2008 ### alphysicist

Hi mysqlpress,

Would you say there is a difference between perpendicular to the coil area and perpendicular to the loop? I chose those words because that was the wording in the original problem and I don't think there is any ambiguity. But I've been wrong before.

In your first post, remember that the process of breathing will not lead to a uniform rate of change for the flux, and so we do not want to use the instantaneous emf equation with the derivative (we don't know enough to find the derivative), but the average emf equation with the differences. I know in this problem it's rather straightforward to see that you use the difference form (all they give you is the changes) but in certain problems it could be vital to keep the definitions of each in mind. If a problem was: A single loop of wire has a magnetic flux of $\Phi = \sin(t)$ for the times from $t=0\to \pi$. What is the instantaneous induced emf magnitude in the loop during this time? then the result would be $${\cal E}(t)=\cos(t)$$ but if the question had been: What is the average induced emf magnitude during this time? the result would be ${\cal E}=0$
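The arithmetic above is easy to mis-key, so here is a short numeric check (my own addition, not part of the original posts). It assumes the problem data quoted in the thread and reproduces both the intended answer with the 50-degree angle and the original poster's value with 40 degrees.

```python
import math

N = 200                    # turns
B = 50e-6                  # T, Earth's magnetic field
dA = 0.0039                # m^2, increase in enclosed area while inhaling
theta = math.radians(50)   # angle between the field and the coil's perpendicular
dt = 1.5                   # s, time to inhale

d_flux = B * dA * math.cos(theta)   # change in flux through one turn (Wb)
emf = N * d_flux / dt               # average induced emf over the inhale
print(f"average emf = {emf:.2e} V")   # ~1.67e-05 V, the quoted answer

# Re-doing it with 40 degrees reproduces the original poster's ~2e-5 V:
print(f"{N * B * dA * math.cos(math.radians(40)) / dt:.2e} V")
```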
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8696647882461548, "perplexity": 727.7628147172505}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917124478.77/warc/CC-MAIN-20170423031204-00238-ip-10-145-167-34.ec2.internal.warc.gz"}
https://www.physics-in-a-nutshell.com/article/47/polar-representation-and-eulers-formula
Polar Representation and Euler's Formula

As was seen before, one can represent any complex number as a vector in a two-dimensional plane - the so-called Argand diagram. Commonly, complex numbers are written in terms of rectangular coordinates with the $x$-coordinate being given by the real part and the $y$-coordinate by the imaginary part of the complex number.

Polar Representation

An equivalent way to represent complex numbers is provided by the polar representation. Here each corresponding vector is characterised by its length $|z|=\sqrt{z \bar{z}} \in [0,\infty)$ and the angle $\varphi \in [0,2\pi)$ between the real axis and this vector.[1] The domains of the coordinates $|z|\in [0,\infty)$ and $\varphi\in [0,2\pi)$ are limited in order to ensure a unique assignment between the complex numbers and the points in space. For instance, raising the angle by $2\pi$ or $360^\circ$ reproduces the same point in space and hence the same complex number. Equivalently, raising the angle by $\pi$ or $180^\circ$ corresponds to a simple multiplication by -1 of the whole number. One can easily see that the rectangular coordinates $x$ and $y$ are related to the polar coordinates $|z|$ and $\varphi$ by basic trigonometric relationships:[2] \begin{align} x &= \Re{z} = |z| \cos (\varphi) \\ y &= \Im{z} = |z| \sin (\varphi) \\[1ex] \Rightarrow \quad z &= x+iy = |z| \left[ \cos (\varphi) + i \sin (\varphi) \right] \label{eq:square-brackets} \end{align} Accordingly, the polar coordinates can be expressed in terms of the rectangular ones as well: \begin{align} |z| &= \sqrt{z\bar{z}} = \sqrt{x^2+y^2} \\ \tan (\varphi) &=\frac{\sin (\varphi)}{\cos (\varphi)} = \frac{y}{x} \end{align} Since $\tan$ is periodic, one needs to be careful with its inverse function (which is not unique). A given ratio of $y$ and $x$ can correspond to different values of $\varphi$. Thus, one needs to pay special attention when calculating $\varphi$. The correct way of doing that is provided by the so-called atan2 function. If one now goes ahead and tries to simplify the expression in square brackets in eq. \eqref{eq:square-brackets}, one obtains a result which is quite remarkable:

Euler's Formula

The expression $\cos (\varphi) + i \sin (\varphi)$ can be simplified by replacing the trigonometric functions $\cos$ and $\sin$ with their power series representations and by using the relation $i^2=-1$:[3] \begin{align} \cos (\varphi) + i \sin (\varphi) &= \sum_{n=0}^{\infty} (-1)^n \frac{\varphi^{2n}}{(2n)!} + i \sum_{n=0}^{\infty} (-1)^n \frac{\varphi^{2n+1}}{(2n+1)!} \nonumber \\ &= \underbrace{ \sum_{n=0}^{\infty} i^{2n} \frac{\varphi^{2n}}{(2n)!}}_\text{even terms} + \underbrace{ \sum_{n=0}^{\infty} i^{2n+1} \frac{\varphi^{2n+1}}{(2n+1)!} }_\text{odd terms} \nonumber \\ &= \sum_{n=0}^{\infty} \frac{(i\varphi)^n}{n!} = e^{i\varphi} \end{align} The result of this short calculation is referred to as Euler's formula:[4][5] \begin{align} e^{i\varphi} = \cos (\varphi) + i \sin (\varphi) \end{align} The importance of the Euler formula can hardly be overemphasised for multiple reasons:

• It indicates that the exponential and the trigonometric functions are closely related to each other for complex arguments even though they exhibit a completely different behaviour for real arguments. 
In particular, one can express the trigonometric functions in terms of complex exponentials by using the definitions of the real and imaginary part of a complex number:[6][7] \begin{align} \cos(\varphi) &= \Re{e^{i\varphi}} = \frac{e^{i\varphi} + e^{-i\varphi}}{2} \\ \sin(\varphi) &= \Im{e^{i\varphi}} = \frac{e^{i\varphi} - e^{-i\varphi}}{2i} \end{align} In general it is much easier to evaluate expressions that are given in terms of exponentials as compared to trigonometric ones - some examples/applications are given here.

• Evaluating the Euler formula for $\varphi=\pi$ yields a result which is considered as one of the most beautiful mathematical expressions that were ever found: \begin{align} e^{i\pi} + 1 = 0 \end{align} This expression unifies the three very fundamental numbers $e$, $\pi$ and $i$ as well as 0 and 1 within a single and even very simple equation.

• Furthermore, products of complex numbers can be rather unpleasant to evaluate if the numbers are given in rectangular representation. However, such products can be handled very easily when being given in polar form as will be demonstrated below.

Multiplication of Complex Numbers

Let $z_1 = |z_1| e^{i\varphi_1}$ and $z_2 = |z_2| e^{i\varphi_2}$ be two complex numbers in polar representation. Their product is given by:[8][9] \begin{align} z_1 z_2 = |z_1||z_2| e^{i(\varphi_1+\varphi_2)} \end{align} Hence, its absolute value $|z_1 z_2 | = |z_1| |z_2|$ is the product of the individual absolute values $|z_1|$ and $|z_2|$ and its angle is equal to the sum of the individual angles $\varphi_1$ and $\varphi_2$. Have a look at the following example \begin{align} z_1 = \frac{3}{2} e^{i\frac{\pi}{6}}& \quad\text{and}\quad z_2 = 2 e^{i \frac{3\pi}{4}} \\ \Leftrightarrow \quad &z_1 z_2 = 3 e^{i\frac{11\pi}{12}} \end{align} which is a straightforward calculation. The result is visualised in figure 2. The same calculation could be done in rectangular coordinates just as well, but it would definitely be less fun as you can convince yourself: \begin{align} z_1 = \frac{3}{2}\left( \sqrt{\frac{3}{4}} + i\, \frac{1}{2} \right) \quad\text{and}\quad z_2 = 2\left( -\sqrt{\frac{1}{2}} + i \sqrt{\frac{1}{2}} \right) \end{align} Therefore it is reasonable to use the polar representation when dealing with products of complex numbers.

References

[1] Christian B. Lang, Norbert Pucker, Mathematische Methoden in der Physik, Springer Spektrum 2016 (ch. 2.1)
[2] Christian B. Lang, Norbert Pucker, Mathematische Methoden in der Physik, Springer Spektrum 2016 (ch. 2.1)
[3] Wolfgang Nolting, Grundkurs Theoretische Physik 1, Springer 2012 (ch. 2.3)
[4] Wolfgang Nolting, Grundkurs Theoretische Physik 1, Springer 2012 (ch. 2.3)
[5] Christian B. Lang, Norbert Pucker, Mathematische Methoden in der Physik, Springer Spektrum 2016 (ch. 2.3.1)
[6] Wolfgang Nolting, Grundkurs Theoretische Physik 1, Springer 2012 (ch. 2.3)
[7] Christian B. Lang, Norbert Pucker, Mathematische Methoden in der Physik, Springer Spektrum 2016 (ch. 2.3.1)
[8] Wolfgang Nolting, Grundkurs Theoretische Physik 1, Springer 2012 (ch. 2.3)
[9] Christian B. Lang, Norbert Pucker, Mathematische Methoden in der Physik, Springer Spektrum 2016 (ch. 2.3.1)
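The worked example above ($z_1 z_2 = 3 e^{i\,11\pi/12}$) is easy to verify numerically. The following short Python sketch is my own addition, not part of the article; it uses the standard cmath module, whose polar and phase functions implement exactly the atan2 convention mentioned in the text.

```python
import cmath
import math

# Check Euler's formula at an arbitrary angle (illustration only).
phi = 0.73
assert cmath.isclose(cmath.exp(1j * phi), complex(math.cos(phi), math.sin(phi)))

# Check the multiplication example: z1 = (3/2) e^{i pi/6}, z2 = 2 e^{i 3pi/4}.
z1 = 1.5 * cmath.exp(1j * math.pi / 6)
z2 = 2.0 * cmath.exp(1j * 3 * math.pi / 4)
r, theta = cmath.polar(z1 * z2)      # back to polar form via atan2

print(r, theta)                      # ~3.0 and ~2.8798 (= 11*pi/12)
assert math.isclose(r, 3.0)
assert math.isclose(theta, 11 * math.pi / 12)
```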
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 9, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989856481552124, "perplexity": 741.0430624860851}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039536858.83/warc/CC-MAIN-20210421100029-20210421130029-00169.warc.gz"}
https://supportforums.blackberry.com/t5/Java-Development/How-to-use-external-fonts-in-SVG/m-p/515162/highlight/true
## Java Development

## Re: How to use external fonts in SVG?

(Developer) AFAIK the font is only accessible via SVG; it's not equivalent to FontManager.load().

## Re: How to use external fonts in SVG?

(New Contributor) What I'm saying is that the SVG engine is not using the font defined in the SVG file. Plus, ExternalResourceHandler.requestResource is not invoked with the ttf URI specified in the SVG file. And the Bold 9000 uses a default font. Maybe the issue comes from the Bold 9000 simulator packaged in the 4.6.0.21 component pack.

## Re: How to use external fonts in SVG? [Edited]

(Developer) Yes, you're right, it's not pulling the font via the external resource handler; it's finding it as an embedded resource. For instance, if I have a font named "LCD" and reference this font via the font-family attribute in SVG, it will find and load the font if the ttf file is included as a resource (it doesn't matter what the ttf file is called). I'm not sure if this is OS 4.6 only (I haven't had a chance to do more testing). I assumed it was working from an external file because I was switching between external SVG (with an external ttf file) and embedded SVG, and the font worked in both cases. Sorry for the confusion!
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.915756344795227, "perplexity": 7377.648517130507}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501173872.97/warc/CC-MAIN-20170219104613-00539-ip-10-171-10-108.ec2.internal.warc.gz"}
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&tp=&arnumber=6300293
This chapter contains sections titled: Criterion for Inviscid Flow, Acceleration of a Fluid Particle, Euler's Equation, Bernoulli's Equation, Euler's Equation in Streamline Coordinates, Inviscid Flow in Noninertial Reference Frames, Special Flows, Problems, Bibliography
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9911754727363586, "perplexity": 5088.986335806481}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463609837.56/warc/CC-MAIN-20170528141209-20170528161209-00250.warc.gz"}
https://awwalker.com/2017/09/01/the-philosophy-of-square-root-cancellation/
# The Philosophy of Square-Root Cancellation Partial sums of the Möbius function appear to grow no faster than a square-root. A great many problems in analytic number theory concern bounds for finite sums in which some amount of cancellation is to be expected. For example, one might study the partial sums of the Möbius function, $\displaystyle M(x) = \sum_{n \leq x} \mu(n),$ which satisfies the trivial bound $M(x) = O(x)$. As the Möbius function changes sign infinitely often, we expect some amount of cancellation in $M(x)$, but such progress is hard won. Indeed, the unspecified improvement $M(x)=o(x)$ is already equivalent to the famous Prime Number Theorem (PNT). Just how much cancellation do we expect in the sum $M(x)$? Replacing the PNT with the Riemann hypothesis, we could show $M(x) = O(x^{1/2+\epsilon})$ for all $\epsilon > 0$. (In fact, these two statements are equivalent.) Conversely, $\Omega_\pm$-results (due to Kotnik and te Riele, e.g.) imply that the exponent 1/2 is essentially optimal. We might say that $M(x)$ is suspected to demonstrate square-root cancellation, since $M(x)$ is (conjecturally) no larger than the square-root of the length of its defining sum. A second example concerns bounds for the Kloosterman sums $\displaystyle S(m,n;x) = \sum_{u \in (\mathbb{Z}/x\mathbb{Z})^\times} e\!\left(\frac{m u + n u^{-1}}{x} \right),$ in which $e(z) = e^{2\pi i z}$ and $u^{-1}$ denotes the inverse of $u$ mod $x$. Here, the trivial bound $\vert S(m,n;x) \vert \leq \varphi(x) \leq x$ was famously improved upon by Weil's oh-so-non-trivial estimate $\displaystyle \vert S(m,n;x) \vert \leq d(x) \sqrt{\gcd(m,n,x)}\sqrt{x},$ which is $O(x^{1/2+\epsilon})$ for fixed $m$ and $n$. In other words, Kloosterman sums demonstrate square-root cancellation. In this note, I'll discuss why square-root cancellation is so typical in problems in number theory, and give a quick survey of important sums known or widely conjectured to satisfy bounds of this form. — RANDOM WALKS — It is widely speculated that the values of the Möbius function behave like a random variable taking values $\pm 1$ on the square-free integers. Given this, what might we expect out of $M(x)$? Let $\widetilde{\mu}: [1,x] \to \{\pm 1\}$ be a “random” function (i.e. chosen uniformly at random from the $2^x$ functions of this form). These are called random walks, and in this particular case, often lumped together as the simple random walk on $\mathbb{Z}$. The number of random walks that would give $\widetilde{M}(x) = k$ is $\displaystyle \binom{x}{(x-k)/2},$ provided of course that $x$ and $k$ have the same parity.  From here, various estimates can give bounds on the probability that $\widetilde{M}(x)$ exceeds $O(x^\alpha)$. To spare you a slog, we might cite Hoeffding's inequality, which here provides $\displaystyle \mathbb{P}\big(\vert \widetilde{M}(x) \vert \geq t \big) \leq 2 \exp\left( - \frac{t^2}{2x} \right).$ Note that this probability becomes vanishingly small as $t$ exceeds $O(\sqrt{x})$, so we conclude that “most” random walks exhibit a form of square-root cancellation. Considering the partial sums of arithmetic functions as walks leads to the following moral principle, which I’ll call The Philosophy of Square-Root Cancellation: The Philosophy of Square-Root Cancellation: We should expect square root cancellation in the partial sum of any arithmetic function that behaves “randomly”. Returning to the Möbius function, we note that the Riemann hypothesis follows from the conjecture that $\mu$ behaves like a random variable. (This is the probabilistic evidence towards the Riemann hypothesis.) 
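As a concrete illustration of this philosophy (my own addition, not from the original post), the following Python sketch sieves the Möbius function up to 10^5, tracks the normalized partial sums $|M(x)|/\sqrt{x}$, and compares them with a genuinely random $\pm 1$ walk of the same length; both quantities typically stay of order one.

```python
import random

N = 100_000

# Linear sieve for the Moebius function mu(n), n <= N.
mu = [1] * (N + 1)
is_comp = [False] * (N + 1)
primes = []
for i in range(2, N + 1):
    if not is_comp[i]:
        primes.append(i)
        mu[i] = -1
    for p in primes:
        if i * p > N:
            break
        is_comp[i * p] = True
        if i % p == 0:
            mu[i * p] = 0      # p^2 divides i*p, so mu vanishes
            break
        mu[i * p] = -mu[i]     # one extra prime factor flips the sign

# Partial sums M(x) versus a random +-1 walk of the same length.
M, walk = 0, 0
worst_M, worst_walk = 0.0, 0.0
for n in range(1, N + 1):
    M += mu[n]
    walk += random.choice((-1, 1))
    worst_M = max(worst_M, abs(M) / n ** 0.5)
    worst_walk = max(worst_walk, abs(walk) / n ** 0.5)

print("max |M(x)|/sqrt(x):   ", round(worst_M, 3))
print("max |walk(x)|/sqrt(x):", round(worst_walk, 3))
```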
— A SURVEY OF SQUARE-ROOT CANCELLATION IN NUMBER THEORY — We have already seen two cases (Kloosterman sums and $M(x)$) in which square-root cancellation is proven or widely conjectured. In this section, I’d like to fill out the picture with a great many more examples. I: The Pólya–Vinogradov Inequality.  In 1918 Pólya and Vinogradov (independently) showed that $\displaystyle \bigg\vert \sum_{n \in [A,B]} \chi(n) \bigg\vert = O\big(\sqrt{q} \log q\big),$ in which $\chi$ is any non-principal Dirichlet character of modulus $q$ and $A < B$ are integers. Since the sum of $\chi$ over all of the residues mod $q$ is $0$ (by orthogonality), we may assume that the sum above is no longer than $q$ in length, so Pólya–Vinogradov represents square-root cancellation. Under the assumption of GRH, Montgomery and Vaughan (1977) have given the slight improvement $O(\sqrt{q}\log\log q)$. On the other hand, it is known (Paley, 1932) that these character sums are infinitely often $\gg \sqrt{q}\log \log q$, so square-root cancellation is the best one can expect. II: Classical Gauss Sums. The Gauss sum of a Dirichlet character $\chi$ of modulus $q$ is the finite sum $\displaystyle G(\chi) = \sum_{u=1}^q \chi(u) e^{2\pi i u/q}.$ These sums were first considered by Gauss under the assumption that $\chi$ was the Legendre symbol. Here, Gauss proved that $G(\chi)= i \sqrt{q}$ or $\sqrt{q}$ depending on the residue of $q$ mod $4$, and the generalization $\vert G(\chi) \vert = \sqrt{q}$ holds for any primitive character. This cancellation is most easily seen as a consequence of the Plancherel formula with respect to Fourier inversion on $\mathbb{Z}/q\mathbb{Z}$. III: Counting Points on Elliptic Curves.  Fix $a,b \in \mathbb{Z}$ and consider the elliptic curve $\displaystyle E: \quad y^2 =x^3 +ax +b$ over the finite field $\mathbb{F}_p$, with $p$ prime. The number of points $(x,y)$ on the reduction of $E$, which we write as $N_p$, can be written in terms of the Legendre symbol: $\displaystyle N_p = p+ \sum_{x=0}^{p-1} \left(\frac{x^3+ax+b}{p}\right).$ Square-root cancellation suggests that $\vert N_p - p \vert = O(\sqrt{p})$ may hold. This (proven) result is known as Hasse’s theorem, and essentially follows from a bound on the magnitude of the roots of the local zeta function of $E$. (That is, Hasse’s theorem represents the analogue of the Riemann hypothesis for the local zeta function of $E$.) IV: The Gauss Circle Problem.  The Gauss circle problem concerns estimates for the number $S_2(R)$ of integer lattice points bounded by the circle of radius $\sqrt{R}$. One expects (and Gauss showed) that $S_2(R) \sim \pi R$. More precisely, Gauss proved that $S_2(R) = \pi R + P_2(R),$ in which $P_2(R)$ is an error term satisfying $P_2(R) = O(\sqrt{R})$. To see this another way, let $r_2(n)$ denote the number of representations of $n$ as a sum of two squares. Then, since $\displaystyle P_2(R) = \sum_{m \leq R} \left(r_2(m)-\pi\right),$ we recognize Gauss’ bound as a form of square-root cancellation after accounting for a main term. Surprisingly, greater-than-squareroot cancellation is conjectured to occur within $P_2(R)$, and we expect $P_2(R) = O(R^{1/4+\epsilon})$. This deviation from “random” behavior can be explained in practice by shortening the sum of length $R$ via Poisson summation. It may also be possible to recognize the additional cancellation as a consequence of the Hecke relations. (Indeed, a general greater-than-squareroot cancellation is known for sums of coefficients of other modular forms.) — EXERCISES — Exercise 1. 
Let $\widetilde{\mu}: [1,x] \to \{\pm 1\}$ be a random function as before, with random walk $\widetilde{M}(x)$. Use Stirling’s approximation to prove that $\displaystyle \mathbb{P}(\vert \widetilde{M}(x) \vert \leq x^\alpha) = \sqrt{\frac{2}{\pi}} \cdot x^{\alpha-\frac{1}{2}}+O\big(x^{3\alpha-\frac{3}{2}}\big).$ Conclude that for “most” random functions $\widetilde{\mu}$, the walk $\widetilde{M}(x)$ is $\Omega(x^{1/2-\epsilon})$ for all $\epsilon > 0$.
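To complement the survey above, here is a small numerical spot-check (my own addition, not the author's): the quadratic Gauss sum has absolute value exactly $\sqrt{p}$, and a Kloosterman sum with prime modulus stays within Weil's bound $2\sqrt{p}$. The prime $p$ and the parameters $m$, $n$ below are arbitrary choices; the modular inverse pow(u, -1, p) needs Python 3.8+.

```python
import cmath
import math

def legendre(a, p):
    """Legendre symbol (a|p) via Euler's criterion, p an odd prime."""
    r = pow(a % p, (p - 1) // 2, p)
    return -1 if r == p - 1 else r

def gauss_sum(p):
    # G(chi) for chi the Legendre symbol mod p; |G(chi)| should equal sqrt(p).
    return sum(legendre(u, p) * cmath.exp(2j * math.pi * u / p) for u in range(1, p))

def kloosterman(m, n, p):
    # S(m, n; p) summed over the units mod the prime p; Weil gives |S| <= 2*sqrt(p).
    return sum(cmath.exp(2j * math.pi * (m * u + n * pow(u, -1, p)) / p)
               for u in range(1, p))

p = 1009
print(abs(gauss_sum(p)), math.sqrt(p))            # both ~31.76
print(abs(kloosterman(3, 7, p)), 2 * math.sqrt(p))  # bound holds
```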
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 82, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.987760603427887, "perplexity": 756.5129154786126}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00504.warc.gz"}
https://mathoverflow.net/questions/321071/is-there-a-theorem-showing-that-de-rham-homology-is-isomorphic-to-singular-homol
# Is there a theorem showing that de Rham homology is isomorphic to singular homology? The only exposition of de Rham homology I've found is an appendix to Uranga and Ibáñez's book on String Phenomenology. It was brief and gave only a basic outline of how to construct this homology. Now de Rham's theorem asserts that there is an isomorphism between the de Rham cohomology of a smooth manifold and its singular cohomology; and so what appears to be an invariant of smooth structure is actually an invariant of topological structure. Is there a similar theorem showing an isomorphism between de Rham homology and singular homology? • What is deRham homology? – Praphulla Koushik Jan 17 '19 at 4:02 • I think one uses currents instead of differential forms... – Francesco Polizzi Jan 17 '19 at 8:58 • See Chapter IV of De Rham's book Differentiable manifolds. The result you want follows from Thm.16 in Sec. 21. – Liviu Nicolaescu Jan 17 '19 at 9:28 • I think the book of Bredon, Geometry and topology contains a proof (for cohomology) – Ali Taghavi Jan 17 '19 at 10:42 • @FrancescoPolizzi: There is a description using currents; but the book I've alluded to above uses submanifolds. I appreciate currents are more general, and subsume submanifolds by way of Stokes theorem; however, I find the description of homology via submanifolds more intuitive than the simplicial approach in Hatcher. To my mind it makes a better beginning. Though of course one needs to know what a manifold is - but intuitively we know what this is. – Mozibur Ullah Jan 17 '19 at 17:20 I guess that by de Rham homology you mean the homology groups $$H_{k, \, \mathrm{dR}}(X)$$ constructed on a closed manifold $$X$$ by using the complex of currents. In that case, [1, Theorem 2 page 582] shows that there is an isomorphism between $$H^{n-k}_{\mathrm{dR}}(X)$$ and $$H_{k, \, \mathrm{dR}}(X)$$, where the cohomology is the usual one (constructed by using the complex of differential forms) and $$n = \dim X$$. Now, using the standard De Rham isomorphism between $$H^{n-k}_{\mathrm{dR}}(X)$$ and the singular cohomology group $$H^{n-k}_{\mathrm{sing}}(X, \, \mathbb{R})$$, together with the Poincaré duality $$H^{n-k}_{\mathrm{sing}}(X, \, \mathbb{R}) \simeq H_{k, \,\mathrm{sing}}(X, \, \mathbb{R})$$, we deduce the desired isomorphism $$H_{k, \, \mathrm{dR}}(X) \simeq H_{k, \,\mathrm{sing}}(X, \, \mathbb{R}).$$
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9056735634803772, "perplexity": 343.38287437414937}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107905965.68/warc/CC-MAIN-20201029214439-20201030004439-00358.warc.gz"}
https://pytorch.org/docs/stable/generated/torch.stack.html
# torch.stack torch.stack(tensors, dim=0, out=None) → Tensor Concatenates a sequence of tensors along a new dimension. All tensors need to be of the same size. Parameters • tensors (sequence of Tensors) – sequence of tensors to concatenate • dim (int) – dimension to insert. Has to be between 0 and the number of dimensions of concatenated tensors (inclusive) • out (Tensor, optional) – the output tensor.
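A minimal usage example (not part of the original documentation page) showing where the new dimension is inserted for different values of dim:

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([4, 5, 6])

# dim=0 stacks the inputs as rows; the new dimension comes first.
rows = torch.stack((a, b), dim=0)   # shape: torch.Size([2, 3])

# dim=1 interleaves the inputs element-wise; the new dimension comes second.
cols = torch.stack((a, b), dim=1)   # shape: torch.Size([3, 2])

print(rows)
print(cols)
```

For comparison, torch.cat joins tensors along an existing dimension; torch.stack behaves like unsqueezing each input at dim and then concatenating the results.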
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.44623568654060364, "perplexity": 6525.5999431341725}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738960.69/warc/CC-MAIN-20200813043927-20200813073927-00265.warc.gz"}
https://math.stackexchange.com/questions/778287/number-of-lattice-points-in-an-n-ball
# number of lattice points in an n-ball I have faced a problem in my work and I will appreciate any hint/reference as I am not much into the lattice problems. Assume an n-dimensional lattice $\Lambda_n$ with generator matrix $G$. Note that lattice points are not necessarily integer, i.e., $x\in \mathbb{R}$ where $x$ is a lattice point. Is there a way to count/estimate/bound the number of lattice points inside and on an n-ball? any hint or reference to appropriate literature is appreciated • You mean to say that the lattice points are the images of some integer lattice under a linear transformation, and therefore $x\in\mathbb R^n$? So you might as well ask for integer lattice points in some $n$-dimensional paraboloid. Not that I think that formulation is any easier, mind, just thinking out loud. What kind of performance are you looking for? Would a scan conversion which gives an exact answer $m$ (i.e. there are $m$ points inside the ball) in time $O\left(m^{(n-1)/n}\right)$ be preferable to one which uses the bounding box for a very loose bound in $O(1)$? – MvG May 2 '14 at 13:05 • I was googling about the lattice points in an n-ball and found some papers about integer lattices (Gauss' circle problem) but in my problem the lattice is not integer. With $x\in \mathbb{R}$ I mean that the entries of the lattice point (vector $x$) are real numbers. I am not sure if the exact number of the lattice points inside an n-ball is solved but even a bound can be enough to proceed with my problem. – M.X May 2 '14 at 13:38 • What do you mean by “solved”? Sure you can compute that number, a brute force enumeration will yield the count eventually. On the other hand, a closed formula might be unrealistic. – MvG May 2 '14 at 14:08 The $n$-volume of the fundamental parallelotope is the absolute value of the determinant of $G.$ The $n$-volume of the ball of radius $1$ is, in shorthand, $\pi^{n/2}/ (n/2)!,$ or $$\omega_n = \frac{\pi^{n/2}}{\Gamma \left( 1 + \frac{ n}{2} \right)}.$$ The volume of the ball of radius $R$ is $\omega_n R^n.$ So, there is your estimate of the count, $\omega_n R^n / |G|.$ • @jvnv looked at your posts, not sure what part of this could be a problem for you. The volume of the ball should be in any mathematical statistics book; the easiest method is induction by 2, even/odd dimension, using polar coordinates for the integral. Note that I am assuming $G$ expresses a basis of the lattice as its rows, so the Gram matrix is actually $G G^T.$ For anything in that area, try SPLAG by Conway and Sloane. Sphere Packing Lattices and Groups – Will Jagy Sep 25 '17 at 17:35
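As a rough illustration of the accepted answer's estimate (this sketch is my addition, not from the thread; the generator matrix and radii are made-up examples), one can compare a brute-force enumeration of lattice points inside an n-ball with the heuristic count $\omega_n R^n / \vert\det G\vert$:

```python
import itertools
from math import gamma, pi

import numpy as np

def count_lattice_points(G, R):
    """Brute-force count of lattice points z @ G (z integer) with ||z @ G|| <= R.
    G is an (n, n) generator matrix whose rows form a basis of the lattice."""
    n = G.shape[0]
    Ginv = np.linalg.inv(G)
    # If x = z @ G lies in the ball, then z_i = x . Ginv[:, i], so by
    # Cauchy-Schwarz |z_i| <= R * ||Ginv[:, i]||.  This box therefore
    # contains every candidate coefficient vector.
    bounds = [int(np.floor(R * np.linalg.norm(Ginv[:, i]))) for i in range(n)]
    count = 0
    for z in itertools.product(*(range(-b, b + 1) for b in bounds)):
        if np.linalg.norm(np.asarray(z) @ G) <= R:
            count += 1
    return count

def heuristic_count(G, R):
    """Volume of the radius-R ball divided by the covolume |det G| of the lattice."""
    n = G.shape[0]
    omega_n = pi ** (n / 2) / gamma(n / 2 + 1)
    return omega_n * R ** n / abs(np.linalg.det(G))

if __name__ == "__main__":
    G = np.array([[1.0, 0.5],
                  [0.0, 0.8]])   # an arbitrary 2-D example basis
    for R in (2.0, 5.0, 10.0):
        print(R, count_lattice_points(G, R), round(heuristic_count(G, R), 1))
```

For small R the boundary effects dominate and the two numbers can differ noticeably; the agreement improves as R grows, which is exactly what the volume heuristic (and the error-term discussion in the question) is about.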
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8235827684402466, "perplexity": 215.7180539440173}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027319470.94/warc/CC-MAIN-20190824020840-20190824042840-00506.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/21022
# Knowledge Bank ## University Libraries and the Office of the Chief Information Officer # THE GROUND STATE ROTATIONAL SPECTRUM OF $SO_{2}F_{2}$. Title: THE GROUND STATE ROTATIONAL SPECTRUM OF $SO_{2}F_{2}$. Creators: Rotger, M.; Boudon, V.; Loete, M.; Margules, L.; Demaison, J.; Mäder, H.; Winnewisser, G.; Müller, Holger S. P. Issue Date: 2003 Abstract: The analysis of the ground state rotational spectrum of $SO_{2}F_{2}$ has been performed with Watson's Hamiltonian up to sextic terms but shows some limits due to the A and S reductions. Since $SO_{2}F_{2}$ is a quasi-spherical top, it can also be regarded as derived from a hypothetical $XY_{4}$ molecule. Thus we have developed a new tensorial formalism in the $O(3) \supset T_{d} \supset C_{2v}$ group chain. We test it on the ground state of this molecule using the same experimental data (0-1 THz region, J up to 99). Both fits are comparable even if the formalisms are slightly different. This talk intends to establish a link between the classical approach and the tensorial formalism. In particular, our tensorial parameters at a given order of the development are related to the usual ones. Programs for spectrum simulation and fit using these methods are named $C_{2v}$ TDS. They are freely available at the URL: http://www.u-bourgogne.fr/LPUB/c2vTDS.html URI: http://hdl.handle.net/1811/21022 Other Identifiers: 2003-TF-15
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7137729525566101, "perplexity": 2536.6614719980967}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
https://worldwidescience.org/topicpages/a/argonne+zgs.html
#### Sample records for argonne zgs 1. Wire chamber degradation at the Argonne ZGS Energy Technology Data Exchange (ETDEWEB) Haberichter, W.; Spinka, H. 1986-01-01 Experience with multiwire proportional chambers at high rates at the Argonne Zero Gradient Synchrotron is described. A buildup of silicon on the sense wires was observed where the beam passed through the chamber. Analysis of the chamber gas indicated that the density of silicon was probably less than 10 ppm. 2. Around the laboratories: Dubna: Physics results and progress on bubble chamber techniques; Stanford (SLAC): Operation of a very rapid cycling bubble chamber; Daresbury: Photographs of visitors to the Laboratory; Argonne: Charge exchange injection tests into the ZGS in preparation for a proposed Booster CERN Multimedia 1969-01-01 Around the laboratories: Dubna: Physics results and progress on bubble chamber techniques; Stanford (SLAC): Operation of a very rapid cycling bubble chamber; Daresbury: Photographs of visitors to the Laboratory; Argonne: Charge exchange injection tests into the ZGS in preparation for a proposed Booster 3. ZGS roots of superconductivity: People and devices Energy Technology Data Exchange (ETDEWEB) Pewitt, E.G. 1994-12-31 The ZGS community made basic contributions to the applications of superconducting magnets to high energy physics as well as to other technological areas. ZGS personnel pioneered many significant applications until the time the ZGS was shut down in 1979. After the shutdown, former ZGS personnel developed magnets for new applications in high energy physics, fusion, and industrial uses. The list of superconducting magnet accomplishments of ZGS personnel is impressive. 4. Polarized proton beams since the ZGS Energy Technology Data Exchange (ETDEWEB) Krisch, A.D. 1994-12-31 The author discusses research involving polarized proton beams since the ZGS's demise. He begins by reminding the attendee that in 1973 the ZGS accelerated the world's first high energy polarized proton beam; all in attendance at this meeting can be proud of this accomplishment. A few ZGS polarized proton beam experiments were done in the early 1970s; then from about 1976 until 1 October 1979, the majority of the ZGS running time was polarized running. A great deal of fundamental physics was done with the polarized beam when the ZGS ran as a dedicated polarized proton beam from about Fall 1977 until it shut down on 1 October 1979. The newly created polarization enthusiasts then dispersed; some spread polarized seeds all over the world by polarizing beams elsewhere; some wound up running the High Energy and SSC programs at DOE. 5. Symposium on the 30th anniversary of the ZGS startup: Proceedings Energy Technology Data Exchange (ETDEWEB) Derrick, M. [ed.] 1994-12-31 These proceedings document a number of aspects of a big science facility and its impact on science, on technology, and on the continuing program of a major US research institution. The Zero Gradient Synchrotron (ZGS) was a 12.5 GeV weak focusing proton accelerator that operated at Argonne for fifteen years--from 1964 to 1979. It was a major user facility which led to new close links between the Laboratory and university groups: in the research program; in the choice of experiments to be carried out; in the design and construction of beams and detectors; and even in the Laboratory management. For Argonne, it marked a major move from being a Laboratory dominated by nuclear reactor development to one with a stronger basic research orientation. 
The present meeting covered the progress in accelerator science, in the applications of technology pioneered or developed by people working at the ZGS, as well as in physics research and detector construction. At this time, when the future of the US research programs in science is being questioned as a result of the ending of the Cold War and plans to balance the Federal budget, the specific place of the National Laboratories in the spectrum of research activities is under particular examination. This Symposium highlights one case history of a major science program that was completed more than a decade ago--so that the further developments of both the science and the technology can be seen in some perspective. The subsequent activities of the people who had worked in the ZGS program as well as the redeployment of the ZGS facilities were addressed. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database. 6. The quark revolution and the ZGS - new quark physics since the ZGS Energy Technology Data Exchange (ETDEWEB) Lipkin, H.J. [Weizmann Institute of Science, Rehovot (Israel)]|[Tel Aviv Univ. (Israel)] 1994-12-31 Overwhelming experimental evidence for quarks as real physical constituents of hadrons along with the QCD analogs of the Balmer Formula, Bohr Atom and Schroedinger Equation already existed in 1966 but was dismissed as heresy. ZGS experiments played an important role in the quark revolution. This role is briefly reviewed and subsequent progress in quark physics is described. 7. History of the ZGS 500 MeV booster. Energy Technology Data Exchange (ETDEWEB) Simpson, J.; Martin, R.; Kustom, R. 2006-05-09 The history of the design and construction of the Argonne 500 MeV booster proton synchrotron from 1969 to 1982 is described. This accelerator has since been in steady use for the past 25 years to power the Argonne Intense Pulsed Neutron Source (IPNS). 8. Argonne Tandem Linac Accelerator System (ATLAS) Data.gov (United States) Federal Laboratory Consortium — ATLAS is a national user facility at Argonne National Laboratory in Argonne, Illinois. The ATLAS facility is a leading facility for nuclear structure research in the... 9. From hyperons to applied optics: "Winston Cones" during and after the ZGS era Energy Technology Data Exchange (ETDEWEB) Swallow, E.C. [Elmhurst College, IL (United States)]|[Univ. of Chicago, IL (United States)] 1994-12-31 This paper discusses developments in light collection which had their origin in efforts to construct high performance gas Cerenkov detectors for precision studies of hyperon beta decays at the ZGS. The resulting devices, known generally as "compound parabolic concentrators," have found applications ranging from nuclear and particle physics experiments to solar energy concentration, instrument illumination, and understanding the optics of visual receptors. Interest in these devices and the ideas underlying them stimulated the development of a substantial new subfield of physics: nonimaging optics. This progression provides an excellent example of some ways in which unanticipated - and often unanticipatable - applied science and "practical" devices naturally emerge from first-rate basic science. The characteristics of this process suggest that the term "spinoff" commonly used to denote it is misleading and in need of replacement. 10. 
Environmental Survey preliminary report, Argonne National Laboratory, Argonne, Illinois Energy Technology Data Exchange (ETDEWEB) 1988-11-01 This report presents the preliminary findings of the first phase of the Environmental Survey of the United States Department of Energy's (DOE) Argonne National Laboratory (ANL), conducted June 15 through 26, 1987. The Survey is being conducted by an interdisciplinary team of environmental specialists, led and managed by the Office of Environment, Safety and Health's Office of Environmental Audit. The team includes outside experts supplied by a private contractor. The objective of the Survey is to identify environmental problems and areas of environmental risk associated with ANL. The Survey covers all environmental media and all areas of environmental regulation. It is being performed in accordance with the DOE Environmental Survey Manual. The on-site phase of the Survey involves the review of existing site environmental data, observations of the operations carried on at ANL, and interviews with site personnel. The Survey team developed a Sampling and Analysis (S A) Plan to assist in further assessing certain of the environmental problems identified during its on-site activities. The S A Plan will be executed by the Oak Ridge National Laboratory (ORNL). When completed, the S A results will be incorporated into the Argonne National Laboratory Environmental Survey findings for inclusion in the Environmental Survey Summary Report. 75 refs., 24 figs., 60 tabs. 11. Proposed environmental remediation at Argonne National Laboratory, Argonne, Illinois Energy Technology Data Exchange (ETDEWEB) NONE 1997-05-01 The Department of Energy (DOE) has prepared an Environmental Assessment evaluating proposed environmental remediation activity at Argonne National Laboratory-East (ANL-E), Argonne, Illinois. The environmental remediation work would (1) reduce, eliminate, or prevent the release of contaminants from a number of Resource Conservation and Recovery Act (RCRA) Solid Waste Management Units (SWMUs) and two radiologically contaminated sites located in areas contiguous with SWMUs, and (2) decrease the potential for exposure of the public, ANL-E employees, and wildlife to such contaminants. The actions proposed for SWMUs are required to comply with the RCRA corrective action process and corrective action requirements of the Illinois Environmental Protection Agency; the actions proposed are also required to reduce the potential for continued contaminant release. Based on the analysis in the EA, the DOE has determined that the proposed action does not constitute a major federal action significantly affecting the quality of the human environment within the meaning of the National Environmental Policy Act of 1969 (NEPA). Therefore, the preparation of an Environmental Impact Statement is not required. 12. Argonne National Laboratory 1985 publications Energy Technology Data Exchange (ETDEWEB) Kopta, J.A. (ED.); Hale, M.R. (comp.) 1987-08-01 This report is a bibliography of scientific and technical 1985 publications of Argonne National Laboratory. Some are ANL contributions to outside organizations' reports published in 1985. This compilation, prepared by the Technical Information Services Technical Publications Section (TPB), lists all nonrestricted 1985 publications submitted to TPS by Laboratory's Divisions. 
The report is divided into seven parts: Journal Articles - Listed by first author, ANL Reports - Listed by report number, ANL and non-ANL Unnumbered Reports - Listed by report number, Non-ANL Numbered Reports - Listed by report number, Books and Book Chapters - Listed by first author, Conference Papers - Listed by first author, Complete Author Index. 13. 2015 Annual Report - Argonne Leadership Computing Facility Energy Technology Data Exchange (ETDEWEB) Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States) 2015-01-01 The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. 14. 2014 Annual Report - Argonne Leadership Computing Facility Energy Technology Data Exchange (ETDEWEB) Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States); Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States) 2014-01-01 The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines. 15. Argonne National Laboratory 1986 publications Energy Technology Data Exchange (ETDEWEB) Kopta, J.A.; Springer, C.J. 1987-12-01 This report is a bibliography of scientific and technical 1986 publications of Argonne National Laboratory. Some are ANL contributions to outside organizations' reports published in 1986. This compilation, prepared by the Technical Information Services Technical Publications Section (TPS), lists all nonrestricted 1986 publications submitted to TPS by the Laboratory's Divisions. Author indexes list ANL authors only. If a first author is not an ANL employee, an asterisk in the bibliographic citation indicates the first ANL author. The report is divided into seven parts: Journal Articles -- Listed by first author; ANL Reports -- Listed by report number; ANL and non-ANL Unnumbered Reports -- Listed by report number; Non-ANL Numbered Reports -- Listed by report number; Books and Book Chapters -- Listed by first author; Conference Papers -- Listed by first author; and Complete Author Index. 16. Push technology at Argonne National Laboratory. Energy Technology Data Exchange (ETDEWEB) Noel, R. E.; Woell, Y. N. 1999-04-06 Selective dissemination of information (SDI) services, also referred to as current awareness searches, are usually provided by periodically running computer programs (personal profiles) against a cumulative database or databases. This concept of pushing relevant content to users has long been integral to librarianship. Librarians traditionally turned to information companies to implement these searches for their users in business, academia, and the science community. This paper describes how a push technology was implemented on a large scale for scientists and engineers at Argonne National Laboratory, explains some of the challenges to designers/maintainers, and identifies the positive effects that SDI seems to be having on users. 
Argonne purchases the Institute for Scientific Information (ISI) Current Contents data (all subject areas except Humanities), and scientists no longer need to turn to outside companies for reliable SDI service. Argonne's database and its customized services are known as ACCESS (Argonne-University of Chicago Current Contents Electronic Search Service). 17. The big and little of fifty years of Moessbauer spectroscopy at Argonne. Energy Technology Data Exchange (ETDEWEB) Westfall, C. 2005-09-20 equipment that cost $100,000 by the 1970s alongside work at the $50 million Zero Gradient Synchrotron (ZGS) and the $30 million Experimental Breeder Reactor (EBR) II. Starting in the mid-1990s, Argonne physicists expanded their exploration of the properties of matter by employing a new type of Moessbauer spectroscopy--this time using synchrotron light sources such as Argonne's Advanced Photon Source (APS), which at $1 billion was the most expensive U.S. accelerator project of its time. Traditional Moessbauer spectroscopy looks superficially like prototypical "Little Science" and Moessbauer spectroscopy using synchrotrons looks like prototypical "Big Science". In addition, the growth from small to larger scale research seems to follow the pattern familiar from high energy physics even though the wide range of science performed using Moessbauer spectroscopy did not include high energy physics. But is the story of Moessbauer spectroscopy really like the tale told by high energy physicists and often echoed by historians? What do U.S. national laboratories, the "Home" of Big Science, have to offer small-scale research? And what does the story of the 50-year development of Moessbauer spectroscopy at Argonne tell us about how knowledge is produced at large laboratories? In a recent analysis of the development of relativistic heavy ion science at Lawrence Berkeley Laboratory I questioned whether it was wise for historians to speak in terms of "Big Science", pointing out that Lawrence Berkeley Laboratory hosted large-scale projects at three scales, the grand scale of the Bevatron, the modest scale of the HILAC, and the mezzo scale of the combined machine, the Bevalac. I argue that using the term "Big Science", which was coined by participants, leads to a misleading preoccupation with the largest projects and the tendency to see the history of physics as the history 18. Chemical research at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) NONE 1997-04-01 Argonne National Laboratory is a research and development laboratory located 25 miles southwest of Chicago, Illinois. It has more than 200 programs in basic and applied sciences and an Industrial Technology Development Center to help move its technologies to the industrial sector. At Argonne, basic energy research is supported by applied research in diverse areas such as biology and biomedicine, energy conservation, fossil and nuclear fuels, environmental science, and parallel computer architectures. These capabilities translate into technological expertise in energy production and use, advanced materials and manufacturing processes, and waste minimization and environmental remediation, which can be shared with the industrial sector. The Laboratory's technologies can be applied to help companies design products, substitute materials, devise innovative industrial processes, develop advanced quality control systems and instrumentation, and address environmental concerns. 
The latest techniques and facilities, including those involving modeling, simulation, and high-performance computing, are available to industry and academia. At Argonne, there are opportunities for industry to carry out cooperative research, license inventions, exchange technical personnel, use unique research facilities, and attend conferences and workshops. Technology transfer is one of the Laboratory's major missions. High priority is given to strengthening U.S. technological competitiveness through research and development partnerships with industry that capitalize on Argonne's expertise and facilities. The Laboratory is one of three DOE superconductivity technology centers, focusing on manufacturing technology for high-temperature superconducting wires, motors, bearings, and connecting leads. Argonne National Laboratory is operated by the University of Chicago for the U.S. Department of Energy. 20. Argonne Bubble Experiment Thermal Model Development II Energy Technology Data Exchange (ETDEWEB) Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)] 2016-07-01 This report describes the continuation of the work reported in “Argonne Bubble Experiment Thermal Model Development”. The experiment was performed at Argonne National Laboratory (ANL) in 2014. A rastered 35 MeV electron beam deposited power in a solution of uranyl sulfate, generating heat and radiolytic gas bubbles. Irradiations were performed at three beam power levels, 6, 12 and 15 kW. 
Solution temperatures were measured by thermocouples, and gas bubble behavior was observed. This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiations. The previous report described an initial analysis performed on a geometry that had not been updated to reflect the as-built solution vessel. Here, the as-built geometry is used. Monte-Carlo N-Particle (MCNP) calculations were performed on the updated geometry, and these results were used to define the power deposition profile for the CFD analyses, which were performed using Fluent, Ver. 16.2. CFD analyses were performed for the 12 and 15 kW irradiations, and further improvements to the model were incorporated, including the consideration of power deposition in nearby vessel components, gas mixture composition, and bubble size distribution. The temperature results of the CFD calculations are compared to experimental measurements. 1. Status of RF superconductivity at Argonne Energy Technology Data Exchange (ETDEWEB) Shepard, K.W. 1989-01-01 Development of superconducting (SC) slow-wave structures began at Argonne National Laboratory (ANL) in 1971, and led to the first SC heavy-ion linac (ATLAS - the Argonne Tandem-Linac Accelerating System), which began regularly scheduled operation in 1978. To date, more than 40,000 hours of beam-on-target operating time has been accumulated with ATLAS. The Physics Division at ANL has continued to develop SC RF technology for accelerating heavy ions, with the result that the SC linac has, up to the present, been in an almost continuous process of upgrade and expansion. It should be noted that this has been accomplished while at the same time maintaining a vigorous operating schedule in support of the nuclear and atomic physics research programs of the division. In 1987, the Engineering Physics Division at ANL began development of SC RF components for the acceleration of high-brightness proton and deuterium beams. This work has included the evaluation of RF properties of high-Tc oxide superconductors, both for the above and for other applications. The two divisions collaborated while they worked on several applications of RF SC, and also worked to develop the technology generally. 11 refs., 6 figs. 2. Environmental monitoring at Argonne National Laboratory. Annual report, 1981 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Duffy, T.L.; Sedlet, J. 1982-03-01 The results of the environmental monitoring program at Argonne National Laboratory for 1981 are presented and discussed. To evaluate the effect of Argonne operations on the environment, measurements were made for a variety of radionuclides in air, surface water, soil, grass, bottom sediment, and milk; for a variety of chemical constituents in air, surface water, and Argonne effluent water; and of the environmental penetrating radiation dose. Sample collections and measurements were made at the site boundary and off the Argonne site for comparison purposes. Some on-site measurements were made to aid in the interpretation of the boundary and off-site data. The results of the program are interpreted in terms of the sources and origin of the radioactive and chemical substances (natural, fallout, Argonne, and other) and are compared with applicable environmental quality standards. The potential radiation dose to off-site population groups is also estimated. 3. 
Environmental monitoring at Argonne National Laboratory. Annual report for 1980 Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; Duffy, T. L.; Sedlet, J. 1981-03-01 The results of the environmental monitoring program at Argonne National Laboratory for 1980 are presented and discussed. To evaluate the effect of Argonne operations on the environment, measurements were made for a variety of radionuclides in air, surface water, soil, grass, bottom sediment, and foodstuffs; for a variety of chemical constituents in air, surface water, and Argonne effluent water; and of the environmental penetrating radiation dose. Sample collections and measurements were made at the site boundary and off the Argonne site for comparison purposes. Some on-site measurements were made to aid in the interpretation of the boundary and off-site data. The results of the program are interpreted in terms of the sources and origin of the radioactive and chemical substances (natural, fallout, Argonne, and other) and are compared with applicable environmental quality standards. The potential radiation dose to off-site population groups is also estimated. 4. Environmental monitoring at Argonne National Laboratory. Annual report for 1979 Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; Duffy, T. L.; Sedlet, J. 1980-03-01 The results of the environmental monitoring program at Argonne National Laboratory for 1979 are presented and discussed. To evaluate the effect of Argonne operations on the environment, measurements were made for a variety of radionuclides in air, surface water, Argonne effluent water, soil, grass, bottom sediment, and foodstuffs; for a variety of chemical constituents in air, surface water, and Argonne effluent water; and of the environemetal penetrating radiation dose. Sample collections and measurements were made at the site boundary and off the Argonne site for comparison purposes. Some on-site measuremenets were made to aid in the interpretation of the boundary and off-site data. The results of the program are interpreted in terms of the sources and origin of the radioactive and chemical substances and are compared with applicable environmental quality standards. The potential radiation dose to off-site population groups is also estimated. 5. Environmental monitoring at Argonne National Laboratory. Annual report for 1983 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Duffy, T.L.; Sedlet, J. 1984-03-01 The results of the environmental monitoring program at Argonne National Laboratory for 1983 are presented and discussed. To evaluate the effect of Argonne operations on the environment, measurements were made for a variety of radionuclides in air, surface water, soil, grass, bottom sediment, and milk; for a variety of chemical constituents in air, surface water, ground water, and Argonne effluent water; and of the environmental penetrating radiation dose. Sample collections and measurements were made at the site boundary and off the Argonne site for comparison purposes. Some on-site measurements were made to aid in the interpretation of the boundary and off-site data. The potential radiation dose to off-site population groups is also estimated. The results of the program are interpreted in terms of the sources and origin of the radioactive and chemical substances (natural, fallout, Argonne, and other) and are compared with applicable environmental quality standards. 19 references, 8 figures, 49 tables. 6. Environmental monitoring at Argonne National Laboratory. 
Annual report for 1982 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Duffy, T.L.; Sedlet, J. 1983-03-01 The results of the environmental monitoring program at Argonne Ntaional Laboratory for 1982 are presented and discussed. To evaluate the effect of Argonne operations on the environment, measurements were made for a variety of radionuclides in air, surface water, soil, grass, bottom sediment, and milk; for a variety of chemical constituents in air, surface water, ground water, and Argonne effluent water; and of the environmental penetrating radiation dose. Sample collections and masurements were made at the site boundary and off the Argonne site for comparison purposes. Some on-site measurements were made to aid in the interpretation of the boundary and off-site data. The results of the program are interpreted in terms of the sources and origin of the radioactive and chemical substances (natural, fallout, Argonne, and other) and are compared with applicable environmental quality standards. The potential radiation dose to off-site population groups is also estimated. 7. Environmental assessment related to the operation of Argonne National Laboratory, Argonne, Illinois Energy Technology Data Exchange (ETDEWEB) 1982-08-01 In order to evaluate the environmental impacts of Argonne National Laboratory (ANL) operations, this assessment includes a descriptive section which is intended to provide sufficient detail to allow the various impacts to be viewed in proper perspective. In particular, details are provided on site characteristics, current programs, characterization of the existing site environment, and in-place environmental monitoring programs. In addition, specific facilities and operations that could conceivably impact the environment are described at length. 77 refs., 16 figs., 47 tabs. 8. Argonne National Laboratory research offers clues to Alzheimer's plaques CERN Multimedia 2003-01-01 Researchers from Argonne National Laboratory and the University of Chicago have developed methods to directly observe the structure and growth of microscopic filaments that form the characteristic plaques found in the brains of those with Alzheimer's Disease (1 page). 9. High-temperature superconductor applications development at Argonne National Laboratory Science.gov (United States) Hull, J. R.; Poeppel, R. B. 1992-02-01 Developments at Argonne National Laboratory of near and intermediate term applications using high-temperature superconductors are discussed. Near-term applications of liquid-nitrogen depth sensors, current leads, and magnetic bearings are discussed in detail. 10. Argonne National Lab gets Linux network teraflop cluster CERN Multimedia 2003-01-01 "Linux NetworX, Salt Lake City, Utah, has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP). The cluster, named "Jazz" by Argonne, is designed to provide optimum performance for multiple disciplines such as chemistry, physics and reactor engineering and will be used by the entire scientific community at the Lab" (1 page). 11. Argonne National Laboratory Site Environmental Report for Calendar Year 2013 Energy Technology Data Exchange (ETDEWEB) Davis, T. M. [Argonne National Lab. (ANL), Argonne, IL (United States); Gomez, J. L. [Argonne National Lab. (ANL), Argonne, IL (United States); Moos, L. P. [Argonne National Lab. 
(ANL), Argonne, IL (United States) 2014-09-02 This report discusses the status and the accomplishments of the environmental protection program at Argonne National Laboratory for calendar year 2013. The status of Argonne environmental protection activities with respect to compliance with the various laws and regulations is discussed, along with environmental management, sustainability efforts, environmental corrective actions, and habitat restoration. To evaluate the effects of Argonne operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the Argonne site were analyzed and compared with applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and Argonne effluent water were analyzed. External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. Results are interpreted in terms of the origin of the radioactive and chemical substances (i.e., natural, Argonne, and other) and are compared with applicable standards intended to protect human health and the environment. A U.S. Department of Energy (DOE) dose calculation methodology, based on International Commission on Radiological Protection (ICRP) recommendations and the U.S. Environmental Protection Agency’s (EPA) CAP-88 Version 3 computer code, was used in preparing this report. 12. Argonne National Laboratory Site Environmental report for calendar year 2009. Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; Davis, T. M.; Moos, L. P. 2010-08-04 This report discusses the status and the accomplishments of the environmental protection program at Argonne National Laboratory for calendar year 2009. The status of Argonne environmental protection activities with respect to compliance with the various laws and regulations is discussed, along with the progress of environmental corrective actions and restoration projects. To evaluate the effects of Argonne operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the Argonne site were analyzed and compared with applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and Argonne effluent water were analyzed. External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. Results are interpreted in terms of the origin of the radioactive and chemical substances (i.e., natural, Argonne, and other) and are compared with applicable environmental quality standards. A U.S. Department of Energy (DOE) dose calculation methodology, based on International Commission on Radiological Protection recommendations and the U.S. Environmental Protection Agency's (EPA) CAP-88 Version 3 (Clean Air Act Assessment Package-1988) computer code, was used in preparing this report. 13. Argonne National Laboratory site environmental report for calendar year 2006. Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; ESH/QA Oversight 2007-09-13 This report discusses the status and the accomplishments of the environmental protection program at Argonne National Laboratory for calendar year 2006. 
The status of Argonne environmental protection activities with respect to compliance with the various laws and regulations is discussed, along with the progress of environmental corrective actions and restoration projects. To evaluate the effects of Argonne operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the Argonne site were analyzed and compared with applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and Argonne effluent water were analyzed. External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. Results are interpreted in terms of the origin of the radioactive and chemical substances (i.e., natural, fallout, Argonne, and other) and are compared with applicable environmental quality standards. A U.S. Department of Energy dose calculation methodology, based on International Commission on Radiological Protection recommendations and the U.S. Environmental Protection Agency's CAP-88 Version 3 (Clean Air Act Assessment Package-1988) computer code, was used in preparing this report. 14. Argonne National Laboratory site enviromental report for calendar year 2008. Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; Davis, T. M.; Moos, L. P. 2009-09-02 This report discusses the status and the accomplishments of the environmental protection program at Argonne National Laboratory for calendar year 2008. The status of Argonne environmental protection activities with respect to compliance with the various laws and regulations is discussed, along with the progress of environmental corrective actions and restoration projects. To evaluate the effects of Argonne operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the Argonne site were analyzed and compared with applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and Argonne effluent water were analyzed. External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. Results are interpreted in terms of the origin of the radioactive and chemical substances (i.e., natural, fallout, Argonne, and other) and are compared with applicable environmental quality standards. A U.S. Department of Energy dose calculation methodology, based on International Commission on Radiological Protection recommendations and the U.S. Environmental Protection Agency's CAP-88 Version 3 (Clean Air Act Assessment Package-1988) computer code, was used in preparing this report. 15. Argonne National Laboratory site environmental report for calendar year 2007. Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; Davis, T. M.; Moos, L. P.; ESH/QA Oversight 2008-09-09 This report discusses the status and the accomplishments of the environmental protection program at Argonne National Laboratory for calendar year 2007. The status of Argonne environmental protection activities with respect to compliance with the various laws and regulations is discussed, along with the progress of environmental corrective actions and restoration projects. 
To evaluate the effects of Argonne operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the Argonne site were analyzed and compared with applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and Argonne effluent water were analyzed. External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. Results are interpreted in terms of the origin of the radioactive and chemical substances (i.e., natural, fallout, Argonne, and other) and are compared with applicable environmental quality standards. A U.S. Department of Energy dose calculation methodology, based on International Commission on Radiological Protection recommendations and the U.S. Environmental Protection Agency's CAP-88 Version 3 (Clean Air Act Assessment Package-1988) computer code, was used in preparing this report. 16. Do you want to build such a machine? : Designing a high energy proton accelerator for Argonne National Laboratory. Energy Technology Data Exchange (ETDEWEB) Paris, E. 2004-04-05 Argonne National Laboratory's efforts toward researching, proposing and then building a high-energy proton accelerator have been discussed in a handful of studies. In the main, these have concentrated on the intense maneuvering amongst politicians, universities, government agencies, outside corporations, and laboratory officials to obtain (or block) approval and/or funds or to establish who would have control over budgets and research programs. These ''top-down'' studies are very important but they can also serve to divorce such proceedings from the individuals actually involved in the ground-level research which physically served to create theories, designs, machines, and experiments. This can lead to a skewed picture, on the one hand, of a lack of effect that so-called scientific and technological factors exert and, on the other hand, of the apparent separation of the so-called social or political from the concrete practice of doing physics. An exception to this approach can be found in the proceedings of a conference on ''History of the ZGS'' held at Argonne at the time of the Zero Gradient Synchrotron's decommissioning in 1979. These accounts insert the individuals quite literally as they are, for the most part, personal reminiscences of those who took part in these efforts on the ground level. As such, they are invaluable raw material for historical inquiry but generally lack the rigor and perspective expected in a finished historical work. The session on ''Constructing Cold War Physics'' at the 2002 annual History of Science Society Meeting served to highlight new approaches circulating towards history of science and technology in the post-WWII period, especially in the 1950s. There is new attention towards the effects of training large numbers of scientists and engineers as well as the caution not to equate ''national security'' with military preparedness, but rather 17. Argonne National Laboratory institutional plan FY 2001--FY 2006. Energy Technology Data Exchange (ETDEWEB) Beggs, S.D. 2000-12-07 This Institutional Plan describes what Argonne management regards as the optimal future development of Laboratory activities. 
The document outlines the development of both research programs and support operations in the context of the nation's R and D priorities, the missions of the Department of Energy (DOE) and Argonne, and expected resource constraints. The Draft Institutional Plan is the product of many discussions between DOE and Argonne program managers, and it also reflects programmatic priorities developed during Argonne's summer strategic planning process. That process serves additionally to identify new areas of strategic value to DOE and Argonne, to which Laboratory Directed Research and Development funds may be applied. The Draft Plan is provided to the Department before Argonne's On-Site Review. Issuance of the final Institutional Plan in the fall, after further comment and discussion, marks the culmination of the Laboratory's annual planning cycle. Chapter II of this Institutional Plan describes Argonne's missions and roles within the DOE laboratory system, its underlying core competencies in science and technology, and six broad planning objectives whose achievement is considered critical to the future of the Laboratory. Chapter III presents the Laboratory's ''Science and Technology Strategic Plan,'' which summarizes key features of the external environment, presents Argonne's vision, and describes how Argonne's strategic goals and objectives support DOE's four business lines. The balance of Chapter III comprises strategic plans for 23 areas of science and technology at Argonne, grouped according to the four DOE business lines. The Laboratory's 14 major initiatives, presented in Chapter IV, propose important advances in key areas of fundamental science and technology development. The ''Operations and Infrastructure Strategic Plan'' in Chapter V includes 18. Fire protection review revisit no. 2, Argonne National Laboratory, Argonne, Illinois Science.gov (United States) Dobson, P. H.; Earley, M. W.; Mattern, L. J. 1985-05-01 A fire protection survey was conducted at Argonne National Laboratory on April 1-5, 8-12, and April 29-May 2, 1985. The purpose was to review the facility fire protection program and to make recommendations or identify areas according to criteria established by the Department of Energy. There has been a substantial improvement in fire protection at this laboratory since the 1977 audit. Numerous areas which were previously provided with detection systems only have since been provided with automatic sprinkler protection. The following basic fire protection features are not properly controlled: (1) resealing wall and floor penetrations between fire areas after installation of services; (2) cutting and welding; and (3) housekeeping. The present Fire Department manpower level appears adequate to control a route fire. Their ability to adequately handle a high-challenge fire, or one involving injuries to personnel, or fire spread beyond the initial fire area is doubtful. 19. Tiger team assessment of the Argonne Illinois site Energy Technology Data Exchange (ETDEWEB) 1990-10-19 This report documents the results of the Department of Energy's (DOE) Tiger Team Assessment of the Argonne Illinois Site (AIS) (including the DOE Chicago Operations Office, DOE Argonne Area Office, Argonne National Laboratory-East, and New Brunswick Laboratory) and Site A and Plot M, Argonne, Illinois, conducted from September 17 through October 19, 1990. The Tiger Team Assessment was conducted by a team comprised of professionals from DOE, contractors, consultants. 
The purpose of the assessment was to provide the Secretary of Energy with the status of Environment, Safety, and Health (ES&H) Programs at AIS. Argonne National Laboratory-East (ANL-E) is the principal tenant at AIS. ANL-E is a multiprogram laboratory operated by the University of Chicago for DOE. The mission of ANL-E is to perform basic and applied research that supports the development of energy-related technologies. There are a significant number of ES&H findings and concerns identified in the report that require prompt management attention. A significant change in culture is required before ANL-E can attain consistent and verifiable compliance with statutes, regulations and DOE Orders. ES&H activities are informal, fragmented, and inconsistently implemented. Communication is seriously lacking, both vertically and horizontally. Management expectations are not known or communicated adequately, support is not consistent, and oversight is not effective. 20. Argonne's contribution to regional development: successful examples. Energy Technology Data Exchange (ETDEWEB) Chang, Y. I. 2000-11-14 Argonne National Laboratory's mission is basic research and technology development to meet national goals in scientific leadership, energy technology, and environmental quality. In addition to its core missions as a national research and development center, Argonne has exerted a positive impact on its regional economic development, has carried out outstanding educational programs not only for college/graduate students but also for pre-college students and teachers, and has fostered partnerships with universities for research collaboration and with industry for shaping the new technological frontiers. 1. Performance model of the Argonne Voyager multimedia server Energy Technology Data Exchange (ETDEWEB) Disz, T.; Olson, R.; Stevens, R. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div.] 1997-07-01 The Argonne Voyager Multimedia Server is being developed in the Futures Lab of the Mathematics and Computer Science Division at Argonne National Laboratory. As a network-based service for recording and playing multimedia streams, it is important that the Voyager system be capable of sustaining certain minimal levels of performance in order for it to be a viable system. In this article, the authors examine the performance characteristics of the server. As they examine the architecture of the system, they try to determine where bottlenecks lie, show actual vs. potential performance, and recommend areas for improvement through custom architectures and system tuning. 2. Argonne to open new facility for advanced vehicle testing CERN Multimedia 2002-01-01 Argonne National Laboratory will open its Advanced Powertrain Research Facility on Friday, Nov. 15. The facility is North America's only public testing facility for engines, fuel cells, electric drives and energy storage. State-of-the-art performance and emissions measurement equipment is available to support model development and technology validation (1 page). 3. Brookhaven Lab and Argonne Lab scientists invent a plasma valve CERN Multimedia 2003-01-01 Scientists from Brookhaven National Laboratory and Argonne National Laboratory have received U.S. patent number 6,528,948 for a device that shuts off airflow into a vacuum about one million times faster than mechanical valves or shutters that are currently in use (1 page). 4. Argonne National Laboratory Publications July 1, 1968 - June 30, 1969. 
Energy Technology Data Exchange (ETDEWEB) None, None 1969-08-01 This publication list is a bibliography of scientific and technical accounts originated at Argonne and published during the fiscal year 1969 (July 1, 1968 through June 30, 1969). It includes items published as journal articles, technical reports, books, etc., all of which have been made available to the public. 5. Argonne Laboratory Computing Resource Center - FY2004 Report. Energy Technology Data Exchange (ETDEWEB) Bair, R. 2005-04-14 In the spring of 2002, Argonne National Laboratory founded the Laboratory Computing Resource Center, and in April 2003 LCRC began full operations with Argonne's first teraflops computing cluster. The LCRC's driving mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting application use and development. This report describes the scientific activities, computing facilities, and usage in the first eighteen months of LCRC operation. In this short time LCRC has had broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. Steering for LCRC comes from the Computational Science Advisory Committee, composed of computing experts from many Laboratory divisions. The CSAC Allocations Committee makes decisions on individual project allocations for Jazz. 6. Three Argonne technologies win R&D 100 awards CERN Multimedia 2003-01-01 "Three technologies developed or co-developed at the U.S. Department of Energy's Argonne National Laboratory have been recognized with R&D 100 Awards, which highlight some of the best products and technologies from around the world" (1 page). 7. Argonne's Laboratory computing center - 2007 annual report. Energy Technology Data Exchange (ETDEWEB) Bair, R.; Pieper, G. W. 2008-05-28 Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10^12 floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. 
By the end of fiscal year 2007, there were over 60 active projects representing a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff use of national computing facilities, and improving the scientific 8. 1985 annual site environmental report for Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Duffy, T.L.; Sedlet, J. 1986-03-01 This is one in a series of annual reports prepared to provide DOE, environmental agencies, and the public with information on the level of radioactive and chemical pollutants in the environment and on the amounts of such substances, if any, added to the environment as a result of Argonne operations. Included in this report are the results of measurements obtained in 1985 for a number of radionuclides in air, surface water, ground water, soil, grass, bottom sediment, and milk; for a variety of chemical constituents in surface and subsurface water; and for the external penetrating radiation dose. 9. Research in mathematics and computer science at Argonne Energy Technology Data Exchange (ETDEWEB) Pieper, G.W. 1989-08-01 This report reviews the research activities in the Mathematics and Computer Science Division at Argonne National Laboratory for the period January 1988 - August 1989. The body of the report gives a brief look at the MCS staff and the research facilities, and discusses various projects carried out in two major areas of research: analytical and numerical methods and advanced computing concepts. Projects funded by non-DOE sources are also discussed, and new technology transfer activities are described. Further information on division staff, visitors, workshops, and seminars is found in the appendices. 10. Change in argonne national laboratory: a case study. Science.gov (United States) Mozley, A 1971-10-01 Despite traditional opposition to change within an institution and the known reluctance of an "old guard" to accept new managerial policies and techniques, the reactions suggested in this study go well beyond the level of a basic resistance to change. The response, indeed, drawn from a random sampling of Laboratory scientific and engineering personnel, comes close to what Philip Handler has recently described as a run on the scientific bank in a period of depression (1, p. 146). It appears that Argonne's apprehension stems less from the financial cuts that have reduced staff and diminished programs by an annual 10 percent across the last 3 fiscal years than from the administrative and conceptual changes that have stamped the institution since 1966. Administratively, the advent of the AUA has not forged a sense of collaborative effort implicit in the founding negotiations or contributed noticeably to increasing standards of excellence at Argonne. 
The AUA has, in fact, yet to exercise the constructive powers vested in it by the contract of reviewing and formulating long-term policy on the research and reactor side. Additionally, the University of Chicago, once the single operator, appears to have forfeited some of the trust and understanding that characterized the Laboratory's attitude to it in former years. In a period of complex and sensitive management, the present directorate at Argonne is seriously dissociated from a responsible spectrum of opinion within the Laboratory. The crux of discontent among the creative scientific and engineering community appears to lie in a developed sense of being overadministered. In contrast to earlier periods, Argonne's professional staff feels a critical need for a voice in the formulation of Laboratory programs and policy. The Argonne senate could supply this mechanism. Slow to rally, their present concern springs from a firm conviction that the Laboratory is "withering on the vine." By contrast, the Laboratory director Powers 11. Initial operation of the Argonne superconducting heavy-ion linac Energy Technology Data Exchange (ETDEWEB) Shepard, K. W. 1979-01-01 Initial operation and recent development of the Argonne superconducting heavy-ion linac are discussed. The linac has been developed in order to demonstrate a cost-effective means of extending the performance of electrostatic tandem accelerators. The results of beam acceleration tests which began in June 1978 are described. At present 7 of a planned array of 22 resonators are operating on-line, and the linac system provides an effective accelerating potential of 7.5 MV. Although some technical problems remain, the level of performance and reliability is sufficient that appreciable beam time is becoming available to users. 12. Microscale chemistry technology exchange at Argonne National Laboratory - East. Energy Technology Data Exchange (ETDEWEB) Pausma, R. 1998-06-04 The Division of Educational Programs (DEP) at Argonne National Laboratory-East interacts with the education community at all levels to improve science and mathematics education and to provide resources to instructors of science and mathematics. DEP conducts a wide range of educational programs and has established an enormous audience of teachers, both in the Chicago area and nationally. DEP has brought microscale chemistry to the attention of this huge audience. This effort has been supported by the U.S. Department of Energy through the Environmental Management Operations organization within Argonne. Microscale chemistry is a teaching methodology wherein laboratory chemistry training is provided to students while utilizing very small amounts of reagents and correspondingly small apparatus. The techniques enable a school to reduce significantly the cost of reagents, the cost of waste disposal and the dangers associated with the manipulation of chemicals. The cost reductions are achieved while still providing the students with the hands-on laboratory experience that is vital to students who might choose to pursue careers in the sciences. Many universities and colleges have already begun to switch from macroscale to microscale chemistry in their educational laboratories. The introduction of these techniques at the secondary education level will lead to freshmen being better prepared for the type of experimentation that they will encounter in college. 13. 
Draft environmental assessment of Argonne National Laboratory, East Energy Technology Data Exchange (ETDEWEB) 1975-10-01 This environmental assessment of the operation of the Argonne National Laboratory is related to continuation of research and development work being conducted at the Laboratory site at Argonne, Illinois. The Laboratory has been monitoring various environmental parameters both offsite and onsite since 1949. Meteorological data have been collected to support development of models for atmospheric dispersion of radioactive and other pollutants. Gaseous and liquid effluents, both radioactive and non-radioactive, have been measured by portable monitors and by continuous monitors at fixed sites. Monitoring of constituents of the terrestrial ecosystem provides a basis for identifying changes should they occur in this regime. The Laboratory has established a position of leadership in monitoring methodologies and their application. Offsite impacts of nonradiological accidents are primarily those associated with the release of chlorine and with sodium fires. Both result in releases that cause no health damage offsite. Radioactive materials released to the environment result in a cumulative dose to persons residing within 50 miles of the site of about 47 man-rem per year, compared to an annual total of about 950,000 man-rem delivered to the same population from natural background radiation. 100 refs., 17 figs., 33 tabs. 14. Argonne National Lab deploys Force10 networks' massively dense ethernet switch for supercomputing cluster CERN Multimedia 2003-01-01 "Force10 Networks, Inc. today announced that Argonne National Laboratory (Argonne, IL) has successfully deployed Force10 E-Series switch/routers to connect to the TeraGrid, the world's largest supercomputing grid, sponsored by the National Science Foundation (NSF)" (1/2 page). 15. Frontiers: Research highlights 1946-1996 [50th Anniversary Edition. Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) NONE 1996-12-31 This special edition of 'Frontiers' commemorates Argonne National Laboratory's 50th anniversary of service to science and society. America's first national laboratory, Argonne has been in the forefront of U.S. scientific and technological research from its beginning. Past accomplishments, current research, and future plans are highlighted. 16. Users Handbook for the Argonne Premium Coal Sample Program Energy Technology Data Exchange (ETDEWEB) Vorres, K.S. 1993-10-01 This Users Handbook for the Argonne Premium Coal Samples provides the recipients of those samples with information that will enhance the value of the samples, to permit greater opportunities to compare their work with that of others, and aid in correlations that can improve the value to all users. It is hoped that this document will foster a spirit of cooperation and collaboration such that the field of basic coal chemistry may be a more efficient and rewarding endeavor for all who participate. The different sections are intended to stand alone. For this reason some of the information may be found in several places. The handbook is also intended to be a dynamic document, constantly subject to change through additions and improvements. Please feel free to write to the editor with your comments and suggestions. 17. 
Flow Induced Vibration Program at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) 1984-01-01 Argonne National Laboratory has had a Flow Induced Vibration Program since 1967; the Program currently resides in the Laboratory's Components Technology Division. Throughout its existence, the overall objective of the program has been to develop and apply new and/or improved methods of analysis and testing for the design evaluation of nuclear reactor plant components and heat exchange equipment from the standpoint of flow induced vibration. Historically, the majority of the program activities have been funded by the US Atomic Energy Commission (AEC), Energy Research and Development Administration (ERDA), and Department of Energy (DOE). Current DOE funding is from the Breeder Mechanical Component Development Division, Office of Breeder Technology Projects; Energy Conversion and Utilization Technology (ECUT) Program, Office of Energy Systems Research; and Division of Engineering, Mathematical and Geosciences, Office of Basic Energy Sciences. Testing of Clinch River Breeder Reactor upper plenum components has been funded by the Clinch River Breeder Reactor Plant (CRBRP) Project Office. Work has also been performed under contract with Foster Wheeler, General Electric, Duke Power Company, US Nuclear Regulatory Commission, and Westinghouse. 18. Treatment of mixed radioactive liquid wastes at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Vandegrift, G.F.; Chamberlain, D.B.; Conner, C. [and others] 1994-03-01 Aqueous mixed waste at Argonne National Laboratory (ANL) is traditionally generated in small volumes with a wide variety of compositions. A cooperative effort at ANL between Waste Management (WM) and the Chemical Technology Division (CMT) was established to develop, install, and implement a robust treatment operation to handle the majority of such wastes. For this treatment, toxic metals in mixed-waste solutions are precipitated in a semiautomated system using Ca(OH)₂ and, for some metals, Na₂S additions. This step is followed by filtration to remove the precipitated solids. A filtration skid was built that contains several filter types which can be used, as appropriate, for a variety of suspended solids. When supernatant liquid is separated from the toxic-metal solids by decantation and filtration, it will be a low-level waste (LLW) rather than a mixed waste. After passing a Toxicity Characteristic Leaching Procedure (TCLP) test, the solids may also be treated as LLW. 19. Argonne National Laboratory site environmental report for calendar year 2004. Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; Kolzow, R. G. 2005-09-02 This report discusses the accomplishments of the environmental protection program at Argonne National Laboratory (ANL) for calendar year 2004. The status of ANL environmental protection activities with respect to compliance with the various laws and regulations is discussed, along with the progress of environmental corrective actions and restoration projects. To evaluate the effects of ANL operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the ANL site were analyzed and compared with applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and ANL effluent water were analyzed. 
External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. Results are interpreted in terms of the origin of the radioactive and chemical substances (i.e., natural, fallout, ANL, and other) and are compared with applicable environmental quality standards. A U.S. Department of Energy dose calculation methodology, based on International Commission on Radiological Protection recommendations and the U.S. Environmental Protection Agency's CAP-88 (Clean Air Act Assessment Package-1988) computer code, was used in preparing this report. 20. Routine environmental reaudit of the Argonne National Laboratory - West Energy Technology Data Exchange (ETDEWEB) NONE 1996-04-01 This report documents the results of the Routine Environmental Reaudit of the Argonne National Laboratory - West (ANL-W), Idaho Falls, Idaho. During this audit, the activities conducted by the audit team included reviews of internal documents and reports from previous audits and assessments; interviews with U.S. Department of Energy (DOE), U.S. Environmental Protection Agency (EPA), State of Idaho Department of Health and Welfare (IDHW), and DOE contractor personnel; and inspections and observations of selected facilities and operations. The onsite portion of the audit was conducted from October 11 to October 22, 1993, by the DOE Office of Environmental Audit (EH-24), located within the Office of Environment, Safety and Health (EH). DOE 5482.1B, "Environment, Safety, and Health Appraisal Program," established the mission of EH-24 to provide comprehensive, independent oversight of Department-wide environmental programs on behalf of the Secretary of Energy. The ultimate goal of EH-24 is enhancement of environmental protection and minimization of risk to public health and the environment. EH-24 accomplishes its mission by conducting systematic and periodic evaluations of the Department's environmental programs within line organizations, and by utilizing supplemental activities that serve to strengthen self-assessment and oversight functions within program, field, and contractor organizations. 1. Argonne Natl Lab receives TeraFLOP Cluster Linux NetworX CERN Multimedia 2002-01-01 "Linux NetworX announced today it has delivered an Evolocity II (E2) Linux cluster to Argonne National Laboratory that is capable of performing more than one trillion calculations per second (1 teraFLOP)" (1/2 page). 2. Authorized limits for disposal of PCB capacitors from Buildings 361 and 391 at Argonne National Laboratory, Argonne, Illinois. Energy Technology Data Exchange (ETDEWEB) Cheng, J.-J.; Chen, S.-Y.; Environmental Science Division 2009-12-22 This report contains data and analyses to support the approval of authorized release limits for the clearance from radiological control of polychlorinated biphenyl (PCB) capacitors in Buildings 361 and 391 at Argonne National Laboratory, Argonne, Illinois. These capacitors contain PCB oil that must be treated and disposed of as hazardous waste under the Toxic Substances Control Act (TSCA). However, they had been located in radiological control areas where the potential for neutron activation existed; therefore, direct release of these capacitors to a commercial facility for PCB treatment and landfill disposal is not allowable unless authorized release has been approved. 
Radiological characterization found no loose contamination on the exterior surface of the PCB capacitors; gamma spectroscopy analysis also showed the radioactivity levels of the capacitors were either at or slightly above ambient background levels. As such, conservative assumptions were used to expedite the analyses conducted to evaluate the potential radiation exposures of workers and the general public resulting from authorized release of the capacitors; for example, the maximum averaged radioactivity levels measured for capacitors nearest to the beam lines were assumed for the entire batch of capacitors. This approach overestimated the total activity of individual radionuclides identified in radiological characterization by a factor ranging from 1.4 to 640. On the basis of this conservative assumption, the capacitors were assumed to be shipped from Argonne to the Clean Harbors facility, located in Deer Park, Texas, for incineration and disposal. The Clean Harbors facility is a state-permitted TSCA facility for treatment and disposal of hazardous materials. At this facility, the capacitors are to be shredded and incinerated with the resulting incineration residue buried in a nearby landfill owned by the company. A variety of receptors that have the potential of receiving radiation exposures were 3. Characterisation and testing of a prototype $6 \times 6$ cm$^2$ Argonne MCP-PMT CERN Document Server Cowan, Greig A; Needham, Matthew; Gambetta, Silvia; Eisenhardt, Stephan; McBlane, Neil; Malek, Matthew 2016-01-01 The Argonne micro-channel plate photomultiplier tube (MCP-PMT) is an offshoot of the Large Area Pico-second Photo Detector (LAPPD) project, wherein 6 $\times$ 6 cm$^2$ sized detectors are made at Argonne National Laboratory. Measurements of the properties of these detectors, including gain, time and spatial resolution, dark count rates, cross-talk and sensitivity to magnetic fields are reported. In addition, possible applications of these devices in future neutrino and collider physics experiments are discussed. 4. Argonne National Laboratory's photo-oxidation organic mixed-waste treatment system Energy Technology Data Exchange (ETDEWEB) Shearer, T.L.; Torres, T.; Conner, C. [Argonne National Lab., IL (United States)] [and others] 1997-12-01 This paper describes the installation and startup testing of the Argonne National Laboratory-East (ANL-E) photo-oxidation organic mixed-waste treatment system. This system will treat organic mixed (i.e., radioactive and hazardous) waste by oxidizing the organics to carbon dioxide and inorganic salts in aqueous media. The residue will be treated in the existing radwaste evaporators. The system is installed in the waste management facility at the ANL-E site in Argonne, Illinois. 5. Analysis of the Argonne distance tabletop exercise method. Energy Technology Data Exchange (ETDEWEB) Tanzman, E. A.; Nieves, L. A.; Decision and Information Sciences 2008-02-14 The purpose of this report is to summarize and evaluate the Argonne Distance Tabletop Exercise (DISTEX) method. DISTEX is intended to facilitate multi-organization, multi-objective tabletop emergency response exercises that permit players to participate from their own facility's incident command center. This report is based on experience during its first use, the FluNami 2007 exercise, which took place from September 19 to October 17, 2007. FluNami 2007 exercised the response of local public health officials and hospitals to a hypothetical pandemic flu outbreak. 
The underlying purpose of the DISTEX method is to make tabletop exercising more effective and more convenient for playing organizations. It combines elements of traditional tabletop exercising, such as scenario discussions and scenario injects, with distance learning technologies. This distance-learning approach also allows playing organizations to include a broader range of staff in the exercise. An average of 81.25 persons participated in each weekly webcast session from all playing organizations combined. The DISTEX method required development of several components. The exercise objectives were based on the U.S. Department of Homeland Security's Target Capabilities List. The ten playing organizations included four public health departments and six hospitals in the Chicago area. An extent-of-play agreement identified the objectives applicable to each organization. A scenario was developed to drive the exercise over its five-week life. Weekly problem-solving task sets were designed to address objectives that could not be addressed fully during webcast sessions, as well as to involve additional playing organization staff. Injects were developed to drive play between webcast sessions, and, in some cases, featured mock media stories based in part on player actions as identified from the problem-solving tasks. The weekly 90-minute webcast sessions were discussions among the playing organizations 6. The Argonne Leadership Computing Facility 2010 annual report. Energy Technology Data Exchange (ETDEWEB) Drugan, C. (LCF) 2011-05-09 7. Authorized limits for disposal of PCB capacitors from Buildings 361 and 391 at Argonne National Laboratory, Argonne, Illinois. Energy Technology Data Exchange (ETDEWEB) Cheng, J.-J.; Chen, S.-Y.; Environmental Science Division 2009-12-22 
8. Argonne Leadership Computing Facility 2011 annual report: Shaping future supercomputing. Energy Technology Data Exchange (ETDEWEB) Papka, M.; Messina, P.; Coffey, R.; Drugan, C. (LCF) 2012-08-16 The ALCF's Early Science Program aims to prepare key applications for the architecture and scale of Mira and to solidify libraries and infrastructure that will pave the way for other future production applications. Two billion core-hours have been allocated to 16 Early Science projects on Mira. The projects, in addition to promising delivery of exciting new science, are all based on state-of-the-art, petascale, parallel applications. The project teams, in collaboration with ALCF staff and IBM, have undertaken intensive efforts to adapt their software to take advantage of Mira's Blue Gene/Q architecture, which, in a number of ways, is a precursor to future high-performance-computing architecture. The Argonne Leadership Computing Facility (ALCF) enables transformative science that solves some of the most difficult challenges in biology, chemistry, energy, climate, materials, physics, and other scientific realms. Users partnering with ALCF staff have reached research milestones previously unattainable, due to the ALCF's world-class supercomputing resources and expertise in computational science. In 2011, the ALCF's commitment to providing outstanding science and leadership-class resources was honored with several prestigious awards. Research on multiscale brain blood flow simulations was named a Gordon Bell Prize finalist. Intrepid, the ALCF's BG/P system, ranked No. 1 on the Graph 500 list for the second consecutive year. The next-generation BG/Q prototype again topped the Green500 list. Skilled experts at the ALCF enable researchers to conduct breakthrough science on the Blue Gene system in key ways. The Catalyst Team matches project PIs with experienced computational scientists to maximize and accelerate research in their specific scientific domains. The Performance Engineering Team facilitates the effective use of applications on the Blue Gene system by assessing and improving the algorithms used by applications and the techniques used to 9. PHASE II VAULT TESTING OF THE ARGONNE RFID SYSTEM Energy Technology Data Exchange (ETDEWEB) Willoner, T.; Turlington, R.; Koenig, R. 2012-06-25 The U.S. Department of Energy (DOE) (Environmental Management [EM], Office of Packaging and Transportation [EM-45]) Packaging and Certification Program (DOE PCP) has developed a Radio Frequency Identification (RFID) tracking and monitoring system, called ARG-US, for the management of nuclear materials packages during transportation and storage. The performance of the ARG-US RFID equipment and system has been fully tested in two demonstration projects in April 2008 and August 2009. With the strong support of DOE-SR and DOE PCP, a field testing program was completed in Savannah River Site's K-Area Material Storage (KAMS) Facility, an active Category I Plutonium Storage Facility, in 2010. As the next step (Phase II) of continued vault testing for the ARG-US system, the Savannah River Site K Area Material Storage facility has placed the ARG-US RFIDs into the 910B storage vault for operational testing. 
This latest version (Mark III) of the Argonne RFID system now has the capability to measure radiation dose and dose rate. This paper will report field testing progress of the ARG-US RFID equipment in KAMS, the operability and reliability trend results associated with the applications of the system, and discuss the potential benefits in enhancing safety, security and materials accountability. The purpose of this Phase II K Area test is to verify the accuracy of the radiation monitoring and proper functionality of the ARG-US RFID equipment and system under a realistic environment in the KAMS facility. Deploying the ARG-US RFID system leads to a reduced need for manned surveillance and increased inventory periods by providing real-time access to status and event history traceability, including environmental condition monitoring and radiation monitoring. The successful completion of the testing program will provide field data to support a future development and testing. This will increase Operation efficiency and cost effectiveness for vault operation. As the next step 10. Argonne National Laboratory summary site environmental report for calendar year 2006. Energy Technology Data Exchange (ETDEWEB) Golchert, N. W.; ESH/QA Oversight 2008-03-27 This booklet is designed to inform the public about what Argonne National Laboratory is doing to monitor its environment and to protect its employees and neighbors from any adverse environmental impacts from Argonne research. The Downers Grove South Biology II class was selected to write this booklet, which summarizes Argonne's environmental monitoring programs for 2006. Writing this booklet also satisfies the Illinois State Education Standard, which requires that students need to know and apply scientific concepts to graduate from high school. This project not only provides information to the public, it will help students become better learners. The Biology II class was assigned to condense Argonne's 300-page, highly technical Site Environmental Report into a 16-page plain-English booklet. The site assessment relates to the class because the primary focus of the Biology II class is ecology and the environment. Students developed better learning skills by working together cooperatively, writing and researching more effectively. Students used the Argonne Site Environmental Report, the Internet, text books and information from Argonne scientists to help with their research on their topics. The topics covered in this booklet are the history of Argonne, groundwater, habitat management, air quality, Argonne research, Argonne's environmental non-radiological program, radiation, and compliance. The students first had to read and discuss the Site Environmental Report and then assign topics to focus on. Dr. Norbert Golchert and Mr. David Baurac, both from Argonne, came into the class to help teach the topics more in depth. The class then prepared drafts and wrote a final copy. Ashley Vizek, a student in the Biology class stated, 'I reviewed my material and read it over and over. I then took time to plan my paper out and think about what I wanted to write about, put it into foundation questions and started to write my paper. I rewrote and revised so I 11. Argonne National Laboratory: Laboratory Directed Research and Development FY 1993 program activities. 
Annual report Energy Technology Data Exchange (ETDEWEB) None 1993-12-23 The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel concepts, enhance the Laboratory's R&D capabilities, and further the development of its strategic initiatives. Projects are selected from proposals for creative and innovative R&D studies which are not yet eligible for timely support through normal programmatic channels. Among the aims of the projects supported by the Program are establishment of engineering proof-of-principle; assessment of design feasibility for prospective facilities; development of an instrumental prototype, method, or system; or discovery in fundamental science. Several of these projects are closely associated with major strategic thrusts of the Laboratory as described in Argonne's Five-Year Institutional Plan, although the scientific implications of the achieved results extend well beyond Laboratory plans and objectives. The projects supported by the Program are distributed across the major programmatic areas at Argonne as indicated in the Laboratory's LDRD Plan for FY 1993. 12. Spent fuel treatment and mineral waste form development at Argonne National Laboratory-West Energy Technology Data Exchange (ETDEWEB) Goff, K.M.; Benedict, R.W.; Bateman, K. [Argonne National Lab., Idaho Falls, ID (United States)]; Lewis, M.A.; Pereira, C. [Argonne National Lab., IL (United States)]; Musick, C.A. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States)] 1996-07-01 At Argonne National Laboratory-West (ANL-West) there are several thousand kilograms of metallic spent nuclear fuel containing bond sodium. This fuel will be treated in the Fuel Conditioning Facility (FCF) at ANL-West to produce stable waste forms for storage and disposal. Both mineral and metal high-level waste forms will be produced. The mineral waste form will contain the active metal fission products and the transuranics. Cold small-scale waste form testing has been ongoing at Argonne in Illinois. Large-scale testing is commencing at ANL-West. 13. Computation of Two-Body Matrix Elements From the Argonne $v_{18}$ Potential CERN Document Server Mihaila, Bogdan; Heisenberg, Jochen H. 1998-01-01 We discuss the computation of two-body matrix elements from the Argonne $v_{18}$ interaction. The matrix elements calculation is presented both in particle-particle and in particle-hole angular momentum coupling. The procedures developed here can be applied to the case of other NN potentials, provided that they have a similar operator format. 14. Argonne National Laboratory-East site environmental report for calendar year 1995 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Kolzow, R.G. [Environmental Management Operation, Argonne National Lab., IL (United States)] 1996-09-01 This report presents the environmental report for Argonne National Laboratory-East for the year 1995. Topics discussed include: general description of the site including climatology, geology, seismicity, hydrology, vegetation, endangered species, population, water and land use, and archaeology; compliance summary; environmental program information; environmental nonradiological program information; ground water protection; and radiological monitoring program. 15. Applied mathematical sciences research at Argonne, April 1, 1981-March 31, 1982 Energy Technology Data Exchange (ETDEWEB) Pieper, G.W. (ed.) 
1982-01-01 This report reviews the research activities in Applied Mathematical Sciences at Argonne National Laboratory for the period April 1, 1981, through March 31, 1982. The body of the report discusses various projects carried out in three major areas of research: applied analysis, computational mathematics, and software engineering. Information on section staff, visitors, workshops, and seminars is found in the appendices. 16. Bush will tour Illinois lab working to fight terrorism Argonne develops chemical detectors CERN Multimedia 2002-01-01 "A chemical sensor that detects cyanide gas, a biochip that can determine the presence of anthrax, and a portable device that finds concealed nuclear materials are among the items scientists at Argonne National Laboratory are working on to combat terrorism" (1/2 page). 17. Quality management at Argonne National Laboratory: Status, accomplishments, and lessons learned Energy Technology Data Exchange (ETDEWEB) NONE 1995-06-01 In April 1992, Argonne National Laboratory (ANL) launched the implementation of quality management (QM) as an initiative of the Laboratory Director. The goal of the program is to seek ways of improving Laboratory performance and effectiveness by drawing from the realm of experiences in the global total quality management movement. The Argonne QM initiative began with fact finding and formulating a strategy for implementation; the emphasis is that the underlying principles of QM should be an integral part of how the Laboratory is managed and operated. A primary theme that has guided the Argonne QM initiative is to consider only those practices that offer the potential for real improvement, make sense, fit the culture, and would be credible to the broad population. In October 1993, the Laboratory began to pilot a targeted set of QM activities selected to produce outcomes important to the Laboratory--strengthening the customer focus, improving work processes, enhancing employee involvement and satisfaction, and institutionalizing QM. This report describes the results of the just-concluded QM development and demonstration phase in terms of detailed strategies, accomplishments, and lessons learned. These results are offered as evidence to support the conclusion that the Argonne QM initiative has achieved value-added results and credibility and is well positioned to support future deployment across the entire Laboratory as an integrated management initiative. Recommendations for follow-on actions to implement future deployment are provided separately. 18. Argonne National Laboratory research to help U.S. steel industry CERN Multimedia 2003-01-01 Argonne National Laboratory has joined a 1.29 million project to develop technology software that will use advanced computational fluid dynamics (CFD), a method of solving fluid flow and heat transfer problems. This technology allows engineers to evaluate and predict erosion patterns within blast furnaces (1 page). 19. Update on intrusive characterization of mixed contact-handled transuranic waste at Argonne-West Energy Technology Data Exchange (ETDEWEB) Dwight, C.C.; Jensen, B.A.; Bryngelson, C.D.; Duncan, D.S. 1997-02-03 Argonne National Laboratory and Lockheed Martin Idaho Technologies Company have jointly participated in the Department of Energy's (DOE) Waste Isolation Pilot Plant (WIPP) Transuranic Waste Characterization Program since 1990. 
Intrusive examinations have been conducted in the Waste Characterization Area, located at Argonne-West in Idaho Falls, Idaho, on over 200 drums of mixed contact-handled transuranic waste. This is double the number of drums characterized since the last update at the 1995 Waste Management Conference. These examinations have provided waste characterization information that supports performance assessment of WIPP and that supports Lockheeds compliance with the Resource Conservation and Recovery Act. Operating philosophies and corresponding regulatory permits have been broadened to provide greater flexibility and capability for waste characterization, such as the provision for minor treatments like absorption, neutralization, stabilization, and amalgamation. This paper provides an update on Argonnes intrusive characterization permits, procedures, results, and lessons learned. Other DOE sites that must deal with mixed contact-handled transuranic waste have initiated detailed planning for characterization of their own waste. The information presented herein could aid these other storage and generator sites in further development of their characterization efforts. 20. Argonne National Laboratory annual report of Laboratory Directed Research and Development Program Activities FY 2009. Energy Technology Data Exchange (ETDEWEB) Office of the Director 2010-04-09 I am pleased to submit Argonne National Laboratory's Annual Report on its Laboratory Directed Research and Development (LDRD) activities for fiscal year 2009. Fiscal year 2009 saw a heightened focus by DOE and the nation on the need to develop new sources of energy. Argonne scientists are investigating many different sources of energy, including nuclear, solar, and biofuels, as well as ways to store, use, and transmit energy more safely, cleanly, and efficiently. DOE selected Argonne as the site for two new Energy Frontier Research Centers (EFRCs) - the Institute for Atom-Efficient Chemical Transformations and the Center for Electrical Energy Storage - and funded two other EFRCs to which Argonne is a major partner. The award of at least two of the EFRCs can be directly linked to early LDRD-funded efforts. LDRD has historically seeded important programs and facilities at the lab. Two of these facilities, the Advanced Photon Source and the Center for Nanoscale Materials, are now vital contributors to today's LDRD Program. New and enhanced capabilities, many of which relied on LDRD in their early stages, now help the laboratory pursue its evolving strategic goals. LDRD has, since its inception, been an invaluable resource for positioning the Laboratory to anticipate, and thus be prepared to contribute to, the future science and technology needs of DOE and the nation. During times of change, LDRD becomes all the more vital for facilitating the necessary adjustments while maintaining and enhancing the capabilities of our staff and facilities. Although I am new to the role of Laboratory Director, my immediate prior service as Deputy Laboratory Director for Programs afforded me continuous involvement in the LDRD program and its management. Therefore, I can attest that Argonne's program adhered closely to the requirements of DOE Order 413.2b and associated guidelines governing LDRD. Our LDRD program management continually strives to be more efficient. In 1. Argonne National Laboratory summary site environmental report for calendar year 2007. Energy Technology Data Exchange (ETDEWEB) Golchert, N. W. 
2009-05-22 This summary of Argonne National Laboratory's Site Environmental Report for calendar year 2007 was written by 20 students at Downers Grove South High School in Downers Grove, Ill. The student authors are classmates in Mr. Howard's Bio II course. Biology II is a research-based class that teaches students the process of research by showing them how the sciences apply to daily life. For the past seven years, Argonne has worked with Biology II students to create a short document summarizing the Site Environmental Report to provide the public with an easy-to-read summary of the annual 300-page technical report on the results of Argonne's on-site environmental monitoring program. The summary is made available online and given to visitors to Argonne, researchers interested in collaborating with Argonne, future employees, and many others. In addition to providing Argonne and the public with an easily understandable short summary of a large technical document, the participating students learn about professional environmental monitoring procedures, achieve a better understanding of the time and effort put forth into summarizing and publishing research, and gain confidence in their own abilities to express themselves in writing. The Argonne Summary Site Environmental Report fits into the educational needs for 12th grade students. Illinois State Educational Goal 12 states that a student should understand the fundamental concepts, principles, and interconnections of the life, physical, and earth/space sciences. To create this summary booklet, the students had to read and understand the larger technical report, which discusses in-depth many activities and programs that have been established by Argonne to maintain a safe local environment. Creating this Summary Site Environmental Report also helps students fulfill Illinois State Learning Standard 12B5a, which requires that students be able to analyze and explain biodiversity issues, and the causes and effects of 2. Diagnostic studies on lithium-ion cells at Argonne National Laboratory: an overview Science.gov (United States) Abraham, Daniel P. 2010-04-01 High-power and high-energy lithium-ion cells are being studied at Argonne National Laboratory (Argonne) as part of the U.S. Department of Energy's FreedomCar and Vehicle Technologies (FCVT) program. Cells ranging in capacity from 1 mAh to 1Ah, and containing a variety of electrodes and electrolytes, are examined to determine suitable material combinations that will meet and exceed the FCVT performance, cost, and safety targets. In this article, accelerated aging of 18650-type cells, and characterization of components harvested from these cells, is described. Several techniques that include electrochemical measurements, analytical electron microscopy, and x-ray spectroscopy were used to study the various cell components. Data from these studies were used to identify the most likely contributors to property degradation and determine mechanisms responsible for cell capacity fade and impedance rise. 3. Argonne National Laboratory Physics Division annual report, January--December 1996 Energy Technology Data Exchange (ETDEWEB) Thayer, K.J. [ed. 
1997-08-01 The past year has seen several of the Physics Division's new research projects reach major milestones with first successful experiments and results: the atomic physics station in the Basic Energy Sciences Research Center at the Argonne Advanced Photon Source was used in the first high-energy, high-brilliance x-ray studies in atomic and molecular physics; the Short Orbit Spectrometer in Hall C at the Thomas Jefferson National Accelerator Facility (TJNAF), for which the Argonne medium energy nuclear physics group was responsible, was used extensively in the first round of experiments at TJNAF; at ATLAS, several new beams of radioactive isotopes were developed and used in studies of nuclear physics and nuclear astrophysics; the new ECR ion source at ATLAS was completed and first commissioning tests indicate excellent performance characteristics; Quantum Monte Carlo calculations of mass-8 nuclei were performed for the first time with realistic nucleon-nucleon interactions using state-of-the-art computers, including Argonne's massively parallel IBM SP. At the same time, other future projects are well under way: preparations for the move of Gammasphere to ATLAS in September 1997 have progressed as planned. These new efforts are embedded in, or flowing from, the vibrant ongoing research program described in some detail in this report: nuclear structure and reactions with heavy ions; measurements of reactions of astrophysical interest; studies of nucleon and sub-nucleon structures using leptonic probes at intermediate and high energies; atomic and molecular structure with high-energy x-rays. The experimental efforts are being complemented with efforts in theory, from QCD to nucleon-meson systems to structure and reactions of nuclei. Finally, the operation of ATLAS as a national users facility has achieved a new milestone, with 5,800 hours of beam on target for experiments during the past fiscal year. 4. Derived concentration guideline levels for Argonne National Laboratory's building 310 area. Energy Technology Data Exchange (ETDEWEB) Kamboj, S., Dr.; Yu, C., Dr. (Environmental Science Division) 2011-08-12 The derived concentration guideline level (DCGL) is the allowable residual radionuclide concentration that can remain in soil after remediation of the site without radiological restrictions on the use of the site. It is sometimes called the single radionuclide soil guideline or the soil cleanup criteria. This report documents the methodology, scenarios, and parameters used in the analysis to support establishing radionuclide DCGLs for Argonne National Laboratory's Building 310 area. 5. Research in mathematics and computer science at Argonne, July 1, 1986-January 6, 1988 Energy Technology Data Exchange (ETDEWEB) Pieper, G.W. (ed.) 1988-01-01 This report reviews the research activities in the Mathematics and Computer Science Division at Argonne National Laboratory for the period July 1, 1986, through January 6, 1988. The body of the report gives a brief look at the MCS staff and the research facilities, and discusses various projects carried out in two major areas of research: analytical and numerical methods and advanced computer systems concepts. Information on division staff, visitors, workshops, and seminars is found in the appendixes. 6 figs. 6. 
Status report on the positive ion injector (PII) for ATLAS at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Zinkann, G.P.; Added, N.; Billquist, P.; Bogaty, J.; Clifft, B.; Markovich, P.; Phillips, D.; Strickhorn, P.; Shepard, K.W. 1991-01-01 The Positive Ion Injector (PII) is part of the Uranium upgrade for the ATLAS accelerator at Argonne National Laboratory. This paper will include a technical discussion of the Positive Ion Injector (PII) accelerator with its superconducting, niobium, very low-velocity accelerating structures. It will also discuss the current construction schedule of PII, and review an upgrade of the fast-tuning system. 10 refs., 6 figs. 7. Argonne National Laboratory Annual Report of Laboratory Directed Research and Development program activities FY 2011. Energy Technology Data Exchange (ETDEWEB) (Office of The Director) 2012-04-25 As a national laboratory, Argonne concentrates on scientific and technological challenges that can only be addressed through a sustained, interdisciplinary focus at a national scale. Argonne's eight major initiatives, as enumerated in its strategic plan, are Hard X-ray Sciences, Leadership Computing, Materials and Molecular Design and Discovery, Energy Storage, Alternative Energy and Efficiency, Nuclear Energy, Biological and Environmental Systems, and National Security. The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel technical concepts, enhance the Laboratory's research and development (R and D) capabilities, and pursue its strategic goals. Projects are selected from proposals for creative and innovative R and D studies that require advance exploration before they are considered to be sufficiently developed to obtain support through normal programmatic channels. Among the aims of the projects supported by the LDRD Program are the following: establishment of engineering proof of principle, assessment of design feasibility for prospective facilities, development of instrumentation or computational methods or systems, and discoveries in fundamental science and exploratory development. 8. Argonne National Laboratory Annual Report of Laboratory Directed Research and Development program activities FY 2010. Energy Technology Data Exchange (ETDEWEB) (Office of The Director) 2012-04-25 As a national laboratory, Argonne concentrates on scientific and technological challenges that can only be addressed through a sustained, interdisciplinary focus at a national scale. Argonne's eight major initiatives, as enumerated in its strategic plan, are Hard X-ray Sciences, Leadership Computing, Materials and Molecular Design and Discovery, Energy Storage, Alternative Energy and Efficiency, Nuclear Energy, Biological and Environmental Systems, and National Security. The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel technical concepts, enhance the Laboratory's research and development (R and D) capabilities, and pursue its strategic goals. Projects are selected from proposals for creative and innovative R and D studies that require advance exploration before they are considered to be sufficiently developed to obtain support through normal programmatic channels. 
Among the aims of the projects supported by the LDRD Program are the following: establishment of engineering proof of principle, assessment of design feasibility for prospective facilities, development of instrumentation or computational methods or systems, and discoveries in fundamental science and exploratory development. 9. Environment, Safety and Health Progress Assessment of the Argonne Illinois Site Energy Technology Data Exchange (ETDEWEB) 1993-11-01 This report documents the results of the US Department of Energy (DOE) Environment, Safety and Health (ES&H) Progress Assessment of the Argonne Illinois Site (AIS), near Chicago, Illinois, conducted from October 25 through November 9, 1993. During the Progress Assessment, activities included a selective review of the ES&H management systems and programs with principal focus on the DOE Office of Energy Research (ER); CH, which includes the Argonne Area Office; the University of Chicago; and the contractors organization responsible for operation of Argonne National Laboratory (ANL). The ES&H Progress Assessments are part of DOEs continuing effort to institutionalize line management accountability and the self-assessment process throughout DOE and its contractor organizations. The purpose of the AIS ES&H Progress Assessment was to provide the Secretary of Energy, senior DOE managers, and contractor management with concise independent information on the following: change in culture and attitude related to ES&H activities; progress and effectiveness of the ES&H corrective actions resulting from the previous Tiger Team Assessment; adequacy and effectiveness of the ES&H self-assessment process of the DOE line organizations, the site management, and the operating contractor; and effectiveness of DOE and contractor management structures, resources, and systems to effectively address ES&H problems and new ES&H initiatives. 10. Argonne National Laboratory Annual Report of Laboratory Directed Research and Development Program Activities for FY 1994 Energy Technology Data Exchange (ETDEWEB) None 1995-02-25 The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel concepts, enhance the Laboratory's R and D capabilities, and further the development of its strategic initiatives. Projects are selected from proposals for creative and innovative R and D studies which are not yet eligible for timely support through normal programmatic channels. Among the aims of the projects supported by the Program are establishment of engineering proof-of-principle; assessment of design feasibility for prospective facilities; development of an instrumental prototype, method, or system; or discovery in fundamental science. Several of these projects are closely associated with major strategic thrusts of the Laboratory as described in Argonne's Five-Year Institutional Plan, although the scientific implications of the achieved results extend well beyond Laboratory plans and objectives. The projects supported by the Program are distributed across the major programmatic areas at Argonne as indicated in the Laboratory's LDRD Plan for FY 1994. 
Project summaries of research in the following areas are included: (1) Advanced Accelerator and Detector Technology; (2) X-ray Techniques for Research in Biological and Physical Science; (3) Nuclear Technology; (4) Materials Science and Technology; (5) Computational Science and Technology; (6) Biological Sciences; (7) Environmental Sciences: (8) Environmental Control and Waste Management Technology; and (9) Novel Concepts in Other Areas. 11. Leidos Biomed Teams with NCI, DOE, and Argonne National Lab to Support National X-Ray Resource | Poster Science.gov (United States) Scientists are making progress in understanding a bleeding disorder caused by prescription drug interactions, thanks to a high-tech research facility involving two federal national laboratories, Argonne and Frederick. 12. Argonne National Laboratory High Energy Physics Division semiannual report of research activities, January 1, 1989--June 30, 1989 Energy Technology Data Exchange (ETDEWEB) 1989-01-01 This paper discuss the following areas on High Energy Physics at Argonne National Laboratory: experimental program; theory program; experimental facilities research; accelerator research and development; and SSC detector research and development. 13. Argonne National Laboratorys photo-oxidation organic mixed waste treatment system - installation and startup testing Energy Technology Data Exchange (ETDEWEB) Shearer, T.L.; Nelson, R.A.; Torres, T.; Conner, C.; Wygmans, D. 1997-09-01 This paper describes the installation and startup testing of the Argonne National Laboratory (ANL-E) Photo-Oxidation Organic Mixed Waste Treatment System. This system will treat organic mixed (i.e., radioactive and hazardous) waste by oxidizing the organics to carbon dioxide and inorganic salts in an aqueous media. The residue will be treated in the existing radwaste evaporators. The system is installed in the Waste Management Facility at the ANL-E site in Argonne, Illinois. 1 fig. 14. Development and analysis of a meteorological database, Argonne National Laboratory, Illinois Science.gov (United States) Over, Thomas M.; Price, Thomas H.; Ishii, Audrey 2010-01-01 A database of hourly values of air temperature, dewpoint temperature, wind speed, and solar radiation from January 1, 1948, to September 30, 2003, primarily using data collected at the Argonne National Laboratory station, was developed for use in continuous-time hydrologic modeling in northeastern Illinois. Missing and apparently erroneous data values were replaced with adjusted values from nearby stations used as 'backup'. Temporal variations in the statistical properties of the data resulting from changes in measurement and data-storage methodologies were adjusted to match the statistical properties resulting from the data-collection procedures that have been in place since January 1, 1989. The adjustments were computed based on the regressions between the primary data series from Argonne National Laboratory and the backup series using data obtained during common periods; the statistical properties of the regressions were used to assign estimated standard errors to values that were adjusted or filled from other series. Each hourly value was assigned a corresponding data-source flag that indicates the source of the value and its transformations. 
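The backup-station regression adjustment described in this entry can be illustrated with a brief sketch. This is only a schematic of the general approach, not the code used to build the Argonne database; the function and variable names (fill_from_backup, primary, backup, the flag labels) are hypothetical, and the actual processing fit separate regressions for each variable and measurement era.

```python
import numpy as np
import pandas as pd

def fill_from_backup(primary: pd.Series, backup: pd.Series):
    """Fill gaps in an hourly series from a backup station via linear regression.

    Returns the filled series, a data-source flag for every hour, and the
    residual standard error of the fit (used as the estimated standard
    error assigned to each filled value).
    """
    # Fit the regression on hours where both stations reported data.
    common = pd.concat({"p": primary, "b": backup}, axis=1).dropna()
    slope, intercept = np.polyfit(common["b"], common["p"], deg=1)
    residuals = common["p"] - (slope * common["b"] + intercept)
    std_error = residuals.std(ddof=2)

    # Replace missing primary values with regression-adjusted backup values
    # and record the source of every hourly value in a flag series.
    filled = primary.copy()
    flags = pd.Series("PRIMARY", index=primary.index)
    gaps = primary.isna() & backup.notna()
    filled[gaps] = slope * backup[gaps] + intercept
    flags[gaps] = "BACKUP_ADJUSTED"
    return filled, flags, std_error
```

In the same spirit as the database described above, the returned flag series is what allows later users to separate original Argonne observations from adjusted backup values.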
An analysis of the data-source flags indicates that all the series in the database except dewpoint have a similar fraction of Argonne National Laboratory data, with about 89 percent for the entire period, about 86 percent from 1949 through 1988, and about 98 percent from 1989 through 2003. The dewpoint series, for which observations at Argonne National Laboratory did not begin until 1958, has only about 71 percent Argonne National Laboratory data for the entire period, about 63 percent from 1948 through 1988, and about 93 percent from 1989 through 2003, indicating a lower reliability of the dewpoint sensor. A basic statistical analysis of the filled and adjusted data series in the database, and a series of potential evapotranspiration computed from them using the computer program LXPET (Lamoreux Potential 15. Argonne's Laboratory Computing Resource Center 2009 annual report. Energy Technology Data Exchange (ETDEWEB) Bair, R. B. (CLS-CI) 2011-05-13 Now in its seventh year of operation, the Laboratory Computing Resource Center (LCRC) continues to be an integral component of science and engineering research at Argonne, supporting a diverse portfolio of projects for the U.S. Department of Energy and other sponsors. The LCRC's ongoing mission is to enable and promote computational science and engineering across the Laboratory, primarily by operating computing facilities and supporting high-performance computing application use and development. This report describes scientific activities carried out with LCRC resources in 2009 and the broad impact on programs across the Laboratory. The LCRC computing facility, Jazz, is available to the entire Laboratory community. In addition, the LCRC staff provides training in high-performance computing and guidance on application usage, code porting, and algorithm development. All Argonne personnel and collaborators are encouraged to take advantage of this computing resource and to provide input into the vision and plans for computing and computational analysis at Argonne. The LCRC Allocations Committee makes decisions on individual project allocations for Jazz. Committee members are appointed by the Associate Laboratory Directors and span a range of computational disciplines. The 350-node LCRC cluster, Jazz, began production service in April 2003 and has been a research work horse ever since. Hosting a wealth of software tools and applications and achieving high availability year after year, researchers can count on Jazz to achieve project milestones and enable breakthroughs. Over the years, many projects have achieved results that would have been unobtainable without such a computing resource. In fiscal year 2009, there were 49 active projects representing a wide cross-section of Laboratory research and almost all research divisions. 16. Argonne CW Linac (ACWL) -- Legacy from SDI and opportunities for the future Energy Technology Data Exchange (ETDEWEB) McMichael, G.E.; Yule, T.J. 1994-08-01 The former Strategic Defense Initiative Organization (SDIO) invested significant resources over a 6-year period to develop and build an accelerator to demonstrate the launching of a cw beam with characteristics suitable for a space-based Neutral Particle Beam (NPD) system. This accelerator, the CWDD (Continuous Wave Deuterium Demonstrator) accelerator, was designed to accelerate 80 mA cw of D{sup {minus}} to 7.5 MeV. 
A considerable amount of hardware was constructed and installed in the Argonne-based facility, and major performance milestones were achieved before program funding from the Department of Defense ended in October 1993. Existing assets have been turned over to Argonne. Assets include a fully functional 200 kV cw D{sup {minus}} injector, a cw RFQ that has been tuned, leak checked and aligned, beam lines and a high-power beam stop, all installed in a shielded vault with appropriate safety and interlock systems. In addition, there are two high power (1 MW) cw rf amplifiers and all the ancillary power, cooling and control systems required for a high-power accelerator system. The SDI mission required that the CWDD accelerator structures operate at cryogenic temperatures (26 K), a requirement that placed severe limitations on operating period (CWDD would have provided 20 seconds of cw beam every 90 minutes). However, the accelerator structures were designed for full-power rf operation with water cooling and ACWL (Argonne Continuous Wave Linac), the new name for CWDD in its water-cooled, positive-ion configuration, will be able to operate continuously. Project status and achievements will be reviewed. Preliminary design of a proton conversion for the RFQ, and other proposals for turning ACWL into a testbed for cw-linac engineering, will be discussed. 17. Status of the Argonne heavy-ion-fusion low-beta linac Energy Technology Data Exchange (ETDEWEB) Watson, J.M.; Bogaty, J.M.; Moretti, A.; Sacks, R.A.; Sesol, N.Q.; Wright, A.J. 1981-01-01 The primary goal of the experimental program in heavy-ion fusion (HIF) at Argonne National Laboratory (ANL) during the next few years is to demonstrate many of the requirements of a RF linac driver for inertial-fusion power plants. So far, most of the construction effort has been applied to the front end. The ANL program has developed a high-intensity xenon source, a 1.5-MV preaccelerator, and the initial cavities of the low-beta linac. The design, initial tests, and status of the low-beta linac are described. 18. Status of the Argonne heavy ion fusion low-beta linac Energy Technology Data Exchange (ETDEWEB) Watson, J.M.; Bogaty, J.M.; Moretti, A.; Sacks, R.A.; Sesol, N.Q.; Wright, A.J. 1981-06-01 The primary goal of the experimental program in heavy ion fusion (HIF) at Argonne National Laboratory (ANL) during the next few years is to demonstrate many of the requirements of a RF linac driver for inertial fusion power plants. So far, most of the construction effort has been applied to the front end. The ANL program has developed a high intensity xenon source, a 1.5 MV preaccelerator, and the initial cavities of the low-beta linac. The design, initial tests and status of the low-beta linac are described. 8 refs. 19. Generation of annular, high-charge electron beams at the Argonne wakefield accelerator Science.gov (United States) Wisniewski, E. E.; Li, C.; Gai, W.; Power, J. 2013-01-01 We present and discuss the results from the experimental generation of high-charge annular(ring-shaped)electron beams at the Argonne Wakefield Accelerator (AWA). These beams were produced by using laser masks to project annular laser profiles of various inner and outer diameters onto the photocathode of an RF gun. The ring beam is accelerated to 15 MeV, then it is imaged by means of solenoid lenses. Transverse profiles are compared for different solenoid settings. 
Discussion includes a comparison with Parmela simulations, some applications of high-charge ring beams,and an outline of a planned extension of this study. 20. Ionomer-like structures and {pi}-cation interactions in Argonne Premium coals Energy Technology Data Exchange (ETDEWEB) Opaprakasit, P.; Scaroni, A.W.; Painter, P.C. [Pennsylvania State University, University Park, PA (United States). Energy Institute 2002-06-01 The increase in the amount of pyridine-soluble material obtained from Argonne Premium coals after acid treatment is examined. The amount of pyridine-soluble material in most of the coals increases significantly with acid treatment. In low and to some extent medium rank coals this is largely a result of the presence of ionic clusters formed by carboxylate groups. In higher rank coals we are proposing that {pi}-cation interactions play a major role. These ion/coal interactions are of sufficient strength to act as 'reversible' cross-links, in the same way as ionic clusters behave in ionomers. 26 refs., 14 figs., 3 tabs. 1. Research in mathematics and computer science at Argonne, September 1989--February 1991 Energy Technology Data Exchange (ETDEWEB) Pieper, G.W. 1991-03-01 This report reviews the research activities in the Mathematics and Computer Science Division at Argonne National Laboratory for the period September 1989 through February 1991. The body of the report gives a brief look at the MCS staff and the research facilities and then discusses the diverse research projects carried out in the division. Projects funded by non-DOE sources are also discussed, and new technology transfer activities are described. Further information on staff, visitors, workshops, and seminars is found in the appendixes. 2. Survey of biomedical and environental data bases, models, and integrated computer systems at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Murarka, I.P.; Bodeau, D.J.; Scott, J.M.; Huebner, R.H. 1978-08-01 This document contains an inventory (index) of information resources pertaining to biomedical and environmental projects at Argonne National Laboratory--the information resources include a data base, model, or integrated computer system. Entries are categorized as models, numeric data bases, bibliographic data bases, or integrated hardware/software systems. Descriptions of the Information Coordination Focal Point (ICFP) program, the system for compiling this inventory, and the plans for continuing and expanding it are given, and suggestions for utilizing the services of the ICFP are outlined. 3. Past and Future Work on Radiobiology Mega-Studies: A Case Study At Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Haley, Benjamin; Wang, Qiong; Wanzer, Beau; Vogt, Stefan; Finney, Lydia; Yang, Ping Liu; Paunesku, Tatjana; Woloschak, Gayle 2011-09-06 Between 1952 and 1992, more than 200 large radiobiology studies were conducted in research institutes throughout Europe, North America, and Japan to determine the effects of external irradiation and internal emitters on the lifespan and tissue toxicity development in animals. At Argonne National Laboratory, 22 external beam studies were conducted on nearly 700 beagle dogs and 50,000 mice between 1969 and 1992. These studies helped to characterize the effects of neutron and gamma irradiation on lifespan, tumorigenesis, and mutagenesis across a range of doses and dosing patterns. 
The records and tissues collected at Argonne during that time period have been carefully preserved and redisseminated. Using these archived data, ongoing statistical work has been done and continues to characterize quality of radiation, dose, dose rate, tissue, and gender-specific differences in the radiation responses of exposed animals. The ongoing application of newly-developed molecular biology techniques to the archived tissues has revealed gene-specific mutation rates following exposure to ionizing irradiation. The original and ongoing work with this tissue archive is presented here as a case study of a more general trend in the radiobiology megastudies. These experiments helped form the modern understanding of radiation responses in animals and continue to inform development of new radiation models. Recent archival efforts have facilitated open access to the data and materials produced by these studies, and so a unique opportunity exists to expand this continued research. 4. An in-house alternative to traditional SDI services at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Noel, R.E.; Dominiak, R.R. 1997-02-20 Selective Dissemination of Information (SDIs) are based on automated, well-defined programs that regularly produce precise, relevant bibliographic information. Librarians have typically turned to information vendors such as Dialog or STN international to design and implement these searches for their users in business, academia, and the science community. Because Argonne National Laboratory (ANL) purchases the Institute for Scientific Information (ISI) Current Contents tapes (all subject areas excluding Humanities). ANL scientists enjoy the benefit of in-house developments with BASISplus software programming and no longer need to turn to outside companies for reliable SDI service. The database and its customized services are known as ACCESS (Argonne Current Contents Electronic Search Service). Through collaboration with librarians on Boolean logic and selection of terms, users can now design their own personal profiles to comb the new data, thereby avoiding service fees from outside providers. Based on the feedback from scientists, it seems that this new service can help transform the ANL distributed libraries into more efficient central functioning entities that better serve the users. One goal is to eliminate the routing of paper copies of many new journal issues to different library locations for users to browse; instead users may be expected to rely more on electronic dissemination of both table of contents and customized SDIs for new scientific and technical information. 5. Special Report on "Allegations of Conflict of Interest Regarding Licensing of PROTECT by Argonne National Laboratory" Energy Technology Data Exchange (ETDEWEB) None 2009-08-01 In February 2009, the Office of Inspector General received a letter from Congressman Mark Steven Kirk of Illinois, which included constituent allegations that an exclusive technology licensing agreement by Argonne National Laboratory was tainted by inadequate competition, conflicts of interest, and other improprieties. The technology in question was for the Program for Response Options and Technology Enhancements for Chemical/Biological Terrorism, commonly referred to as PROTECT. 
Because of the importance of the Department of Energy's technology transfer program, especially as implementation of the American Recovery and Reinvestment Act matures, we reviewed selected aspects of the licensing process for PROTECT to determine whether the allegations had merit. In summary, under the facts developed during our review, it was understandable that interested parties concluded that there was a conflict of interest in this matter and that Argonne may have provided the successful licensee with an unfair advantage. In part, this was consistent with aspects of the complaint from Congressman Kirk's constituent. 6. Gas Warfare in World War I. The Use of Gas in the Meuse-Argonne Campaign, September-November 1918 Science.gov (United States) 1958-12-01 ERIBULLES-our-MEUSE-CUNEL, the Vth Corps the heights in BOIS de GESNES , BOIS do MONCY and the METIT BOIS, and the lt Corps the FORE? D’ ARGONNE to include...use in Le Petit Bois, Bois do Gesnes , Bois do Moncy, and the Argonne that night and the next day. The next evening the Aire Gpg retorted that none of...October, the left and center corps made slight gains, reaching Apremont, Exermont, and Gesnes , but the right corps, "hampered by the German flanking 7. Argonne's Laboratory Computing Resource Center : 2005 annual report. Energy Technology Data Exchange (ETDEWEB) Bair, R. B.; Coghlan, S. C; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Pieper, G. P. 2007-06-30 Argonne National Laboratory founded the Laboratory Computing Resource Center in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. The first goal of the LCRC was to deploy a mid-range supercomputing facility to support the unmet computational needs of the Laboratory. To this end, in September 2002, the Laboratory purchased a 350-node computing cluster from Linux NetworX. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the fifty fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2005, there were 62 active projects on Jazz involving over 320 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to improve the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure 8. Argonne's Laboratory computing resource center : 2006 annual report. Energy Technology Data Exchange (ETDEWEB) Bair, R. 
B.; Kaushik, D. K.; Riley, K. R.; Valdes, J. V.; Drugan, C. D.; Pieper, G. P. 2007-05-31 Argonne National Laboratory founded the Laboratory Computing Resource Center (LCRC) in the spring of 2002 to help meet pressing program needs for computational modeling, simulation, and analysis. The guiding mission is to provide critical computing resources that accelerate the development of high-performance computing expertise, applications, and computations to meet the Laboratory's challenging science and engineering missions. In September 2002 the LCRC deployed a 350-node computing cluster from Linux NetworX to address Laboratory needs for mid-range supercomputing. This cluster, named 'Jazz', achieved over a teraflop of computing power (10{sup 12} floating-point calculations per second) on standard tests, making it the Laboratory's first terascale computing system and one of the 50 fastest computers in the world at the time. Jazz was made available to early users in November 2002 while the system was undergoing development and configuration. In April 2003, Jazz was officially made available for production operation. Since then, the Jazz user community has grown steadily. By the end of fiscal year 2006, there were 76 active projects on Jazz involving over 380 scientists and engineers. These projects represent a wide cross-section of Laboratory expertise, including work in biosciences, chemistry, climate, computer science, engineering applications, environmental science, geoscience, information science, materials science, mathematics, nanoscience, nuclear engineering, and physics. Most important, many projects have achieved results that would have been unobtainable without such a computing resource. The LCRC continues to foster growth in the computational science and engineering capability and quality at the Laboratory. Specific goals include expansion of the use of Jazz to new disciplines and Laboratory initiatives, teaming with Laboratory infrastructure providers to offer more scientific data management capabilities, expanding Argonne staff 9. Physics Division Argonne National Laboratory description of the programs and facilities. Energy Technology Data Exchange (ETDEWEB) Thayer, K.J. [ed. 1999-05-24 The ANL Physics Division traces its roots to nuclear physics research at the University of Chicago around the time of the second world war. Following the move from the University of Chicago out to the present Argonne site and the formation of Argonne National Laboratory: the Physics Division has had a tradition of research into fundamental aspects of nuclear and atomic physics. Initially, the emphasis was on areas such as neutron physics, mass spectrometry, and theoretical studies of the nuclear shell model. Maria Goeppert Maier was an employee in the Physics Division during the time she did her Nobel-Prize-winning work on the nuclear shell model. These interests diversified and at the present time the research addresses a wide range of current problems in nuclear and atomic physics. The major emphasis of the current experimental nuclear physics research is in heavy-ion physics, centered around the ATLAS facility (Argonne Tandem-Linac Accelerator System) with its new injector providing intense, energetic ion beams over the fill mass range up to uranium. ATLAS is a designated National User Facility and is based on superconducting radio-frequency technology developed in the Physics Division. A small program continues in accelerator development. 
In addition, the Division has a strong program in medium-energy nuclear physics carried out at a variety of major national and international facilities. The nuclear theory research in the Division spans a wide range of interests including nuclear dynamics with subnucleonic degrees of freedom, dynamics of many-nucleon systems, nuclear structure, and heavy-ion interactions. This research makes contact with experimental research programs in intermediate-energy and heavy-ion physics, both within the Division and on the national and international scale. The Physics Division traditionally has strong connections with the nation's universities. We have many visiting faculty members and we encourage students to participate in our 10. Overview of basic and applied research on battery systems at Argonne Energy Technology Data Exchange (ETDEWEB) Nevitt, M. V. 1979-01-01 The need for a basic understanding of the ion transport and related effects that are observed under the unique physical and electrochemical conditions occurring in high-temperature, high-performance batteries is pointed out. Such effects include those that are typical of transport in bulk materials such as liquid and solid electrolytes and the less well understood effects observed in migration in and across the interfacial zones existing around electrodes. The basic and applied studies at Argonne National Laboratory, centered in part around the development of a Li(alloy)/iron sulfide battery system for energy storage, are briefly described as an example of the way that such an understanding is being sought by coordinated interdisciplinary research. 3 figures. 11. Ground State Correlations Using exp(S) Method for the Argonne-v18 Potential. Science.gov (United States) Heisenberg, Jochen; Mihaila, Bogdan 1997-04-01 We use the Argonne-v18 potential together with the phenomenological three-nucleon interaction to do the calculation of the mean-field single particle wave functions and the correlation operator S for ^16O. Our correlation operator includes the contributions from up to 4p4h terms. From the three-nucleon interaction we include only those terms that can be written as a density dependent two-body term. We present a breakdown of the contributions to the binding from the two- and the three-body interactions. The one- and the two-body densities for ^16O are presented. Effects of the center-of-mass correction on the charge density and form factor are also discussed. 12. Two-Nucleon Scattering without partial waves using a momentum space Argonne V18 interaction CERN Document Server Veerasamy, S; Polyzou, W N 2012-01-01 We test the operator form of the Fourier transform of the Argonne V18 potential by computing selected scattering observables and all Wolfenstein parameters for a variety of energies. These are compared to the GW-DAC database and to partial wave calculations. We represent the interaction and transition operators as expansions in a spin-momentum basis. In this representation the Lippmann-Schwinger equation becomes a six channel integral equation in two variables. Our calculations use different numbers of spin-momentum basis elements to represent the on- and off-shell transition operators. This is because different numbers of independent spin-momentum basis elements are required to expand the on- and off-shell transition operators. The choice of on and off-shell spin-momentum basis elements is made so the coefficients of the on-shell spin-momentum basis vectors are simply related to the corresponding off-shell coefficients. 
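For orientation, the operator-form calculation summarized above starts from the standard Lippmann-Schwinger equation; the expansion written below is only a generic sketch of a spin-momentum representation (the operator set O_j and the coefficient functions v_j, t_j are placeholders, not the authors' exact basis choice).

```latex
% Lippmann-Schwinger equation for the transition operator (standard form)
T(z) = V + V \, \frac{1}{z - H_0} \, T(z)

% Schematic spin-momentum expansion of the potential and transition operator,
% with a small set of independent operators O_j (six channels in the counting
% quoted in the abstract):
V(\mathbf{q}',\mathbf{q}) = \sum_{j} v_j\!\left(q', q, \hat{\mathbf{q}}'\!\cdot\hat{\mathbf{q}}\right)
    O_j(\boldsymbol{\sigma}_1,\boldsymbol{\sigma}_2,\mathbf{q}',\mathbf{q}),
\qquad
T(\mathbf{q}',\mathbf{q};z) = \sum_{j} t_j\!\left(q', q, \hat{\mathbf{q}}'\!\cdot\hat{\mathbf{q}};z\right)
    O_j(\boldsymbol{\sigma}_1,\boldsymbol{\sigma}_2,\mathbf{q}',\mathbf{q})
```

Inserting both expansions into the first equation and projecting onto the operator basis is what turns the Lippmann-Schwinger equation into coupled integral equations for the coefficient functions in the two momentum magnitudes and their relative angle, with no partial-wave decomposition required.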
13. The Anapole Moment of the Deuteron with the Argonne v18 Nucleon-Nucleon Interaction Model CERN Document Server Hyun, C H; Hyun, Chang Ho; Desplanques, Bertrand 2003-01-01 We calculate the deuteron anapole moment with the wave functions obtained from the Argonnev18nucleon-nucleon interaction model. The anapole moment operators are considered at the leading order. To minimize the uncertainty due to a lack of current conservation, we calculate the matrix element of the anapole moment from the original definition. In virtue of accurate wave functions, we can obtain a more precise value of the deuteron anapole moment which contains less uncertainty than the former works. We obtain a result reduced by more than 25% in the magnitude of the deuteron anapole moment. The reduction of individual nuclear contributions is much more important however, varying from a factor 2 for the spin part to a factor 4 for the convection and associated two-body currents. 14. National coal utilization assessment: modeling long-term coal production with the Argonne coal market model Energy Technology Data Exchange (ETDEWEB) Dux, C.D.; Kroh, G.C.; VanKuiken, J.C. 1977-08-01 The Argonne Coal Market Model was developed as part of the National Coal Utilization Assessment, a comprehensive study of coal-related environmental, health, and safety impacts. The model was used to generate long-term coal market scenarios that became the basis for comparing the impacts of coal-development options. The model has a relatively high degree of regional detail concerning both supply and demand. Coal demands are forecast by a combination of trend and econometric analysis and then input exogenously into the model. Coal supply in each region is characterized by a linearly increasing function relating increments of new mine capacity to the marginal cost of extraction. Rail-transportation costs are econometrically estimated for each supply-demand link. A quadratic programming algorithm is used to calculate flow patterns that minimize consumer costs for the system. 15. Decontamination and dismantlement of the JANUS Reactor at Argonne National Laboratory-East. Project final report Energy Technology Data Exchange (ETDEWEB) Fellhauer, C.R.; Clark, F.R. [Argonne National Lab., IL (United States). Technology Development Div.; Garlock, G.A. [MOTA Corp., Cayce, SC (United States) 1997-10-01 The decontamination and dismantlement of the JANUS Reactor at Argonne National Laboratory-East (ANL-E) was completed in October 1997. Descriptions and evaluations of the activities performed and analyses of the results obtained during the JANUS D and D Project are provided in this Final Report. The following information is included: objective of the JANUS D and D Project; history of the JANUS Reactor facility; description of the ANL-E site and the JANUS Reactor facility; overview of the D and D activities performed; description of the project planning and engineering; description of the D and D operations; summary of the final status of the JANUS Reactor facility based upon the final survey results; description of the health and safety aspects of the project, including personnel exposure and OSHA reporting; summary of the waste minimization techniques utilized and total waste generated by the project; and summary of the final cost and schedule for the JANUS D and D Project. 16. Proc. of the sixteenth symposium on energy engineering sciences, May 13-15, 1998, Argonne, IL. 
Energy Technology Data Exchange (ETDEWEB) None 1998-05-13 This Proceedings Volume includes the technical papers that were presented during the Sixteenth Symposium on Energy Engineering Sciences on May 13--15, 1998, at Argonne National Laboratory, Argonne, Illinois. The Symposium was structured into eight technical sessions, which included 30 individual presentations followed by discussion and interaction with the audience. A list of participants is appended to this volume. The DOE Office of Basic Energy Sciences (BES), of which Engineering Research is a component program, is responsible for the long-term, mission-oriented research in the Department. The Office has prime responsibility for establishing the basic scientific foundation upon which the Nation's future energy options will be identified, developed, and built. BES is committed to the generation of new knowledge necessary to solve present and future problems regarding energy exploration, production, conversion, and utilization, while maintaining respect for the environment. Consistent with the DOE/BES mission, the Engineering Research Program is charged with the identification, initiation, and management of fundamental research on broad, generic topics addressing energy-related engineering problems. Its stated goals are to improve and extend the body of knowledge underlying current engineering practice so as to create new options for enhancing energy savings and production, prolonging the useful life of energy-related structures and equipment, and developing advanced manufacturing technologies and materials processing. The program emphasis is on reducing costs through improved industrial production and performance and expanding the nation's store of fundamental knowledge for solving anticipated and unforeseen engineering problems in energy technologies. To achieve these goals, the Engineering Research Program supports approximately 130 research projects covering a broad spectrum of topics that cut across traditional engineering disciplines. The program 17. Changes in the Vegetation Cover in a Constructed Wetland at Argonne National Laboratory, Illinois Energy Technology Data Exchange (ETDEWEB) Bergman, C.L.; LaGory, K. 2004-01-01 Wetlands are valuable resources that are disappearing at an alarming rate. Land development has resulted in the destruction of wetlands for approximately 200 years. To combat this destruction, the federal government passed legislation that requires no net loss of wetlands. The United States Army Corps of Engineers (USACE) is responsible for regulating wetland disturbances. In 1991, the USACE determined that the construction of the Advanced Photon Source at Argonne National Laboratory would damage three wetlands that had a total area of one acre. Argonne was required to create a wetland of equal acreage to replace the damaged wetlands. For the first five years after this wetland was created (1992-1996), the frequency of plant species, relative cover, and water depth was closely monitored. The wetland was not monitored again until 2002. In 2003, the vegetation cover data were again collected with a similar methodology to previous years. The plant species were sampled using quadrats at randomly selected locations along transects throughout the wetland. The fifty sampling locations were monitored once in June and percent cover of each of the plant species was determined for each plot. Furthermore, the extent of standing water in the wetland was measured. In 2003, 21 species of plants were found and identified. 
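The frequency and relative-cover quantities tracked in this monitoring can be computed directly from quadrat records; the snippet below is a minimal illustration with made-up numbers, not the survey's actual data or software.

```python
import pandas as pd

# Hypothetical quadrat records: one row per sampling location,
# one column per species, values are percent cover estimated in the field.
cover = pd.DataFrame({
    "Phalaris arundinacea": [35, 60, 0, 20, 45],
    "Cirsium arvense":      [10,  0, 5, 15,  0],
    "Coronilla varia":      [ 0,  5, 0, 10,  5],
})

frequency = (cover > 0).mean()                  # share of quadrats where species occurs
mean_cover = cover.mean()                       # average percent cover per species
relative_cover = mean_cover / mean_cover.sum()  # each species' share of total cover

summary = pd.DataFrame({
    "frequency": frequency,
    "mean_cover_pct": mean_cover,
    "relative_cover": relative_cover,
}).sort_values("relative_cover", ascending=False)
print(summary)
```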
Eleven species dominated the wetland, among which were reed canary grass (Phalaris arundinacea), crown vetch (Coronilla varia), and Canada thistle (Cirsium arvense). These species are all non-native, invasive species. In the previous year, 30 species were found in the same wetland. The common species varied from the 2002 study but still had these non-native species in common. Reed canary grass and Canada thistle both increased by more than 100% from 2002. Unfortunately, the non-native species may be contributing to the loss of biodiversity in the wetland. In the future, control measures should be taken to ensure the establishment of more desired native species. 18. Argonne National Laboratory study of the transfer of federal computational technology to manufacturing industry in the State of Michigan Energy Technology Data Exchange (ETDEWEB) Mueller, C.J. 1991-11-01 This report describes a pilot study to develop, initiate the implementation, and document a process to identify computational technology capabilities resident within Argonne National Laboratory to small and medium-sized businesses in the State of Michigan. It is a derivative of a program entitled Technology Applications Development Process for the State of Michigan undertaken by the Industrial Technology Institute and MERRA under funding from the National Institute of Standards and Technology. The overall objective of the latter program is to develop procedures which can facilitate the discovery and commercialization of new technologies for the benefit of small and medium-size manufacturing firms. Federal laboratories such as Argonne, along with universities, have been identified by the Industrial Technology Institute as key sources of technology which can be profitably commercialized by the target firms. The scope of this study limited the investigation of technology areas for technology transfer to that of computational science and engineering featuring high performance computing. This area was chosen as the broad technological capability within Argonne to investigate for technology transfer to Michigan firms for several reasons. First, and most importantly, as a multidisciplinary laboratory, Argonne has the full range of scientific and engineering skills needed to utilize leading-edge computing capabilities in many areas of manufacturing. 19. Argonne National Laboratory study of the transfer of federal computational technology to manufacturing industry in the State of Michigan Energy Technology Data Exchange (ETDEWEB) Mueller, C.J. 1991-11-01 This report describes a pilot study to develop, initiate the implementation, and document a process to identify computational technology capabilities resident within Argonne National Laboratory to small and medium-sized businesses in the State of Michigan. It is a derivative of a program entitled Technology Applications Development Process for the State of Michigan'' undertaken by the Industrial Technology Institute and MERRA under funding from the National Institute of Standards and Technology. The overall objective of the latter program is to develop procedures which can facilitate the discovery and commercialization of new technologies for the benefit of small and medium-size manufacturing firms. Federal laboratories such as Argonne, along with universities, have been identified by the Industrial Technology Institute as key sources of technology which can be profitably commercialized by the target firms. 
The scope of this study limited the investigation of technology areas for technology transfer to that of computational science and engineering featuring high performance computing. This area was chosen as the broad technological capability within Argonne to investigate for technology transfer to Michigan firms for several reasons. First, and most importantly, as a multidisciplinary laboratory, Argonne has the full range of scientific and engineering skills needed to utilize leading-edge computing capabilities in many areas of manufacturing. 20. Practical superconductor development for electrical applications - Argonne National Laboratory quarterly report for the period ending September 30, 2002. Energy Technology Data Exchange (ETDEWEB) Dorris, S. E. 2002-12-02 This is a multiyear experimental research program that focuses on improving relevant material properties of high-T{sub c} superconductors (HTSs) and developing fabrication methods that can be transferred to industry for production of commercial conductors. The development of teaming relationships through agreements with industrial partners is a key element of the Argonne (ANL) program. 1. Practical superconductor development for electrical power applications - Argonne National Laboratory - quarterly report for the period ending June 30, 2001. Energy Technology Data Exchange (ETDEWEB) Dorris, S. E. 2001-08-21 This is a multiyear experimental research program focused on improving relevant material properties of high-T{sub c} superconductors (HTSs) and on development of fabrication methods that can be transferred to industry for production of commercial conductors. The development of teaming relationships through agreements with industrial partners is a key element of the Argonne (ANL) program. 2. Investigation of the vertical instability at the Argonne Intense Pulsed Neutron Source Science.gov (United States) Wang, Shaoheng; Dooling, J. C.; Harkay, K. C.; Kustom, R. L.; McMichael, G. E. 2009-10-01 The rapid cycling synchrotron of the intense pulsed neutron source at Argonne National Laboratory normally operates at an average beam current of 14 to 15μA, accelerating protons from 50 to 450 MeV 30 times per second. The beam current is limited by a single-bunch vertical instability that occurs in the later part of the 14 ms acceleration cycle. By analyzing turn-by-turn beam position monitor data, two cases of vertical beam centroid oscillations were discovered. The oscillations start from the tail of the bunch, build up, and develop toward the head of the bunch. The development stops near the bunch center and oscillations remain localized in the tail for a relatively long time (2-4 ms, 1-2×104 turns). This vertical instability is identified as the cause of the beam loss. We compared this instability with a head-tail instability that was purposely induced by switching off sextupole magnets. It appears that the observed vertical instability is different from the classical head-tail instability. 3. Argonne National Laboratory-East site environmental report for calendar year 1994 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Kolzow, R.G. 1995-05-01 This report discusses the results of the environmental protection program at Argonne National Laboratory-East (ANL) for 1994. To evaluate the effects of ANL operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the ANL site were analyzed and compared to applicable guidelines and standards. 
A variety of radionuclides was measured in air, surface water, groundwater, soil, grass, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and ANL effluent water were analyzed. External penetrating radiation doses were measured and the potential for radiation exposure to off-site population groups was estimated. The results of the surveillance program are interpreted in terms of the origin of the radioactive and chemical substances (natural, fallout, ANL, and other) and are compared with applicable environmental quality standards. A US Department of Energy (DOE) dose calculation methodology, based on International Commission on Radiological Protection (ICRP) recommendations and the CAP-88 version of the EPA-AIRDOSE/RADRISK COMPUTER CODE, is used in this report. The status of ANL environmental protection activities with respect to the various laws and regulations which govern waste handling and disposal is discussed. This report also discusses progress being made on environmental corrective actions and restoration projects. 4. Argonne National Laboratory--East site environmental report for calendar year 1990 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Duffy, T.L.; Moos, L.P. 1991-07-01 This report discusses the results of the environmental protection program at Argonne National Laboratory-East (ANL) for 1990. To evaluate the effects of ANL operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the ANL site were analyzed and compared to applicable guidelines and standards. A variety of radionuclides was measured in air, surface water, groundwater, soil, grass, bottom sediment, and milk samples. In addition, chemical constituents in surface water, groundwater, and ANL effluent water were analyzed. External penetrating radiation doses were measured and the potential for radiation exposure to off-site population groups was estimated. The results of the surveillance program are interpreted in terms of the origin of the radioactive and chemical substances (natural, fallout, ANL, and other) and are compared with applicable environmental quality standards. A US Department of Energy (DOE) dose calculation methodology, based on International Commission on Radiological Protection (ICRP) recommendations, is used in this report. The status of ANL environmental protection activities with respect to the various laws and regulations which govern waste handling and disposal is discussed. This report also discusses progress being made on environmental corrective actions and restoration projects from past activities. 5. Argonne National Laboratory-East site environmental report for calendar year 1996 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Kolzow, R.G. 1997-09-01 This report discusses the results of the environmental protection program at Argonne National Laboratory-East (ANL-E) for 1996. To evaluate the effects of ANL-E operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the ANL-E site were analyzed and compared to applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, soil, grass, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and ANL-E effluent water were analyzed. 
External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. The results of the surveillance program are interpreted in terms of the origin of the radioactive and chemical substances (natural, fallout, ANL-E, and other) and are compared with applicable environmental quality standards. A US Department of Energy dose calculation methodology, based on International Commission on Radiological Protection recommendations and the CAP-88 version of the EPA-AIRDOSE/RADRISK computer code, is used in this report. The status of ANL-E environmental protection activities with respect to the various laws and regulations that govern waste handling and disposal is discussed. This report also discusses progress being made on environmental corrective actions and restoration projects. 6. Argonne National Laboratory-East site environmental report for calendar year 1993 Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Kolzow, R.G. [Argonne National Lab., IL (United States). Environment and Waste Management Program 1994-05-01 This report discusses the results of the environmental protection program at Argonne National Laboratory-East (ANL) for 1993. To evaluate the effects of ANL operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the ANL site were analyzed and compared to applicable guidelines and standards. A variety of radionuclides was measured in air, surface water, groundwater, soil, grass, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and ANL effluent water were analyzed. External penetrating radiation doses were measured and the potential for radiation exposure to off-site population groups was estimated. The results of the surveillance program are interpreted in terms of the origin of the radioactive and chemical substances (natural, fallout, ANL, and other) and are compared with applicable environmental quality standards. A US Department of Energy (DOE) dose calculation methodology, based on International Commission on Radiological Protection (ICRP) recommendations and the CAP-88 version of the EPA-AIRDOSE/RADRISK computer code, is used in this report. The status of ANL environmental protection activities with respect to the various laws and regulations which govern waste handling and disposal is discussed. This report also discusses progress being made on environmental corrective actions and restoration projects from past activities. 7. Argonne National Laboratory-East site environmental report for calendar year 1998. Energy Technology Data Exchange (ETDEWEB) Golchert, N.W.; Kolzow, R.G. 1999-08-26 This report discusses the results of the environmental protection program at Argonne National Laboratory-East (ANL-E) for 1998. To evaluate the effects of ANL-E operations on the environment, samples of environmental media collected on the site, at the site boundary, and off the ANL-E site were analyzed and compared with applicable guidelines and standards. A variety of radionuclides were measured in air, surface water, on-site groundwater, and bottom sediment samples. In addition, chemical constituents in surface water, groundwater, and ANL-E effluent water were analyzed. External penetrating radiation doses were measured, and the potential for radiation exposure to off-site population groups was estimated. 
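The dose estimates mentioned here follow the usual surveillance pattern of combining measured (or modeled) concentrations with intake rates and dose coefficients; the expression below is a generic sketch of that bookkeeping, not the specific CAP-88 implementation, and every symbol is a placeholder.

```latex
% Generic committed effective dose estimate for an off-site receptor
E \;=\; \sum_{i} C_i \, U \, \mathrm{DC}_i
% C_i  : annual-average concentration of radionuclide i in air or water
% U    : annual intake rate for the pathway (e.g., volume of air breathed)
% DC_i : dose coefficient (dose per unit intake) for radionuclide i
```

In practice each exposure pathway contributes its own such sum, and the pathway totals are added to give the estimated dose to the off-site population group.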
Results are interpreted in terms of the origin of the radioactive and chemical substances (i.e., natural, fallout, ANL-E, and other) and are compared with applicable environmental quality standards. A US Department of Energy dose calculation methodology, based on International Commission on Radiological Protection recommendations and the US Environmental Protection Agency's CAP-88 (Clean Air Act Assessment Package-1988) computer code, was used in preparing this report. The status of ANL-E environmental protection activities with respect to the various laws and regulations that govern waste handling and disposal is discussed, along with the progress of environmental corrective actions and restoration projects. 8. Vitrification as a low-level radioactive mixed waste treatment technology at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Mazer, J.J.; No, Hyo J. 1995-08-01 Argonne National Laboratory-East (ANL-E) is developing plans to use vitrification to treat low-level radioactive mixed wastes (LLMW) generated onsite. The ultimate objective of this project is to install a full-scale vitrification system at ANL-E capable of processing the annual generation and historic stockpiles of selected LLMW streams. This project is currently in the process of identifying a range of processible glass compositions that can be produced from actual mixed wastes and additives, such as boric acid or borax. During the formulation of these glasses, there has been an emphasis on maximizing the waste content in the glass (70 to 90 wt %), reducing the overall final waste volume, and producing a stabilized low-level radioactive waste glass. Crucible glass studies with actual mixed waste streams have produced alkali borosilicate glasses that pass the Toxic Characteristic Leaching Procedure (TCLP) test. These same glass compositions, spiked with toxic metals well above the expected levels in actual wastes, also pass the TCLP test. These results provide compelling evidence that the vitrification system and the glass waste form will be robust enough to accommodate expected variations in the LLMW streams from ANL-E. Approximately 40 crucible melts will be studied to establish a compositional envelope for vitrifying ANL-E mixed wastes. Also being determined is the identity of volatilized metals or off-gases that will be generated. 9. Structural elucidation of Argonne premium coals: Molecular weights, heteroatom distributions and linkages between clusters Energy Technology Data Exchange (ETDEWEB) Winans, R.E.,; Kim, Y.; Hunt, J.E.; McBeth, R.L. 1995-12-31 The objective of this study is to create a statistically accurate picture of important structural features for a group of coals representing a broad rank range. Mass spectrometric techniques are used to study coals, coal extracts and chemically modified coals and extracts. Laser desorption mass spectrometry is used to determine molecular weight distributions. Desorption chemical ionization high resolution mass spectrometry provides detailed molecular information on compound classes of molecules is obtained using tandem mass spectrometry. These results are correlated with other direct studies on these samples such as solid NMR, XPS and X-ray absorption spectroscopy. From the complex sets of data, several general trends are emerging especially for heteroatom containing species. From a statistical point of view, heteroatoms must play important roles in the reactivity of all coals. 
Direct characterization of sulfur containing species in the Argonne coals has been reported from XANES analysis. Indirect methods used include: TG-FTIR and HRMS which rely on thermal desorption and pyrolysis to vaporize the samples. Both XANES and XPS data on nitrogen has been reported, but at this time, the XPS information is probably more reliable. Results from HRMS are discussed in this paper. Most other information on nitrogen is limited to analysis of liquefaction products. However, nitrogen can be important in influencing characteristics of coal liquids and as a source of NO{sub x}s in coal combustion. 10. Experimental results obtained with the positron-annihilation- radiation telescope of the Toulouse-Argonne collaboration Energy Technology Data Exchange (ETDEWEB) Naya, J.E.; von Ballmoos, P.; Albernhe, F.; Vedrenne, G. [Centre dEtude Spatial des Rayonnements, Toulouse (France); Smither, R.K.; Faiz, M.; Fernandez, P.B.; Graber, T. [Argonne National Lab., IL (United States) 1995-10-01 We present laboratory measurements obtained with a ground-based prototype of a focusing positron-annihilation-radiation telescope developed by the Toulouse-Argonne collaboration. This balloon-borne telescope has been designed to collect 511-keV photons with an extremely low instrumental background. The telescope features a Laue diffraction lens and a detector module containing a small array of germanium detectors. It will provide a combination of high spatial and energy resolution (15 arc sec and 2 keV, respectively) with a sensitivity of {approximately}3{times}10{sup {minus}5} photons cm{sup {minus}2}s{sup {minus}1}. These features will allow us to resolve a possible narrow 511-keV line both energetically and spatially within a Galactic center microquasar or in other broad-class annihilators. The ground-based prototype consists of a crystal lens holding small cubes of diffracting germanium crystals and a 3{times}3 germanium array that detects the concentrated beam in the focal plane. Measured performances of the instrument at different line energies (511 keV and 662 keV) are presented and compared with Monte-Carlo simulations. The advantages of a 3{times}3 Ge-detector array with respect to a standard-monoblock detector have been confirmed. The results obtained in the laboratory have strengthened interest in a crystal-diffraction telescope, offering new perspectives for die future of experimental gamma-ray astronomy. 11. Experimental results obtained with the positron-annihilation-radiation telescope of the Toulouse-Argonne collaboration Energy Technology Data Exchange (ETDEWEB) Naya, J.E. [Toulouse-3 Univ., 31 (France). Centre dEtude Spatiale des Rayonnements; Ballmoos, P. von [Toulouse-3 Univ., 31 (France). Centre dEtude Spatiale des Rayonnements; Smither, R.K. [Argonne National Lab., IL (United States). Advanced Photon Source Div.; Faiz, M. [Argonne National Lab., IL (United States). Advanced Photon Source Div.; Fernandez, P.B. [Argonne National Lab., IL (United States). Advanced Photon Source Div.; Graber, T. [Argonne National Lab., IL (United States). Advanced Photon Source Div.; Albernhe, F. [Toulouse-3 Univ., 31 (France). Centre dEtude Spatiale des Rayonnements; Vedrenne, G. [Toulouse-3 Univ., 31 (France). Centre dEtude Spatiale des Rayonnements 1996-04-11 We present laboratory measurements obtained with a ground-based prototype of the focusing positron-annihilation-radiation telescope developed by the Toulouse-Argonne collaboration. 
This instrument has been designed to collect 511-keV photons from astrophysical sources when operating as a balloon-borne observatory. The ground-based prototype consists of a crystal lens holding small cubes of diffracting germanium crystals and a 3 x 3 germanium array that detects the concentrated beam in the focal plane. Measured performances of the instrument at different line energies (511 and 662 keV) are presented and compared with Monte Carlo simulations; also the advantages of combining the lens with a detector array are discussed. The results obtained in the laboratory have strengthened interest in a crystal-diffraction telescope: the balloon instrument will provide a combination of high spatial and energy resolution (15 arc sec and 2 keV, respectively) with an extremely low instrumental background resulting in a sensitivity of {approximately}3{times}10{sup -5} photons cm{sup -2}s{sup -1}. These features will allow us to resolve a possible narrow 511-keV line both energetically and spatially within a Galactic center microquasar or in other broad-class annihilators. (orig.). 12. Inspection and monitoring plan, contaminated groundwater seeps 317/319/ENE Area, Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) NONE 1996-10-11 During the course of completing the Resource Conservation and Recovery Act (RCRA) Facility Investigation (RFI) in the 317/319/East-Northeast (ENE) Area of Argonne National Laboratory-East (ANL-E), groundwater was discovered moving to the surface through a series of groundwater seeps. The seeps are located in a ravine approximately 600 ft south of the ANL-E fence line in Waterfall Glen Forest Preserve. Samples of the seep water were collected and analyzed for selected parameters. Two of the five seeps sampled were found to contain detectable levels of organic contaminants. Three chemical species were identified: chloroform (14--25 {micro}g/L), carbon tetrachloride (56--340 {micro}g/L), and tetrachloroethylene (3--6 {micro}g/L). The other seeps did not contain detectable levels of volatile organics. The nature of the contaminants in the seeps will also be monitored on a regular basis. Samples of surface water flowing through the bottom of the ravine and groundwater emanating from the seeps will be collected and analyzed for chemical and radioactive constituents. The results of the routine sampling will be compared with the concentrations used in the risk assessment. If the concentrations exceed those used in the risk assessment, the risk calculations will be revised by using the higher numbers. This revised analysis will determine if additional actions are warranted. 13. The beam bunching and transport system of the Argonne positive ion injector Energy Technology Data Exchange (ETDEWEB) Den Hartog, P.K.; Bogaty, J.M.; Bollinger, L.M.; Clifft, B.E.; Pardo, R.C.; Shepard, K.W. 1989-01-01 A new positive ion injector (PII) is currently under construction at Argonne that will replace the existing 9-MV tandem electrostatic accelerator as an injector into ATLAS. It consists of an electron-cyclotron resonance-ion source on a 350-kV platform injecting into a superconducting linac optimized for very slow ({beta} less than or equal to 0.007 c) ions. This combination can potentially produce even higher quality heavy-ion beams than are currently available from the tandem since the emittance growth within the linac is largely determined by the quality of the bunching and beam transport.
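To put the "very slow ({beta} less than or equal to 0.007 c)" figure quoted above in perspective, the back-of-envelope sketch below converts that velocity into a kinetic energy per nucleon; it is an illustration only, not a number taken from the PII design.

```python
# Back-of-envelope: kinetic energy per nucleon of an ion moving at beta = 0.007 c,
# using the non-relativistic approximation (adequate at this speed).
beta = 0.007
amu_MeV = 931.494                                  # atomic mass unit in MeV/c^2
e_per_nucleon_keV = 0.5 * beta ** 2 * amu_MeV * 1000.0
print(f"beta = {beta} corresponds to ~{e_per_nucleon_keV:.0f} keV per nucleon")
```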
The system we have implemented uses a two-stage bunching system, composed of a 4-harmonic gridded buncher located on the ECR high-voltage platform and a room-temperature spiral-loaded buncher of novel design. A sinusoidal beam chopper is used for removal of tails. The beam transport is designed to provide a mass resolution of M/{delta}M > 250, and a doubly-isochronous beamline is used to minimize time spread due to path length differences. 4 refs., 2 figs. 14. Management of wildlife causing damage at Argonne National Laboratory-East, DuPage County, Illinois Energy Technology Data Exchange (ETDEWEB) NONE 1995-04-01 The DOE, after an independent review, has adopted an Environmental Assessment (EA) prepared by the US Department of Agriculture (USDA) which evaluates use of an Integrated Wildlife Damage Management approach at Argonne National Laboratory-East (ANL-E) in DuPage County, Illinois (April 1995). In 1994, the USDA issued a programmatic Environmental Impact Statement (EIS) that covers nationwide animal damage control activities. The EA for Management of Wildlife Causing Damage at ANL-E tiers off this programmatic EIS. The USDA wrote the EA as a result of DOE's request to USDA to prepare and implement a comprehensive Wildlife Damage Management Plan; the USDA has authority for animal damage control under the Animal Damage Control Act of 1931, as amended, and the Rural Development, Agriculture and Related Agencies Appropriations Act of 1988. DOE has determined, based on the analysis in the EA, that the proposed action does not constitute a major Federal action significantly affecting the quality of the human environment within the meaning of the National Environmental Policy Act of 1969 (NEPA). Therefore, the preparation of an EIS is not required. This report contains the Environmental Assessment, as well as the Finding of No Significant Impact (FONSI). 15. Software package as an information center product. [Activities of Argonne Code Center Energy Technology Data Exchange (ETDEWEB) Butler, M. K. 1977-01-01 The Argonne Code Center serves as a software exchange and information center for the U.S. Energy Research and Development Administration and the Nuclear Regulatory Commission. The goal of the Center's program is to provide a means for sharing of software among agency offices and contractors, and for transferring computing applications and technology, developed within the agencies, to the information-processing community. A major activity of the Code Center is the acquisition, review, testing, and maintenance of a collection of software--computer systems, applications programs, subroutines, modules, and data compilations--prepared by agency offices and contractors to meet programmatic needs. A brief review of the history of computer program libraries and software sharing is presented to place the Code Center activity in perspective. The state-of-the-art discussion starts off with an appropriate definition of the term software package, together with descriptions of recommended package contents and the Center's package evaluation activity. An effort is made to identify the various users of the product, to enumerate their individual needs, and to document the Center's efforts to meet these needs and the ongoing interaction with the user community. Desirable staff qualifications are considered, and packaging problems reviewed. The paper closes with a brief look at recent developments and a forecast of things to come. 2 tables. (RWR) 16.
Design, calibration, and operation of 220Rn stack effluent monitoring systems at Argonne National Laboratory. Science.gov (United States) Munyon, W J; Kretz, N D; Marchetti, F P 1994-09-01 A group of stack effluent monitoring systems have been developed to monitor discharges of 220Rn from a hot cell facility at Argonne National Laboratory. The stack monitors use flow-through scintillation cells and are completely microprocessor-based systems. A method for calibrating the stack monitors in the laboratory and in the field is described. A nominal calibration factor for the stack monitoring systems in use is 15.0 cts min-1 per kBq m-3 (0.56 cts min-1 per pCi L-1) +/- 26% at the 95% confidence level. The plate-out fraction of decay products in the stack monitor scintillation cells, without any pre-filtering, was found to be nominally 25% under normal operating conditions. When the sample was pre-filtered upstream of the scintillation cell, the observed cell plate-out fraction ranged from 16-22%, depending on the specific sampling conditions. The instantaneous 220Rn stack concentration can be underestimated or overestimated when the steady state condition established between 220Rn and its decay products in the scintillation cell is disrupted by sudden changes in the monitored 220Rn concentration. For long-term measurements, however, the time-averaged response of the monitor represents the steady state condition and leads to a reasonable estimate of the average 220Rn concentration during the monitoring period. 17. R D activities at Argonne National Laboratory for the application of base seismic isolation in nuclear facilities Energy Technology Data Exchange (ETDEWEB) Seidensticker, R.W. 1991-01-01 Argonne National Laboratory (ANL) has been deeply involved in the development of seismic isolation for use in nuclear facilities for the past decade. Initial focus of these efforts has been on the use of seismic isolation for advanced liquid metal reactors (LMR). Subsequent efforts in seismic isolation at ANL included a lead role in an accelerated development program for possible use of seismic isolation for the DOE's New Production reactors (NPR). Under funding provided by the National Science Foundation (NSF) Argonne is currently working with Shimizu in a joint United States-Japanese program on response of seismically-isolated buildings to actual earthquakes. The results of recent work in the seismic isolation program elements are described in this paper. The current Status of these programs is presented along with an assessment of work still needed to bring the benefits of this emerging technology to full potential in nuclear reactors and other nuclear facilities. 38 refs., 3 figs. 18. Argonne National Laboratory, High Energy Physics Division: Semiannual report of research activities, July 1, 1986-December 31, 1986 Energy Technology Data Exchange (ETDEWEB) 1987-01-01 This paper discusses the research activity of the High Energy Physics Division at the Argonne National Laboratory for the period, July 1986-December 1986. Some of the topics included in this report are: high resolution spectrometers, computational physics, spin physics, string theories, lattice gauge theory, proton decay, symmetry breaking, heavy flavor production, massive lepton pair production, collider physics, field theories, proton sources, and facility development. (LSP) 19. 
Combustion and leaching behavior of elements in the Argonne premium coal samples Science.gov (United States) Finkelman, R.B.; Palmer, C.A.; Krasnow, M.R.; Aruscavage, P. J.; Sellers, G.A.; Dulong, F.T. 1990-01-01 Eight Argonne Premium Coal samples and two other coal samples were used to observe the effects of combustion and leaching on 30 elements. The results were used to infer the modes of occurrence of these elements. Instrumental neutron activation analysis indicates that the effects of combustion and leaching on many elements varied markedly among the samples. As much as 90% of the selenium and bromine is volatilized from the bituminous coal samples, but substantially less is volatilized from the low-rank coals. We interpret the combustion and leaching behavior of these elements to indicate that they are associated with the organic fraction. Sodium, although nonvolatile, is ion-exchangeable in most samples, particularly in the low-rank coal samples where it is likely to be associated with the organic constituents. Potassium is primarily in an ion-exchangeable form in the Wyodak coal but is in HF-soluble phases (probably silicates) in most other samples. Cesium is in an unidentified HNO3-soluble phase in most samples. Virtually all the strontium and barium in the low-rank coal samples is removed by NH4OAc followed by HCl, indicating that these elements probably occur in both organic and inorganic phases. Most tungsten and tantalum are in insoluble phases, perhaps as oxides or in organic association. Hafnium is generally insoluble, but as much as 65% is HF soluble, perhaps due to the presence of very fine grained or metamict zircon. We interpret the leaching behavior of uranium to indicate its occurrence in chelates and its association with silicates and with zircon. Most of the rare-earth elements (REE) and thorium appear to be associated with phosphates. Differences in textural relationships may account for some of the differences in leaching behavior of the REE among samples. Zinc occurs predominantly in sphalerite. Either the remaining elements occur in several different modes of occurrence (scandium, iron), or the leaching data are equivocal (arsenic, antimony). 20. Flood-hazard analysis of four headwater streams draining the Argonne National Laboratory property, DuPage County, Illinois Science.gov (United States) Soong, David T.; Murphy, Elizabeth A.; Straub, Timothy D.; Zeeb, Hannah L. 2016-11-22 Results of a flood-hazard analysis conducted by the U.S. Geological Survey, in cooperation with the Argonne National Laboratory, for four headwater streams within the Argonne National Laboratory property indicate that the 1-percent and 0.2-percent annual exceedance probability floods would cause multiple roads to be overtopped. Results indicate that most of the effects on the infrastructure would be from flooding of Freund Brook. Flooding on the Northeast and Southeast Drainage Ways would be limited to overtopping of one road crossing for each of those streams.
The Northwest Drainage Way would be the least affected, with flooding expected to occur in open grass or forested areas. The Argonne Site Sustainability Plan outlined the development of hydrologic and hydraulic models and the creation of flood-plain maps of the existing site conditions as a first step in addressing resiliency to possible climate change impacts as required by Executive Order 13653 “Preparing the United States for the Impacts of Climate Change.” The Hydrological Simulation Program-FORTRAN is the hydrologic model used in the study, and the Hydrologic Engineering Center-River Analysis System (HEC-RAS) is the hydraulic model. The model results were verified by comparing simulated water-surface elevations to observed water-surface elevations measured at a network of five crest-stage gages on the four study streams. The comparison between crest-stage gage and simulated elevations resulted in an average absolute difference of 0.06 feet and a maximum difference of 0.19 feet. In addition to the flood-hazard model development and mapping, a qualitative stream assessment was conducted to evaluate stream channel and substrate conditions in the study reaches. This information can be used to evaluate erosion potential. 1. NNWSI [Nevada Nuclear Waste Storage Investigations] waste form testing at Argonne National Laboratory; Semiannual report, January--June 1988 Energy Technology Data Exchange (ETDEWEB) Bates, J.K.; Gerding, T.J.; Ebert, W.L.; Mazer, J.J.; Biwer, B.M. [Argonne National Lab., IL (USA) 1990-04-01 The Chemical Technology Division of Argonne National Laboratory is performing experiments in support of the waste package development of the Yucca Mountain Project (formerly the Nevada Nuclear Waste Storage Investigations Project). Experiments in progress include (1) the development and performance of a durability test in unsaturated conditions, (2) studies of waste form behavior in an irradiated atmosphere, (3) studies of behavior in water vapor, and (4) studies of naturally occurring glasses to be used as analogues for waste glass behavior. This report documents progress made during the period of January--June 1988. 21 refs., 37 figs., 12 tabs. 2. An evaluation of alternative reactor vessel cutting technologies for the experimental boiling water reactor at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Boing, L.E.; Henley, D.R. (Argonne National Lab., IL (USA)); Manion, W.J.; Gordon, J.W. (Nuclear Energy Services, Inc., Danbury, CT (USA)) 1989-12-01 Metal cutting techniques that can be used to segment the reactor pressure vessel of the Experimental Boiling Water Reactor (EBWR) at Argonne National Laboratory (ANL) have been evaluated by Nuclear Energy Services. Twelve cutting technologies are described in terms of their ability to perform the required task, their performance characteristics, environmental and radiological impacts, and cost and schedule considerations. Specific recommendations regarding which technology should ultimately be used by ANL are included. The selection of a cutting method was the responsibility of the decommissioning staff at ANL, who included a relative weighting of the parameters described in this document in their evaluation process. 73 refs., 26 figs., 69 tabs. 3. Global climate change and international security. Report on a conference held at Argonne National Laboratory, May 8--10, 1991 Energy Technology Data Exchange (ETDEWEB) Rice, M.
1991-12-31 On May 8--10, 1991, the Midwest Consortium of International Security Studies (MCISS) and Argonne National Laboratory cosponsored a conference on Global Climate Change and International Security. The aim was to bring together natural and social scientists to examine the economic, sociopolitical, and security implications of the climate changes predicted by the general circulation models developed by natural scientists. Five themes emerged from the papers and discussions: (1) general circulation models and predicted climate change; (2) the effects of climate change on agriculture, especially in the Third World; (3) economic implications of policies to reduce greenhouse gas emissions; (4) the sociopolitical consequences of climate change; and (5) the effect of climate change on global security. 4. Radiological and Environmental Research Division annual report. Fundamental molecular physics and chemistry, June 1975--September 1976. [Summaries of research activities at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) None 1976-01-01 A summary of research activities in the fundamental molecular physics and chemistry section at Argonne National Laboratory from July 1975 to September 1976 is presented. Of the 40 articles and abstracts given, 24 have been presented at conferences or have been published and will be separately abstracted. Abstracts of the remaining 16 items appear in this issue of ERA. (JFP) 5. Vibratory response of a mirror support/positioning system for the Advanced Photon Source project at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Basdogan, I.; Shu, Deming; Kuzay, T.M. [Argonne National Lab., IL (United States); Royston, T.J.; Shabana, A.A. [Univ. of Illinois, Chicago, IL (United States) 1996-08-01 The vibratory response of a typical mirror support/positioning system used at the experimental station of the Advanced Photon Source (APS) project at Argonne National Laboratory is investigated. Positioning precision and stability are especially critical when the supported mirror directs a high-intensity beam aimed at a distant target. Stability may be compromised by low level, low frequency seismic and facility-originated vibrations traveling through the ground and/or vibrations caused by flow-structure interactions in the mirror cooling system. The example case system has five positioning degrees of freedom through the use of precision actuators and rotary and linear bearings. These linkage devices result in complex, multi-dimensional vibratory behavior that is a function of the range of positioning configurations. A rigorous multibody dynamical approach is used for the development of the system equations. Initial results of the study, including estimates of natural frequencies and mode shapes, as well as limited parametric design studies, are presented. While the results reported here are for a particular system, the developed vibratory analysis approach is applicable to the wide range of high-precision optical positioning systems encountered at the APS and at other comparable facilities. 6. The Earth Microbiome Project: Meeting report of the "1 EMP meeting on sample selection and acquisition" at Argonne National Laboratory October 6 2010. 
Science.gov (United States) Gilbert, Jack A; Meyer, Folker; Jansson, Janet; Gordon, Jeff; Pace, Norman; Tiedje, James; Ley, Ruth; Fierer, Noah; Field, Dawn; Kyrpides, Nikos; Glöckner, Frank-Oliver; Klenk, Hans-Peter; Wommack, K Eric; Glass, Elizabeth; Docherty, Kathryn; Gallery, Rachel; Stevens, Rick; Knight, Rob 2010-12-25 This report details the outcome the first meeting of the Earth Microbiome Project to discuss sample selection and acquisition. The meeting, held at the Argonne National Laboratory on Wednesday October 6(th) 2010, focused on discussion of how to prioritize environmental samples for sequencing and metagenomic analysis as part of the global effort of the EMP to systematically determine the functional and phylogenetic diversity of microbial communities across the world. 7. Argonne Liquid-Metal Advanced Burner Reactor : components and in-vessel system thermal-hydraulic research and testing experience - pathway forward. Energy Technology Data Exchange (ETDEWEB) Kasza, K.; Grandy, C.; Chang, Y.; Khalil, H.; Nuclear Engineering Division 2007-06-30 This white paper provides an overview and status report of the thermal-hydraulic nuclear research and development, both experimental and computational, conducted predominantly at Argonne National Laboratory. Argonne from the early 1970s through the early 1990s was the Department of Energy's (DOE's) lead lab for thermal-hydraulic development of Liquid Metal Reactors (LMRs). During the 1970s and into the mid-1980s, Argonne conducted thermal-hydraulic studies and experiments on individual reactor components supporting the Experimental Breeder Reactor-II (EBR-II), Fast Flux Test Facility (FFTF), and the Clinch River Breeder Reactor (CRBR). From the mid-1980s and into the early 1990s, Argonne conducted studies on phenomena related to forced- and natural-convection thermal buoyancy in complete in-vessel models of the General Electric (GE) Prototype Reactor Inherently Safe Module (PRISM) and Rockwell International (RI) Sodium Advanced Fast Reactor (SAFR). These two reactor initiatives involved Argonne working closely with U.S. industry and DOE. This paper describes the very important impact of thermal hydraulics dominated by thermal buoyancy forces on reactor global operation and on the behavior/performance of individual components during postulated off-normal accident events with low flow. Utilizing Argonne's LMR expertise and design knowledge is vital to the further development of safe, reliable, and high-performance LMRs. Argonne believes there remains an important need for continued research and development on thermal-hydraulic design in support of DOE's and the international community's renewed thrust for developing and demonstrating the Global Nuclear Energy Partnership (GNEP) reactor(s) and the associated Argonne Liquid Metal-Advanced Burner Reactor (LM-ABR). This white paper highlights that further understanding is needed regarding reactor design under coolant low-flow events. These safety-related events are associated with the transition 8. How Argonne's Intense Pulsed Neutron Source came to life and gained its niche : the view from an ecosystem perspective. Energy Technology Data Exchange (ETDEWEB) Westfall, C.; Office of The Director 2008-02-25 At first glance the story of the Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory (ANL) appears to have followed a puzzling course. 
When researchers first proposed their ideas for an accelerator-driven neutron source for exploring the structure of materials through neutron scattering, the project seemed so promising that both Argonne managers and officials at the laboratory's funding agency, the Department of Energy (DOE), suggested that it be made larger and more expensive. But then, even though prototype building, testing, and initial construction went well a group of prominent DOE reviewers recommended in fall 1980 that it be killed, just months before it had been slated to begin operation, and DOE promptly accepted the recommendation. In response, Argonne's leadership declared the project was the laboratory's top priority and rallied to save it. In late 1982, thanks to another review panel led by the same scientist who had chaired the panel that had delivered the death sentence, the project was granted a reprieve. However, by the late 1980s, the IPNS was no longer top priority within the international materials science community, at Argonne, or within the DOE budget because prospects for another, larger materials science accelerator emerged. At just this point, the facility started to produce exciting scientific results. For the next two decades, the IPNS, its research, and its experts became valued resources at Argonne, within the U.S. national laboratory system, and within the international materials science community. Why did this Argonne project prosper and then almost suffer premature death, even though it promised (and later delivered) good science? How was it saved and how did it go on to have a long, prosperous life for more than a quarter of a century? In particular, what did an expert assessment of the quality of IPNS science have to do with its fate? Getting answers to such questions is important. The U.S. government 9. Gammasphere activities at Argonne Energy Technology Data Exchange (ETDEWEB) Khoo, T.L.; Carpenter, M.; Ahmad, I. [and others 1995-08-01 A powerful third-generation national gamma-ray facility consisting of 110 Ge detectors with BGO Compton suppressors is being constructed at LBL. After 18 months of operation there it will move to another site. This detector system combines calorimetric and multiplicity information with the excellent energy resolution, large efficiency, and high granularity of the Ge detectors. The large number of Ge detectors is essential for high- ({>=} 3) fold coincidences. Since each additional fold results in roughly an order-of-magnitude improvement in selectivity, this feature makes it possible to isolate cleanly weak structures, where new physics will undoubtedly lie. Since Gammasphere represents a national facility, we have made substantial contributions in its construction. In addition, T. L. Khoo is the Chairman of the Gammasphere Scientific Advisory Committee (formerly Steering Committee) which follows, and provides advice on, the construction of Gammasphere, while R.V. F. Janssens is Chairman of the Users Executive Committee. 10. Argonne Braille Project Energy Technology Data Exchange (ETDEWEB) Grunwald, A. 1977-07-01 Development of a braille machine is summarized. It is noted that the machine has reached the stage where development of the system appears both possible and desirable. Sections are included containing papers on computer translation and auxiliary equipment, and on letters and awards in recognition of the braille machine development. (JRD) 11. 
Inter-laboratory comparison II: CO{sub 2} isotherms measured on moisture-equilibrated Argonne premium coals at 55 C and up to 15 MPa Energy Technology Data Exchange (ETDEWEB) Goodman, A.L.; Romanov, V.; Schroeder, K.; White, C.M. [U.S. Department of Energy, National Energy Technology Laboratory, Pittsburgh, PA (United States); Busch, A.; Gensterblum, Y.; Krooss, B.M. [Institute of Geology and Geochemistry of Petroleum and Coal, RWTH Aachen University, Aachen (Germany); Bustin, R.M.; Chikatamarla, L. [University of British Columbia, Earth and Ocean Sciences, Vancouver (Canada); Day, S.; Duffy, G.J.; Sakurovs, R. [CSIRO Energy Technology, Newcastle, NSW (Australia); Fitzgerald, J.E.; Gasem, K.A.M.; Jing, C.; Mohammed, S.; Robinson, R.L. Jr. [School of Chemical Engineering, Oklahoma State University, Stillwater, OK (United States); Hartman, C.; Pratt, T. [TICORA Geosciences, Inc., 19000 West Hwy. 72, Suite 100, Arvada, CO 80007 (United States) 2007-11-22 Sorption isotherms, which describe the coal's gas storage capacity, are important for estimating the carbon sequestration potential of coal seams. This study investigated the inter-laboratory reproducibility of carbon dioxide isotherm measurements on moisture-equilibrated Argonne premium coal samples (Pocahontas No. 3, Illinois No. 6, and Beulah Zap). Six independent laboratories provided isotherm data on the three moisture-equilibrated coal samples at 55 C and pressures up to 15 MPa. Agreement among the laboratories was good up to 8 MPa. At the higher pressures, the data diverged significantly for two of the laboratories and coincided reasonably well for the other four. (author) 12. Fiscal years 1993 and 1994 decontamination and decommissioning activities photobriefing book for the Argonne National Laboratory-East Site, Technology Development Division, Decontamination and Decommissioning Projects Department Energy Technology Data Exchange (ETDEWEB) NONE 1995-12-31 This photobriefing book describes the ongoing decontamination and decommissioning projects at the Argonne National Laboratory (ANL)-East Site near Lemont, Illinois. The book is broken down into three sections: introduction, project descriptions, and summary. The introduction relates the history and mission of the Decontamination and Decommissioning (D and D) Projects Department at ANL-East. The second section describes the active ANL-East D and D projects, giving a project history and detailing fiscal year (FY) 1993 and FY 1994 accomplishments and FY 1995 goals. The final section summarizes the goals of the D and D Projects Department and the current program status. The D and D projects include the Experimental Boiling Water Reactor, the Chicago Pile-5 Reactor, hot cells, and plutonium gloveboxes. 73 figs. 13. Particulate Emissions Control using Advanced Filter Systems: Final Report for Argonne National Laboratory, Corning Inc. and Hyundai Motor Company CRADA Project Energy Technology Data Exchange (ETDEWEB) Seong, Hee Je [Argonne National Lab. (ANL), Argonne, IL (United States); Choi, Seungmok [Argonne National Lab. (ANL), Argonne, IL (United States) 2015-10-09 This is a three-way CRADA project carried out together with Corning, Inc. and Hyundai Motor Co. (HMC). The project aims to understand particulate emissions from gasoline direct-injection (GDI) engines and their physico-chemical properties. In addition, this project focuses on providing fundamental information about filtration and regeneration mechanisms occurring in gasoline particulate filter (GPF) systems.
For the work, Corning provides most advanced filter substrates for GPF applications and HMC provides three-way catalyst (TWC) coating services of these filter by way of a catalyst coating company. Then, Argonne National Laboratory characterizes fundamental behaviors of filtration and regeneration processes as well as evaluated TWC functionality for the coated filters. To examine aging impacts on TWC and GPF performance, the research team evaluates gaseous and particulate emissions as well as back-pressure increase with ash loading by using an engine-oil injection system to accelerate ash loading in TWC-coated GPFs. 14. Direct determination of sulfur species in coals from the Argonne premium sample program by solid sampling electrothermal vaporization inductively coupled plasma optical emission spectrometry. Science.gov (United States) Bauer, Daniela; Vogt, Thomas; Klinger, Mathias; Masset, Patrick Joseph; Otto, Matthias 2014-10-21 A new direct solid sampling method for speciation of sulfur in coals by electrothermal vaporization inductively coupled plasma optical emission spectrometry (ETV-ICP OES) is presented. On the basis of the controlled thermal decomposition of coal in an argon atmosphere, it is possible to determine the different sulfur species in addition to elemental sulfur in coals. For the assignment of the obtained peaks from the sulfur transient emission signal, several analytical techniques (reflected light microscopy, scanning electron microscopy with energy dispersive X-ray spectroscopy and X-ray diffraction) were used. The developed direct solid sampling method enables a good accuracy (relative standard deviation ≤ 6%), precision and was applied to determine the sulfur forms in the Argonne premium coals, varying in rank. The generated method is time- and cost-effective and well suited for the fast characterization of sulfur species in coal. It can be automated to a large extent and is applicable for process-accompanying analyses. 15. Physics with fast molecular-ion beams. Proceedings of workshop held at Argonne National Laboratory, August 20-21, 1979. [Workshop Energy Technology Data Exchange (ETDEWEB) Gemmell, D.S. (ed.) 1979-01-01 The Workshop on Physics with Fast Molecular-Ion Beams was held in the Physics Division, Argonne National Laboratory on August 20 and 21, 1979. The meeting brought together representatives from several groups studying the interactions of fast (MeV) molecular-ion beams with matter. By keeping the Workshop program sharply focussed on current work related to the interactions of fast molecular ions, it was made possible for the participants to engage in vigorous and detailed discussions concerning such specialized topics as molecular-ion dissociation and transmission, wake effects, ionic charge states, cluster stopping powers, beam-foil spectroscopy, electron-emissions studies with molecular-ion beams, and molecular-ion structure determinations. 16. Argonne's performance assessment of major facility systems to support semiconductor manufacturing by the National Security Agency/R Group, Ft. Meade, Maryland Energy Technology Data Exchange (ETDEWEB) Harrison, W.; Miller, G.M. 1990-12-01 The National Security Agency (NSA) was authorized in 1983 to construct a semiconductor and circuit-board manufacturing plant at its Ft. Meade, Maryland, facility. This facility was to become known as the Special Process Laboratories (SPL) building. 
Phase I construction was managed by the US Army Corps of Engineers, Baltimore District (USACE/BD) and commenced in January 1986. Phase I construction provided the basic building and support systems, such as the heating, ventilating, and air-conditioning system, the deionized-water and wastewater-treatment systems, and the high-purity-gas piping system. Phase II construction involved fitting the semiconductor manufacturing side of the building with manufacturing tools and enhancing various aspects of the Phase I construction. Phase II construction was managed by NSA and commenced in April 1989. Argonne National Laboratory (ANL) was contracted by USACE/BD midway through the Phase I construction period to provide quality-assured performance reviews of major facility systems in the SPL. Following completion of the Phase I construction, ANL continued its performance reviews under NSA sponsorship, focusing its attention on the enhancements to the various manufacturing support systems of interest. The purpose of this document is to provide a guide to the files that were generated by ANL during its term of technical assistance to USACE/BD and NSA and to explain the quality assurance program that was implemented when ANL conducted its performance reviews of the SPL building's systems. One set of the ANL project files is located at NSA, Ft. Meade, and two sets are at Argonne, Illinois. The ANL sets will be maintained until the year 2000, or for the 10-year estimated life of the project. 1 fig. 17. Application of Argonne's Glass Furnace Model to longhorn glass corporation oxy-fuel furnace for the production of amber glass. Energy Technology Data Exchange (ETDEWEB) Golchert, B.; Shell, J.; Jones, S.; Energy Systems; Shell Glass Consulting; Anheuser-Busch Packaging Group 2006-09-06 The objective of this project is to apply the Argonne National Laboratory's Glass Furnace Model (GFM) to the Longhorn oxy-fuel furnace to improve energy efficiency and to investigate the transport of gases released from the batch/melt into the exhaust. The model will make preliminary estimates of the local concentrations of water, carbon dioxide, elemental oxygen, and other subspecies in the entire combustion space as well as the concentration of these species in the furnace exhaust gas. This information, along with the computed temperature distribution in the combustion space may give indications on possible locations of crown corrosion. An investigation into the optimization of the furnace will be performed by varying several key parameters such as the burner firing pattern, exhaust number/size, and the boost usage (amount and distribution). Results from these parametric studies will be analyzed to determine more efficient methods of operating the furnace that reduce crown corrosion. Finally, computed results from the GFM will be qualitatively correlated to measured values, thus augmenting the validation of the GFM. 18. Treatment of EBR-I NaK mixed waste at Argonne National Laboratory and subsequent land disposal at the Idaho National Engineering and Environmental Laboratory. Energy Technology Data Exchange (ETDEWEB) Herrmann, S. D.; Buzzell, J. A.; Holzemer, M. J. 
1998-02-03 Sodium/potassium (NaK) liquid metal coolant, contaminated with fission products from the core meltdown of Experimental Breeder Reactor I (EBR-I) and classified as a mixed waste, has been deactivated and converted to a contact-handled, low-level waste at Argonne's Sodium Component Maintenance Shop and land disposed at the Radioactive Waste Management Complex. Treatment of the EBR-I NaK involved converting the sodium and potassium to their respective hydroxides via reaction with air and water, followed by conversion to their respective carbonates via reaction with carbon dioxide. The resultant aqueous carbonate solution was solidified in 55-gallon drums. Challenges in the NaK treatment involved processing a mixed waste which was incompletely characterized and difficult to handle. The NaK was highly radioactive, i.e., up to 4.5 R/hr on contact with the mixed waste drums. In addition, the potential existed for plutonium and toxic characteristic metals to be present in the NaK, resultant from the location of the partial core meltdown of EBR-I in 1955. Moreover, the NaK was susceptible to degradation after more than 40 years of storage in unmonitored conditions. Such degradation raised the possibility of energetic exothermic reactions between the liquid NaK and its crust, which could have consisted of potassium superoxide as well as hydrated sodium/potassium hydroxides. 19. Report on the workshop "Decay spectroscopy at CARIBU: advanced fuel cycle applications, nuclear structure and astrophysics". 14-16 April 2011, Argonne National Laboratory, USA. Energy Technology Data Exchange (ETDEWEB) Kondev, F.; Carpenter, M.P.; Chowdhury, P.; Clark, J.A.; Lister, C.J.; Nichols, A.L.; Swewryniak, D. (Nuclear Engineering Division); (Univ. of Massachusetts); (Univ. of Surrey) 2011-10-06 A workshop on 'Decay Spectroscopy at CARIBU: Advanced Fuel Cycle Applications, Nuclear Structure and Astrophysics' will be held at Argonne National Laboratory on April 14-16, 2011. The aim of the workshop is to discuss opportunities for decay studies at the Californium Rare Isotope Breeder Upgrade (CARIBU) of the ATLAS facility with emphasis on advanced fuel cycle (AFC) applications, nuclear structure and astrophysics research. The workshop will consist of review and contributed talks. Presentations by members of the local groups, outlining the status of relevant in-house projects and available equipment, will also be organized. Time will also be set aside to discuss and develop working collaborations for future decay studies at CARIBU. Topics of interest include: (1) Decay data of relevance to AFC applications with emphasis on reactor decay heat; (2) Discrete high-resolution gamma-ray spectroscopy following radioactive decay and related topics; (3) Calorimetric studies of neutron-rich fission fragments using the Total Absorption Gamma-Ray Spectrometry (TAGS) technique; (4) Beta-delayed neutron emissions and related topics; and (5) Decay data needs for nuclear astrophysics. 20. Studies of acute and chronic radiation injury at the Biological and Medical Research Division, Argonne National Laboratory, 1970-1992: The JANUS Program Survival and Pathology Data Energy Technology Data Exchange (ETDEWEB) Grahn, D.; Wright, B.J.; Carnes, B.A.; Williamson, F.S.; Fox, C. 1995-02-01 A research reactor for exclusive use in experimental radiobiology was designed and built at Argonne National Laboratory in the 1960s. It was located in a special addition to Building 202, which housed the Division of Biological and Medical Research.
Its location assured easy access for all users to the animal facilities, and it was also near the existing gamma-irradiation facilities. The water-cooled, heterogeneous 200-kW(th) reactor, named JANUS, became the focal point for a range of radiobiological studies gathered under the rubric of "the JANUS program". The program ran from about 1969 to 1992 and included research at all levels of biological organization, from subcellular to organism. More than a dozen moderate- to large-scale studies with the B6CF{sub 1} mouse were carried out; these focused on the late effects of whole-body exposure to gamma rays or fission neutrons, in matching exposure regimes. In broad terms, these studies collected data on survival and on the pathology observed at death. A deliberate effort was made to establish the cause of death. This archive describes these late-effects studies and their general findings. The database includes exposure parameters, time of death, and the gross pathology and histopathology in codified form. A series of appendices describes all pathology procedures and codes, treatment or irradiation codes, and the manner in which the data can be accessed in the ORACLE database management system. A series of tables also presents summaries of the individual experiments in terms of radiation quality, sample sizes at entry, mean survival times by sex, and number of gross pathology and histopathology records. 1. YUCCA Mountain Project - Argonne National Laboratory, Annual Progress Report, FY 1997 for activity WP 1221 unsaturated drip condition testing of spent fuel and unsaturated dissolution tests of glass. Energy Technology Data Exchange (ETDEWEB) Bates, J. K.; Buck, E. C.; Emery, J. W.; Finch, R. J.; Finn, P. A.; Fortner, J.; Hoh, J. C.; Mertz, C.; Neimark, L. A.; Wolf, S. F.; Wronkiewicz, D. J. 1998-09-18 This document reports on the work done by the Nuclear Waste Management Section of the Chemical Technology Division of Argonne National Laboratory in the period of October 1996 through September 1997. Studies have been performed to evaluate the behavior of nuclear waste glass and spent fuel samples under the unsaturated conditions (low-volume water contact) that are likely to exist in the Yucca Mountain environment being considered as a potential site for a high-level waste repository. Tests with actinide-doped waste glasses, in progress for over 11 years, indicate that the transuranic element release is dominated by colloids that continuously form and spall from the glass surface. The nature of the colloids that form in the glass and spent fuel testing programs is being investigated by dynamic light scattering to determine the size distribution, by autoradiography to determine the chemistry, and by zeta potential to measure the electrical properties of the colloids. Tests with UO{sub 2} have been ongoing for 12 years. They show that the oxidation of UO{sub 2} occurs rapidly, and the resulting paragenetic sequence of secondary phases forming on the sample surface is similar to that observed for uranium found in natural oxidizing environments. The reaction of spent fuel samples in conditions similar to those used with UO{sub 2} has been in progress for over six years, and the results suggest that spent fuel forms many of the same alteration products as UO{sub 2}. With spent fuel, the bulk of the reaction occurs via a through-grain reaction process, although grain boundary attack is sufficient to have reacted all of the grain boundary regions in the samples.
New test methods are under development to evaluate the behavior of spent fuel samples with intact cladding: the rate at which alteration and radionuclide release occurs when water penetrates fuel sections and whether the reaction causes the cladding to split. Alteration phases have been formed on fine grains of UO 2. Around the laboratories: Rutherford: Successful tests on bubble chamber target technique; Stanford (SLAC): New storage rings proposal; Berkeley: The HAPPE project to examine cosmic rays with superconducting magnets; The 60th birthday of Professor N.N. Bogolyubov; Argonne: Performance of the automatic film measuring system POLLY II CERN Multimedia 1969-01-01 3. Proceedings of the NEANDC/NEACRP specialists meeting on fast neutron fission cross sections of U-233, U-235, U-238, and Pu-239, June 28--30, 1976, at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Poenitz, W P; Guenther, P T 1976-01-01 Data files of all available absolute cross-section measurements of U-233, U-235, U-238, and Pu-239, and of the ratios of U-233, U-238, and Pu-239 to U-235, were assembled at Argonne National Laboratory for use by the two Working Groups. The data files of absolute cross sections also included data measured relative to one of the standard cross sections H(n,n), Li-6(n,{alpha}), and B-10(n,{alpha}), and the ratio data files included ratios derived from absolute values which were measured in an identical type of experiment by the same group of experimenters. The subject files (e.g., U-235-Absolute, or U-238/U-235-Ratio, etc.) consisted of "Sets." These sets contained the data from one experimental group which may have been published at different times. The assembling of the files was started with an extract from the CSISRS data files of the National Neutron Cross Section Center at the Brookhaven National Laboratory. Ratios were derived from quoted consistent sets of absolute cross sections, or from data which were actually measured as ratios but quoted as absolute values. The latter type of data was eliminated from the data files on absolute values. The files were improved by an extensive search for errors and data missing on the original CSISRS files at the time the extract was made. Other additions to the present subject files came from presentations made at this meeting and are described in the proceedings. 4. Studies of acute and chronic radiation injury at the Biological and Medical Research Division, Argonne National Laboratory, 1953-1970: Description of individual studies, data files, codes, and summaries of significant findings Energy Technology Data Exchange (ETDEWEB) Grahn, D.; Fox, C.; Wright, B.J.; Carnes, B.A. 1994-05-01 Between 1953 and 1970, studies on the long-term effects of external x-ray and {gamma} irradiation on inbred and hybrid mouse stocks were carried out at the Biological and Medical Research Division, Argonne National Laboratory. The results of these studies, plus the mating, litter, and pre-experimental stock records, were routinely coded on IBM cards for statistical analysis and record maintenance.
Also retained were the survival data from studies performed in the period 1943-1953 at the National Cancer Institute, National Institutes of Health, Bethesda, Maryland. The card-image data files have been corrected where necessary and refiled on hard disks for long-term storage and ease of accessibility. In this report, the individual studies and data files are described, and pertinent factors regarding caging, husbandry, radiation procedures, choice of animals, and other logistical details are summarized. Some of the findings are also presented. Descriptions of the different mouse stocks and hybrids are included in an appendix; more than three dozen stocks were involved in these studies. Two other appendices detail the data files in their original card-image format and the numerical codes used to describe the animals exit from an experiment and, for some studies, any associated pathologic findings. Tabular summaries of sample sizes, dose levels, and other variables are also given to assist investigators in their selection of data for analysis. The archive is open to any investigator with legitimate interests and a willingness to collaborate and acknowledge the source of the data and to recognize appropriate conditions or caveats. 5. Argonne Code Center: compilation of program abstracts Energy Technology Data Exchange (ETDEWEB) Butler, M.K.; DeBruler, M.; Edwards, H.S.; Harrison, C. Jr.; Hughes, C.E.; Jorgensen, R.; Legan, M.; Menozzi, T.; Ranzini, L.; Strecok, A.J. 1977-08-01 This publication is the eleventh supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the complete document ANL-7411 are as follows: preface, history and acknowledgements, abstract format, recommended program package contents, program classification guide and thesaurus, and the abstract collection. (RWR) 6. Argonne Code Center numerical control postprocessor inventory Energy Technology Data Exchange (ETDEWEB) Vollink, S. (comp.) 1977-12-21 A survey to identify numerical control postprocessors available at Department of Energy facilities is reported. The data are presented in the body of the report under the postprocessor identification. Information supplied includes the vendor name and address, the N/C and postprocessor languages, the machine tools and control unit supported, the computers used, and the identification of the DOE installation. The body of the report is followed by five indexes permitting users to refer to the postprocessor data by product number, DOE installation, machine tool, control unit, or computer. (RWR) 7. Argonne Bubble Experiment Thermal Model Development Energy Technology Data Exchange (ETDEWEB) Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States) 2015-12-03 This report will describe the Computational Fluid Dynamics (CFD) model that was developed to calculate the temperatures and gas volume fractions in the solution vessel during the irradiation. It is based on the model used to calculate temperatures and volume fractions in an annular vessel containing an aqueous solution of uranium . The experiment was repeated at several electron beam power levels, but the CFD analysis was performed only for the 12 kW irradiation, because this experiment came the closest to reaching a steady-state condition. The aim of the study is to compare results of the calculation with experimental measurements to determine the validity of the CFD model. 8. 
Argonne Code Center: benchmark problem book Energy Technology Data Exchange (ETDEWEB) 1977-06-01 This report is a supplement to the original report, published in 1968, as revised. The Benchmark Problem Book is intended to serve as a source book of solutions to mathematically well-defined problems for which either analytical or very accurate approximate solutions are known. This supplement contains problems in eight new areas: two-dimensional (R-z) reactor model; multidimensional (Hex-z) HTGR model; PWR thermal hydraulics--flow between two channels with different heat fluxes; multidimensional (x-y-z) LWR model; neutron transport in a cylindrical "black" rod; neutron transport in a BWR rod bundle; multidimensional (x-y-z) BWR model; and neutronic depletion benchmark problems. This supplement contains only the additional pages and those requiring modification. (RWR) 9. Argonne Code Center: Benchmark problem book. Energy Technology Data Exchange (ETDEWEB) None, None 1977-06-01 This book is an outgrowth of activities of the Computational Benchmark Problems Committee of the Mathematics and Computation Division of the American Nuclear Society. This is the second supplement of the original benchmark book, which was first published in February, 1968 and contained computational benchmark problems in four different areas. Supplement No. 1, which was published in December, 1972, contained corrections to the original benchmark book plus additional problems in three new areas. The current supplement, Supplement No. 2, contains problems in eight additional new areas. The objectives of computational benchmark work and the procedures used by the committee in pursuing the objectives are outlined in the original edition of the benchmark book (ANL-7416, February, 1968). The members of the committee who have made contributions to Supplement No. 2 are listed below, followed by the contributors to the earlier editions of the benchmark book. 10. Argonne Code Center: compilation of program abstracts Energy Technology Data Exchange (ETDEWEB) Butler, M.K.; DeBruler, M.; Edwards, H.S. 1976-08-01 This publication is the tenth supplement to, and revision of, ANL-7411. It contains additional abstracts and revisions to some earlier abstracts and other pages. Sections of the document are as follows: preface; history and acknowledgements; abstract format; recommended program package contents; program classification guide and thesaurus; and abstract collection. (RWR) 11. Argonne superconducting heavy-ion linac Energy Technology Data Exchange (ETDEWEB) Bollinger, L.M.; Benaroya, R.; Clifft, B.E.; Jaffey, A.H.; Johnson, K.W.; Khoe, T.K.; Scheibelhut, C.H.; Shepard, K.W.; Wangler, Y.Z. 1976-01-01 A summary is given of the status of a project to develop and build a small superconducting linac to boost the energy of heavy ions from an existing tandem electrostatic accelerator. The design of the system is well advanced, and construction of major components is expected to start in late 1976. The linac will consist of independently-phased resonators of the split-ring type made of niobium and operating at a temperature of 4.2 K. The resonance frequency is 97 MHz. Tests on full-scale resonators lead one to expect accelerating fields of approximately 4 MV/m within the resonators. The linac will be long enough to provide a voltage gain of at least 13.5 MV, which will allow ions with A less than or approximately 80 to be accelerated above the Coulomb barrier of any target.
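As a rough illustration of why the 13.5-MV voltage gain quoted above matters, the sketch below estimates the extra energy an ion of charge state q picks up and compares it with a textbook Coulomb-barrier estimate. The charge state and the Ni-58 + Sn-124 example pair are hypothetical choices for illustration, not values from the report, and the actual beam energy also includes the tandem contribution.

```python
# Back-of-envelope sketch (not from the report): extra energy gained by an ion of
# charge state q from a 13.5-MV effective voltage, compared with a textbook estimate
# of the Coulomb barrier for a hypothetical example pair (Ni-58 projectile, Sn-124 target).
def coulomb_barrier_MeV(z1, a1, z2, a2, r0_fm=1.44):
    # V_C ~ 1.44 MeV*fm * Z1*Z2 / (r0 * (A1^(1/3) + A2^(1/3)))
    return 1.44 * z1 * z2 / (r0_fm * (a1 ** (1.0 / 3.0) + a2 ** (1.0 / 3.0)))

q = 20                                 # hypothetical charge state in the linac
delta_e_MeV = q * 13.5                 # energy added by the 13.5-MV voltage gain alone
barrier_MeV = coulomb_barrier_MeV(28, 58, 50, 124)
print(f"linac adds ~{delta_e_MeV:.0f} MeV on top of the tandem energy; "
      f"Coulomb barrier for Ni-58 + Sn-124 is ~{barrier_MeV:.0f} MeV")
```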
The modular nature of the system will make future additions to the length relatively easy. A major design objective is to preserve the good quality of the tandem beam. This requires an exceedingly narrow beam pulse, which is achieved by bunching both before and after the tandem. Focusing by means of superconducting solenoids within the linac limits the radial size of the beam. An accelerating structure some 15 meters downstream from the linac will manipulate the longitudinal phase ellipse so as to provide the experimenter with either very good energy resolution ({delta}E/E approximately equal to 2 x 10{sup -4}) or very good time resolution ({delta}t approximately equal to 30 psec). 12. Argonne lectures on particle accelerator magnets Energy Technology Data Exchange (ETDEWEB) Devred, A 1999-09-01 The quest for elementary particles has promoted the development of particle accelerators producing beams of increasingly higher energies. In a synchrotron, the particle energy is directly proportional to the product of the machine's radius times the bending magnets' field strength. Present proton experiments at the TeV scale require facilities with circumferences ranging from a few to tens of kilometers, relying on a large number (several hundred to several thousand) of high-field dipole magnets and high-field-gradient quadrupole magnets. These electromagnets use high-current-density, low-critical-temperature superconducting cables and are cooled down to liquid helium temperature. They are among the most costly and the most challenging components of the machine. After explaining what the various types of accelerator magnets are and why they are needed (lecture 1), we briefly recall the origins of superconductivity and review the parameters of existing superconducting particle accelerators (lecture 2). Then, we review the superconducting materials that are available at industrial scale (chiefly, NbTi and Nb{sub 3}Sn) and explain in detail the manufacturing of NbTi wires and cables (lecture 3). We also present the difficulties of processing and insulating Nb{sub 3}Sn conductors, which so far have limited the use of this material in spite of its superior performance. We continue by discussing the two-dimensional current distributions which are the most appropriate for generating pure dipole and quadrupole fields and explain how these ideal distributions can be approximated by so-called cos{theta} and cos 2{theta} coil designs (lecture 4). We also present a few alternative designs which are being investigated and describe the difficulties of realizing coil ends. Next, we present the mechanical design concepts that are used in existing accelerator magnets (lecture 5) and describe how the magnets are assembled (lecture 6). Some of the toughest requirements on the performance of accelerator magnets are related to field quality; we summarize the different sources of field errors (lecture 7). We follow with a brief overview of the cooling schemes which have been implemented in the various accelerator rings and discuss the issues related to quench performance (lecture 8). Finally, we detail the quench protection schemes which are needed to ensure safe operation of the magnets (lecture 9). (author) 13. Surveys of research in the Chemistry Division, Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Grazis, B.M. (ed.)
13. Surveys of research in the Chemistry Division, Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Grazis, B.M. (ed.) 1992-01-01 Research reports are presented on reactive intermediates in condensed phase (radiation chemistry, photochemistry), electron transfer and energy conversion, photosynthesis and solar energy conversion, metal cluster chemistry, chemical dynamics in gas phase, photoionization-photoelectrons, characterization and reactivity of coal and coal macerals, premium coal sample program, chemical separations, heavy elements coordination chemistry, heavy elements photophysics/photochemistry, f-electron interactions, radiation chemistry of high-level wastes (gas generation in waste tanks), ultrafast molecular electronic devices, and nuclear medicine. Separate abstracts have been prepared. Accelerator activities and computer system/network services are also reported.
14. Surveys of research in the Chemistry Division, Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Grazis, B.M. (ed.) 1992-11-01 Research reports are presented on reactive intermediates in condensed phase (radiation chemistry, photochemistry), electron transfer and energy conversion, photosynthesis and solar energy conversion, metal cluster chemistry, chemical dynamics in gas phase, photoionization-photoelectrons, characterization and reactivity of coal and coal macerals, premium coal sample program, chemical separations, heavy elements coordination chemistry, heavy elements photophysics/photochemistry, f-electron interactions, radiation chemistry of high-level wastes (gas generation in waste tanks), ultrafast molecular electronic devices, and nuclear medicine. Separate abstracts have been prepared. Accelerator activities and computer system/network services are also reported.
15. 2009 Argonne National Laboratory Annual Illness and Injury Surveillance Report Energy Technology Data Exchange (ETDEWEB) U.S. Department of Energy, Office of Health, Safety and Security, Office of Health and Safety, Office of Illness and Injury Prevention Programs 2010-08-19 The U.S. Department of Energy's (DOE) commitment to assuring the health and safety of its workers includes the conduct of epidemiologic surveillance activities that provide an early warning system for health problems among workers. The Illness and Injury Surveillance Program monitors illnesses and health conditions that result in an absence of workdays, occupational injuries and illnesses, and disabilities and deaths among current workers.
16. Proactive maintenance initiatives at Argonne National Laboratory-West Energy Technology Data Exchange (ETDEWEB) Duckwitz, N.R.; Duncan, L.W.; Whipple, J.J. 1995-06-01 In the late 1980s, ANL-W Management foresaw a need to provide dedicated technical support for maintenance supervisors. Maintenance supervisors were facing increased challenges to ensure all environmental, safety, and waste management regulations were followed in daily maintenance activities. This increased burden was diverting supervisory time away from on-the-job supervision. Supervisors were finding less time for their mentor roles to ensure maintenance focused on finding and correcting root causes. Additionally, the traditional maintenance organization could not keep up with the explosion in predictive maintenance technologies. As a result, engineers were tasked to provide direct technical support to the maintenance organization. Today the maintenance technical support group consists of two mechanical engineers, two electrical engineers, and an I&C engineer. The group provides a readily available, quick-response resource for craftspeople and their supervisors.
They can and frequently do ask the support group for help to determine the root cause and to effect permanent fixes. Crafts and engineers work together informally to make an effective maintenance team. In addition to day-to-day problem solving, the technical support group has established several maintenance improvement programs for the site. These include vibration analysis of rotating machinery, testing of fuel for emergency diesel generators, improving techniques for testing of high efficiency particulate air (HEPA) filters, and capacity testing of UPS and emergency diesel starting batteries. These programs have increased equipment reliability, reduced conventional routine maintenance, reduced unexpected maintenance, and improved testing accuracy. This paper will discuss the interaction of the technical support group within the maintenance department. Additionally, the maintenance improvement programs will be presented along with actual cases encountered, the resolutions, and lessons learned.
17. Split ring resonator for the Argonne superconducting heavy ion booster Energy Technology Data Exchange (ETDEWEB) Shepard, K.W.; Scheibelhut, C.H.; Benaroya, R.; Bollinger, L.M. 1977-01-01 A split-ring resonator for use in the ANL superconducting heavy-ion linac was constructed and is being tested. The electromagnetic characteristics of the 98-MHz device are the same as the unit described earlier, but the housing is formed of a new material consisting of niobium sheet explosively bonded to copper. The niobium provides the superconducting path and the copper conducts heat to a small area cooled by liquid helium. This arrangement greatly simplified the cryogenic system. Fabrication of the housing was relatively simple, with the result that costs have been reduced substantially. The mechanical stability of the resonator and the performance of the demountable superconducting joints are significantly better than for the earlier unit.
18. Decontamination and decommissioning of the Argonne Thermal Source Reactor at Argonne National Laboratory - East project final report. Energy Technology Data Exchange (ETDEWEB) Fellhauer, C.; Garlock, G.; Mathiesen, J. 1998-12-02 The ATSR D&D Project was directed toward the following goals: (1) Removal of radioactive and hazardous materials associated with the ATSR Reactor facility; (2) Decontamination of the ATSR Reactor facility to unrestricted use levels; and (3) Documentation of all project activities affecting quality (i.e., waste packaging, instrument calibration, audit results, and personnel exposure). These goals had been set in order to eliminate the radiological and hazardous safety concerns inherent in the ATSR Reactor facility and to allow, upon completion of the project, unescorted and unmonitored access to the area. The reactor aluminum, reactor lead, graphite piles in room E-111, and the contaminated concrete in room E-102 were the primary areas of concern. NES, Incorporated (Danbury, CT) characterized the ATSR Reactor facility from January to March 1998. The characterization identified a total of thirteen radionuclides, with a total activity of 64.84 mCi (2.4 GBq). The primary radionuclides of concern were Co-60, Eu-152, Cs-137, and U-238. No additional radionuclides were identified during the D&D of the facility. The highest dose rates observed during the project were associated with the reactor tank and shield tank.
Contact radiation levels of 30 mrem/hr (0.3 mSv/hr) were measured on reactor internals during dismantlement of the reactor. A level of 3 mrem/hr (0.03 mSv/hr) was observed in a small area (hot spot) in room E-102. DOE Order 5480.2A establishes the maximum whole body exposure for occupational workers at 5 rem/yr (50 mSv/yr); the administrative limit at ANL-E is 1 rem/yr (10 mSv/yr).
19. Symptomatic therapy for knee osteoarthrosis: new possibilities Directory of Open Access Journals (Sweden) E S Tsvetkova 2011-01-01 Objective: to evaluate the efficacy and safety of the zinaxin-glucosamine sulfate (ZGS) complex versus meloxicam in patients with knee osteoarthrosis (KOA). Subjects and methods. The 12-week, prospective, open-label, randomized clinical and instrumental study enrolled 40 patients with bilateral Kellgren and Lawrence stage I-IV KOA on X-ray. The Western Ontario and McMaster Universities Osteoarthritis (WOMAC) index and knee joint ultrasound data were assessed during the study over time (before and 4 and 12 weeks after treatment). Results. The main symptoms of KOA were considerably alleviated during the administration of ZGS and meloxicam. The same effect was achieved at week 4 of treatment and increased throughout the study. Both groups showed significantly reduced stiffness (p<0.001), which indirectly confirmed that the compared drugs had anti-inflammatory activity. The changes in the WOMAC index by the functional activity scale and the total WOMAC score suggested an increase in the positive effect of ZGS and meloxicam. The total assessment of the results of treatment with ZGS showed improvement and considerable improvement in 100% of cases; meloxicam was demonstrated to have no effect in 5.3% of the patients. The anti-inflammatory activity of ZGS and meloxicam was evidenced by knee joint ultrasonography. Conclusion. The analgesic and anti-inflammatory effects of ZGS are comparable with those of meloxicam on pain, stiffness, and functional activity. Dynamic ultrasonography of the knee joints provides support for the fact that ZGS has anti-inflammatory properties. The high efficacy of ZGS is combined with the absence of adverse reactions. ZGS may be recommended as an alternative treatment for KOA.
20. New catalyst developed at Argonne National Laboratory could help diesels meet NOx deadlines CERN Document Server 2003-01-01 "A new catalyst could help auto makers meet the U.S. Environmental Protection Agency's deadline to eliminate 95 percent of nitrogen-oxide from diesel engine exhausts by 2007, while saving energy" (1 page).
1. TMI-2 instrument nozzle examinations at Argonne National Laboratory, February 1991--June 1993 Energy Technology Data Exchange (ETDEWEB) Neimark, L.A.; Shearer, T.L.; Purohit, A.; Hins, A.G. 1994-06-01 The accident at the Three Mile Island Unit 2 (TMI-2) reactor in March 1979 resulted in the relocation of approximately 19,000 kg of molten core material to the lower head of the reactor vessel. This material caused extensive damage to the instrument guide tubes and nozzles and was suspected of having caused significant metallurgical changes in the condition of the lower head itself. These changes and their effect on the margin-to-failure of the lower head became the focal point of an investigation co-sponsored by the United States Nuclear Regulatory Commission (NRC) and the Organization for Economic Co-operation and Development (OECD).
The TMI-2 Vessel Investigation Project (VIP) was formed to determine the metallurgical state of the vessel at the lower head and to assess the margin-to-failure of the vessel under the conditions existing during the accident. This report was prepared under the auspices of the OECD/NEA Three Mile Island Vessel Investigation Project. Under the auspices of the VIP, specimens of the reactor vessel were removed in February 1990 by MPR Associates, Inc. In addition to these specimens, fourteen instrument nozzle segments and two segments of instrument guide tubes were retrieved for metallurgical evaluation. The purpose of this evaluation was to provide additional information on the thermal conditions on the lower head that would influence the margin-to-failure, and to provide insight into the progression of the accident scenario, specifically the movement of the molten fuel across the lower head.
2. Status of the Argonne-Notre Dame BGO gamma-ray facility at ATLAS Energy Technology Data Exchange (ETDEWEB) Janssens, R.V.; Blumenthal, D.J.; Carpenter, M.P. [and others] 1995-08-01 The gamma-ray facility at ATLAS consists of (a) a 4π gamma-sum/multiplicity spectrometer with 50 BGO hexagonal elements (inner array) and (b) 12 Compton-suppressed germanium detectors (CSG) external to the inner array. During the past year the effort related to this facility continued on several fronts. Because of neutron damage, annealing was performed on eight Ge detectors. Three of these were annealed twice. The performance of the detectors was recovered in all but one case. In the latter, the FET was lost and the detector was returned to the manufacturer for repair. Maintenance and repairs had to be performed on several electronics modules and, in particular, on some of the CAMAC units. None of these problems affected an experiment for more than a couple of hours. Preventive maintenance was performed on the LN2 filling system (inspection of all filling lines and check of the various functions of the control modules). Ten of the CSGs were moved to the FMA for long periods of time on three different occasions and were used in conjunction with this device. Such a move takes about 1 day, does not require that the Ge detectors be warmed up, and has not resulted in any noticeable loss in performance of the CSGs. A new dedicated target chamber was designed and constructed. This chamber allows us to place a target upstream from the usual location, outside of the array. In this way it is possible to study decays from isomers after recoil from the target into a stopper located at the focus of the γ-ray facility.
3. V&V Of CFD Modeling Of The Argonne Bubble Experiment: FY15 Summary Report Energy Technology Data Exchange (ETDEWEB) Hoyt, Nathaniel C. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Wardle, Kent E. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Bailey, James L. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Basavarajappa, Manjunath [Univ. of Utah, Salt Lake City, UT (United States)] 2015-09-30 In support of the development of accelerator-driven production of the fission product Mo-99, computational fluid dynamics (CFD) simulations of an electron-beam irradiated, experimental-scale bubble chamber have been conducted in order to aid in interpretation of existing experimental results, provide additional insights into the physical phenomena, and develop predictive thermal hydraulic capabilities that can be applied to full-scale target solution vessels.
Toward that end, a custom hybrid Eulerian-Eulerian-Lagrangian multiphase solver was developed, and simulations have been performed on high-resolution meshes. Good agreement between experiments and simulations has been achieved, especially with respect to the prediction of the maximum temperature of the uranyl sulfate solution in the experimental vessel. These positive results suggest that the simulation methodology that has been developed will prove to be suitable to assist in the development of full-scale production hardware.
4. The path to the future: The role of science and technology at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Reck, R.A. 1996-04-30 Today some scientists are concerned that present budget considerations in Washington will make it impossible for the US to maintain its preeminence in important areas of science and technology. In the private sector there has been a demise of substantive R&D efforts through most of the major industries. For DOE, a lack of future support for science and technology would be an important issue because it could impair DOE's ability to solve problems in its major areas of concern: national security, energy, and the environment. In fact, some scientists maintain that, were the present trend to continue unabated, it could lead to a national security issue. Preeminence in science and technology plays a critical role in our nation's position as the leader of world democracy. In contrast with this point of view of gloom and doom, however, in this presentation I hope to bring to you what I see as an exciting message of good news. Today I will list the important opportunities and challenges for the future that I note for ANL, the leadership role that I believe ANL can play, and the qualities that will help our laboratory to maintain its status as an outstanding DOE National Laboratory.
5. Technical information for the relocation and treatment of Argonne National Laboratory drums Energy Technology Data Exchange (ETDEWEB) Clinton, R. 1997-08-07 The technical information in this document is provided to evaluate waste drums stored in Solid Waste Project Management facilities that contain organic and potentially flammable gases. The document provides an evaluation of the planned venting of potentially flammable gases and the potential risks associated with the task.
6. Workshop report - Bridging the Climate Information held at Argonne National Laboratory, September 29, 1999 Energy Technology Data Exchange (ETDEWEB) Taylor, J. 2000-03-10 In a recent report entitled The Regional Impacts of Climate Change, it was concluded that the technological capacity to adapt to climate change is likely to be readily available in North America, but its application will be realized only if the necessary information is available (sufficiently far in advance in relation to the planning horizons and lifetimes of investments) and the institutional and financial capacity to manage change exists. The report also acknowledged that one of the key factors that limit the ability to understand the vulnerability of subregions of North America to climate change, and to develop and implement adaptive strategies to reduce that vulnerability, is the lack of accurate regional projections of climate change, including extreme events.
In particular, scientists need to account for the physical-geographic characteristics (e.g., the Great Lakes, coastlines, and mountain ranges) that play a significant role in the North American climate and also need to consider the feedback between the biosphere and atmosphere.
7. Risk assessment of seeps from the 317 Area of Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) NONE 1996-09-17 Chlorinated hydrocarbon contaminants have recently been detected in groundwater seeps on forest preserve property south of the 317 Area at ANL. The 317 Area is near ANL's southern boundary and is considered the source of the contamination. Five seeps are about 200 m south of the ANL property line and about the same distance from the nearest developed trails in the forest preserve. Conservative assumptions were used to assess the possibility of adverse health effects associated with forest preserve seeps impacted by the 317 Area. Results indicate that neither cancer risks nor noncarcinogenic effects associated with exposures to seep contaminants are a concern; thus, the area is safe for all visitors. The ecological impact study found that the presence of the three contaminants (CCl4, CHCl3, tetrachloroethylene) in the seep water does not pose a risk to biota in the area.
8. Integrating Culture into the Russian Language Curriculum at Argonne Elementary School Science.gov (United States) Teper, Mila 2010-01-01 One cannot teach language without teaching culture; culture is the context for language learning. Cultural instruction must be integrated into all lessons throughout the year, not just taught as mini-lessons in order. Teachers cannot expect their students to gain intercultural competencies through activities that are not embedded in cultural…
9. The Advanced Photon Source: A national synchrotron radiation research facility at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) NONE 1995-10-01 The vision of the APS sprang from prospective users, whose unflagging support the project has enjoyed throughout the decade it has taken to make this facility a reality. Perhaps the most extraordinary aspect of synchrotron radiation research is the extensive and diverse scientific makeup of the user community. From this primordial soup of scientists exchanging ideas and information come the collaborative and interdisciplinary accomplishments that no individual alone could produce. So, unlike the solitary Roentgen, scientists are engaged in a collective and dynamic enterprise with the potential to see and understand the structures of the most complex materials that nature or man can produce--and which underlie virtually all modern technologies. This booklet provides scientists and laymen alike with a sense of both the extraordinary history of x-rays and the knowledge they have produced, as well as the potential for future discovery contained in the APS--a source a million million times brighter than the Roentgen tube.
10. Effects of vertical girder realignment in the Argonne APS storage ring. Energy Technology Data Exchange (ETDEWEB) Lessner, E. 1999-04-14 The effects of vertical girder misalignments on the vertical orbit of the Advanced Photon Source (APS) storage ring are studied. Partial sector-realignment is prioritized in terms of the closed-orbit distortions due to misalignments of the corresponding girders in the sectors. A virtual girder-displacement (VGD) method is developed that allows the effects of a girder realignment to be tested prior to physically moving the girder. The method can also be used to anticipate the corrector strengths needed to restore the beam orbit after a realignment. Simulation results are compared to experimental results and found to reproduce the latter quite closely. Predicted corrector strengths are also found to be close to the actual local corrector strengths after a proof-of-principle two-sector realignment was performed.
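For context on why girder misalignments distort the closed orbit: in linear optics, a quadrupole riding on a vertically displaced girder gives the beam a dipole kick, and a single kick θ produces a closed-orbit distortion x(s) = θ·sqrt(β(s)·β₀)·cos(πν − |Δψ|)/(2·sin(πν)). The sketch below uses this textbook expression with made-up placeholder values (quad strength, offset, beta functions, tune); it is not the VGD method described above, nor actual APS lattice data.

import math

def quad_misalignment_kick(k1l: float, dy_m: float) -> float:
    """A quadrupole of integrated strength k1l (1/m) displaced transversely by dy (m)
    deflects the on-axis beam by an angle theta = k1l * dy (rad)."""
    return k1l * dy_m

def closed_orbit_shift(theta: float, beta_kick: float, beta_obs: float,
                       dpsi: float, tune: float) -> float:
    """Closed-orbit displacement (m) at an observation point, for a single kick theta (rad):
       x = theta * sqrt(beta_obs * beta_kick) * cos(pi*tune - |dpsi|) / (2 * sin(pi*tune)),
    where dpsi is the betatron phase advance (rad) from the kick to the observation point."""
    return (theta * math.sqrt(beta_obs * beta_kick)
            * math.cos(math.pi * tune - abs(dpsi)) / (2.0 * math.sin(math.pi * tune)))

if __name__ == "__main__":
    theta = quad_misalignment_kick(k1l=0.5, dy_m=100e-6)  # 0.5 1/m quad on a 100 um offset girder
    x = closed_orbit_shift(theta, beta_kick=20.0, beta_obs=10.0, dpsi=3.0, tune=19.27)
    print(f"kick = {theta * 1e6:.1f} urad, closed-orbit shift = {x * 1e6:.1f} um")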
11. Meeting of the Advisory Committee to the Argonne Cancer Research Hospital Program Energy Technology Data Exchange (ETDEWEB) Jacobson, L. O.; Doyle, M. 1965-03-05 Abstracts are provided for presentations in these areas: studies in immunology; studies in molecular biology; experimental and clinical studies of cell differentiation; studies on the blood; general metabolic studies; problems in scanning; clinical and experimental studies on the effects of radiation; and studies with high energy radiations.
12. Status of the Argonne superconducting-linac heavy-ion energy booster Energy Technology Data Exchange (ETDEWEB) Aron, J.; Benaroya, R.; Bollinger, L.M.; Clifft, B.E.; Henning, W.; Johnson, K.W.; Nixon, J.M.; Markovich, P.; Shepard, K.W. 1979-01-01 A superconducting linac is being constructed to provide an energy booster for heavy ions from an FN tandem. By late 1980 the linac will consist of 24 independently-phased superconducting resonators, and will provide an effective accelerating potential of more than 25 MV. While the linac is under construction, completed sections are being used to provide useful beam for nuclear physics experiments. In the most recent run with beam (June 1979), an eight-resonator array provided an effective accelerating potential of 9.3 MV. Operation of a 12-resonator array is scheduled to begin in October 1979.
13. Superconducting low-velocity linac for the Argonne positive-ion injector Energy Technology Data Exchange (ETDEWEB) Shepard, K.W.; Markovich, P.K.; Zinkann, G.P.; Clifft, B.; Benaroya, R. 1989-01-01 A low-velocity superconducting linac has been developed as part of a positive-ion injector system, which is replacing a 9 MV tandem as the injector for the ATLAS accelerator. The linac consists of an independently phased array of resonators, and is designed to accelerate various ions over a velocity range .008 < v/c < .06. The resonator array is formed of four different types of superconducting interdigital structures. The linac is being constructed in three phases, each of which will cover the full velocity range. Successive phases will increase the total accelerating potential and permit heavier ions to be accelerated. Assembly of the first phase was completed in early 1989. In initial tests with beam, a five-resonator array provided approximately 3.5 MV of accelerating potential and operated without difficulty for several hundred hours. The second phase is scheduled for completion in late 1989, and will increase the accelerating potential to more than 8 MV. 5 refs., 2 figs., 1 tab.
14. Nanomaterials research in Chicago - the center for nanoscale materials at Argonne National Laboratory. Energy Technology Data Exchange (ETDEWEB) Gibson, J. M.
2001-09-27 This report contains information about the following: (1) Regional center planned for nanofabrication and nanocharacterization; (2) Capabilities of the unique x-ray nanoprobe facility at the Advanced Photon Source; (3) Overview of research programs in nanomagnetism, ferroelectrics, nanocrystalline diamond, photochemistry, and others; and (4) opportunities for collaborative research.
15. Development of a Monolithic Research Reactor Fuel Type at Argonne National Laboratory Energy Technology Data Exchange (ETDEWEB) Clark, C.R.; Briggs, R.J. 2004-10-06 The Reduced Enrichment for Research and Test Reactors (RERTR) program has been tasked with the conversion of research reactors from highly enriched to low-enriched uranium (LEU). To convert several high power reactors, monolithic fuel, a new fuel type, is being developed. This fuel type replaces the standard fuel dispersion with a fuel alloy foil, which allows for fuel densities far in excess of that found in dispersion fuel. The single-piece fuel foil also has a significantly smaller interface area between the fuel and the aluminum in the plate than the standard fuel type, limiting the amount of detrimental fuel-aluminum interaction that can occur. Implementation of monolithic fuel is dependent on the development of a suitable fabrication method, as traditional roll-bonding techniques are inadequate.
16. Advanced Reciprocating Engine Systems (ARES) Research at Argonne National Laboratory. A Report Energy Technology Data Exchange (ETDEWEB) Gupta, Sreenath [Argonne National Lab. (ANL), Argonne, IL (United States)]; Biruduganti, Muni [Argonne National Lab. (ANL), Argonne, IL (United States)]; Bihari, Bipin [Argonne National Lab. (ANL), Argonne, IL (United States)]; Sekar, Raj [Argonne National Lab. (ANL), Argonne, IL (United States)] 2014-08-01 The goals of these experiments were to determine the potential of employing spectral measurements to deduce combustion metrics such as HRR, combustion temperatures, and equivalence ratios in a natural gas-fired reciprocating engine. A laser-ignited, natural gas-fired single-cylinder research engine was operated at various equivalence ratios between 0.6 and 1.0, while varying the EGR levels between 0% and maximum to thereby ensure steady combustion. Crank angle-resolved spectral signatures were collected over 266-795 nm, encompassing chemiluminescence emissions from OH*, CH*, and, predominantly, CO2* species. Further, laser-induced gas breakdown spectra were recorded under various engine operating conditions.
17. Decontamination and dismantlement of the building 594 waste ion exchange facility at Argonne National Laboratory-East project final report. Energy Technology Data Exchange (ETDEWEB) Wiese, E. C. 1998-11-23 The Building 594 D&D Project was directed toward the following goals: Removal of any radioactive and hazardous materials associated with the Waste Ion Exchange Facility; Decontamination of the Waste Ion Exchange Facility to unrestricted use levels; Demolition of Building 594; and Documentation of all project activities affecting quality (i.e., waste packaging, instrument calibration, audit results, and personnel exposure). These goals had been set in order to eliminate the radiological and hazardous safety concerns inherent in the Waste Ion Exchange Facility and to allow, upon completion of the project, unescorted and unmonitored access to the area.
The ion exchange system and the resin contained in the system were the primary areas of concern, while the condition of the building which housed the system was of secondary concern. ANL-E health physics technicians characterized the Building 594 Waste Ion Exchange Facility in September 1996. The characterization identified a total of three radionuclides present in the Waste Ion Exchange Facility with a total activity of less than 5 µCi (175 kBq). The radionuclides of concern were Co-60, Cs-137, and Am-241. The highest dose rates observed during the project were associated with the resin in the exchange vessels. DOE Order 5480.2A establishes the maximum whole body exposure for occupational workers at 5 rem/yr (50 mSv/yr); the administrative limit at ANL-E is 1 rem/yr (10 mSv/yr).
18. Argonne National Laboratory Expedited Site Characterization: First International Symposium on Integrated Technical Approaches to Site Characterization - Proceedings Volume Energy Technology Data Exchange (ETDEWEB) NONE 1998-06-08 Laboratory applications for the analysis of PCBs (polychlorinated biphenyls) in environmental matrices such as soil/sediment/sludge and oil/waste oil were evaluated for potential reduction in waste, source reduction, and alternative techniques for final determination. As a consequence, new procedures were studied for solvent substitution, miniaturization of extraction and cleanups, minimization of reagent consumption, reduction of cost per analysis, and reduction of time. These new procedures provide adequate data that meet all the performance requirements for the determination of PCBs. Use of the new procedures reduced costs for all sample preparation techniques. Time and cost were also reduced by combining the new sample preparation procedures with the power of fast gas chromatography. Separation of Aroclor 1254 was achieved in less than 6 min by using DB-1 and SPB-608 columns. With the greatly shortened run times, reproducibility can be tested quickly and consequently with low cost. With performance-based methodology, the applications presented here can be applied now, without waiting for regulatory approval.
19. Charge exchange reaction by Reggeon exchange and W⁺W⁻-fusion CERN Document Server Schicker, R. 2014-01-01 Charge exchange reactions at high energies are examined. The existing cross section data on the Reggeon-induced reaction pp → n + Δ⁺⁺ taken at the ZGS and ISR accelerators are extrapolated to the energies of the RHIC and LHC colliders. The interest in the charge exchange reaction induced by W±-fusion is presented, and the corresponding QCD background is examined.
20. Proceedings of a seminar on the potential for LMFBR boiling detection by acoustic/neutronic monitoring, Argonne, Illinois, April 8--9, 1976 Energy Technology Data Exchange (ETDEWEB) Carey, W.M.; Albrecht, R.W. 1976-06-01 A seminar involving ten technical presentations by principal investigators was held to assess the current scope of ERDA-sponsored programs to determine the feasibility of sodium-boiling detection in LMFBRs and to establish areas in need of additional research and development. The consensus was that (1) feasibility of boiling detection by acoustic, neutronic, and acoustic/neutronic monitors has been demonstrated in U.S.
and European programs; (2) additional research and development is needed in areas of reactor noise, cavitation, and the effects of noncondensible gases on sound source levels and transmission; (3) the role of acoustic/neutronic monitors from the standpoint of reactor surveillance rather than reactor safety is a viable approach to be adopted; and, in particular, (4) a need exists for an operational LMFBR demonstration system. Each paper has been separately abstracted and indexed. (DG)
1. What Were the Causes of the Delay of the 79th Division Capturing Montfaucon during the Meuse-Argonne Offensive in World War I? Science.gov (United States) 2011-06-10 on the Western Front near Rougemont, in the Aisne-Marne Offensive, near Fismes on the Vesle River, in the Oise-Aisne Offensive, and in the Avecourt...107th Trench Mortar Battery, was detached from the brigade and located at a rest area behind the Oise-Aisne front, where they remained for a majority
2. Radiological and Environmental Research Division annual report: Fundamental Molecular Physics and Chemistry, October 1977-September 1978. [Summary of research activities at Argonne National Laboratory] Energy Technology Data Exchange (ETDEWEB) Rowland, R. E.; Inokuti, Mitio [eds.] 1978-01-01 Research presented includes 32 papers, six of which have appeared previously in ERA, and 26 appear in this issue of ERA. Molecular physics and chemistry, including photoionization, molecular properties, oscillator strengths, scattering, shape resonances, and photoelectrons, are covered. A list of publications is included. (JFP)
3. U.S. Army Chemical Corps Historical Studies, Gas Warfare in World War I: The 1st Division in the Meuse-Argonne 1-12 October 1918 Science.gov (United States) 1957-08-01 October, instead of 30 September-1 October. On the night of the relief the enemy began shelling at 10:00 p.m. October 1st and continued until 4:00... Juvin and Landre, cut off the Argonne front, and attack in rear of the Brunhild position to effect decisive action on the Group Argonne." At...attack, fired on the Sommerance area and north of St Georges et Landres, Juvin, Marcq, and Champigneulle. Company C, 1st Gas Regiment, was ordered
4. Decontamination of hot cells K-1, K-3, M-1, M-3, and A-1, M-Wing, Building 200: Project final report Argonne National Laboratory-East Energy Technology Data Exchange (ETDEWEB) Cheever, C.L.; Rose, R.W. 1996-09-01 The purpose of this project was to remove radioactively contaminated materials and equipment from the hot cells, to decontaminate the hot cells, and to dispose of the radioactive waste. The goal was to reduce stack releases of Rn-220 and to place the hot cells in an emptied, decontaminated condition with less than 10 µSv/h (1 mrem/h) general radiation background.
The following actions were needed: organize and mobilize a decontamination team; prepare decontamination plans and procedures; perform safety analyses to ensure protection of the workers, public, and environment; remotely size-reduce, package, and remove radioactive materials and equipment for waste disposal; remotely decontaminate surfaces to reduce hot cell radiation background levels to allow personnel entries using supplied air and full protective suits; disassemble and package the remaining radioactive materials and equipment using hands-on techniques; decontaminate hot cell surfaces to remove loose radioactive contaminants and to attain a less than 10 µSv/h (1 mrem/h) general background level; document and dispose of the radioactive and mixed waste; and conduct a final radiological survey.
5. Proceedings of the international conference on radiation test facilities for the CTR surface and materials program, Argonne, Illinois, July 15--18, 1975 Energy Technology Data Exchange (ETDEWEB) 1975-01-01 Separate abstracts were prepared for the 42 included papers. Abstracts for all 42 papers appear in Nuclear Science Abstracts, while abstracts for 32 of the papers appear in ERDA Research Abstracts. (MOW)
6. Decontamination and dismantlement of the building 200/205 pneumatic transfer tube at Argonne National Laboratory-East project final report. Energy Technology Data Exchange (ETDEWEB) Wiese, E. C. 1998-12-11 The Building 200/205 Pneumatic Transfer Tube D&D Project was directed toward the following goals: Remove any radioactive and hazardous materials associated with the transfer tube; Survey the transfer tube to identify any external contamination; Remove the transfer tube and package for disposal; Survey the soil and sand surrounding the transfer tube for any contamination; and Backfill the trench in which the tube sat and restore the area to its original condition. These goals had been set in order to eliminate the radiological and hazardous safety concerns inherent in the buried transfer tube and to allow, upon completion of the project, the removal of this project from the ANL-E action item list. The physical condition of the transfer tube and possible nuclear fuel samples lost in the tube were the primary areas of concern, while the exact location of the transfer tube was of secondary concern. ANL-E health physics technicians collected characterization data from the ends of the Building 200/205 pneumatic transfer tube in January 1998. The characterization surveys identified contamination to a level of 67,000 dpm (1,117 Bq) (β/γ) and 20,000 dpm (333 Bq) α smearable at the opening.
7. A Study of Negative Kaon-Proton Interactions at 6.5 GeV/c: Final States with Neutral Anti-Kaon and Neutral Anti-Kaon Mesonic Resonance Production. Science.gov (United States) Herder, Leland Earl In this dissertation, we present a study of K⁻p interactions at 6.5 GeV/c, using the 12-foot bubble chamber at the Argonne ZGS. This study is based upon events from the two-prong-plus-Vee topology, where the Vee fits a Kₛ⁰ decay. Resonance production in the following final states was studied: K⁻p → K̄⁰π⁻p (7C fit) (I); K⁻p → K̄⁰π⁰π⁻p (4C fit) (II); K⁻p → K̄⁰π⁺π⁻n (4C fit) (III). For the 7C reaction (I), we found signals for K*(892), K*(1430), and K*(1780) with cross sections 181 ± 22, 41.2 ± 6, and 8.4 ± 2.9 µb, respectively. Production of the K*(2080) was not significant in our data.
The partial waves contributing to the production of the K̄⁰π⁻ system from threshold up to 1.7 GeV were studied. The principal conclusions are: (i) K*(892) and K*(1430) production is dominated by natural parity exchange, (ii) the ratio of unnatural- to natural-parity exchange increases with the resonance mass, consistent with the predictions of a triple Regge model, (iii) there is evidence for a broad 0⁺ (Kπ) S-wave enhancement, with considerable S-D and S-P interference, centered at 1.2 GeV, and (iv) the m = 2 amplitudes of (Kπ) production are negligible. The two 4C reactions (II and III) were found to have considerably more background and ambiguities than reaction I, as expected. Cross sections for various two-body resonances were measured and compared with results obtained at neighboring energies. The m = 0 amplitudes for the production of the K*(892) resonance at low t' were found to be large for both reactions II and III. Reaction II shows evidence for double peripheral production of the K* resonance from an analysis of slope parameters of the differential cross sections. The K*π systems of reactions II and III were studied and cross sections were obtained. The Kρ channels do not exhibit any significant signals. The spin and parity of the K*π systems
8. Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm CERN Document Server Senger, Christian; Bossert, Martin; Zyablov, Victor V. 2011-01-01 Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L=2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered as state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows the tradeoff factor L to be adjusted over a range of values, and decoding can be repeated in multiple trials. We show that BMD decoders with z_BMD decoding trials can result in lower residual codeword error probability than GS decoders with z_GS trials, if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.
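The error/erasure tradeoff factor L mentioned in the abstract above corresponds to the classical decoding guarantee that a pattern of t errors and e erasures is correctable whenever L·t + e ≤ d − 1, with L = 2 for bounded-minimum-distance decoding and a smaller effective L for more powerful decoders. A minimal sketch of that budget check, using an illustrative Reed-Solomon code; the parameters are examples, not taken from the paper.

def decodable(t_errors: int, e_erasures: int, d_min: int, tradeoff_l: float = 2.0) -> bool:
    """True if a pattern of t errors and e erasures lies inside the guaranteed
    decoding region L*t + e <= d_min - 1 for a decoder with tradeoff factor L."""
    return tradeoff_l * t_errors + e_erasures <= d_min - 1

if __name__ == "__main__":
    d = 33  # e.g., an RS(255, 223) code has minimum distance d = n - k + 1 = 33
    for L in (2.0, 1.5):
        max_errors = max(t for t in range(d) if decodable(t, 0, d, L))
        print(f"L = {L}: up to {max_errors} errors correctable when there are no erasures")

With L = 2 this reproduces the familiar 2t + e ≤ d − 1 bounded-distance condition; lowering L buys extra error-correction capability at the same minimum distance, which is the kind of tradeoff the paper studies across multiple decoding trials.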
9. High Throughput Facility Data.gov (United States) Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to material and materials chemistry development. The facility allows scientists...
10. Materials Engineering Research Facility (MERF) Data.gov (United States) Federal Laboratory Consortium — Argonne's Materials Engineering Research Facility (MERF) enables engineers to develop manufacturing processes for producing advanced battery materials in sufficient...
11. High energy physics. Progress report, March 1, 1980-February 28, 1981. [Bonner Nuclear Labs., Rice Univ., 3/1/80-2/28/81] Energy Technology Data Exchange (ETDEWEB) Phillips, G.C.; Roberts, J.B. 1981-01-01 During this contract year, results of several ANL/ZGS experiments have been published, and the data analysis of three others is in various stages of completion. PPT VI was refurbished and made into a portable polarized target system. Several new experiments have been proposed. Separate abstracts were prepared for two experiments that have produced data but have not yet been published. 7 figures. (RWR)
12. Accelerating Polarized Protons with Siberian Snakes Energy Technology Data Exchange (ETDEWEB) Krisch, A.D. [Randall Laboratory of Physics, University of Michigan, Ann Arbor (United States)] 1998-05-01 There is a brief review of the history of polarized proton beams and the unexpected and still unexplained large transverse spin effects found in high energy proton spin experiments at the ZGS, AGS, and Fermilab. Next there is a detailed discussion of Siberian snakes and some of their tests at the IUCF Cooler Ring. Finally there is a report on the use of Siberian snakes in some possible high energy polarized proton beams at RHIC, HERA, and Fermilab. (author) 19 refs, 12 figs
13. On the polarized beam acceleration in medium energy synchrotrons Energy Technology Data Exchange (ETDEWEB) Lee, S.Y. 1992-12-31 This lecture note reviews the physics of spin motion in a synchrotron, spin depolarization mechanisms of spin resonances, and methods of overcoming the spin resonances during acceleration. Techniques used in accelerating polarized ions in the low/medium energy synchrotrons, such as the ZGS, the AGS, SATURNE, and the KEK PS and PS Booster, are discussed. Problems related to polarized proton acceleration with snakes or a partial snake are also examined.
14. Proceedings of the NEACRP/IAEA Specialists meeting on the international comparison calculation of a large sodium-cooled fast breeder reactor at Argonne National Laboratory on February 7-9, 1978 Energy Technology Data Exchange (ETDEWEB) LeSage, L.G.; McKnight, R.D.; Wade, D.C.; Freese, K.E.; Collins, P.J. 1980-08-01 The results of an international comparison calculation of a large (1250 MWe) LMFBR benchmark model are presented and discussed. Eight reactor configurations were calculated. Parameters included with the comparison were: eigenvalue, k-infinity, neutron balance data, breeding reaction rate ratios, reactivity worths, central control rod worth, regional sodium void reactivity, core Doppler, and effective delayed neutron fraction. Ten countries participated in the comparison, and sixteen solutions were contributed. The discussion focuses on the variation in parameter values, the degree of consistency among the various parameters and solutions, and the identification of unexpected results. The results are displayed and discussed both by individual participants and by groupings of participants (e.g., results from adjusted data sets versus non-adjusted data sets).
15. Institutional plan. Fiscal year, 1997--2002 Energy Technology Data Exchange (ETDEWEB) NONE 1996-10-01 The Institutional Plan is the culmination of Argonne's annual planning cycle. The document outlines what Argonne National Laboratory (ANL) regards as the optimal development of programs and resources in the context of national research and development needs, the missions of the Department of Energy and Argonne National Laboratory, and pertinent resource constraints. It is the product of ANL's internal planning process and extensive discussions with DOE managers. Strategic planning is important for all of Argonne's programs, and coordination of planning for the entire institution is crucial. This Institutional Plan will increasingly reflect the planning initiatives that have recently been implemented.
16. Polarized protons and Siberian snakes Energy Technology Data Exchange (ETDEWEB) Krisch, A.D. [Michigan Univ., Ann Arbor, MI (United States). Randall Lab. of Physics] 1999-07-01 The lecture started with a brief review of the history of polarized proton beams.
Then it described the unexpected and still unexplained large transverse spin effects found in high energy proton spin experiments at the ZGS, AGS, and Fermilab. Next there was a detailed discussion of Siberian snakes and some of their tests at the IUCF Cooler Ring. Finally there was a review of the use of Siberian Snakes in some possible high energy polarized proton beams at RHIC, HERA, and Fermilab. Since a similar lecture is being published elsewhere, this manuscript will only contain this brief summary and the references. (author)
17. High energy physics studies progress report. Part I. Experimental program. [Summaries of research activities at Ohio State University] Energy Technology Data Exchange (ETDEWEB) 1977-01-01 The experimental program of research will continue, including assembly of an experiment at Fermilab (E-351) to measure decay lifetimes, with tagged emulsion, of charmed particles produced by high energy neutrinos. A data-taking run will take place in the coming fiscal year. Participation in the neutrino experiment E-310, Fermilab-Harvard-Pennsylvania-Rutgers-Wisconsin, will also continue. Data analysis from several experiments performed in the recent past at the ZGS (ANL) is in progress and will be pursued. These experiments are E-397, E-420, and E-428, performed with the Charged and Neutral Spectrometer, and E-347, performed with the σ_β Spectrometer. Plans are in the making to collaborate with a polarized proton experiment at the ZGS. New approaches to "third generation" neutrino experiments at Fermilab are being discussed by the whole high energy group. Ideas of pursuing experiments at the AGS-BNL with the σ_β Spectrometer are explored. The theoretical research program covers topics of current interest in particle theory which will be investigated in the coming year; namely, the role of instantons in quantum chromodynamics, Higgs Lagrangians involving scalar fields, phenomenology of neutrino physics and in particular the nature of trimuon production, higher-order symmetries like SU(3) × U(1), SU(5), and SU(6), dynamics of high energy diffractive scattering, and classical solutions to the gauge field theories.
18. 3D field calculation of the GEM prototype magnet and comparison with measurements Energy Technology Data Exchange (ETDEWEB) Lari, R.J. 1983-10-28 The proposed 4 GeV Electron Microtron (GEM) is designed to fill the existing buildings left vacant by the demise of the Zero Gradient Synchrotron (ZGS) accelerator. One of the six large dipole magnets is shown as well as the first 10 electron orbits. A 3-orbit prototype magnet has been built. The stepped edge of the magnet is to keep the beam exiting perpendicular to the pole. The end guards that wrap around the main coils are joined together by the 3 shield plates. The auxiliary coils are needed to keep the end guards and shield plates from saturating. A 0.3 cm Purcell filter air gap exists between the pole and the yoke. Can anyone question this being a truly three-dimensional magnetostatic problem? The computer program TOSCA, developed at the Rutherford Appleton Laboratory by the Computing Applications Group, was used to calculate this magnet, and the results have been compared with measurements.
19. Advanced Photon Source (APS) Data.gov (United States) Federal Laboratory Consortium — The Advanced Photon Source (APS) at the U.S. Department of Energy's Argonne National Laboratory provides this nation's (in fact, this hemisphere's) brightest storage...
20. Transportation Research Analysis Computing Center (TRACC) Data.gov (United States) Federal Laboratory Consortium — Argonne National Laboratory initiated a multi-year program with the US Department of Transportation (USDOT) in October 2006 to establish the Transportation Research...
1. 2016 ALCF Science Highlights Energy Technology Data Exchange (ETDEWEB) Collins, James R. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Cerny, Beth A. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Wolf, Laura [Argonne National Lab. (ANL), Argonne, IL (United States)]; Coffey, Richard M. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States)] 2016-01-01 The Argonne Leadership Computing Facility provides supercomputing capabilities to the scientific and engineering community to advance fundamental discovery and understanding in a broad range of disciplines.
2. Electron Microscopy Center (EMC) Data.gov (United States) Federal Laboratory Consortium — The Electron Microscopy Center (EMC) at Argonne National Laboratory develops and maintains unique capabilities for electron beam characterization and applies those...
3. Phase and Texture Evolution in Chemically Derived PZT Thin Films on Pt Substrates Science.gov (United States) 2014-09-01 [Only author-affiliation text was captured for this record: Advanced Photon Source, Argonne National Laboratory, Argonne, Illinois; Electronic, Optical, and Nano Materials Department, Sandia National Laboratories, Albuquerque, New Mexico; RF MEMS & mm Scale Robotics, U.S. Army Research Laboratory, Adelphi, Maryland.]
4. Advanced Borobond™ Shields for Nuclear Materials Containment and Borobond™ Immobilization of Volatile Fission Products - Final CRADA Report Energy Technology Data Exchange (ETDEWEB) Wagh, Arun S. [Argonne National Lab. (ANL), Argonne, IL (United States)] 2016-05-19 Borobond is a company-proprietary material developed by the CRADA partner in collaboration with Argonne, and is based on Argonne's Ceramicrete technology. It is being used by DOE for nuclear materials safe storage, and Boron Products, LLC is the manufacturer and supplier of Borobond.
5. Energy Levels and Predicted Absorption Spectra of Rare-Earth Ions in Rare-Earth Arsenides Science.gov (United States) 1992-09-01 [Only distribution-list text was captured for this record, naming the Departamento de Física da UFPE, Recife, Brazil (A. da Gama, G. F. de Sá, O. L. Malta); Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439 (W. T. Carnall); and Howard University.]
6. Baseline Tests of the Electra Van Model 1000 Electric Vehicle. Science.gov (United States) 1980-07-01 [Only distribution-list text was captured for this record, naming Argonne National Laboratory, 9700 South Cass Avenue, Argonne, IL 60439, along with addresses in Arlington, TX, and St. Louis, MO.]
7. Pilot-scale studies of soil vapor extraction and bioventing for remediation of a gasoline spill at Cameron Station, Alexandria, Virginia Energy Technology Data Exchange (ETDEWEB) Harrison, W.; Joss, C.J.; Martino, L.E. [and others] 1994-07-01 Approximately 10,000 gal of spilled gasoline and unknown amounts of trichloroethylene and benzene were discovered at the US Army's Cameron Station facility. Because the base is to be closed and turned over to the city of Alexandria in 1995, the Army sought the most rapid and cost-effective means of spill remediation.
At the request of the Baltimore District of the US Army Corps of Engineers, Argonne conducted a pilot-scale study to determine the feasibility of vapor extraction and bioventing for resolving remediation problems and to critique a private firm's vapor-extraction design. Argonne staff, working with academic and private-sector participants, designed and implemented a new systems approach to sampling, analysis, and risk assessment. The US Geological Survey's AIRFLOW model was adapted for the study to simulate the performance of possible remediation designs. A commercial vapor-extraction machine was used to remove nearly 500 gal of gasoline from Argonne-installed horizontal wells. By incorporating numerous design comments from the Argonne project team, field personnel improved the system's performance. Argonne staff also determined that bioventing stimulated indigenous bacteria to bioremediate the gasoline spill. The Corps of Engineers will use Argonne's pilot-study approach to evaluate remediation systems at field operation sites in several states.
8. Annual Report of Groundwater Monitoring at Centralia, Kansas, in 2012 Energy Technology Data Exchange (ETDEWEB) LaFreniere, Lorraine M. [Argonne National Lab. (ANL), Argonne, IL (United States)] 2013-06-01 Periodic sampling is performed at Centralia, Kansas, on behalf of the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) by Argonne National Laboratory. The sampling is currently (2009-2012) conducted in accord with a monitoring program approved by the Kansas Department of Health and Environment (KDHE 2009). The objective is to monitor levels of carbon tetrachloride contamination identified in the groundwater sitewide (Argonne 2003, 2004, 2005a), as well as the response to the interim measure (IM) pilot test that is in progress (Argonne 2007b). This report provides a summary of the findings for groundwater inspection in Centralia.
9. Supplemental site inspection for Air Force Plant 59, Johnson City, New York, Volume 1: Investigation report Energy Technology Data Exchange (ETDEWEB) Nashold, B.; Rosenblatt, D.; Hau, J. [and others] 1995-08-01 This summary describes a Supplemental Site Inspection (SSI) conducted by Argonne National Laboratory (ANL) at Air Force Plant 59 (AFP 59) in Johnson City, New York. All required data pertaining to this project were entered by ANL into the Air Force-wide Installation Restoration Program Information System (IRPIMS) computer format and submitted to an appropriate authority. The work was sponsored by the United States Air Force as part of its Installation Restoration Program (IRP). Previous studies had revealed the presence of contaminants at the site and identified several potential contaminant sources. Argonne's study was conducted to answer questions raised by earlier investigations.
10. Water Resource Assessment of Geothermal Resources and Water Use in Geopressured Geothermal Systems Energy Technology Data Exchange (ETDEWEB) Clark, C. E. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Harto, C. B. [Argonne National Lab. (ANL), Argonne, IL (United States)]; Troppe, W. A. [Argonne National Lab. (ANL), Argonne, IL (United States)] 2011-09-01 This technical report from Argonne National Laboratory presents an assessment of fresh water demand for future growth in utility-scale geothermal power generation and an analysis of fresh water use in low-temperature geopressured geothermal power generation systems.
11. Qwest provides high-speed network for major research institutions in Illinois: eight campuses interconnected to foster collaborative, virtual research CERN Multimedia 2003-01-01 Qwest Communications International Inc. today announced that Argonne National Laboratory has deployed Qwest's broadband fiber optic network for the Illinois Wired/Wireless Infrastructure for Research and Education (I-WIRE) project (1 page).
12. High Energy Physics Division: Semiannual report of research activities, July 1, 1988--December 31, 1988 Energy Technology Data Exchange (ETDEWEB) 1988-01-01 This paper briefly discusses progress at Argonne National Laboratory in the following areas: Experimental Program; Theory Program; Experimental Facilities Research; Accelerator Research and Development; and SSC Detector Research and Development.
13. Ian Foster named 2003 Innovator of the Year CERN Multimedia Studt, T. 2003-01-01 "From his roots in New Zealand to his joint positions at the Univ. of Chicago and Argonne National Laboratory, Foster has set the standards for how distributed computing systems should work" (2 pages).
14. EVALUATION OF CHEMICALLY BONDED PHOSPHATE CERAMICS FOR MERCURY STABILIZATION OF A MIXED SYNTHETIC WASTE Science.gov (United States) This experimental study was conducted to evaluate the stabilization and encapsulation technique developed by Argonne National Laboratory, called the Chemically Bonded Phosphate Ceramics technology, for Hg- and HgCl2-contaminated synthetic waste materials. Leachability ...
15. Summary of Expansions, Updates, and Results in GREET® 2016 Suite of Models Energy Technology Data Exchange (ETDEWEB) None, None 2016-10-01 This report documents the technical content of the expansions and updates in Argonne National Laboratory's GREET® 2016 release and provides references and links to key documents related to these expansions and updates.
16. Characterization of the Tribological Behavior of Oxide-Based NanoMaterials: Final CRADA Report Energy Technology Data Exchange (ETDEWEB) Fenske, George [Argonne National Lab. (ANL), Argonne, IL (United States)] 2017-01-04 Under the Argonne/Pixelligent cooperative research and development agreement (CRADA - C1200801), Argonne performed lab-scale tribological tests on proprietary nano-sized ZrO2 material developed by Pixelligent. Pixelligent utilized their proprietary process to prepare variants with different surfactants at different loadings in different carrier fluids for testing and evaluation at Argonne. Argonne applied a range of benchtop tribological test rigs to evaluate friction and wear under a range of conditions (contact geometry, loads, speeds, and temperature) that simulated a broad range of conditions experienced in engines and driveline components. Post-test analysis of worn surfaces provided information on the structure and chemistry of the tribofilms produced during the tests.
17. Operating plan FY 1998 Energy Technology Data Exchange (ETDEWEB) NONE 1997-10-01 This document is the first edition of Argonne's new Operating Plan. The Operating Plan complements the strategic planning in the Laboratory's Institutional Plan by focusing on activities that are being pursued in the immediate fiscal year, FY 1998. It reflects planning that has been done to date, and it will serve in the future as a resource and a benchmark for understanding the Laboratory's performance. The heart of the Institutional Plan is the set of major research initiatives that the Laboratory is proposing to implement in future years.
In contrast, this Operating Plan focuses on Argonne's ongoing R&D programs, along with cost-saving measures and other improvements being implemented in Laboratory support operations. 18. Irradiation effects on reactor structural materials. Quarterly progress report, May--July 1965 Energy Technology Data Exchange (ETDEWEB) None 1965-08-01 This document contains reports of the following laboratories: Argonne, Battelle, Brookhaven, General Atomic Division, Materials Research Lab., Naval Research Lab., Nuclear Materials and Propulsion Operation, ORNL, Pacific Northwest Lab., and Phillips Petroleum Co. (DLC) 19. Demonstration and Deployment Strategy Workshop: Summary Energy Technology Data Exchange (ETDEWEB) none, 2014-05-01 This report is based on the proceedings of the U.S. Department of Energy Bioenergy Technologies Office Demonstration and Deployment Strategy Workshop, held on March 12–13, 2014, at Argonne National Laboratory. 20. Ramona, Kansas, Corrective Action Monitoring Report for 2014 Energy Technology Data Exchange (ETDEWEB) LaFreniere, Lorraine M. [Argonne National Lab. (ANL), Argonne, IL (United States) 2015-06-01 This report describes groundwater monitoring in 2014 for the property at Ramona, Kansas, on which a grain storage facility was formerly operated by the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA). The monitoring was implemented on behalf of the CCC/USDA by Argonne National Laboratory and was conducted as specified in the Long-Term Groundwater Monitoring Plan (Argonne 2012) approved by the Kansas Department of Health and Environment (KDHE 2012). 1. Incorporating Bioenergy in Sustainable Landscape Designs Workshop Two: Agricultural Landscapes Energy Technology Data Exchange (ETDEWEB) None 2015-08-01 The Bioenergy Technologies Office hosted two workshops on Incorporating Bioenergy in Sustainable Landscape Designs with Oak Ridge and Argonne National Laboratories in 2014. The second workshop focused on agricultural landscapes and took place in Argonne, IL from June 24—26, 2014. The workshop brought together experts to discuss how landscape design can contribute to the deployment and assessment of sustainable bioenergy. This report summarizes the discussions that occurred at this particular workshop. 2. Systematic Investigation of Organic Photovoltaic Cell Charge Injection/Performance Modulation by Dipolar Organosilane Interfacial Layers Science.gov (United States) 2013-08-13 Michael J. Bedzyk and Tobin J. Marks, Department of Chemistry and the Argonne-Northwestern Solar Energy Research Center, Northwestern University. 3. Institutional plan: Supplements, FY 1998--FY 2003 Energy Technology Data Exchange (ETDEWEB) NONE 1997-07-01 This supplement contains summaries of the projects, both DOE and non-DOE, that the Argonne National Laboratory conducts. DOE projects include nuclear energy, energy research, energy efficiency, fossil energy, defense programs, non-proliferation and national security, environmental management, and civilian radioactive waste management. The second part of this report contains descriptions of the Argonne National Lab site and facilities. Budget information is also presented. 4. 
Ramona, Kansas, Corrective Action Monitoring Report for 2012 Energy Technology Data Exchange (ETDEWEB) LaFreniere, Lorraine M. [Argonne National Lab. (ANL), Argonne, IL (United States) 2014-04-01 This Monitoring Report describes groundwater monitoring for the property at Ramona, Kansas, on which a grain storage facility was formerly operated by the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA). The monitoring was implemented on behalf of the CCC/USDA by Argonne National Laboratory. Monitoring was conducted as specified in the Long-Term Groundwater Monitoring Plan (Argonne 2012) approved by the Kansas Department of Health and Environment (KDHE 2012). 5. Comparative analysis of discharges into Lake Michigan, Phase I - Southern Lake Michigan. Energy Technology Data Exchange (ETDEWEB) Veil, J. A.; Elcock, D.; Gasper, J. R.; Environmental Science Division 2008-06-30 BP Products North America Inc. (BP) owns and operates a petroleum refinery located on approximately 1,700 acres in Whiting, East Chicago, and Hammond, Indiana, near the southern tip of Lake Michigan. BP provided funding to Purdue University-Calumet Water Institute (Purdue) and Argonne National Laboratory (Argonne) to conduct studies related to wastewater treatment and discharges. Purdue and Argonne are working jointly to identify and characterize technologies that BP could use to meet the previous discharge permit limits for total suspended solids (TSS) and ammonia after refinery modernization. In addition to the technology characterization work, Argonne conducted a separate project task, which is the subject of this report. In Phase I of a two-part study, Argonne estimated the current levels of discharge to southern Lake Michigan from significant point and nonpoint sources in Illinois, Indiana, and portions of Michigan. The study does not consider all of the chemicals that are discharged. Rather, it is narrowly focused on a selected group of pollutants, referred to as the 'target pollutants'. These include: TSS, ammonia, total and hexavalent chromium, mercury, vanadium, and selenium. In Phase II of the study, Argonne will expand the analysis to cover the entire Lake Michigan drainage basin. 6. Yucca Mountain project canister material corrosion studies as applied to the electrometallurgical treatment metallic waste form Energy Technology Data Exchange (ETDEWEB) Keiser, D.D. 1996-11-01 Yucca Mountain, Nevada, is currently being evaluated as a potential site for a geologic repository. As part of the repository assessment activities, candidate materials are being tested for possible use as construction materials for waste package containers. A large portion of this testing effort is focused on determining the long-range corrosion properties, in a Yucca Mountain environment, for those materials being considered. Along similar lines, Argonne National Laboratory is testing a metallic alloy waste form that also is scheduled for disposal in a geologic repository, like Yucca Mountain. Because Argonne's waste form will require performance testing for an environment similar to what Yucca Mountain canister materials will require, this report was constructed to focus on the types of tests that have been conducted on candidate Yucca Mountain canister materials along with some of the results from these tests. Additionally, this report will discuss testing of Argonne's metal waste form in light of the Yucca Mountain activities. 7. Final work plan for targeted sampling at Webber, Kansas. 
Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2006-05-01 This Work Plan outlines the scope of work for targeted sampling at Webber, Kansas (Figure 1.1). This activity is being conducted at the request of the Kansas Department of Health and Environment (KDHE), in accordance with Section V of the Intergovernmental Agreement between the KDHE and the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA). Data obtained in this sampling event will be used to (1) evaluate the current status of previously detected contamination at Webber and (2) determine whether the site requires further action. This work is being performed on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory. Argonne is a nonprofit, multidisciplinary research center operated by the University of Chicago for the U.S. Department of Energy (DOE). The CCC/USDA has entered into an interagency agreement with DOE, under which Argonne provides technical assistance to the CCC/USDA with environmental site characterization and remediation at its former grain storage facilities. Argonne has issued a Master Work Plan (Argonne 2002) that describes the general scope of and guidance for all investigations at former CCC/USDA facilities in Kansas. The Master Work Plan, approved by the KDHE, contains the materials common to investigations at all locations in Kansas. This document should be consulted for complete details of the technical activities proposed at the former CCC/USDA facility in Webber. 8. Laboratory directed research and development. FY 1991 program activities: Summary report Energy Technology Data Exchange (ETDEWEB) 1991-11-15 The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel concepts, enhance the Laboratory's R&D capabilities, and further the development of its strategic initiatives. Among the aims of the projects supported by the Program are establishment of engineering proof-of-principle; development of an instrumental prototype, method, or system; or discovery in fundamental science. Several of these projects are closely associated with major strategic thrusts of the Laboratory as described in Argonne's Five Year Institutional Plan, although the scientific implications of the achieved results extend well beyond Laboratory plans and objectives. The projects supported by the Program are distributed across the major programmatic areas at Argonne. Areas of emphasis are (1) advanced accelerator and detector technology, (2) x-ray techniques in biological and physical sciences, (3) advanced reactor technology, (4) materials science, computational science, biological sciences and environmental sciences. Individual reports summarizing the purpose, approach, and results of projects are presented. 9. DOE technology information management system database study report Energy Technology Data Exchange (ETDEWEB) Widing, M.A.; Blodgett, D.W.; Braun, M.D.; Jusko, M.J.; Keisler, J.M.; Love, R.J.; Robinson, G.L. [Argonne National Lab., IL (United States). Decision and Information Sciences Div. 1994-11-01 To support the missions of the US Department of Energy (DOE) Special Technologies Program, Argonne National Laboratory is defining the requirements for an automated software system that will search electronic databases on technology. This report examines the work done and results to date. 
Argonne studied existing commercial and government sources of technology databases in five general areas: on-line services, patent database sources, government sources, aerospace technology sources, and general technology sources. First, it conducted a preliminary investigation of these sources to obtain information on the content, cost, frequency of updates, and other aspects of their databases. The Laboratory then performed detailed examinations of at least one source in each area. On this basis, Argonne recommended which databases should be incorporated in DOE's Technology Information Management System. 10. Restoring our urban communities: A model for an empowered America Energy Technology Data Exchange (ETDEWEB) NONE 1995-08-01 This booklet tells the story of how two very different types of organizations - Bethel New Life and Argonne National Laboratory - have forged a partnership to rebuild West Garfield Park. This unique Partnership blends Bethel's theological and sociological roots with Argonne's scientific and technological expertise. Together they hope to offer the community fresh, transferable approaches to solving urban socio-economic and environmental problems. The Partnership hopes to address and solve the inner city's technological problems through community participation and collaborative demonstrations - without losing sight of the community's social needs. The key themes throughout this booklet - jobs, sustainable community development, energy efficiency, and environment - highlight challenges the partners face. By bringing people and technologies together, this Partnership will give West Garfield Park residents a better life -- and, perhaps, offer other communities a successful model for urban renewal. 11. Fabrication and Testing of Deflecting Cavities for APS Energy Technology Data Exchange (ETDEWEB) Mammosser, John; Wang, Haipeng; Rimmer, Robert; Jim, Henry; Katherine, Wilson; Dhakal, Pashupati; Ali, Nassiri; Jim, Kerby; Jeremiah, Holzbauer; Genfa, Wu; Joel, Fuerst; Yawei, Yang; Zenghai, Li 2013-09-01 Jefferson Lab (Newport News, Virginia) in collaboration with Argonne National Laboratory (Argonne, IL) has fabricated and tested four first article, 2.8 GHz, deflecting SRF cavities, for Argonne's Short-Pulse X-ray (SPX) project. These cavities are unique in many ways, including the techniques by which the cavity cell and waveguides were fabricated. These cavity subcomponents were milled from bulk large-grain niobium ingot material directly from 3D CAD files. No forming of subcomponents was used with the exception of the beam-pipes. The challenging cavity and helium vessel design and fabrication result from the stringent RF performance requirements imposed by the project and by operation in the APS ring. Production challenges and fabrication techniques as well as testing results will be discussed in this paper. 12. Supplemental site inspection for Air Force Plant 59, Johnson City, New York, Volume 3: Appendices F-Q Energy Technology Data Exchange (ETDEWEB) Nashold, B.; Rosenblatt, D.; Hau, J. [and others 1995-08-01 This summary describes a Supplemental Site Inspection (SSI) conducted by Argonne National Laboratory (ANL) at Air Force Plant 59 (AFP 59) in Johnson City, New York. All required data pertaining to this project were entered by ANL into the Air Force-wide Installation Restoration Program Information System (IRPIMS) computer format and submitted to an appropriate authority. 
The work was sponsored by the United States Air Force as part of its Installation Restoration Program (IRP). Previous studies had revealed the presence of contaminants at the site and identified several potential contaminant sources. Argonne's study was conducted to answer questions raised by earlier investigations. This volume consists of appendices F-Q, which contain the analytical data from the site characterization. 13. Annual Report of Groundwater Monitoring at Everest, Kansas, in 2012 Energy Technology Data Exchange (ETDEWEB) LaFreniere, Lorraine M. [Argonne National Lab. (ANL), Argonne, IL (United States) 2013-07-01 In March 2009, the CCC/USDA developed a plan for annual monitoring of the groundwater and surface water (Argonne 2009). Under this plan, approved by the KDHE (2009), monitoring wells are sampled by using the low-flow procedure, and surface water samples are collected at five locations along the intermittent creek. Vegetation sampling is conducted as a secondary indicator of plume migration. Results of annual sampling in 2009-2011 for volatile organic compounds (VOCs) and water level measurements (Argonne 2010a, 2011a,b) were consistent with previous observations (Argonne 2003, 2006a,d, 2008). No carbon tetrachloride was detected in surface water of the intermittent creek or in tree branch samples collected at locations along the creek banks. This report presents the results of the fourth annual sampling event, conducted in 2012. 14. Supplemental site inspection for Air Force Plant 59, Johnson City, New York, Volume 2: Appendices A-E Energy Technology Data Exchange (ETDEWEB) Nashold, B.; Rosenblatt, D.; Tomasko, D. [and others 1995-08-01 This summary describes a Supplemental Site Inspection (SSI) conducted by Argonne National Laboratory (ANL) at Air Force Plant 59 (AFP 59) in Johnson City, New York. All required data pertaining to this project were entered by ANL into the Air Force-wide Installation Restoration Program Information System (IRPIMS) computer format and submitted to an appropriate authority. The work was sponsored by the United States Air Force as part of its Installation Restoration Program (IRP). Previous studies had revealed the presence of contaminants at the site and identified several potential contaminant sources. Argonne's study was conducted to answer questions raised by earlier investigations. This volume consists of appendices A-E, containing field data and data validation. 15. Institutional plan. FY 1998--2003 Energy Technology Data Exchange (ETDEWEB) NONE 1997-07-01 This Institutional Plan for Argonne National Laboratory contains central elements of Argonne's strategic plan. Chapter II of this document discusses the Laboratory's mission and core competencies. Chapter III presents the Science and Technology Strategic Plan, which summarizes key features of the external environment, presents Argonne's vision, and describes how the Laboratory's strategic goals and objectives map onto and support DOE's four business lines. The balance of the chapter comprises the science and technology area plans, organized by the four DOE business lines. Chapter IV describes the Laboratory's ten major initiatives, which cover a broad spectrum of science and technology. Our proposal for an Exotic Beam Facility aims at, among other things, increased understanding of the processes of nuclear synthesis during and shortly after the Big Bang. 
Our Advanced Transportation Technology initiative involves working with US industry to develop cost-effective technologies to improve the fuel efficiency and reduce the emissions of transportation systems. The Laboratory's plans for the future depend significantly on the success of its major initiatives. Chapter V presents our Operations and Infrastructure Strategic Plan. The main body of the chapter comprises strategic plans for human resources; environmental protection, safety, and health; site and facilities; and information management. The chapter concludes with a discussion of the business and management practices that Argonne is adopting to improve the quality and cost-effectiveness of its operations. The structure and content of this document depart from those of the Institutional Plan in previous years. Emphasis here is on directions for the future; coverage of ongoing activities is less detailed. We hope that this streamlined plan is more direct and accessible. 16. MCP-based Photodetectors for Cryogenic Applications CERN Document Server Dharmapalan, Ranjan; Byrum, Karen; Demarteau, Marcel; Elam, Jeffrey; May, Edward; Wagner, Robert; Walters, Dean; Xia, Lei; Xie, Junqi; Zhao, Huyue; Wang, J 2016-01-01 The Argonne MCP-based photo detector is an offshoot of the Large Area Pico-second Photo Detector (LAPPD) project, wherein 6 cm x 6 cm sized detectors are made at Argonne National Laboratory. We have successfully built and tested our first detectors for pico-second timing and few mm spatial resolution. We discuss our efforts to customize these detectors to operate in a cryogenic environment. Initial plans aim to operate in liquid argon. We are also exploring ways to mitigate wavelength shifting requirements and also developing bare-MCP photodetectors to operate in a gaseous cryogenic environment. 17. Evaporation residue cross sections for 32S + 184W Energy Technology Data Exchange (ETDEWEB) Back, B.B.; Blumenthal, D.J.; Davids, C.N. [and others 1995-08-01 We recently measured evaporation residue cross sections for the 32S + 184W system over a range of beam energies using the Argonne Fragment Mass Analyzer (FMA). Absolute cross sections were obtained on the basis of the recent determination of the transmission probability through the FMA of heavy, slow-moving reaction products. The measurements were carried out using 32S beams from the ATLAS superconducting linac at Argonne. Beam energies of 165, 174, 185, 195, 205, 215, 225, 236, 246, and 257 MeV were used. The sliding-seal target chamber is used to allow for measurements at finite angles. 18. Photodisintegration of light nuclei for testing a correlated realistic interaction in the continuum CERN Document Server Bacca, S 2006-01-01 An exact calculation of the photodisintegration cross section of 3H, 3He and 4He is performed using as the interaction the correlated Argonne V18 potential, constructed within the Unitary Correlation Operator Method (VUCOM). Calculations are carried out using the Lorentz Integral Transform method in conjunction with a hyperspherical harmonics basis expansion. A comparison with other realistic potentials and with available experimental data is discussed. The VUCOM potential leads to a description of the cross section very similar to that of the Argonne V18 interaction with the inclusion of the Urbana IX three-body force for photon energies 45 < ω < 120 MeV, while larger differences are found close to threshold. 19. The Shock and Vibration Digest. 
Volume 16, Number 5 Science.gov (United States) 1984-05-01 Closure to HCDA Loads - R.F. Kulak and C. Fiala, Argonne Natl. Lab., Argonne, IL, Rept. No. CONF-830805-32, 18 pp (1983) (Intl. Conf. Struc. Mechanics...Germany, Rept. No. GKSS-82/E/49, 49 pp (1982) DE83750995 (In German) 84-811 Analysis of HCDA Loads and Containment Response Key Words: Drilling...of a large loop-type LMFBR subjected to an HCDA of a 1000 MJ energy release. The reference reactor... Measurements of Wave and Drift Induced Line Forces 20. Plant-Scale Concentration Column Designs for SHINE Target Solution Utilizing AG 1 Anion Exchange Resin Energy Technology Data Exchange (ETDEWEB) Stepinski, Dominique C. [Argonne National Lab. (ANL), Argonne, IL (United States); Vandegrift, G. F. [Argonne National Lab. (ANL), Argonne, IL (United States) 2015-09-30 Argonne is assisting SHINE Medical Technologies (SHINE) in their efforts to develop SHINE, an accelerator-driven process that will utilize a uranyl-sulfate solution for the production of fission product Mo-99. An integral part of the process is the development of a column for the separation and recovery of Mo-99, followed by a concentration column to reduce the product volume from 15-25 L to <1 L. Argonne has collected data from batch studies and breakthrough column experiments to utilize the VERSE (Versatile Reaction Separation) simulation program (Purdue University) to design plant-scale product recovery and concentration processes. 1. Impact of Burst Buffer Architectures on Application Portability Energy Technology Data Exchange (ETDEWEB) Harms, Kevin [Argonne National Lab. (ANL), Argonne, IL (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). National Center for Computational Science; Atchley, Scott [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). National Center for Computational Science; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). National Center for Computational Science 2016-09-30 The Oak Ridge and Argonne Leadership Computing Facilities are both receiving new systems under the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) program. Because they are both part of the INCITE program, applications need to be portable between these two facilities. However, the Summit and Aurora systems will be vastly different architectures, including their I/O subsystems. While both systems will have POSIX-compliant parallel file systems, their Burst Buffer technologies will be different. This difference may pose challenges to application portability between facilities. Application developers need to pay attention to specific burst buffer implementations to maximize code portability. 2. Investigation of GOSIP technology at ANL Energy Technology Data Exchange (ETDEWEB) Winkler, L. 1992-01-01 This document describes testing of OSI products conducted at Argonne National Laboratory. Sun, IBM, and Cisco hardware platforms were used. Various software packages that implement file transfer and gateway applications were evaluated. The OSI model and GOSIP compliance are briefly discussed. Technical details on OSI addressing and routing are presented. The relationship of this testing to other OSI activities at Argonne and to activities of the national networking community is discussed. Mention is also made of DECnet Phase V transition issues. 3. Physics division annual report 2006. 
Energy Technology Data Exchange (ETDEWEB) Glover, J.; Physics 2008-02-28 This report highlights the activities of the Physics Division of Argonne National Laboratory in 2006. The Division's programs include the operation as a national user facility of ATLAS, the Argonne Tandem Linear Accelerator System, research in nuclear structure and reactions, nuclear astrophysics, nuclear theory, investigations in medium-energy nuclear physics as well as research and development in accelerator technology. The mission of nuclear physics is to understand the origin, evolution and structure of baryonic matter in the universe--the core of matter, the fuel of stars, and the basic constituent of life itself. The Division's research focuses on innovative new ways to address this mission. 4. Final report : results of the 2006-2007 investigation of potential contamination at the former CCC/USDA facility in Barnes, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2008-08-28 The 2006-2007 investigation of carbon tetrachloride and chloroform contamination at Barnes, Kansas, was conducted at the request of the Kansas Department of Health and Environment (KDHE). The Environmental Science Division of Argonne National Laboratory implemented the investigation on behalf of the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA). The overall goal of the investigation was to establish criteria for monitoring leading to potential site reclassification. The investigation objectives were to (1) determine the hydraulic gradient near the former CCC/USDA facility, (2) delineate the downgradient carbon tetrachloride plume, and (3) design and implement an expanded monitoring network at Barnes (Argonne 2006a). 5. Eleventh symposium on energy engineering sciences: Proceedings. Solid mechanics and processing: Analysis, measurement and characterization Energy Technology Data Exchange (ETDEWEB) 1993-09-01 The Eleventh Symposium on Energy Engineering Sciences was held on May 3--5, 1993, at the Argonne National Laboratory, Argonne, Illinois. These proceedings include the program, list of participants, and the papers that were presented during the eight technical sessions held at this meeting. This symposium was organized into eight technical sessions: Surfaces and interfaces; thermophysical properties and processes; inelastic behavior; nondestructive characterization; multiphase flow and thermal processes; optical and other measurement systems; stochastic processes; and large systems and control. Individual projects were processed separately for the databases. 6. Impact of Burst Buffer Architectures on Application Portability Energy Technology Data Exchange (ETDEWEB) Harms, Kevin [Argonne National Laboratory (ANL); Oral, H Sarp [ORNL; Atchley, Scott [ORNL; Vazhkudai, Sudharshan S [ORNL 2016-09-01 The Oak Ridge and Argonne Leadership Computing Facilities are both receiving new systems under the Collaboration of Oak Ridge, Argonne, and Livermore (CORAL) program. Because they are both part of the INCITE program, applications need to be portable between these two facilities. However, the Summit and Aurora systems will be vastly different architectures, including their I/O subsystems. While both systems will have POSIX-compliant parallel file systems, their Burst Buffer technologies will be different. This difference may pose challenges to application portability between facilities. 
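The burst-buffer portability concern summarized in the abstract above is, at the application level, largely a matter of not hard-coding facility-specific staging paths. The short Python sketch below is illustrative only and is not taken from the report; the environment variable names (BB_JOB_DIR, FAST_TIER_DIR, SCRATCH) are hypothetical placeholders for whatever a given facility actually exports.

import os
from pathlib import Path

# Hypothetical environment variables; real burst-buffer software exposes
# site-specific names, so these are placeholders, not a documented API.
CANDIDATE_VARS = ("BB_JOB_DIR", "FAST_TIER_DIR", "SCRATCH")

def staging_dir(default: str = "/tmp") -> Path:
    """Return a writable fast-tier directory, falling back to `default`."""
    for var in CANDIDATE_VARS:
        value = os.environ.get(var)
        if value and Path(value).is_dir():
            return Path(value)
    return Path(default)

def write_checkpoint(step: int, payload: bytes) -> Path:
    """Stage a checkpoint on the fast tier; draining it to the parallel
    file system would be a separate step and is not shown here."""
    target = staging_dir() / f"checkpoint_{step:06d}.bin"
    target.write_bytes(payload)
    return target

if __name__ == "__main__":
    print("checkpoint written to", write_checkpoint(1, b"example payload"))

Keeping the facility-specific choice in the environment, or in a small site configuration file, confines the differences between the two centers' burst buffers to a single code path, which is one pragmatic reading of the portability advice in the abstract.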
Application developers need to pay attention to specific burst buffer implementations to maximize code portability. 7. October 2008 monitoring results for Morrill, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2009-03-10 In September 2005, the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) initiated periodic sampling of groundwater in the vicinity of a grain storage facility formerly operated by the CCC/USDA at Morrill, Kansas. The sampling at Morrill is being performed on behalf of the CCC/USDA by Argonne National Laboratory, in accord with a monitoring program approved by the Kansas Department of Health and Environment (KDHE 2005), to monitor levels of carbon tetrachloride contamination identified in the groundwater at this site (Argonne 2004, 2005a). This report provides results for the most recent monitoring event, in October 2008. Under the KDHE-approved monitoring plan (Argonne 2005b), groundwater was initially sampled twice yearly for a period of two years (in fall 2005, in spring and fall 2006, and in spring and fall 2007). The samples were analyzed for volatile organic compounds (VOCs), as well as for selected geochemical parameters to aid in the evaluation of possible natural contaminant degradation (reductive dechlorination) processes in the subsurface environment. During the two-year period, the originally approved scope of the monitoring was expanded to include vegetation sampling (initiated in October 2006) and surface water and stream bed sediment sampling (initiated in March 2007, after a visual reconnaissance along Terrapin Creek [Argonne 2007a]). The analytical results for groundwater sampling events at Morrill in September 2005, March and September 2006, March and October 2007, and April 2008 were documented previously (Argonne 2006a,b, 2007b, 2008a,c). Those results consistently demonstrated the presence of carbon tetrachloride contamination, at levels exceeding the KDHE Tier 2 risk-based screening level (5.0 µg/L) for this compound, in a groundwater plume extending generally south-southeastward from the former CCC/USDA facility, toward Terrapin Creek at the south edge of the town. Low levels (≤ 1.3 µg/L) of carbon 8. High Energy Physics Division semiannual report of research activities, July 1, 1994--December 31, 1994 Energy Technology Data Exchange (ETDEWEB) Wagner, R.; Schoessow, P.; Talaga, R. 1995-04-01 This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of July 1, 1994--December 31, 1994. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included. 9. High Energy Physics division semiannual report of research activities, January 1, 1998--June 30, 1998. Energy Technology Data Exchange (ETDEWEB) Ayres, D. S.; Berger, E. L.; Blair, R.; Bodwin, G. T.; Drake, G.; Goodman, M. C.; Guarino, V.; Klasen, M.; Lagae, J.-F.; Magill, S.; May, E. N.; Nodulman, L.; Norem, J.; Petrelli, A.; Proudfoot, J.; Repond, J.; Schoessow, P. V.; Sinclair, D. K.; Spinka, H. M.; Stanek, R.; Underwood, D.; Wagner, R.; White, A. R.; Yokosawa, A.; Zachos, C. 1999-03-09 This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1998 through June 30, 1998. 
Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included. 10. Analytical Chemistry Laboratory progress report for FY 1999 Energy Technology Data Exchange (ETDEWEB) Green, D. W.; Boparai, A. S.; Bowers, D. L.; Graczyk, D. G. 2000-06-15 This report summarizes the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1999 (October 1998 through September 1999). This annual progress report, which is the sixteenth in this series for the ACL, describes effort on continuing projects, work on new projects, and contributions of the ACL staff to various programs at ANL. 11. Analytical Chemistry Laboratory progress report for FY 1998. Energy Technology Data Exchange (ETDEWEB) Boparai, A. S.; Bowers, D. L.; Graczyk, D. G.; Green, D. W.; Lindahl, P. C. 1999-03-29 This report summarizes the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1998 (October 1997 through September 1998). This annual progress report, which is the fifteenth in this series for the ACL, describes effort on continuing projects, work on new projects, and contributions of the ACL staff to various programs at ANL. 12. 6th international conference on biophysics and synchrotron radiation. Program/Abstracts Energy Technology Data Exchange (ETDEWEB) Pittroff, Connie; Strasser, Susan Barr [lead editors 1999-08-03 This STI product consists of the Program/Abstracts book that was prepared for the participants in the Sixth International Conference on Biophysics and Synchrotron Radiation that was held August 4-8, 1998, at the Advanced Photon Source, Argonne National Laboratory. This book contains the full conference program and abstracts of the scientific presentations. 13. Final work plan : investigation of potential contamination at the former USDA facility in Powhattan, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2007-02-02 This Work Plan outlines the scope of work to be conducted to investigate the subsurface contaminant conditions at the property formerly leased by the Commodity Credit Corporation (CCC) in Powhattan, Kansas (Figure 1.1). Data obtained during this event will be used to (1) evaluate potential contaminant source areas on the property; (2) determine the vertical and horizontal extent of potential contamination; and (3) provide recommendations for future action, with the ultimate goal of assigning this site No Further Action status. The planned investigation includes groundwater monitoring requested by the Kansas Department of Health and Environment (KDHE), in accordance with Section V of the Intergovernmental Agreement between the KDHE and the Farm Service Agency of the U.S. Department of Agriculture (USDA). The work is being performed on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory. A nonprofit, multidisciplinary research center operated by the University of Chicago for the U.S. Department of Energy, Argonne provides technical assistance to the CCC/USDA with environmental site characterization and remediation at former CCC/USDA grain storage facilities. Argonne issued a Master Work Plan (Argonne 2002) that has been approved by the KDHE. 
The Master Work Plan describes the general scope of all investigations at former CCC/USDA facilities in Kansas and provides guidance for these investigations. It should be consulted for the complete details of plans for work associated with the former CCC/USDA facility at Powhattan. 14. Radiation Risk from Chronic Low Dose-Rate Radiation Exposures: The Role of Life-Time Animal Studies - Workshop October 2005 Energy Technology Data Exchange (ETDEWEB) Gayle Woloschak 2009-12-16 As part of a radiation research conference, a workshop was held on life-long exposure studies conducted in the course of irradiation experiments done at Argonne National Laboratory between 1952 and 1992. A recent review article documents many of the issues discussed at that workshop. 15. Energy and Environmental Systems Division's publications, 1968-1982 Energy Technology Data Exchange (ETDEWEB) None 1982-03-01 Books, journal articles, conference papers, and technical reports produced by the Energy and Environmental Systems Division of Argonne National Laboratory are listed in this bibliography. Subjects covered are energy resources (recovery and use); energy-efficient technology; electric utilities; and environments. (MCW) 16. Proceedings of the 1978 symposium on instrumentation and control for fossil demonstration plants Energy Technology Data Exchange (ETDEWEB) 1978-01-01 The 1978 symposium on instrumentation and control for fossil demonstration plants was held at Newport Beach, California, June 19--21, 1978. It was sponsored by Argonne National Laboratory, the U.S. Department of Energy - Fossil Energy, and the Instrument Society of America - Orange County Section. Thirty-nine papers have been entered individually into the data base. (LTN) 17. Gigabits to the desktop: Installing tomorrow's networks today Energy Technology Data Exchange (ETDEWEB) Phillips, P.T. 1996-03-01 In this report, the author discusses computer networking at Argonne National Laboratory. He discusses why networking is needed and what capabilities it will bring to the laboratory. He addresses both the advantages and disadvantages of using optical fibers for data transmission. He also gives a brief overview of optical fibers and their technology. 18. High Energy Physics Division semiannual report of research activities July 1, 1997 - December 31, 1997. Energy Technology Data Exchange (ETDEWEB) Norem, J.; Rezmer, R.; Schuur, C.; Wagner, R. [eds. 1998-08-11 This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1997--December 31, 1997. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of Division publications and colloquia are included. 19. Final evaluation of the acoustics of the APS conference center Energy Technology Data Exchange (ETDEWEB) Restrepo, J.M. 1995-11-01 Along with a description of the changes that I prescribed for the original design, this report is an evaluation of the acoustical properties of the new Advanced Photon Source Auditorium at Argonne National Laboratory. Acoustical deficiencies in the hall are presented with several options for their expedient and economical solution. 20. 75 FR 30010 - Improving Market and Planning Efficiency Through Improved Software; Notice of Agenda and... Science.gov (United States) 2010-05-28 ... 
Energy Policy and Innovation, (202) 502-6214, [email protected]; Tom Dautel (Technical Information), Office of Energy Policy and Innovation, (202) 502-6196, [email protected]. Kimberly D. Bose... Jayantilal, Areva T&D 3:50 p.m. Session I--Forecasting for Market Operations Audun Botterud, Argonne... 1. Calculation of the α-Particle Ground State Science.gov (United States) Viviani, M.; Kievsky, A.; Rosati, S. 1995-01-01 The correlated hyperspherical harmonic expansion method is used to calculate α-particle properties with a realistic Hamiltonian consisting of the Argonne V14 two-nucleon and Urbana model VIII three-nucleon potentials. The calculated binding energy, mass radius and wave percentages are close to the corresponding quantities obtained with Green's-function Monte-Carlo and Faddeev-Yakubovsky techniques. 2. Manhattan Project Technical Series The Chemistry of Uranium (I) Chapters 1-10 Energy Technology Data Exchange (ETDEWEB) Rabinowitch, E. I. [Argonne National Laboratory (ANL), Argonne, IL (United States); Katz, J. J. [Argonne National Laboratory (ANL), Argonne, IL (United States) 1946-09-30 This constitutes Chapters 1 through 10, inclusive, of The Survey Volume on Uranium Chemistry prepared for the Manhattan Project Technical Series. It is issued for purposes of review and criticism. It was decided in the Editorial Board meeting on June 11, 1946, that all comments must be communicated to the volume editors at Argonne National Laboratory within one month after receiving this draft. 3. High energy physics with polarized beams and targets. [65 papers Energy Technology Data Exchange (ETDEWEB) Marshak, M L [ed. 1976-01-01 Sixty-six papers are presented as a report on conference sessions held from August 23-27, 1976, at Argonne National Laboratory. Topics covered include: (1) strong interactions; (2) weak and electromagnetic interactions; (3) polarized beams; and (4) polarized targets. A separate abstract was prepared for each paper for ERDA Energy Research Abstracts (ERA) and for the INIS Atomindex. (PMA) 4. Proceedings of the International Conference on Algebraic Methodology and Software Technology (2nd) Held in Iowa City, Iowa on May 22-25, 1991. Science.gov (United States) 1992-05-25 reasoning system ITP, a well-known resolution-based theorem prover built at Argonne National Laboratory [12]. We show that the verification of the...Computer Science ???, 1990. A Framework for Dexterous Manipulation Using Lie Algebras Daniela Rus Department of Computer Science Cornell University 1 5. Mechanical properties test data for structural materials. Quarterly progress report for period ending October 31, 1976 Energy Technology Data Exchange (ETDEWEB) Hill, M R [comp. 1976-12-01 Test data on heat-resisting reactor materials are presented. These data were obtained in research at EG and G Idaho, Argonne National Laboratory, Oak Ridge National Laboratory, Naval Research Laboratory, Hanford Engineering Development Laboratory, Westinghouse Advanced Reactors Division, General Electric Company, University of Cincinnati, and University of California at Los Angeles. (JRD) 6. International Spin Physics 2014 Summary CERN Document Server Milner, Richard G 2015-01-01 The Stern-Gerlach experiment and the origin of electron spin are described in historical context. SPIN 2014 occurs on the fortieth anniversary of the first International High Energy Spin Physics Symposium at Argonne in 1974. A brief history of the international spin conference series is presented. 7. 
Symposium Summary Science.gov (United States) Milner, Richard G. 2016-02-01 The Stern-Gerlach experiment and the origin of electron spin are described in historical context. SPIN 2014 occurs on the fortieth anniversary of the first International High Energy Spin Physics Symposium at Argonne in 1974. A brief history of the international spin conference series is presented. 8. Advances in thermal hydraulic and neutronic simulation for reactor analysis and safety Energy Technology Data Exchange (ETDEWEB) Tentner, A.M.; Blomquist, R.N.; Canfield, T.R.; Ewing, T.F.; Garner, P.L.; Gelbard, E.M.; Gross, K.C.; Minkoff, M.; Valentin, R.A. 1993-03-01 This paper describes several large-scale computational models developed at Argonne National Laboratory for the simulation and analysis of thermal-hydraulic and neutronic events in nuclear reactors and nuclear power plants. The impact of advanced parallel computing technologies on these computational models is emphasized. 9. Scale-up of Metal Hexacyanoferrate Cathode Material for Sodium Ion Batteries Energy Technology Data Exchange (ETDEWEB) Dzwiniel, Trevor L. [Argonne National Lab. (ANL), Argonne, IL (United States); Pupek, Krzysztof Z. [Argonne National Lab. (ANL), Argonne, IL (United States); Krumdick, Gregory K. [Argonne National Lab. (ANL), Argonne, IL (United States) 2016-10-04 Sharp Laboratories of America (SLA) approached Argonne National Laboratory with a bench-scale process to produce material for a sodium-ion battery, referred to as Prussian Blue, and a request to produce 1 kg of material for their ARPA-E program. The target performance criterion was an average capacity of >150 mAh/g. 10. Computational and Theoretical Investigations of Strongly Correlated Fermions in Optical Lattices Science.gov (United States) 2013-08-29 speaker, "Physics of Superconductor-Insulator Transition and related topics", Argonne National Laboratory, November 16-19, 2010; talk titled "Single...and two-particle spectral functions across the disorder-driven superconductor-insulator transition". 22. Invited speaker, "Fermions in Optical...energy gaps across the disorder-driven superconductor-insulator transition", October 7, 2010, Harvard. 27. Seminar on "Probing Quantum Phases of 11. Production of degradable polymers from food-waste streams Energy Technology Data Exchange (ETDEWEB) Tsai, S.P.; Coleman, R.D.; Bonsignore, P.V.; Moon, S.H. 1992-07-01 In the United States, billions of pounds of cheese whey permeate and approximately 10 billion pounds of potatoes processed each year are typically discarded or sold as cattle feed at $3–6/ton; moreover, the transportation required for these means of disposal can be expensive. As a potential solution to this economic and environmental problem, Argonne National Laboratory is developing technology that biologically converts existing food-processing waste streams into lactic acid and uses the lactic acid to make environmentally safe, degradable polylactic acid (PLA) and modified PLA plastics and coatings. An Argonne process for biologically converting high-carbohydrate food waste will not only help to solve a waste problem for the food industry, but will also save energy and be economically attractive. Although the initial substrate for Argonne's process development is potato by-product, the process can be adapted to convert other food wastes, as well as corn starch, to lactic acid. Proprietary technology for biologically converting greater than 90% of the starch in potato wastes to glucose has been developed. 
Glucose and other products of starch hydrolysis are subsequently fermented by bacteria that produce lactic acid. The lactic acid is recovered, concentrated, and further purified to a polymer-grade product. 12. Proceedings of the 1980 symposium on instrumentation and control for fossil energy processes Energy Technology Data Exchange (ETDEWEB) Doering, R.W. (comp.) 1980-01-01 The 1980 symposium on Instrumentation and Control for Fossil Energy Processes was held June 9-11, 1980, at the New Cavalier, Virginia Beach, Virginia. It was sponsored by the Argonne National Laboratory and the US Department of Energy, Office of Fossil Energy. Forty-five papers have been entered individually into EDB and ERA; nine papers had been entered previously from other sources. (LTN) 13. Practical superconductor development for electrical power applications. Quarterly report for the period ending June 30, 2000 Energy Technology Data Exchange (ETDEWEB) NONE 2000-07-21 This is a multiyear experimental research program focused on improving relevant material properties of high-Tc superconductors (HTSs) and on development of fabrication methods that can be transferred to industry for production of commercial conductors. The development of teaming relationships through agreements with industrial partners is a key element of the Argonne (ANL) program. 14. Practical superconductor development for electrical power applications - quarterly report for the period ending Dec. 31, 2003. Energy Technology Data Exchange (ETDEWEB) Dorris, S. E. 2004-03-02 This is a multiyear experimental research program that focuses on improving relevant material properties of high-critical-temperature (Tc) superconductors and developing fabrication methods that can be transferred to industry for production of commercial conductors. The development of teaming relationships through agreements with industrial partners is a key element of the Argonne National Laboratory (ANL) program. 15. Practical superconductor development for electrical power applications - quarterly report for the period ending March 31, 2004. Energy Technology Data Exchange (ETDEWEB) Dorris, S. E. 2004-07-21 This is a multiyear experimental research program that focuses on improving relevant material properties of high-critical-temperature (Tc) superconductors and developing fabrication methods that can be transferred to industry for production of commercial conductors. The development of teaming relationships through agreements with industrial partners is a key element of the Argonne National Laboratory (ANL) program. 16. Practical superconductor development for electrical power applications - quarterly report for the period ending June 30, 2004. Energy Technology Data Exchange (ETDEWEB) Dorris, S. E. 2004-09-09 This is a multiyear experimental research program that focuses on improving relevant material properties of high-critical-temperature (Tc) superconductors and developing fabrication methods that can be transferred to industry for production of commercial conductors. The development of teaming relationships through agreements with industrial partners is a key element of the Argonne National Laboratory (ANL) program. 17. Case Study - Propane Bakery Delivery Step Vans Energy Technology Data Exchange (ETDEWEB) Laughlin, M.; Burnham, A. 
2016-04-01 A switch to propane from diesel by a major Midwest bakery fleet showed promising results, including a significant displacement of petroleum, a drop in greenhouse gas emissions, and a fuel cost savings of seven cents per mile, according to a study recently completed by the U.S. Department of Energy's Argonne National Laboratory for the Clean Cities program. 18. High Energy Physics Division semiannual report of research activities. Semi-annual progress report, July 1, 1995--December 31, 1995 Energy Technology Data Exchange (ETDEWEB) Norem, J.; Bajt, D.; Rezmer, R.; Wagner, R. 1996-10-01 This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period July 1, 1995 - December 31, 1995. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included. 19. Theoretical Advanced Study Institute: 2014 Energy Technology Data Exchange (ETDEWEB) DeGrand, Thomas [Univ. of Colorado, Boulder, CO (United States) 2016-08-17 The Theoretical Advanced Study Institute was held at the University of Colorado, Boulder, during June 2-27, 2014. The topic was "Journeys through the Precision Frontier: Amplitudes for Colliders." The organizers were Professors Lance Dixon (SLAC) and Frank Petriello (Northwestern and Argonne). There were fifty-one students. Nineteen lecturers gave sixty 75-minute lectures. A Proceedings was published. 20. High energy physics division semiannual report of research activities Energy Technology Data Exchange (ETDEWEB) Schoessow, P.; Moonier, P.; Talaga, R.; Wagner, R. (eds.) (Argonne National Lab., IL (United States)) 1991-08-01 This report describes the research conducted in the High Energy Physics Division of Argonne National Laboratory during the period of January 1, 1991--June 30, 1991. Topics covered here include experimental and theoretical particle physics, advanced accelerator physics, detector development, and experimental facilities research. Lists of division publications and colloquia are included. 1. US develops neutron to sniff out nuclear material CERN Multimedia 2002-01-01 The USA has developed a tiny portable neutron device that can detect hidden nuclear materials. The device is undergoing trials at Argonne National Laboratory to see if it could be used to stop smuggling and unauthorised use of nuclear weapons and materials (1/2 page). 2. Magnetic form factors of the trinucleons Energy Technology Data Exchange (ETDEWEB) Schiavilla, R; Pandharipande, V R; Riska, Dan-Olof 1989-11-01 The magnetic form factors of 3H and 3He are calculated with the Monte Carlo method from variational ground-state wave functions obtained for the Argonne and Urbana two- and three-nucleon interactions. The electromagnetic current operator contains one- and two-body terms that are constructed so as to satisfy the continuity equation with the two-nucleon potential in the Hamiltonian. The results obtained with the Argonne two-nucleon interaction are in overall agreement with the empirical values. It appears that the remaining theoretical uncertainty, in the calculation of these form factors from a given interaction model, is dominated by that in the electromagnetic form factors of the nucleon. 
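For readers unfamiliar with the constraint mentioned in the trinucleon abstract above, the requirement on the current operator is the standard continuity equation; the lines below restate it in generic notation as background, not as a reproduction of the authors' specific formalism.

\nabla \cdot \mathbf{j}(\mathbf{x}) + i\,[H, \rho(\mathbf{x})] = 0,
\qquad \text{or, in momentum space,} \qquad
\mathbf{q} \cdot \mathbf{j}(\mathbf{q}) = [H, \rho(\mathbf{q})],
\qquad H = \sum_i \frac{\mathbf{p}_i^2}{2m} + \sum_{i<j} v_{ij} + \sum_{i<j<k} V_{ijk}.

Because the isospin- and momentum-dependent pieces of the two-nucleon potential v_{ij} do not commute with the charge density, one-body currents alone cannot satisfy this relation; this is why the abstract describes two-body current terms constructed from the potential itself.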
It is found that the isovector magnetic form factors are rather sensitive to the details of the isospin-dependent tensor force, and they are much better reproduced with the Argonne than the Urbana potential. The isoscalar magnetic form factors appear to be sensitive to the spin-orbit interactions, and are better reproduced with the Urbana potential. The Argonne potential has a stronger τ1∙τ2 tensor force, while the Urbana one has a shorter-range spin-orbit interaction. 3. 75 FR 34734 - Improving Market and Planning Efficiency Through Improved Software; Notice of Agenda and... Science.gov (United States) 2010-06-18 ... Mellon University and NETSS, Inc., AC Optimal Power Flow and Smart Grids. Cong Liu, Argonne National... this event via television in the DC area and via phone bridge for a fee. If you have any questions..., Carnegie Mellon University and NETSS, Inc., Jeffrey Lang, Massachusetts Institute of Technology,... 4. Preparing for the Future: Developing an Adaptive Army in a Time of Peace, 1918-1941 Science.gov (United States) 2015-05-23 into three corps, jumped off along a narrow front between the Argonne Forest in the west and the Meuse River in the east. The objective of the Meuse...Battalion, 511th Parachute Infantry (11th Airborne Division) in the Battle for Southern Manila, 3-10 February (Luzon Campaign) (Personal Experience 5. Quarterly report of Biological and Medical Research Division, April 1955 Energy Technology Data Exchange (ETDEWEB) Brues, A.M. 1955-04-01 This report is a compilation of 48 investigator-prepared summaries of recent progress in individual research programs of the Biology and Medical Division of the Argonne National Laboratory for the quarterly period ending April 1955. Individual reports are about 3-6 pages in length and often contain research data. 6. Brain Implants for Prediction and Mitigation of Epileptic Seizures - Final CRADA Report Energy Technology Data Exchange (ETDEWEB) Gopalsami, Nachappa 2016-09-29 This is a CRADA final report on C0100901 between Argonne National Laboratory and Flint Hills Scientific, LLC of Lawrence, Kansas. Two brain-implantable probes, a surface acoustic wave probe and a miniature cooling probe, were designed, built, and tested with excellent results. 7. Reduced enrichment for research and test reactors: Proceedings Energy Technology Data Exchange (ETDEWEB) 1993-07-01 The 15th annual Reduced Enrichment for Research and Test Reactors (RERTR) international meeting was organized by Risø National Laboratory in cooperation with the International Atomic Energy Agency and Argonne National Laboratory. The topics of the meeting were the following: National Programs, Fuel Fabrication, Licensing Aspects, States of Conversion, Fuel Testing, and Fuel Cycle. Individual papers have been cataloged separately. 8. Physics division annual report - October 2000. Energy Technology Data Exchange (ETDEWEB) Thayer, K. [ed. 2000-10-16 This report summarizes the research performed in the past year in the Argonne Physics Division. The Division's programs include operation of ATLAS as a national heavy-ion user facility, nuclear structure and reaction research with beams of heavy ions, accelerator research and development especially in superconducting radio frequency technology, nuclear theory and medium energy nuclear physics. The Division took significant strides forward in its science and its initiatives for the future in the past year. 
Major progress was made in developing the concept and the technology for the future advanced facility for beams of short-lived nuclei, the Rare Isotope Accelerator. The scientific program capitalized on important instrumentation initiatives with key advances in nuclear science. In 1999, the nuclear science community adopted the Argonne concept for a multi-beam superconducting linear accelerator driver as the design of choice for the next major facility in the field, a Rare Isotope Accelerator (RIA), as recommended by the Nuclear Science Advisory Committee's 1996 Long Range Plan. Argonne has made significant R&D progress on almost all aspects of the design concept, including the fast gas catcher (to allow fast fragmentation beams to be stopped and reaccelerated) that, in large part, defined the RIA concept; the superconducting rf technology for the driver accelerator; the multiple-charge-state concept (to permit the facility to meet the design intensity goals with existing ion-source technology); and designs and tests of high-power target concepts to effectively deal with the full beam power of the driver linac. An NSAC subcommittee recommended the Argonne concept and set as the design goal uranium beams of 100-kW power at 400 MeV/u. Argonne demonstrated that this goal can be met with an innovative, but technically in-hand, design. 9. Final work plan : investigation of potential contamination at the former USDA facility in Ramona, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M. 2006-01-27 This Work Plan outlines the scope of work that will be conducted to investigate the subsurface contaminant conditions at the property formerly leased by the Commodity Credit Corporation (CCC) in Ramona, Kansas (Figure 1.1). Data obtained during this event will be used to (1) evaluate potential source areas on the property, (2) determine the vertical and horizontal extent of potential contamination, and (3) provide recommendations for future actions, with the ultimate goal of assigning this site No Further Action status. The planned investigation includes groundwater monitoring requested by the Kansas Department of Health and Environment (KDHE), in accordance with Section V of the Intergovernmental Agreement between the KDHE and the Farm Service Agency of the United States Department of Agriculture (USDA). The work is being performed on behalf of the CCC/USDA by the Environmental Research Division of Argonne National Laboratory. Argonne is a nonprofit, multidisciplinary research center operated by the University of Chicago for the U.S. Department of Energy. Under the Intergovernmental Agreement, Argonne provides technical assistance to the CCC/USDA with environmental site characterization and remediation at former CCC/USDA grain storage facilities. Argonne has issued a Master Work Plan (Argonne 2002) that describes the general scope of all investigations at former CCC/USDA facilities in Kansas and provides guidance for these investigations. The Master Work Plan was approved by the KDHE. It contains materials common to investigations at locations in Kansas and should be consulted for the complete details of plans for work associated with the former CCC/USDA facility at Ramona. 10. Annual report of groundwater monitoring at Centralia, Kansas, in 2010. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M. (Environmental Science Division) 2011-03-16 In September 2005, periodic sampling of groundwater was initiated by the Commodity Credit Corporation of the U.S. 
Department of Agriculture (CCC/USDA) in the vicinity of a grain storage facility formerly operated by the CCC/USDA at Centralia, Kansas. The sampling at Centralia is performed on behalf of the CCC/USDA by Argonne National Laboratory, in accord with a monitoring program approved by the Kansas Department of Health and Environment (KDHE). The objective is to monitor levels of carbon tetrachloride contamination identified in the groundwater at Centralia (Argonne 2003, 2004, 2005a). Under the KDHE-approved monitoring plan (Argonne 2005b), the groundwater was sampled twice yearly from September 2005 until September 2007 for analyses for volatile organic compounds (VOCs), as well as measurement of selected geochemical parameters to aid in the evaluation of possible natural contaminant degradation processes (reductive dechlorination) in the subsurface environment (Argonne 2006, 2007a, 2008a). The results from the two-year sampling program demonstrated the presence of carbon tetrachloride contamination at levels exceeding the KDHE Tier 2 risk-based screening level (RBSL) of 5 {micro}g/L for this compound, in a localized groundwater plume that has shown little movement. The relative concentrations of chloroform, the primary degradation product of carbon tetrachloride, suggested that some degree of reductive dechlorination or natural biodegradation was taking place in situ at the former CCC/USDA facility on a localized scale. The CCC/USDA subsequently developed an Interim Measure Conceptual Design (Argonne 2007b), proposing a pilot test of the Adventus EHC technology for in situ chemical reduction (ISCR). The proposed interim measure (IM) was approved by the KDHE in November 2007 (KDHE 2007). Implementation of the pilot test occurred in November-December 2007. The objective was to create highly reducing conditions that would enhance both chemical and biological 11. Wechselwirkung zwischen Embryonenschutzgesetz und Stammzellgesetz - Interdisziplinäre Podiumsdiskussion am 30.11.2007 anlässlich des 2. DVR-Kongresses in Bonn/Bad Godesberg [Interaction between the Embryo Protection Act and the Stem Cell Act - interdisciplinary panel discussion on 30 November 2007 at the 2nd DVR Congress in Bonn/Bad Godesberg] Directory of Open Access Journals (Sweden) Geisthövel F 2008-01-01 Full Text Available The interdisciplinary panel discussion documented here took up the current public debate on the German Stem Cell Act (StZG) and related it to the discussion of the Embryo Protection Act (ESchG), which has so far been conducted mainly among specialists. From the perspective of human embryonic stem cell (hES) research, carrying out such research on the optimal stem cell lines currently available abroad is indispensable for Germany as well, which would mean giving up the cut-off date rule contained in the current StZG. Even though the latest research results on induced pluripotent stem cells represent a decisive breakthrough, raising the possibility that hES research could in the future do without the consumption of embryos, the present hES research path still appears irreplaceable as the standard. The (majority) Opinion A of the (former) German National Ethics Council (NER) therefore provides for a case-by-case review in place of the existing cut-off date rule. Sanctions under the StZG should be shifted from criminal law to the law on administrative offences. Opinion B of the NER (the minority opinion), by contrast, sees the credibility of the ESchG endangered and argues for broad funding of research alternatives to hES research with very good potential. 
If, in the view of reproductive medicine, hES research receives political support from the Bundestag along the lines of Opinion A of the NER, then it would have to be generally accepted that more flexible, individualized, ethically high-quality therapeutic procedures of assisted reproduction (the so-called "German middle way") may also be applied throughout Germany, especially since the normative requirements of the current ESchG would not have to be changed for this. The criminal-law analysis again makes clear that, although the ESchG contains a prohibition on research, for therapeutic measures in reproductive medicine a balanced 12. Ion beam production with sub-milligram samples of material from an ECR source for AMS. Science.gov (United States) Scott, R; Bauder, W; Palchan-Hazan, T; Pardo, R; Vondrasek, R 2016-02-01 Current accelerator mass spectrometry experiments at the Argonne Tandem Linac Accelerator System facility at Argonne National Laboratory push us to improve the ion source performance with a large number of samples and a need to minimize cross contamination. These experiments can require the creation of ion beams from as little as a few micrograms of material. These low concentration samples push the limit of our current efficiency and stability capabilities of the electron cyclotron resonance ion source. A combination of laser ablation and sputtering techniques coupled with a newly modified multi-sample changer has been used to meet this demand. We will discuss performance, stability, and consumption rates as well as planned improvements. 13. Ion beam production with sub-milligram samples of material from an ECR source for AMS Energy Technology Data Exchange (ETDEWEB) Scott, R., E-mail: [email protected]; Palchan-Hazan, T.; Pardo, R.; Vondrasek, R. [Argonne Tandem Linac Accelerator System (ATLAS), Argonne National Laboratory, Lemont, Illinois 60439 (United States); Bauder, W. [Argonne Tandem Linac Accelerator System (ATLAS), Argonne National Laboratory, Lemont, Illinois 60439 (United States); Nuclear Structure Laboratory, University of Notre Dame, Notre Dame, Indiana 46556 (United States) 2016-02-15 Current accelerator mass spectrometry experiments at the Argonne Tandem Linac Accelerator System facility at Argonne National Laboratory push us to improve the ion source performance with a large number of samples and a need to minimize cross contamination. These experiments can require the creation of ion beams from as little as a few micrograms of material. These low concentration samples push the limit of our current efficiency and stability capabilities of the electron cyclotron resonance ion source. A combination of laser ablation and sputtering techniques coupled with a newly modified multi-sample changer has been used to meet this demand. We will discuss performance, stability, and consumption rates as well as planned improvements. 14. Algorithms and computer codes for atomic and molecular quantum scattering theory. Volume I Energy Technology Data Exchange (ETDEWEB) Thomas, L. (ed.) 1979-01-01 The goals of this workshop are to identify which of the existing computer codes for solving the coupled equations of quantum molecular scattering theory perform most efficiently on a variety of test problems, and to make tested versions of those codes available to the chemistry community through the NRCC software library. 
To this end, many of the most active developers and users of these codes have been invited to discuss the methods and to solve a set of test problems using the LBL computers. The first volume of this workshop report is a collection of the manuscripts of the talks that were presented at the first meeting held at the Argonne National Laboratory, Argonne, Illinois June 25-27, 1979. It is hoped that this will serve as an up-to-date reference to the most popular methods with their latest refinements and implementations. 15. Proceedings of the first users meeting for the Advanced Photon Source Energy Technology Data Exchange (ETDEWEB) 1988-02-01 The first national users meeting for the Advanced Photon Source (APS) at Argonne National Laboratory - held November 13-14, 1986, at Argonne - brought together scientists and engineers from industry, universities, and national laboratories to exchange information on the design of the facility and expectations for its use. Presented papers and potential participating research team (PRT) plans are documented in these proceedings. Topics covered include the current status of the project, an overview of the APS conceptual design, scientific opportunities offered by the facility for synchrotron-radiation-related research, current proposals and funding mechanisms for beam lines, and user policies. A number of participants representing universities and private industry discussed plans for the possible formation of PRTs to build and use beam lines at the APS site. The meeting also provided an opportunity for potential users to organize their efforts to support and guide the facility's development. 16. 10th International Workshop on Condensed Matter Theories CERN Document Server Kalia, Rajiv; Bishop, R 1987-01-01 The second volume of Condensed Matter Theories contains the proceedings of the 10th International Workshop held at Argonne National Laboratory, Argonne, IL, U.S.A. during the week of July 21, 1986. The workshop was attended by high-energy, nuclear and condensed-matter physicists as well as materials scientists. This diverse blend of participants was in keeping with the flavor of the previous workshops. This annual series of international workshops was"started in 1977 in Sao Paulo, Brazil. Subsequent'workshops were held in Trieste (Italy), Buenos Aires (Argentina), Caracas (Venezuela), Altenberg (West Germany), Granada (Spain), and San Francisco (U.S.A.). What began as a meeting of the physicists from the Western Hemisphere has expanded in the last three years into an international conference of scientists with diverse interests and backgrounds. This diversity has promoted a healthy exchange of ideas from different branches of physics and also fruitful interactions among the participants. The present volume is... 17. 
Summary of the Proceedings of the Super-Conductivity Technical Exchange Meeting Science.gov (United States) 1980-04-01 Contents: Large Superconductive Magnets; Superconductivity Activities at LASL; Superconductivity Studies at Argonne National Laboratory; CFFF MHD Magnet at Argonne National Laboratory; MHD Superconducting Magnets; Fermilab's Energy Saver; LCP and 12 Tesla Programs at ORNL; Division of Electric Energy System's Superconductivity Program; Development of Standards for Practical Superconductors; Casting of Dendritic Cu-Nb Alloys for Superconducting Wire; Review of Recent Developments of Multifilamentary Nb3Sn by 'in Situ' and Cold Powder Metallurgy Processes; Superconducting Magnet Facility at NRL; Airborne Superconductor Applications; High Pressure Synthesis Program at Benet Weapons Laboratory Watervliet Arsenal; CuCl; Stability and Exciton Population Precursive to Anomalous Diamagnetism; Navy Superconductive Machinery Development Program; and Superconducting Materials Program at NRL. 18. Technical Improvements to an Absorbing Supergel for Radiological Decontamination in Tropical Environments Energy Technology Data Exchange (ETDEWEB) Kaminski, Michael D. [Argonne National Lab. (ANL), Argonne, IL (United States); Mertz, Carol J. [Argonne National Lab. (ANL), Argonne, IL (United States); Kivenas, Nadia [Argonne National Lab. (ANL), Argonne, IL (United States); Demmer, Rick [Idaho National Lab. (INL), Idaho Falls, ID (United States) 2016-01-01 Argonne National Laboratory (Argonne) developed a superabsorbing gel-based process (SuperGel) for the decontamination of cesium from concrete and other porous building materials. Here, we report on results that tested the gel decontamination technology on specific concrete and ceramic formulations from a coastal city in Southeast Asia, which may differ significantly from some U.S. sources. Results are given for the evaluation of americium and cesium sequestering agents that are commercially available at a reasonable cost; the evaluation of a new SuperGel formulation that combines the decontamination properties of cesium and americium; the variation of the contamination concentration to determine the effects on the decontamination factors with concrete, tile, and brick samples; and pilot-scale testing (0.02–0.09 m2 or 6–12 in. square coupons). 19. Putting the scientist in science education Energy Technology Data Exchange (ETDEWEB) Greene, J.P. [Argonne National Lab., IL (United States) 1994-12-31 A personal account is given of some of the ways scientists could get involved in science education at the local level. Being employed at a National Laboratory such as Argonne presents a myriad of opportunities and programs involving the educational community. There have been, basically, three areas of involvement at present: through our Division of Educational Programs (DEP); through initiatives presented in conjunction with the Argonne Chapter of Sigma Xi; and through a volunteer effort with the Museum of Science and Industry of Chicago Scientists and School Program. Some descriptions of these efforts will be outlined from a personal perspective, and hopefully a measure of the impact gained by the scientists' involvement in the education process. 20. History of the project as of February 1, 1951 Energy Technology Data Exchange (ETDEWEB) Reed, G.G. Jr. 
1952-01-04 In 1946, it was recommended, by the district engineer for the War Department, United States Engineer Office, that it was desirable to transfer all of the functions of production to Hanford, a production installation; thereby relieving the Argonne National Laboratory, a research installation, from production duties. This decision was based on the belief by Argonne National Laboratories that the principal problems of production were solved, as a result of a meeting held at Clinton Laboratories, October 25, 1946, during which, Dr. T.S. Chapman discussed with Major F.A. Valente the possibility of Hanford assuming full production responsibility for the product extracted from the irradiation of Special Request. This responsibility was to include the procurement of lithium fluoride, the preparation and canning of the pellets, the irradiation of the slugs, the extraction of the product and its subsequent shipment to the consumer. This report details historical aspects of this program and the P-10 Project. 1. Evaluation of computer-aided software engineering tools for data base development Energy Technology Data Exchange (ETDEWEB) Woyna, M.A.; Carlson, C.R. 1989-02-01 More than 80 computer-aided software engineering (CASE) tools were evaluated to determine their usefulness in data base development projects. The goal was to review the current state of the CASE industry and recommend one or more tools for inclusion in the uniform development environment (UDE), a programming environment being designed by Argonne National Laboratory for the US Department of Defense Organization of the Joint Chiefs of Staff, J-8 Directorate. This environment gives a computer programmer a consistent user interface and access to a full suite of tools and utilities for software development. In an effort to identify tools that would be useful in the planning, analysis, design, implementation, and maintenance of Argonne's data base development projects for the J-8 Directorate, we evaluated 83 commercially available CASE products. This report outlines the method used and presents the results of the evaluation. 2. Photodisintegration of light nuclei for testing a correlated realistic interaction in the continuum Science.gov (United States) Bacca, Sonia 2007-04-01 An exact calculation of the photodisintegration cross section of H3, He3, and He4 is performed by using as interaction the correlated Argonne V18 potential, constructed within the unitary correlation operator method (VUCOM). Calculations are carried out by using the Lorentz integral transform method in conjunction with a hyperspherical harmonics basis expansion. A comparison with other realistic potentials and with available experimental data is discussed. The VUCOM potential leads to a description of the cross section that is very similar to that of the Argonne V18 interaction with the inclusion of the Urbana IX three-body force for photon energies 45⩽ω⩽120 MeV, whereas larger differences are found close to threshold. 3. Well-to-wheels analysis of energy use and greenhouse gas emissions of plug-in hybrid electric vehicles. Energy Technology Data Exchange (ETDEWEB) Elgowainy, A.; Han, J.; Poch, L.; Wang, M.; Vyas, A.; Mahalik, M.; Rousseau, A. 2010-06-14 Plug-in hybrid electric vehicles (PHEVs) are being developed for mass production by the automotive industry. 
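An aside on the Lorentz integral transform named in the photodisintegration entry (item 2) above: in its standard form, quoted here from the general literature on the method rather than from that particular paper, the transform of a response function R(ω) with parameters σ_R and σ_I is

$$ L(\sigma_R,\sigma_I)=\int_0^{\infty}\frac{R(\omega)\,\mathrm{d}\omega}{(\omega-\sigma_R)^2+\sigma_I^2}. $$

Because the kernel is smooth and localized, L can be computed with bound-state techniques such as a hyperspherical harmonics expansion and then numerically inverted to recover the cross section, so no explicit continuum wave functions are needed.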
PHEVs have been touted for their potential to reduce the US transportation sector's dependence on petroleum and cut greenhouse gas (GHG) emissions by (1) using off-peak excess electric generation capacity and (2) increasing vehicles energy efficiency. A well-to-wheels (WTW) analysis - which examines energy use and emissions from primary energy source through vehicle operation - can help researchers better understand the impact of the upstream mix of electricity generation technologies for PHEV recharging, as well as the powertrain technology and fuel sources for PHEVs. For the WTW analysis, Argonne National Laboratory researchers used the Greenhouse gases, Regulated Emissions, and Energy use in Transportation (GREET) model developed by Argonne to compare the WTW energy use and GHG emissions associated with various transportation technologies to those associated with PHEVs. Argonne researchers estimated the fuel economy and electricity use of PHEVs and alternative fuel/vehicle systems by using the Powertrain System Analysis Toolkit (PSAT) model. They examined two PHEV designs: the power-split configuration and the series configuration. The first is a parallel hybrid configuration in which the engine and the electric motor are connected to a single mechanical transmission that incorporates a power-split device that allows for parallel power paths - mechanical and electrical - from the engine to the wheels, allowing the engine and the electric motor to share the power during acceleration. In the second configuration, the engine powers a generator, which charges a battery that is used by the electric motor to propel the vehicle; thus, the engine never directly powers the vehicle's transmission. The power-split configuration was adopted for PHEVs with a 10- and 20-mile electric range because they require frequent use of the engine for acceleration and 4. A novel method for the absolute fluorescence yield measurement by AIRFLY CERN Document Server Ave, M 2008-01-01 One of the goals of the AIRFLY (AIR FLuorescence Yield) experiment is to measure the absolute fluorescence yield induced by electrons in air to better than 10% precision. We introduce a new technique for measurement of the absolute fluorescence yield of the 337 nm line that has the advantage of reducing the systematic uncertainty due to the detector calibration. The principle is to compare the measured fluorescence yield to a well known process - the Cerenkov emission. Preliminary measurements taken in the BFT (Beam Test Facility) in Frascati, Italy with 350 MeV electrons are presented. Beam tests in the Argonne Wakefield Accelerator at the Argonne National Laboratory, USA with 14 MeV electrons have also shown that this technique can be applied at lower energies. 5. Proceedings of the second users meeting for the Advanced Photon Source Energy Technology Data Exchange (ETDEWEB) 1988-11-01 The second national users meeting for the Advanced Photon Source (APS) at Argonne National Laboratory -- held March 9--10, 1988, at Argonne -- brought scientists and engineers from industry, universities, and national laboratories together to review the status of the facility and expectations for its use. 
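The well-to-wheels entry above describes combining upstream electricity-generation emissions with vehicle electricity and fuel use. As a purely conceptual sketch of how such terms might be combined (this is not the GREET or PSAT methodology, and the function name and every number below are invented placeholders), consider:

```python
# Illustrative well-to-wheels (WTW) greenhouse-gas estimate for a grid-charged PHEV.
# NOT the GREET or PSAT model; all values are hypothetical placeholders used only to
# show how an upstream (well-to-pump) term and a vehicle (pump-to-wheels) term combine.

def wtw_ghg_per_mile(elec_share,             # fraction of miles driven on grid electricity
                     kwh_per_mile,           # vehicle electricity use in charge-depleting mode
                     grid_g_co2_per_kwh,     # upstream emissions of the charging electricity mix
                     charger_efficiency,     # wall-to-battery charging efficiency
                     miles_per_gallon,       # fuel economy in charge-sustaining (engine) mode
                     gasoline_g_co2_per_gal):  # well-to-wheels emissions per gallon of gasoline
    """Blend the electric and gasoline pathways by the electric driving share."""
    electric_term = kwh_per_mile / charger_efficiency * grid_g_co2_per_kwh
    gasoline_term = gasoline_g_co2_per_gal / miles_per_gallon
    return elec_share * electric_term + (1.0 - elec_share) * gasoline_term

if __name__ == "__main__":
    # Hypothetical inputs for a 10-mile-range power-split PHEV charged on an average grid mix.
    estimate = wtw_ghg_per_mile(elec_share=0.4,
                                kwh_per_mile=0.30,
                                grid_g_co2_per_kwh=600.0,
                                charger_efficiency=0.88,
                                miles_per_gallon=45.0,
                                gasoline_g_co2_per_gal=11000.0)
    print(f"Illustrative WTW estimate: {estimate:.0f} g CO2-eq per mile")
```

A real WTW analysis would, among many other things, resolve the generation mix by region and time of charging and distinguish the power-split and series powertrains; the sketch only illustrates how the electric and gasoline pathways are blended by the share of electric driving.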
Presented papers and status reports in these proceedings include the current status of the APS with respect to accelerator systems, experimental facilities, and conventional facilities; scientific papers on frontiers in synchrotron applications; summaries of reports on workshops held by users in certain topical groups; reports on research and development activities in support of the APS at other synchrotron facilities; and notes from a discussion of APS user access policy. In addition, actions taken by the APS Users Organization and its Executive Committee are documented in this report. 6. TANGR2015 Heidelberg. Second international workshop on tracer applications of noble gas radionuclides in the geosciences Energy Technology Data Exchange (ETDEWEB) NONE 2015-07-01 TANGR2015 is a workshop on the progress in the technique and application of Atom Trap Trace Analysis (ATTA). It is a follow-up to the first TANGR workshop, TANGR2012, which was held at the Argonne National Laboratory, Argonne, USA, in June 2012. It is organized in response to recent technical advances and new applications of Atom Trap Trace Analysis (ATTA), an analytical method for measuring the isotopes {sup 81}Kr, {sup 85}Kr, and {sup 39}Ar. The primary aim of the workshop is to discuss the technical progress of ATTA and thereby enable innovative and timely applications of the noble gas radionuclides to important scientific problems in earth and environmental sciences, e.g. in the fields of groundwater hydrology, glaciology, oceanography, and paleoclimatology. 7. Ice slurry cooling development and field testing Energy Technology Data Exchange (ETDEWEB) Kasza, K.E. [Argonne National Lab., IL (United States); Hietala, J. [Northern States Power Co., Minneapolis, MN (United States); Wendland, R.D. [Electric Power Research Inst., Palo Alto, CA (United States); Collins, F. [USDOE, Washington, DC (United States) 1992-07-01 A new advanced cooling technology collaborative program is underway involving Argonne National Laboratory (ANL), Northern States Power (NSP) and the Electric Power Research Institute (EPRI). The program will conduct field tests of an ice slurry distributed load network cooling concept at a Northern States Power utility service center to further develop and prove the technology and to facilitate technology transfer to the private sector. The program will further develop at Argonne National Laboratory through laboratory research key components of hardware needed in the field testing and develop an engineering data base needed to support the implementation of the technology. This program will sharply focus and culminate research and development funded by both the US Department of Energy and the Electric Power Research Institute on advanced cooling and load management technology over the last several years. 8. Ice slurry cooling development and field testing Energy Technology Data Exchange (ETDEWEB) Kasza, K.E. (Argonne National Lab., IL (United States)); Hietala, J. (Northern States Power Co., Minneapolis, MN (United States)); Wendland, R.D. (Electric Power Research Inst., Palo Alto, CA (United States)); Collins, F. (USDOE, Washington, DC (United States)) 1992-01-01 A new advanced cooling technology collaborative program is underway involving Argonne National Laboratory (ANL), Northern States Power (NSP) and the Electric Power Research Institute (EPRI). 
The program will conduct field tests of an ice slurry distributed load network cooling concept at a Northern States Power utility service center to further develop and prove the technology and to facilitate technology transfer to the private sector. The program will further develop at Argonne National Laboratory through laboratory research key components of hardware needed in the field testing and develop an engineering data base needed to support the implementation of the technology. This program will sharply focus and culminate research and development funded by both the US Department of Energy and the Electric Power Research Institute on advanced cooling and load management technology over the last several years. 9. Long-Term Monitoring of Desert Land and Natural Resources and Application of Remote Sensing Technologies Energy Technology Data Exchange (ETDEWEB) Hamada, Yuki [Argonne National Lab. (ANL), Argonne, IL (United States); Rollins, Katherine E. [Argonne National Lab. (ANL), Argonne, IL (United States) 2016-11-01 Monitoring environmental impacts over large, remote desert regions for long periods of time can be very costly. Remote sensing technologies present a promising monitoring tool because they entail the collection of spatially contiguous data, automated processing, and streamlined data analysis. This report provides a summary of remote sensing products and refinement of remote sensing data interpretation methodologies that were generated as part of the U.S. Department of the Interior Bureau of Land Management Solar Energy Program. In March 2015, a team of researchers from Argonne National Laboratory (Argonne) collected field data of vegetation and surface types from more than 5,000 survey points within the eastern part of the Riverside East Solar Energy Zone (SEZ). Using the field data, remote sensing products that were generated in 2014 using very high spatial resolution (VHSR; 15 cm) multispectral aerial images were validated in order to evaluate potential refinements to the previous methodologies to improve the information extraction accuracy. 10. Liquid metal MHD and heat transfer in a tokamak blanket slotted coolant channel Science.gov (United States) Reed, C. B.; Hua, T. Q.; Black, D. B.; Kirillov, I. R.; Sidorenkov, S. I.; Shapiro, A. M.; Evtushenko, I. A. A liquid metal MHD (Magnetohydrodynamic)/heat transfer test was conducted at the ALEX (Argonne Liquid Metal Experiment) facility of ANL (Argonne National Laboratory), jointly between ANL and NIIEFA (Efremov Institute). The test section was a rectangular slotted channel geometry (meaning the channel has a high aspect ratio, in this case 10:1, and the long side is parallel to the applied magnetic field). Isothermal and heat transfer data were collected. A heat flux of approximately 9 W/sq cm was applied to the top horizontal surface (the long side) of the test section. Hartmann Numbers to 1050 (2 Tesla), interaction parameters to 9 x 10(exp 3), Peclet numbers of 10-200, based on the half-width of the small dimension (7 mm), and velocities of 1-75 cm/sec. were achieved. The working fluid was NaK (sodium potassium eutectic). All four interior walls were bare, 300-series stainless steel, conducting walls. 11. A survey of an air monitoring program Energy Technology Data Exchange (ETDEWEB) Lee, M.B. 
1997-08-01 The objective of this report is to compare personal air sampling data to stationary air sampling data and to bioassay data that was taken during the decontamination and decommissioning of sixty-one plutonium glove boxes at Argonne National Laboratory (ANL) in 1995. An air monitoring program administered at Argonne National Laboratory was assessed by comparing personal air sampler (PAS) data, stationary air sampler (SAS) data, and bioassay data. The study revealed that the PAS and SAS techniques were equivalent when averaged over all employees and all workdays, but the standard deviation was large. Also, large deviations were observed in individual samples. The correlation between individual PAS results and bioassay results was low. Personal air samplers and bioassay monitoring played complementary roles in assessing the workplace and estimating intakes. The PAS technique is adequate for detection and evaluation of contaminated atmospheres, whereas bioassay monitoring is better for determining individual intakes. 12. Annual report of groundwater monitoring at Centralia, Kansas, in 2009. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M. (Environmental Science Division) 2010-10-19 In September 2005, periodic sampling of groundwater was initiated by the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) in the vicinity of a grain storage facility formerly operated by the CCC/USDA at Centralia, Kansas. The sampling at Centralia is being performed on behalf of the CCC/USDA by Argonne National Laboratory, in accord with a monitoring program approved by the Kansas Department of Health and Environment (KDHE). The objective is to monitor levels of carbon tetrachloride contamination identified in the groundwater at Centralia (Argonne 2003, 2004, 2005a). Under the KDHE-approved monitoring plan (Argonne 2005b), the groundwater was sampled twice yearly from September 2005 until September 2007 for analyses for volatile organic compounds (VOCs), as well as measurement of selected geochemical parameters to aid in the evaluation of possible natural contaminant degradation (reductive dechlorination) processes in the subsurface environment. The results from the two-year sampling program demonstrated the presence of carbon tetrachloride contamination at levels exceeding the KDHE Tier 2 risk-based screening level (RBSL) of 5 {micro}g/L for this compound in a localized groundwater plume that has shown little movement. The relative concentrations of chloroform, the primary degradation product of carbon tetrachloride, suggested that some degree of reductive dechlorination or natural biodegradation was taking place in situ at the former CCC/USDA facility on a localized scale. The CCC/USDA subsequently developed an Interim Measure Conceptual Design (Argonne 2007b), proposing a pilot test of the Adventus EHC technology for in situ chemical reduction (ISCR). The proposed interim measure (IM) was approved by the KDHE in November 2007 (KDHE 2007). Implementation of the pilot test occurred in November-December 2007. The objective was to create highly reducing conditions that would enhance both chemical and biological reductive dechlorination 13. Case study of verification, validation, and testing in the Automated Data Processing (ADP) system development life cycle Energy Technology Data Exchange (ETDEWEB) Riemer, C.A. 
1990-05-01 Staff of the Environmental Assessment and Information Sciences Division of Argonne National Laboratory (ANL) studies the role played by the organizational participants in the Department of Veterans Affairs (VA) that conduct verification, validation, and testing (VV&T) activities at various stages in the automated data processing (ADP) system development life cycle (SDLC). A case-study methodology was used to assess the effectiveness of VV&T activities (tasks) and products (inputs and outputs). The case selected for the study was a project designed to interface the compensation and pension (C&P) benefits systems with the centralized accounts receivable system (CARS). Argonne developed an organizational SDLC VV&T model and checklists to help collect information from C&P/CARS participants on VV&T procedures and activities, and these were then evaluated against VV&T standards. 14. Study on fatigue analysis for operational load histories Energy Technology Data Exchange (ETDEWEB) Wilhelm, Paul; Rudolph, Juergen [AREVA GmbH, Erlangen (Germany); Steinmann, Paul [Erlangen-Nuernberg Univ. (Germany). Chair of Applied Mechanics 2013-07-01 Some laboratories performed fatigue tests in dissolved oxygen water at elevated temperature to better understand the influence of a long hold-time within cyclic loading. Also, the combined effect of complex waveform and surface finish was examined. The data show a less severe influence compared to the prediction model from Argonne National Laboratory; an increase in fatigue life was noticed and attributed to different effects. To evaluate an operational load history with these experimental data, an algorithm is developed that finds hold-times and the examined complex waveform in a stress-time series. All those cycles that are either geometrically comparable to the complex loading signal or contain a hold period are evaluated with the test results and not with the formula from Argonne National Laboratory. The reduction of the cumulative usage factor is calculated. Based on this discussion a realistic test condition is derived for further research activities. 15. Development of Solvent Extraction Approach to Recycle Enriched Molybdenum Material Energy Technology Data Exchange (ETDEWEB) Tkac, Peter [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Brown, M. Alex [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Sen, Sujat [Argonne National Lab. (ANL), Argonne, IL (United States). Energy Systems Division; Bowers, Delbert L. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Wardle, Kent [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Copple, Jacqueline M. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Pupek, Krzysztof Z. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Dzwiniel, Trevor L. [Argonne National Lab. (ANL), Argonne, IL (United States). Energy Systems Division; Pereira, Candido [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Krumdick, Gregory K. [Argonne National Lab. (ANL), Argonne, IL (United States). Energy Systems Division; Vandegrift, George F. [Argonne National Lab. (ANL), Argonne, IL (United States). 
Nuclear Engineering Division 2016-06-01 Argonne National Laboratory, in cooperation with Oak Ridge National Laboratory and NorthStar Medical Technologies, LLC, is developing a recycling process for a solution containing valuable Mo-100 or Mo-98 enriched material. Previously, Argonne had developed a recycle process using a precipitation technique. However, this process is labor intensive and can lead to production of large volumes of highly corrosive waste. This report discusses an alternative process to recover enriched Mo in the form of ammonium heptamolybdate by using solvent extraction. Small-scale experiments determined the optimal conditions for effective extraction of high Mo concentrations. Methods were developed for removal of ammonium chloride from the molybdenum product of the solvent extraction process. In large-scale experiments, very good purification from potassium and other elements was observed with very high recovery yields (~98%). 16. Remedial investigation report for J-Field, Aberdeen Proving Ground, Maryland. Volume 3: Ecological risk assessment Energy Technology Data Exchange (ETDEWEB) Hlohowskyj, I.; Hayse, J.; Kuperman, R.; Van Lonkhuyzen, R. 2000-02-25 The Environmental Management Division of the U.S. Army Aberdeen Proving Ground (APG), Maryland, is conducting a remedial investigation (RI) and feasibility study (FS) of the J-Field area at APG, pursuant to the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), as amended. As part of that activity, Argonne National Laboratory (ANL) conducted an ecological risk assessment (ERA) of the J-Field site. This report presents the results of that assessment. 17. List of ERDA radioisotope (customers with summary of radioisotope shipments FY 1975 Energy Technology Data Exchange (ETDEWEB) Simmons, J.L.; Gano, S.R. (comp.) 1976-01-01 The twelfth edition of the ERDA radioisotope customer list has been prepared at the request of the Division of Biomedical and Environmental Research. The purpose of this document is to list the FY 1975 commercial radioisotope production and distribution activities of USERDA facilities at Argonne National Laboratory, Battelle, Pacific Northwest Laboratories, Brookhaven National Laboratory, United Nuclear Inc., Idaho Operations Office, Hanford Engineering Development Laboratory, Mound Laboratory, Oak Ridge National Laboratory, and Savannah River Plant. (TFD) 18. Analytical chemistry laboratory. Progress report for FY 1997 Energy Technology Data Exchange (ETDEWEB) Green, D.W.; Boparai, A.S.; Bowers, D.L. [and others 1997-12-01 The purpose of this report is to summarize the activities of the Analytical Chemistry Laboratory (ACL) at Argonne National Laboratory (ANL) for Fiscal Year (FY) 1997 (October 1996 through September 1997). This annual progress report is the fourteenth in this series for the ACL, and it describes continuing effort on projects, work on new projects, and contributions of the ACL staff to various programs at ANL. 19. 
Fusion Power Program biannual progress report, April-September 1979 Energy Technology Data Exchange (ETDEWEB) 1980-02-01 This biannual report summarizes the Argonne National Laboratory work performed for the Office of Fusion Energy during the April-September 1979 period in the following research and development areas: materials; energy storage and transfer; tritium containment, recovery and control; advanced reactor design; atomic data; reactor safety; fusion-fission hybrid systems; alternate applications of fusion energy; and other work related to fusion power. Separate abstracts were prepared for three sections. (MOW) 20. Dynamics and Controls in Maglev Systems Science.gov (United States) 1992-09-01 and Alscher, H. 1986. "The Magnetic Train Transrapid 06," Proc. Int. Conf. Maglev and Linear Drives, May 14-16, 1986, Vancouver, B.C., Canada, Publ. by...AD-A263 087 ANL-92/43 Materials and Components Technology Division...Argonne, Illinois 60439 Distribution Category: All Transportation Systems Reports (UC-330) Dynamics and Controls in Maglev Systems by Y. Cai and S. S 1. Van de Graaff Irradiation of Materials Energy Technology Data Exchange (ETDEWEB) Quigley, Kevin [Argonne National Lab. (ANL), Argonne, IL (United States); Chemerisov, Sergey [Argonne National Lab. (ANL), Argonne, IL (United States); Tkac, Peter [Argonne National Lab. (ANL), Argonne, IL (United States); Vandegrift, George F. [Argonne National Lab. (ANL), Argonne, IL (United States) 2016-10-01 Through irradiations using our 3 MeV Van de Graaff accelerator, Argonne is testing the radiation stability of components of equipment that are being used to dispense molybdenum solutions for use as feeds to 99mTc generators and in the 99mTc generators themselves. Components have been irradiated by both a direct electron beam and photons generated from a tungsten convertor. 2. Tapered undulators for SASE FELs CERN Document Server Fawley, W M; Vinokurov, N A 2002-01-01 We discuss the use of tapered undulators to enhance the performance of free-electron lasers (FELs) based upon self-amplified spontaneous emission, where the radiation tends to have a relatively broad bandwidth and limited temporal coherence. Using the polychromatic FEL simulation code GINGER, we numerically demonstrate the effectiveness of tapered undulators for parameters corresponding to the Argonne low-energy undulator test line FEL and the proposed linac coherent light source. 3. Interim progress report on safety and licensing strategy support for the ABR prototype. Energy Technology Data Exchange (ETDEWEB) Cahalan, J.E.; Nuclear Engineering Division 2007-06-26 Argonne National Laboratory is providing support to the U.S. Department of Energy in the Global Nuclear Energy Partnership (GNEP) in certification of an advanced, sodium-cooled fast reactor. The reactor is to be constructed as a prototype for future commercial power reactors that will produce electricity while consuming actinides recovered from light water reactor spent fuel. This prototype reactor has been called the ABR, or Advanced Burner Reactor. 4. Literature on fabrication of tungsten for application in pyrochemical processing of spent nuclear fuels Energy Technology Data Exchange (ETDEWEB) Edstrom, C.M.; Phillips, A.G.; Johnson, L.D.; Corle, R.R. 
1980-10-11 The pyrochemical processing of nuclear fuels requires crucibles, stirrers, and transfer tubing that will withstand the temperature and the chemical attack from molten salts and metals used in the process. This report summarizes the literature that pertains to fabrication; fabrication (joining, chemical vapor deposition, plasma spraying, forming, and spinning) is the main theme. This report also summarizes a sampling of literature on molybdenum and the work previously performed at Argonne National Laboratory on other container materials used for pyrochemical processing of spent nuclear fuels. 5. Ultrasonic techniques for process monitoring and control. Energy Technology Data Exchange (ETDEWEB) Chien, H.-T. 1999-03-24 Ultrasonic techniques have been applied successfully to process monitoring and control for many industries, such as energy, medical, textile, oil, and material. It helps those industries with quality control, improving energy efficiency, reducing waste, and saving costs. This paper presents four ultrasonic systems developed at Argonne National Laboratory in the past five years for various applications: an ultrasonic viscometer; an on-loom, real-time ultrasonic imaging system; an ultrasonic leak detection system; and an ultrasonic solid concentration monitoring system. 6. ARO Research Instrumentation Program - IR Spectrometer Procurement Science.gov (United States) 2015-11-01 Transport in Mesoporous Semiconducting Thin Film Electrodes", Argonne National Laboratory 24th Annual Undergraduate Symposium, 7 November, 2014... hysteresis that involves a system of associated cations and anions that cannot move in time with the imposed waveform. We believe this to be an...cycle. Data indicate hysteresis in that the anion concentration at the electrode appears to increase as cycles are repeated. This is 7. Technical basis in support of the conversion of the University of Missouri Research Reactor (MURR) core from highly-enriched to low-enriched uranium - core neutron physics Energy Technology Data Exchange (ETDEWEB) Stillman, J. [Argonne National Lab. (ANL), Argonne, IL (United States); Feldman, E. [Univ. of Missouri, Columbia, MO (United States). Columbia Research Reactor; Foyto, L. [Univ. of Missouri, Columbia, MO (United States). Columbia Research Reactor; Kutikkad, K. [Univ. of Missouri, Columbia, MO (United States). Columbia Research Reactor; McKibben, J.C. [Univ. of Missouri, Columbia, MO (United States). Columbia Research Reactor; Peters, N. [Univ. of Missouri, Columbia, MO (United States). Columbia Research Reactor; Stevens, J. [Argonne National Lab. (ANL), Argonne, IL (United States) 2012-09-01 This report contains the results of reactor design and performance for conversion of the University of Missouri Research Reactor (MURR) from the use of highly-enriched uranium (HEU) fuel to the use of low-enriched uranium (LEU) fuel. The analyses were performed by staff members of the Global Threat Reduction Initiative (GTRI) Reactor Conversion Program at the Argonne National Laboratory (ANL) and the MURR Facility. The core conversion to LEU is being performed with financial support of the U. S. government. 8. 
Electron Accelerator Shielding Design of KIPT Neutron Source Facility OpenAIRE Zhaopeng Zhong; Yousry Gohar 2016-01-01 The Argonne National Laboratory of the United States and the Kharkov Institute of Physics and Technology of the Ukraine have been collaborating on the design, development and construction of a neutron source facility at Kharkov Institute of Physics and Technology utilizing an electron-accelerator-driven subcritical assembly. The electron beam power is 100 kW using 100-MeV electrons. The facility was designed to perform basic and applied nuclear research, produce medical isotopes, and train nu... 9. Recent contributions to fusion reactor design and technology development Energy Technology Data Exchange (ETDEWEB) 1979-11-01 The report contains a collection of 16 recent fusion technology papers on the STARFIRE Project, the study of alternate fusion fuel cycles, a maintainability study, magnet safety, neutral beam power supplies and pulsed superconducting magnets and energy transfer. This collection of papers contains contributions for Argonne National Laboratory, McDonnell Douglas Astronautics Company, General Atomic Company, The Ralph M. Parsons Company, the University of Illinois, and the University of Wisconsin. Separate abstracts are presented for each paper. (MOW) 10. Sharp Interface Algorithm for Large Density Ratio Incompressible Multiphase Magnetohydrodynamic Flows Science.gov (United States) 2013-01-01 Incompressible MHD solver for Arbitrary Geome- tries) is developed to model the flow of liquid metal with free surfaces in the presence of strong multi...24] C. B. Reed S. Molokov. Review of free-surface mhd experiments and modeling . Technical Report ANL/TD/TM99-08, Argonne National Laboratory, 1999...and the corresponding paralleled implementation for the study of magnetohydrodynamics ( MHD ) of large density ratio, three-dimensional multiphase flows 11. Fluidized-bed combustion process evaluation and program support. Quarterly report, January-March 1980 Energy Technology Data Exchange (ETDEWEB) Johnson, I.; Podolski, W.F.; Swift, W.M.; Henry, R.F.; Hanway, J.E.; Griggs, K.E.; Herzenberg, C.; Helt, J.E.; Carls, E.L. 1980-12-01 Argonne National Laboratory is undertaking several tasks primarily in support of the pressurized fluidized-bed combustion project management team at Morgantown Energy Technology Center. Work is under way to provide fluidized-bed combustion process evaluation and program support to METC, determination of the state of the art of instrumentation for FBC applications, evaluation of the performance capability of cyclones for hot-gas cleaning in PFBC systems, and an initial assessment of methods for the measurement of sodium sulfate dew point. 12. Effects of Nonlocal One-Pion-Exchange Potential in Deuteron OpenAIRE Forest, J. L. 1999-01-01 The off-shell aspects of the one-pion-exchange potential (OPEP) are discussed. Relativistic Hamiltonians containing relativistic kinetic energy, relativistic OPEP with various off-shell behaviors and Argonne $v_{18}$ short-range parameterization are used to study the deuteron properties. The OPEP off-shell behaviors depend on whether a pseudovector or pseudoscalar pion-nucleon coupling is used and are characterized by a parameter $\\mu$. We study potentials having $\\mu$=-1, 0 and +1 and we fin... 13. 
Ground state of medium-heavy doubly-closed shell nuclei in correlated basis function theory CERN Document Server Bisconti, C; Có, G; Fabrocini, A 2006-01-01 The correlated basis function theory is applied to the study of medium-heavy doubly closed shell nuclei with different wave functions for protons and neutrons and in the jj coupling scheme. State dependent correlations including tensor correlations are used. Realistic two-body interactions of Argonne and Urbana type, together with three-body interactions have been used to calculate ground state energies and density distributions of the 12C, 16O, 40Ca, 48Ca and 208Pb nuclei. 14. Data sonification and sound visualization OpenAIRE 2000-01-01 This article describes a collaborative project between researchers in the Mathematics and Computer Science Division at Argonne National Laboratory and the Computer Music Project of the University of Illinois at Urbana-Champaign. The project focuses on the use of sound for the exploration and analysis of complex data sets in scientific computing. The article addresses digital sound synthesis in the context of DIASS (Digital Instrument for Additive Sound Synthesis) and sound visualization in a ... 15. Analyzing power in the reaction pp → dπ+ for beam momenta from 1.17 to 1.96 GeV/c Energy Technology Data Exchange (ETDEWEB) Corcoran, M.D.; Calkin, M.M.; Hoftiezer, J.H.; Mutchler, G.S. (Rice Univ., Houston, TX (USA). Bonner Nuclear Labs.); Arenton, M.W.; Ayres, D.S.; Diebold, R.; May, E.N.; Nodulman, L.; Sauer, J.R. 1983-01-13 The analyzing power A_y0 in the reaction p↑p → dπ+ has been measured using the polarized proton beam at Argonne National Laboratory's zero gradient synchrotron. Data were taken at beam momenta of 1.17, 1.47, 1.70, and 1.96 GeV/c and for pion center of mass angles from 8° to 163°. 16. Ground state of 16O Science.gov (United States) Pieper, Steven C.; Wiringa, R. B.; Pandharipande, V. R. 1990-01-01 A variational method is used to study the ground state of 16O. Expectation values are computed with a cluster expansion for the noncentral correlations in the wave function; the central correlations and exchanges are treated to all orders by Monte Carlo integration. The expansion has good convergence. Results are reported for the Argonne v14 two-nucleon and Urbana VII three-nucleon potentials. 17. Pressure Effects on the Relaxation of an Excited Nitromethane Molecule in an Argon Bath Science.gov (United States) 2015-01-05 Thompson1 1Department of Chemistry, University of Missouri-Columbia, Columbia, Missouri 65211-7600, USA 2Argonne National Laboratory, Chemical...respectively. The parameters for the nitromethane-Ar interactions were obtained using the combination rules A_αβ = √(A_αα A_ββ), B_αβ = (B_αα + B_ββ)/2, C_αβ...possible change is the development of Ar clusters. If the clusters are numerous enough, and if the vibrational relaxation efficacy of a nitromethane 18. MURMoT: Design and Application of Microbial Uranium Reduction Monitoring Tools Energy Technology Data Exchange (ETDEWEB) Pennell, Kurt [Tufts Univ., Medford, MA (United States) 2014-12-31 The overarching project goal of the MURMoT project was the design of tools to elucidate the presence, abundance, dynamics, spatial distribution, and activity of metal- and radionuclide-transforming bacteria. 
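The nitromethane-argon entry above quotes combination rules for the unlike-pair interaction parameters (a geometric mean for A, an arithmetic mean for B; the rule for C is cut off in the snippet). A minimal sketch of applying such rules follows; the parameter values are invented placeholders, and treating C with a geometric mean is an assumption made here for illustration, not something stated in that paper.

```python
import math

# Combination rules as quoted in the nitromethane-Ar entry above:
#   A_ab = sqrt(A_aa * A_bb),  B_ab = (B_aa + B_bb) / 2.
# Combining C geometrically is an illustrative assumption only.

def combine(p_aa, p_bb):
    """Return mixed-pair (A, B, C) parameters from two homonuclear parameter sets."""
    a = math.sqrt(p_aa["A"] * p_bb["A"])   # geometric mean
    b = 0.5 * (p_aa["B"] + p_bb["B"])      # arithmetic mean
    c = math.sqrt(p_aa["C"] * p_bb["C"])   # assumed geometric mean
    return {"A": a, "B": b, "C": c}

# Hypothetical placeholder parameters (arbitrary units), not values from the paper.
site_x = {"A": 1.2e5, "B": 3.6, "C": 450.0}
argon = {"A": 8.0e4, "B": 3.2, "C": 380.0}
print(combine(site_x, argon))
```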
To accomplish these objectives, an integrated approach that combined nucleic acid-based tools, proteomic workflows, uranium isotope measurements, and U(IV) speciation and structure analyses using the Advanced Photon Source (APS) at Argonne National Laboratory was developed. 19. Proceedings of the 1981 symposium on instrumentation and control for fossil-energy processes Energy Technology Data Exchange (ETDEWEB) 1982-01-01 The 1981 symposium on instrumentation and control for fossil-energy processes was held June 8-10, 1981, at the Sheraton-Palace Hotel, San Francisco, California. It was sponsored by the US Department of Energy; Office of Fossil Energy; Argonne National Laboratory; and the Society for Control and Instrumentation of Energy Processes. Sixty-seven articles from the proceedings have been entered individually into EDB and ERA; thirteen articles had been entered previously from other sources. (LTN) 20. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000 Energy Technology Data Exchange (ETDEWEB) Delchini, Marc-Olivier [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division 2016-04-29 This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high-order spectral element CFD code developed at Argonne National Laboratory for high-resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds-averaged Navier-Stokes (URANS) simulations. 1. A Real-Time Nearshore Wave and Current Prediction System Science.gov (United States) 2008-01-01 The MRFA04 Trial provided an opportunity to test (DIAS), developed by the Argonne National Laboratory, and evaluate a beach environmental...this component of the Delft3D system, developed by Delft Hydraulics; the nearshore modeling system was tailored specifically for (http://www.wldelft.nl...and 0.96. study, we performed three hindcasts using the following Scatter indices for all three test cases were consistently meteorological 2. Many-body theory of nuclear and neutron star matter Energy Technology Data Exchange (ETDEWEB) Pandharipande, V.R.; Akmal, A.; Ravenhall, D.G. [Dept. of Physics, Univ. of Illinois at Urbana-Champaign, Urbana, IL (United States) 1998-06-01 We present results obtained for nuclei, nuclear and neutron star matter, and neutron star structure obtained with the recent Argonne v{sub 18} two-nucleon and Urbana IX three-nucleon interactions including relativistic boost corrections. These interactions predict that matter will undergo a transition to a spin layered phase with neutral pion condensation. We also consider the possibility of a transition to quark matter. (orig.) 3. Proceedings of the 1977 symposium on instrumentation and process control for fossil demonstration plants Energy Technology Data Exchange (ETDEWEB) 1977-01-01 The 1977 Symposium on Instrumentation and Process Control for Fossil Demonstration Plants was held at Hyatt Regency O'Hare, Chicago, Illinois, July 13 to 15, 1977. It was sponsored by the Argonne National Laboratory, the U.S. 
Energy Research and Development Administration and the Instrument Society of America (Chicago Section). Seventeen papers from the proceedings were entered individually into EDB and ERA (three papers were entered previously). (LTN) 4. Southern Great Plains Safety Orientation Energy Technology Data Exchange (ETDEWEB) Schatz, John 2014-05-01 Welcome to the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) site. This U.S. Department of Energy (DOE) site is managed by Argonne National Laboratory (ANL). It is very important that all visitors comply with all DOE and ANL safety requirements, as well as those of the Occupational Safety and Health Administration (OSHA), the National Fire Protection Association, and the U.S. Environmental Protection Agency, and with other requirements as applicable. 5. Dakota Uncertainty Quantification Methods Applied to the CFD code Nek5000 Energy Technology Data Exchange (ETDEWEB) Delchini, Marc-Olivier [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Popov, Emilian L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division; Pointer, William David [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Reactor and Nuclear Systems Division 2016-04-29 This report presents the state of advancement of a Nuclear Energy Advanced Modeling and Simulation (NEAMS) project to characterize the uncertainty of the computational fluid dynamics (CFD) code Nek5000 using the Dakota package for flows encountered in the nuclear engineering industry. Nek5000 is a high order spectral element CFD code developed at Argonne National Laboratory for high resolution spectral-filtered large eddy simulations (LESs) and unsteady Reynolds averaged Navier-Stokes (URANS) simulations. 6. Loglines. September-October 2013 Science.gov (United States) 2013-10-01 into gasoline, diesel or jet fuel. After being cooled and distilled in the final module, the resulting liquid fuel is complete in a usable state...a compressor and liquid fuel distillation subsystem that cools the liquid fuel, making it usable. "We're not producing a whole lot of fuel, not the...discharge solar power for satellites. SEPT. 26, 1918: The first day of the Battle of the Argonne, the last major battle of World War I. 57 PERCENT: The 7. Hot neutron matter from a Self-Consistent Green's Functions approach CERN Document Server Rios, A; Vidaña, I 2008-01-01 A systematic study of the microscopic and thermodynamical properties of pure neutron matter at finite temperature within the Self-Consistent Green's Function approach is performed. The model dependence of these results is analyzed by both comparing the results obtained with two different microscopic interactions, the CD-BONN and the Argonne V18 potentials, and by analyzing the results obtained with other approaches, such as the Brueckner-Hartree-Fock approximation, the variational approach and the virial expansion. 8. HGMF of 10-L solutions Energy Technology Data Exchange (ETDEWEB) Larkin, K.A. 1994-08-14 This test plan describes the activities associated with the High Gradient Magnetic Filtration (HGMF) of plutonium-bearing solutions (10-L). The 10-L solutions were received from Argonne National Laboratories in 1972, are highly acidic, and are considered unstable. The purpose of the testing is to show that HGMF is an applicable method of removing plutonium precipitates from solution. 
The plutonium then can be stored safely in a solid form. 9. Stabilization of Monodomain Polarization in Ultrathin PbTiO3 Films Science.gov (United States) 2016-06-13 Stephenson,1 Carol Thompson,3 D. M. Kim,4 K. J. Choi,4 C. B. Eom,4 I. Grinberg,2 and A. M. Rappe2 1Materials Science Division, Argonne National...Pennsylvania 19104, USA 3Department of Physics, Northern Illinois University, DeKalb, Illinois 60115, USA 4Department of Materials Science and Engineering...surface. The polarization direction is predicted to depend on the chemi- cal nature of the adsorbate. The conducting substrates were epitaxial SrRuO3 10. Practical superconductor development for electrical power applications quarterly report for the period ending September 30, 2001 Energy Technology Data Exchange (ETDEWEB) NONE 2001-11-09 This is a multiyear experimental research program that focuses on improving relevant material properties of high-T{sub c} superconductors (HTSs) and developing fabrication methods that can be transferred to industry for production of commercial conductors. A key element of this Argonne National Laboratory (ANL) program is the development of teaming relationships with industrial partners in the areas of conductor development and prototype electric power system product demonstration. 11. Enhancements to Demilitarization Process Maps Program (ProMap) Science.gov (United States) 2016-10-14 request) 3. Supplemental Information 3.1 Technical Achievements – Summary for Annual Report (includes all work todate ) TASK 1 - Completion and...requests data from MIDAS. A test system was set up and demonstrated in ProMap. It starts by the user requesting the part number, as shown in Fig. 5...based on Part Data from MIDAS (example) This task is complete and can be implemented as soon as Argonne National Labs sets up a web service to 12. Advanced combustion, emission control, health impacts, and fuels merit review and peer evaluation Energy Technology Data Exchange (ETDEWEB) None, None 2006-10-01 This report is a summary and analysis of comments from the Advisory Panel at the FY 2006 DOE National Laboratory Advanced Combustion, Emission Control, Health Impacts, and Fuels Merit Review and Peer Evaluation, held May 15-18, 2006 at Argonne National Laboratory. The work evaluated in this document supports the FreedomCAR and Vehicle Technologies Program. The results of this merit review and peer evaluation are major inputs used by DOE in making its funding decisions for the upcoming fiscal year. 13. Physics Division annual report, April 1, 1993--March 31, 1994 Energy Technology Data Exchange (ETDEWEB) Thayer, K.J. [ed.; Henning, W.F. 1994-08-01 This is the Argonne National Laboratory Physics Division Annual Report for the period April 1, 1993 to March 31, 1994. It summarizes work done in a number of different fields, both on site, and at other facilities. Chapters describe heavy ion nuclear physics research, operation and development of the ATLAS accelerator, medium-energy nuclear physics research, theoretical physics, and atomic and molecular physics research. 14. Instrumentation and control for fossil-energy processes Energy Technology Data Exchange (ETDEWEB) 1982-09-01 The 1982 symposium on instrumentation and control for fossil energy processes was held June 7 through 9, 1982, at Adam's Mark Hotel, Houston, Texas. It was sponsored by the US Department of Energy, Office of Fossil Energy; Argonne National Laboratory; and the Society for Control and Instrumentation of Energy Processes. 
Fifty-two papers have been entered individually into EDB and ERA; eleven papers had been entered previously from other sources. (LTN) 15. Fusion Power Program. Quarterly progress report, October--December 1978 Energy Technology Data Exchange (ETDEWEB) 1979-04-01 This quarterly report summarizes the Argonne National Laboratory work performed for the Office of Fusion Energy during the October--December 1978 quarter in the following research and development areas: materials; energy storage and transfer; tritium containment, recovery and control; advanced reactor design; atomic data; reactor safety; fusion-fission hybrid systems; alternate applications of fusion energy; and other work related to fusion power. Three separate abstracts were prepared for the included sections. (MOW) 16. Purified water quality study Energy Technology Data Exchange (ETDEWEB) Spinka, H.; Jackowski, P. 2000-04-03 Argonne National Laboratory (HEP) is examining the use of purified water for the detection medium in cosmic ray sensors. These sensors are to be deployed in a remote location in Argentina. The purpose of this study is to provide information and preliminary analysis of available water treatment options and associated costs. This information, along with the technical requirements of the sensors, will allow the project team to determine the required water quality to meet the overall project goals. 17. Pulsed spallation Neutron Sources Energy Technology Data Exchange (ETDEWEB) Carpenter, J.M. [Argonne National Lab., IL (United States)] 1994-12-31 This paper reviews the early history of pulsed spallation neutron source development at Argonne and provides an overview of existing sources worldwide. A number of proposals for machines more powerful than those that currently exist are under development; these are briefly described. The author reviews the status of the Intense Pulsed Neutron Source, its instrumentation, and its user program, and provides a few examples of applications in fundamental condensed matter physics, materials science and technology. 18. DAPHNE: a parallel multiprocessor data acquisition system for nuclear physics. [Data Acquisition by Parallel Histogramming and NEtworking] Energy Technology Data Exchange (ETDEWEB) Welch, L.C. 1984-01-01 This paper describes a project to meet the data acquisition needs for a new accelerator, ATLAS, being built at Argonne National Laboratory. ATLAS is a heavy-ion linear superconducting accelerator providing beam energies up to 25 MeV/A with a relative spread in beam energy as good as 0.0001 and a time spread of less than 100 psec. Details about the hardware front end, command language, data structure, and the flow of event treatment are covered. 19. Potential Influenza Effects on Military Populations Science.gov (United States) 2003-12-01 Meuse-Argonne Campaign, 2nd Edition, White Mane Books (Shippensburg, PA), 1998, p. 105. 29 R. Parkinson, Tormented Warrior, Hodder and Stoughton Ltd...was the rule.47 Influenza and influenzal pneumonia cases in the autumn of 1918 seem like gross exaggerations of today’s familiar maladies. Be that...Canada, 1918–1919,” in Medicine in Canadian Society, S. E. D. Shortt (Editor), McGill–Queen’s University Press, 1981, pp. 470–471. Parkinson, R. 20. IEEE Particle Accelerator Conference on Accelerator Science and Technology Held in San Francisco, California on 6-9 May 1991. Volume 5 Science.gov (United States) 1991-05-01 W. Jones, and M. J. Jakobson ... 893 User control of the proton beam injection trajectories into the AGS booster - T. 
D’Ottavio, A. Kponou, A...Baird, Heinz-Dieter Nuhn, Roman Tatchyn, Herman Winick, Alan S. Fisher, Juan C. Gallardo, and Claudio Pellegrini...LIVERMORE CA 94550 9700 S CASS AVENUE USA ARGONNE IL 60439 USA ROMAN J. NAWROCKY BROOKHAVEN NATL LAB WILLIAM E. JR. NEXSEN BLDG. 725B LLNL VALERIJ 1. Verification and validation plan for the SFR system analysis module Energy Technology Data Exchange (ETDEWEB) Hu, R. [Argonne National Lab. (ANL), Argonne, IL (United States)] 2014-12-18 This report documents the Verification and Validation (V&V) Plan for software verification and validation of the SFR System Analysis Module (SAM), developed at Argonne National Laboratory for sodium fast reactor whole-plant transient analysis. SAM is developed under the DOE NEAMS program and is part of the Reactor Product Line toolkit. The SAM code, the phenomena and computational models of interest, the software quality assurance, and the verification and validation requirements and plans are discussed in this report. 2. Mass balance and composition analysis of shredder residue. Energy Technology Data Exchange (ETDEWEB) Pomykala, J. A., Jr.; Jody, B. J.; Spangenberger, J. S.; Daniels, E. J.; Energy Systems 2007-01-01 The process of shredding end-of-life vehicles to recover metals results in a byproduct commonly referred to as shredder residue. The four-and-a-half million metric tons of shredder residue produced annually in the United States is presently landfilled. To meet the challenges of automotive materials recycling, the U.S. Department of Energy is supporting research at Argonne National Laboratory in cooperation with the Vehicle Recycling Partnership (VRP) of the United States Council for Automotive Research (USCAR) and the American Plastics Council. This paper presents the results of a study that was conducted by Argonne to determine variations in the composition of shredder residue from different shredders. Over 90 metric tons of shredder residues were processed through the Argonne pilot plant. The contents of the various separated streams were quantitatively analyzed to determine their composition and to identify materials that should be targeted for recovery. The analysis established a reliable mass balance for the different materials in shredder residue. 3. Final work plan : Phase I investigation of potential contamination at the former CCC/USDA grain storage facility in Savannah, Missouri. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2007-10-12 This work will be performed in accord with the Intergovernmental Agreement established between the Farm Service Agency of the USDA and the MoDNR, to address carbon tetrachloride contamination potentially associated with a number of former CCC/USDA grain storage facilities in Missouri. The investigative activities at Savannah will be conducted on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory. Argonne is a nonprofit, multidisciplinary research center operated by UChicago Argonne, LLC, for the U.S. Department of Energy (DOE). The CCC/USDA has entered into an agreement with the DOE, under which Argonne provides technical assistance to the CCC/USDA with environmental site characterization and remediation at its former grain storage facilities. The site characterization at Savannah will take place in phases. 
This approach is recommended by the CCC/USDA and Argonne, so that information obtained and interpretations developed during each incremental stage of the investigation can be used most effectively to guide subsequent phases of the program. This site-specific Work Plan outlines the specific technical objectives and scope of work proposed for Phase I of the Savannah investigation. This Work Plan also includes the community relations plan to be followed throughout the CCC/USDA program at the Savannah site. Argonne is developing a Master Work Plan specific to operations in the state of Missouri. In the meantime, Argonne will issue a Provisional Master Work Plan (PMWP; Argonne 2007) that will be submitted to the MoDNR for review and approval. The agency has already reviewed and approved (with minor changes) the present Master Work Plan (Argonne 2002) under which Argonne currently operates in Kansas. The PMWP (Argonne 2007) will provide detailed information and guidance on the investigative technologies, analytical methodologies, quality assurance-quality control measures, and general health and safety policies to be employed by 4. Summary of PERF air program review - August 22-23, 2007, Annapolis, Maryland. Energy Technology Data Exchange (ETDEWEB) Veil, J. A.; Schmalzer, D. K.; Leath, P. P. 2007-10-24 For many years, the U.S. Department of Energy (DOE) has supported and sponsored various types of environmental research related to the oil and gas industry through its Office of Fossil Energy and its National Energy Technology Laboratory (NETL). In November 2005, Argonne National Laboratory (Argonne) organized and coordinated a review of DOE's water research program in conjunction with the fall 2005 meeting of the Petroleum Environmental Research Forum (PERF). PERF is a nonprofit organization created in 1986 to provide a stimulus and forum for collecting, exchanging, and analyzing research information related to the development of technology for the petroleum industry and also to provide a mechanism for establishing joint research projects in that field. Additional information on PERF can be accessed at http://www.perf.org. The water program review was so successful that both DOE and PERF agreed that a second program review would be useful -- this time on air research and issues. Argonne coordinated the air program review, which was held in Annapolis, Maryland, on August 22 and 23, 2007. This report summarizes the presentations and related discussions that were part of the air program review. The full agenda for the program review is included as Appendix A. 5. Irreversible Wash Aid Additive for Cesium Mitigation. Small-Scale Demonstration and Lessons Learned Energy Technology Data Exchange (ETDEWEB) Kaminski, Michael [Argonne National Lab. (ANL), Argonne, IL (United States) 2015-01-01 The Irreversible Wash Aid Additive process has been under development by the U.S. Environmental Protection Agency (EPA) and Argonne National Laboratory (Argonne). This process for radioactive cesium mitigation consists of a solution to wash down contaminated structures, roadways, and vehicles and a sequestering agent to bind the radionuclides from the wash water and render them environmentally immobile. The purpose of this process is to restore functionality to basic services and immediately reduce the consequences of a radiologically-contaminated urban environment. 
Research and development have resulted in a down-selection of technologies for integration and demonstration at the pilot-scale level as part of the Wide Area Recovery and Resiliency Program (WARRP) under the Department of Homeland Security and the Denver Urban Area Security Initiative. As part of developing the methods for performing a pilot-scale demonstration at the WARRP conference in Denver in 2012, Argonne conducted small-scale field experiments at Separmatic Systems. The main purpose of these experiments was to refine the wash water collection and separations systems and demonstrate key unit operations to help in planning for the large-scale demonstration in Denver. Since the purpose of these tests was to demonstrate the operations of the system, we used no radioactive materials. After a brief set of experiments with the LAKOS unit to familiarize ourselves with its operation, two experiments were completed on two separate dates with the Separmatic systems. 6. Production of degradable polymers from food-waste streams Energy Technology Data Exchange (ETDEWEB) Tsai, S.P.; Coleman, R.D.; Bonsignore, P.V.; Moon, S.H. 1992-01-01 In the United States, billions of pounds of cheese whey permeate and approximately 10 billion pounds of potatoes processed each year are typically discarded or sold as cattle feed at $3{endash}6/ton; moreover, the transportation required for these means of disposal can be expensive. As a potential solution to this economic and environmental problem, Argonne National Laboratory is developing technology that biologically converts existing food-processing waste streams into lactic acid and uses the lactic acid for making environmentally safe, degradable polylactic acid (PLA) and modified PLA plastics and coatings. An Argonne process for biologically converting high-carbohydrate food waste will not only help to solve a waste problem for the food industry, but will also save energy and be economically attractive. Although the initial substrate for Argonne's process development is potato by-product, the process can be adapted to convert other food wastes, as well as corn starch, to lactic acid. Proprietary technology for biologically converting greater than 90% of the starch in potato wastes to glucose has been developed. Glucose and other products of starch hydrolysis are subsequently fermented by bacteria that produce lactic acid. The lactic acid is recovered, concentrated, and further purified to a polymer-grade product. 7. Programming in Fortran M. Revision 1 Energy Technology Data Exchange (ETDEWEB) Foster, I.T.; Olson, R.D.; Tuecke, S.J. 1993-10-01 Fortran M is a small set of extensions to Fortran that supports a modular approach to the construction of sequential and parallel programs. Fortran M programs use channels to plug together processes, which may be written in Fortran M or Fortran 77. Processes communicate by sending and receiving messages on channels. Channels and processes can be created dynamically, but programs remain deterministic unless specialized nondeterministic constructs are used. Fortran M programs can execute on a range of sequential, parallel, and networked computers. This report incorporates both a tutorial introduction to Fortran M and a user's guide for the Fortran M compiler developed at Argonne National Laboratory. The Fortran M compiler, supporting software, and documentation are made available free of charge by Argonne National Laboratory, but are protected by a copyright which places certain restrictions on how they may be redistributed. 
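As a rough illustration of the channel-and-process model this abstract describes, the following is a minimal conceptual sketch in Python rather than Fortran M: multiprocessing pipes stand in for Fortran M channels, and all names in the sketch are illustrative only, not part of the Fortran M language or the Argonne compiler.

    # Conceptual analogue (not Fortran M syntax): two processes plugged
    # together by a one-way channel, communicating by send/receive.
    from multiprocessing import Process, Pipe

    def producer(conn, n):
        # Send a fixed sequence of messages on the channel, then a sentinel.
        for i in range(n):
            conn.send(i * i)
        conn.send(None)
        conn.close()

    def consumer(conn):
        # Receive messages until the sentinel arrives.
        while True:
            item = conn.recv()
            if item is None:
                break
            print("received", item)

    if __name__ == "__main__":
        recv_end, send_end = Pipe(duplex=False)   # the "channel"
        p = Process(target=producer, args=(send_end, 5))
        c = Process(target=consumer, args=(recv_end,))
        p.start(); c.start()
        p.join(); c.join()

Because each message in this sketch has a single sender and a single receiver, the exchange is deterministic in the same spirit as the channel semantics described above.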
See the software for details. The latest version of both the compiler and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/fortran-m at info.mcs.anl.gov. 8. Final work plan for targeted investigation at Hilton, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2007-08-28 This Work Plan outlines the scope of a targeted investigation to update the status of carbon tetrachloride contamination in groundwater associated with grain storage operations at Hilton, Kansas. The Commodity Credit Corporation (CCC), an agency of the U.S. Department of Agriculture (USDA), operated a grain storage facility in Hilton during the 1950s and 1960s. At the time of the CCC/USDA operation in Hilton, grain storage facilities (CCC/USDA and private) were located along both sides of the former Union Pacific railroad tracks (Figure 1.1). The main grain storage structures were on or near the railroad right-of-way. The proposed targeted investigation, to be conducted by Argonne National Laboratory on behalf of the CCC/USDA, will supplement Argonne's Phase I and Phase II investigations in 1996-1997. The earlier investigations erroneously focused on an area east of the railroad property where the CCC/USDA did not operate, specifically on a private grain storage facility. In addition, the investigation was limited in scope, because access to railroad property was denied (Argonne 1997a,b). The hydrogeologic system at Hilton is potentially complex. 9. Laboratory directed research and development Energy Technology Data Exchange (ETDEWEB) 1991-11-15 The purposes of Argonne's Laboratory Directed Research and Development (LDRD) Program are to encourage the development of novel concepts, enhance the Laboratory's R&D capabilities, and further the development of its strategic initiatives. Among the aims of the projects supported by the Program are establishment of engineering "proof-of-principle"; development of an instrumental prototype, method, or system; or discovery in fundamental science. Several of these projects are closely associated with major strategic thrusts of the Laboratory as described in Argonne's Five Year Institutional Plan, although the scientific implications of the achieved results extend well beyond Laboratory plans and objectives. The projects supported by the Program are distributed across the major programmatic areas at Argonne. Areas of emphasis are (1) advanced accelerator and detector technology, (2) x-ray techniques in biological and physical sciences, (3) advanced reactor technology, and (4) materials science, computational science, biological sciences, and environmental sciences. Individual reports summarizing the purpose, approach, and results of projects are presented. 10. Progress report and technical evaluation of the ISCR pilot test conducted at the former CCC/USDA grain storage facility in Centralia, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2009-01-14 In October 2007, the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) presented the document Interim Measure Conceptual Design (Argonne 2007a) to the Kansas Department of Health and Environment, Bureau of Environmental Remediation (KDHE/BER), for a proposed non-emergency Interim Measure (IM) at the site of the former CCC/USDA grain storage facility in Centralia, Kansas (Figure 1.1). 
The IM was recommended to mitigate existing levels of carbon tetrachloride contamination identified in the vadose zone soils beneath the former facility and in the groundwater beneath and in the vicinity of the former facility, as well as to moderate or decrease the potential future concentrations of carbon tetrachloride in the groundwater. The Interim Measure Conceptual Design (Argonne 2007a) was developed in accordance with the KDHE/BER Policy No.BERRS-029, Policy and Scope of Work: Interim Measures (KDHE 1996). The hydrogeologic, geochemical, and contaminant distribution characteristics of the Centralia site, as identified by the CCC/USDA, factored into the development of the nonemergency IM proposal. These characteristics were summarized in the Interim Measure Conceptual Design (Argonne 2007a) and were discussed in detail in previous Argonne reports (Argonne 2002a, 2003, 2004, 2005a,b,c, 2006a,b, 2007b). The identified remedial goals of the proposed IM were as follows: (1) To reduce the existing concentrations of carbon tetrachloride in groundwater in three 'hot spot' areas identified at the site (at SB01, SB05, and SB12-MW02; Figure 1.2) to levels acceptable to the KDHE. (2) To reduce carbon tetrachloride concentrations in the soils near the location of former soil boring SB12 and existing monitoring well MW02 (Figure 1.2) to levels below the KDHE Tier 2 Risk-Based Screening Level (RBSL) of 200 {micro}g/kg for this contaminant. To address these goals, the potential application of an in situ chemical reduction (ISCR) treatment technology 11. Final work plan : indoor air and ambient air sampling near the former CCC/USDA grain storage facility in Everest, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M. (Environmental Science Division) 2010-05-24 The Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) operated a grain storage facility at the western edge of Everest, Kansas, from the early 1950s to the early 1970s. Sampling by the Kansas Department of Health and Environment (KDHE) in 1997 resulted in the detection of carbon tetrachloride in one domestic well (the Nigh well) northwest of the former facility. On behalf of the CCC/USDA, Argonne National Laboratory subsequently conducted a series of investigations to characterize the contamination (Argonne 2003, 2006a,b,c). Automatic, continuous monitoring of groundwater levels began in 2002 and is ongoing at six locations. The results have consistently indicated groundwater flow toward the north-northwest from the former CCC/USDA property to the Nigh property, then west-southwest from the Nigh property to the intermittent creek. Sitewide periodic groundwater and surface water sampling with analysis for volatile organic compounds (VOCs) began in 2008. Argonne's combined data indicate no significant downgradient extension of contamination since 2000. At present, the sampling is annual, as approved by the KDHE (2009) in response to a plan developed for the CCC/USDA (Argonne 2009). This document presents a plan for collecting indoor air samples in homes located along and adjacent to the defined extent of the carbon tetrachloride contamination. The plan was requested by the KDHE. Ambient air samples to represent the conditions along this pathway will also be taken. 
The purpose of the proposed work is to satisfy KDHE requirements and to collect additional data for assessing the risk to human health due to the potential upward migration of carbon tetrachloride and its primary degradation product (chloroform) into homes located in close proximity to the former grain storage facility, as well as along and within 100 ft laterally from the currently defined plume emanating from the former Everest facility. Investigation of the indoor air 12. Semi-annual monitoring report for Barnes, Kansas, for July-December 2009. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2010-04-27 The Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) operated a grain storage facility at Barnes, Kansas, during most of the interval 1949-1974. Carbon tetrachloride contamination was initially detected in 1986 in the town's public water supply wells. In 2006-2007, the CCC/USDA conducted a comprehensive targeted investigation at and near its former property in Barnes to characterize this contamination. Those results were reported previously (Argonne 2008a). In November 2007, the CCC/USDA began quarterly groundwater monitoring at Barnes. The monitoring is being conducted on behalf of the CCC/USDA by Argonne National Laboratory, in accord with the recommendations made in the report for the 2006-2007 targeted investigation (Argonne 2008a). The objective is to monitor the carbon tetrachloride contamination identified in the groundwater at Barnes. The sampling is presently conducted in a network of 28 individual monitoring wells (at 19 distinct locations), 2 public water supply wells, and 1 private well (Figure 1.1). The results of the 2006-2007 targeted investigation and the subsequent monitoring events (Argonne 2008a-d, 2009a,b) demonstrated the presence of carbon tetrachloride contamination in groundwater at levels exceeding the Kansas Department of Health and Environment (KDHE) Tier 2 risk-based screening level (RBSL) of 5.0 {micro}g/L for this compound. The contaminant plume appears to extend from the former CCC/USDA property northwestward, toward the Barnes public water supply wells. Information obtained during the 2006-2007 investigation indicates that at least one other potential source might have contributed to the groundwater contaminant plume (Argonne 2008a). The former agriculture building owned by the local school district, located immediately east of well PWS3, is also a potential source of the contamination. This current report presents the results of the seventh quarterly monitoring event, conducted in September 13. Final work plan : phase I investigation of potential contamination at the former CCC/USDA grain storage facility in Montgomery City, Missouri. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2010-08-16 former grain storage facility, the CCC/USDA will conduct investigations to (1) characterize the source(s), extent, and factors controlling the possible subsurface distribution and movement of carbon tetrachloride at the Montgomery City site and (2) evaluate the health and environmental threats potentially represented by the contamination. This work will be performed in accord with the Intergovernmental Agreement established between the Farm Service Agency of the USDA and the MoDNR, to address carbon tetrachloride contamination potentially associated with a number of former CCC/USDA grain storage facilities in Missouri. 
The investigations at Montgomery City will be conducted on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory. Argonne is a nonprofit, multidisciplinary research center operated by UChicago Argonne, LLC, for the U.S. Department of Energy (DOE). The CCC/USDA has entered into an agreement with DOE, under which Argonne provides technical assistance to the CCC/USDA with environmental site characterization and remediation at its former grain storage facilities. The site characterization at Montgomery City will take place in phases. This approach is recommended by the CCC/USDA and Argonne, so that information obtained and interpretations developed during each incremental stage of the investigation can be used most effectively to guide subsequent phases of the program. This site-specific Work Plan outlines the specific technical objectives and scope of work proposed for Phase I of the Montgomery City investigation. This Work Plan also includes the community relations plan to be followed throughout the CCC/USDA program at the Montgomery City site. Argonne is developing a Master Work Plan specific to operations in the state of Missouri. In the meantime, Argonne has issued a Provisional Master Work Plan (PMWP; Argonne 2007) that has been reviewed and approved by the MoDNR for current use. The PMWP (Argonne 2007) provides 14. Final work plan : supplemental upward vapor intrusion investigation at the former CCC/USDA grain storage facility in Hanover, Kansas. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2008-12-15 The Commodity Credit Corporation (CCC), an agency of the U.S. Department of Agriculture (USDA), operated a grain storage facility at the northeastern edge of the city of Hanover, Kansas, from 1950 until the early 1970s. During this time, commercial grain fumigants containing carbon tetrachloride were in common use by the grain storage industry to preserve grain in their facilities. In February 1998, trace to low levels of carbon tetrachloride (below the maximum contaminant level [MCL] of 5.0 {micro}g/L) were detected in two private wells near the former grain storage facility at Hanover, as part of a statewide USDA private well sampling program that was implemented by the Kansas Department of Health and Environment (KDHE) near former CCC/USDA facilities. In 2007, the CCC/USDA conducted near-surface soil sampling at 61 locations and also sampled indoor air at nine residences on or adjacent to its former Hanover facility to address the residents concerns regarding vapor intrusion. Low levels of carbon tetrachloride were detected at four of the nine homes. The results were submitted to the KDHE in October 2007 (Argonne 2007). On the basis of the results, the KDHE requested sub-slab sampling and/or indoor air sampling (KDHE 2007). This Work Plan describes, in detail, the proposed additional scope of work requested by the KDHE and has been developed as a supplement to the comprehensive site investigation work plan that is pending (Argonne 2008). Indoor air samples collected previously from four homes at Hanover were shown to contain the carbon tetrachloride at low concentrations (Table 2.1). It cannot be concluded from these previous data that the source of the detected carbon tetrachloride is vapor intrusion attributable to former grain storage operations of the CCC/USDA at Hanover. 
The technical objective of the vapor intrusion investigation described here is to assess the risk to human health due to the potential for upward migration of carbon tetrachloride and 15. Emerging technologies and approaches to minimize discharges into Lake Michigan Phase 2, Module 3 report. Energy Technology Data Exchange (ETDEWEB) Negri, M. C.; Gillenwater, P.; Urgun Demirtas, M. (Energy Systems) 2011-05-11 Purdue University Calumet (Purdue) and Argonne National Laboratory (Argonne) have conducted an independent study to identify deployable technologies that could help the BP Whiting Refinery, and other petroleum refineries, meet future wastewater discharge limits. This study has been funded by BP. Each organization tested a subset of the target technologies and retains sole responsibility for its respective test design and implementation, quality assurance and control, test results obtained from each of the technologies, and corresponding conclusions and recommendations. This project was divided in two phases and modules. This report summarizes the work conducted by Argonne in Phase II Module 3 (Bench Scale Testing). Other Modules are discussed elsewhere (Emerging Technologies and Approaches to Minimize Discharges into Lake Michigan, Phase 2, Modules 1-3 Report, April 2011, prepared for BP Americas by the Argonne - Purdue Task Force). The goal of this project was to identify and assess available and emerging wastewater treatment technologies for removing mercury and vanadium from the Whiting Refinery wastewater and to conduct bench-scale tests to provide comparable, transparent, and uniform results across the broad range of technologies tested. After the bench-scale testing phase, a previously developed decision matrix was refined and applied by Argonne to process and review test data to estimate and compare the preliminary performance, engineering configuration, preliminary cost, energy usage, and waste generation of technologies that were shown to be able to remove Hg and/or V to below the target limit at the bench scale. The data were used as the basis to identify the best candidates for further testing at the bench or pilot scale on a slip stream of effluent to lake (ETL) or clarifier effluent (CE) at the Whiting Refinery to determine whether future limits could be met and to generate other pertinent data for scale-up and sustainability evaluation. As a result of 16. Final master work plan : environmental investigations at former CCC/USDA facilities in Kansas, 2002 revision. Energy Technology Data Exchange (ETDEWEB) Burton, J. C.; Environmental Research 2003-01-23 The Commodity Credit Corporation (CCC) of the U.S. Department of Agriculture (USDA) has entered into an interagency agreement with the U.S. Department of Energy (DOE) under which Argonne National Laboratory provides technical assistance for hazardous waste site characterization and remediation for the CCC/USDA. Carbon tetrachloride is the contaminant of primary concern at sites in Kansas where former CCC/USDA grain storage facilities were located. Argonne applies its QuickSite(reg sign) Expedited Site Characterization (ESC) approach to these former facilities. The QuickSite environmental site characterization methodology is Argonne's proprietary implementation of the ESC process (ASTM 1998). Argonne has used this approach at several former CCC/USDA facilities in Kansas, including Agenda, Agra, Everest, and Frankfort. The Argonne ESC approach revolves around a multidisciplinary, team-oriented approach to problem solving. 
The basic features and steps of the QuickSite methodology are as follows: (1) A team of scientists with diverse expertise and strong field experience is required to make the process work. The Argonne team is composed of geologists, geochemists, geophysicists, hydrogeologists, chemists, biologists, engineers, computer scientists, health and safety personnel, and regulatory staff, as well as technical support staff. Most of the staff scientists are at the Ph.D. level; each has on average, more than 15 years of experience. The technical team works together throughout the process. In other words, the team that plans the program also implements the program in the field and writes the reports. More experienced scientists do not remain in the office while individuals with lesser degrees or experience carry out the field work. (2) The technical team reviews, evaluates, and interprets existing data for the site and the contaminants there to determine which data sets are technically valid and can be used in initially designing the field program. A basic 17. Annual report of monitoring at Barnes, Kansas, in 2010. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M. (Environmental Science Division) 2011-05-25 The Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) operated a grain storage facility at Barnes, Kansas, in 1949-1974. Carbon tetrachloride contamination was initially detected in 1986 in the town's public water supply wells. In 2006-2007, the CCC/USDA conducted a comprehensive targeted investigation at and near its former property in Barnes to characterize this contamination. Those results were reported previously (Argonne 2008a). The results of that investigation indicated that carbon tetrachloride contamination is present in groundwater at low to moderate levels in the vicinity of the former CCC/USDA grain storage facility. Information obtained during the 2006-2007 investigation also indicated that at least one other potential source might have contributed to the groundwater contaminant plume (Argonne 2008a). The former agriculture building owned by the local school district, located immediately east of well PWS3, is also a potential source of the contamination. In November 2007, the CCC/USDA began periodic groundwater monitoring at Barnes. The monitoring is being conducted on behalf of the CCC/USDA by Argonne National Laboratory, under the direction of the Kansas Department of Health and Environment (KDHE). The objective is to monitor the carbon tetrachloride contamination identified in the groundwater at Barnes. Through 2010, sampling was conducted in a network of 28 individual monitoring wells (at 19 distinct locations), 2 public water supply wells, and 1 private well (Figure 1.1). The results of the 2006-2007 targeted investigation and the subsequent monitoring events (Argonne 2008a-d, 2009a,b, 2010) demonstrated the presence of carbon tetrachloride contamination in groundwater at levels exceeding the KDHE Tier 2 risk-based screening level (RBSL) of 5.0 {micro}g/L for this compound. The contaminant plume appears to extend from the former CCC/USDA property northwestward, toward the Barnes public water supply wells. Long 18. Final Report: Results of Environmental Site Investigation at Sylvan Grove, Kansas Energy Technology Data Exchange (ETDEWEB) LaFreniere, Lorraine M [Argonne National Lab. (ANL), Argonne, IL (United States) 2014-09-01 Sylvan Grove is located in western Lincoln County, approximately 60 mi west of Salina, Kansas (Figure 1.1). 
From 1954 to 1966, the Commodity Credit Corporation (CCC), an agency of the U.S. Department of Agriculture (USDA), operated a grain storage facility at the northeastern edge of Sylvan Grove. During this time, commercial grain fumigants containing carbon tetrachloride were in common use to preserve grain in storage. In 1998, the Kansas Department of Health and Environment (KDHE) found carbon tetrachloride above the maximum contaminant level (MCL) of 5 μg/L in groundwater from one private well used for livestock and lawn and garden watering. The 1998 KDHE sampling at Sylvan Grove was conducted under the USDA private well sampling program. To determine whether the former CCC/USDA facility at Sylvan Grove is a potential contaminant source and its possible relationship to the contamination in groundwater, the CCC/USDA proposed to conduct an environmental site investigation, in accordance with the Intergovernmental Agreement between the KDHE and the Farm Service Agency (FSA) of the USDA. Argonne National Laboratory, on behalf of the CCC/USDA, developed a work plan (Argonne 2012) for the site investigation and a supplemental work plan for indoor and ambient air sampling (Appendix A). The proposed work was approved by the KDHE (2012a, 2013). The investigations were performed by the Environmental Science Division of Argonne National Laboratory, on behalf of the CCC/USDA. The main activities for the site investigation were conducted in June 2012, and indoor and ambient air sampling was performed in February 2013. This report presents the findings of the investigations at Sylvan Grove. 19. Final Work Plan: Phase I Investigation at Bladen, Nebraska Energy Technology Data Exchange (ETDEWEB) LaFreniere, Lorraine M. [Argonne National Lab. (ANL), Argonne, IL (United States). Environmental Science Division. Applied Geosciences and Environmental Management Section; Yan, Eugene [Argonne National Lab. (ANL), Argonne, IL (United States). Environmental Science Division 2014-07-01 The village of Bladen is a town of population approximately 237 in the northwest part of Webster County, Nebraska, 30 mi southwest of Hastings and 140 mi southwest of Lincoln, Nebraska. In 2000, the fumigant-related compound carbon tetrachloride was detected in public water supply well PWS 68-1, at a trace level. Low-level contamination, below the maximum contamination level (MCL) of 5.0 μg/L, has been detected intermittently in well PWS 68-1 since 2000, including in the last sample taken in July 2013. In 2006, the village installed a new well, PWS 2006-1, that remains free of contamination. Because the carbon tetrachloride found in well PWS 68-1 might be linked to historical use of fumigants containing carbon tetrachloride at grain storage facilities, including its former facility in Bladen, the CCC/USDA is proposing an investigation to (1) delineate the source and extent of the carbon tetrachloride contamination potentially associated with its former facility, (2) characterize pathways and controlling factors for contaminant migration in the subsurface, and (3) establish a basis for estimating potential health and environmental risks. The work will be performed in accordance with the Intergovernmental Agreement established between the NDEQ and the Farm Service Agency of the USDA. 
The site investigation at Bladen will be implemented in phases, so that data collected and interpretations developed during each phase can be evaluated to determine if a subsequent phase of investigation is warranted and, if warranted, to provide effective guidance for the subsequent investigation activities. This Work Plan identifies the specific technical objectives and defines the scope of work proposed for the Phase I investigation by compiling and evaluating historical data. The proposed investigation activities will be performed on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory. Argonne is a nonprofit, multidisciplinary research 20. Radium in Humans: A Review of U.S. Studies. Science.gov (United States) Rowland, R. E. 1994-09-01 This document was originally conceived as a description of the radium studies that took place at Argonne National Laboratory. It soon became evident, however, that to document the widespread use of radium, a brief review of the application of radium in medicine and in the US dial painting industry is required. Further, because the Argonne studies were not the only such efforts, brief overviews of the other radium programs are included. Even so, much material has been omitted. The extensive references included will allow the interested reader to find additional information. The effects of internally deposited radium in humans have been studied in this country for more than 75 years. Some 2,400 subjects have had their body contents of radium measured, and a majority of them have been followed for most of their adult lives, to understand and quantify the effects of radium. Many more individuals acquired radium internally but were never measured. Some of this group have been located and followed until death; in these cases the cause of death is known without a body content measurement. As a consequence of the efforts made to locate, measure, and follow exposed individuals, a great deal of information about the effects of radium is available. Nevertheless, great gaps remain in the knowledge of radium toxicity. The Argonne study is the largest ever undertaken of the effects on humans of an internally deposited radioelement, in which the insult has been quantitated by actual measurements of the retained radioisotope. The study has now been terminated, even though more than 1,000 subjects with measured radium burdens are still alive. This document is written as a brief summary of current knowledge accumulated in this incomplete study. 1. Comparison of Statistically Modeled Contaminated Soil Volume Estimates and Actual Excavation Volumes at the Maywood FUSRAP Site - 13555 Energy Technology Data Exchange (ETDEWEB) Moore, James [U.S. Army Corps of Engineers - New York District 26 Federal Plaza, New York, New York 10278 (United States)]; Hays, David [U.S. Army Corps of Engineers - Kansas City District 601 E. 12th Street, Kansas City, Missouri 64106 (United States)]; Quinn, John; Johnson, Robert; Durham, Lisa [Argonne National Laboratory, Environmental Science Division 9700 S. Cass Ave., Argonne, Illinois 60439 (United States)] 2013-07-01 As part of the ongoing remediation process at the Maywood Formerly Utilized Sites Remedial Action Program (FUSRAP) properties, Argonne National Laboratory (Argonne) assisted the U.S. Army Corps of Engineers (USACE) New York District by providing contaminated soil volume estimates for the main site area, much of which is fully or partially remediated. 
As part of the volume estimation process, an initial conceptual site model (ICSM) was prepared for the entire site that captured existing information (with the exception of soil sampling results) pertinent to the possible location of surface and subsurface contamination above cleanup requirements. This ICSM was based on historical anecdotal information, aerial photographs, and the logs from several hundred soil cores that identified the depth of fill material and the depth to bedrock under the site. Specialized geostatistical software developed by Argonne was used to update the ICSM with historical sampling results and down-hole gamma survey information for hundreds of soil core locations. The updating process yielded both a best guess estimate of contamination volumes and a conservative upper bound on the volume estimate that reflected the estimate's uncertainty. Comparison of model results to actual removed soil volumes was conducted on a parcel-by-parcel basis. Where sampling data density was adequate, the actual volume matched the model's average or best guess results. Where contamination was un-characterized and unknown to the model, the actual volume exceeded the model's conservative estimate. Factors affecting volume estimation were identified to assist in planning further excavations. (authors) 2. Summary of operations and performance of the Murdock site restoration project in 2008. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2009-06-04 This document summarizes the performance of the groundwater and surface water restoration systems installed by the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) at the former CCC/USDA grain storage facility in Murdock, Nebraska, during the third full year of system operation, from January 1 through December 31, 2008. Performance in June 2005 through December 2007 was reported previously (Argonne 2007, 2008). In the Murdock project, several innovative technologies are being used to remove carbon tetrachloride contamination from a shallow aquifer underlying the town, as well as from water naturally discharged to the surface at the headwaters of a small creek (a tributary to Pawnee Creek) north of the town (Figure 1.1). The restoration activities at Murdock are being conducted by the CCC/USDA as a non-time-critical removal action under the regulatory authority and supervision of the U.S. Environmental Protection Agency (EPA), Region VII. Argonne National Laboratory assisted the CCC/USDA by providing technical oversight for the restoration effort and facilities during this review period. Included in this report are the results of all sampling and monitoring activities performed in accord with the EPA-approved Monitoring Plan for this site (Argonne 2006), as well as additional investigative activities conducted during the review period. The annual performance reports for the Murdock project assemble information that will become part of the five-year review and evaluation of the remediation effort. This review will occur in 2010. This document presents overviews of the treatment facilities (Section 2) and site operations and activities (Section 3), then describes the groundwater, surface water, vegetation, and atmospheric monitoring results (Section 4) and modifications and costs during the review period (Section 5). Section 6 summarizes the current period of operation. A gallery of photographs of the Murdock project is in Appendix A. 3. 
Gigabits to the Desktop: Installing tomorrow's networks today Energy Technology Data Exchange (ETDEWEB) Kuhfuss, T.C.; Phillips, P.T. 1994-03-01 Argonne is one of the US Department of Energy's world-class research institutions. Leading-edge computing tools and networks allow Argonne to maintain and enhance this reputation. One current effort to deploy leading-edge tools is the Argonne Gigabits to the Desktop project. While delivering and using gigabits to the desktop is little more than a hope at this time, this paper will discuss the hurdles to achieving it and how to tear down as many hurdles as possible. Under this project, four distinct areas are being investigated and enhanced. This paper will discuss briefly the applications and tools that we see driving the requirement for gigabits to the desktop. It will touch on a functional description of our ideal workstations, architectures, and the candidates for the next-generation network capable of delivering gigabits. Lastly, it will provide an in-depth analysis of physical layer options and attempt to prove that this area, while the least risky, must be done properly, with the proper media. This paper assumes one important point. It assumes that bandwidth is essentially free. We will discuss network architectures and physical installation recommendations which have a fixed cost. However, on a campus, there is no marginal cost for additional packets on these networks once the network infrastructure is installed. This point is important when extrapolating our conclusions to the wide area. The marginal cost of a packet sent to a commercial network is usually nonzero. This fact may prove to be a great hindrance in migrating the applications mentioned beyond the organizational boundaries. 4. Electricity Transmission, Pipelines, and National Trails: An Analysis of Current and Potential Intersections on Federal Lands in the Eastern United States, Alaska, and Hawaii Energy Technology Data Exchange (ETDEWEB) Kuiper, James A. [Argonne National Lab. (ANL), Argonne, IL (United States); Krummel, John R. [Argonne National Lab. (ANL), Argonne, IL (United States); Hlava, Kevin J. [Argonne National Lab. (ANL), Argonne, IL (United States); Moore, H. Robert [Argonne National Lab. (ANL), Argonne, IL (United States); Orr, Andrew B. [Argonne National Lab. (ANL), Argonne, IL (United States); Schlueter, Scott O. [Argonne National Lab. (ANL), Argonne, IL (United States); Sullivan, Robert G. [Argonne National Lab. (ANL), Argonne, IL (United States); Zvolanek, Emily A. [Argonne National Lab. (ANL), Argonne, IL (United States)] 2016-11-21 As has been noted in many reports and publications, acquiring new or expanded rights-of-way for transmission is a challenging process, because numerous land use and land ownership constraints must be overcome to develop pathways suitable for energy transmission infrastructure. In the eastern U.S., more than twenty federally protected national trails (some of which are thousands of miles long and cross many states) pose a potential obstacle to the development of new or expanded electricity transmission capacity. However, the scope of this potential problem is not well-documented, and there is no baseline information available that could allow all stakeholders to study routing scenarios that could mitigate impacts on national trails. 
This report, Electricity Transmission, Pipelines, and National Trails: An Analysis of Current and Potential Intersections on Federal Lands in the Eastern United States, was prepared by the Environmental Science Division of Argonne National Laboratory (Argonne). Argonne was tasked by DOE to analyze the “footprint” of the current network of National Historic and Scenic Trails and the electricity transmission system in the 37 eastern contiguous states, Alaska, and Hawaii; assess the extent to which national trails are affected by electrical transmission; and investigate the extent to which national trails and other sensitive land use types may be affected in the near future by planned transmission lines. Pipelines are secondary to transmission lines for analysis, but are also within the analysis scope in connection with the overall directives of Section 368 of the Energy Policy Act of 2005, and because of the potential for electrical transmission lines being collocated with pipelines. 5. Lighting energy efficiency opportunities at Cheyenne Mountain Air Station Energy Technology Data Exchange (ETDEWEB) Molburg, J.C.; Rozo, A.J.; Sarles, J.K.; Haffenden, R.A.; Thimmapuram, P.R.; Cavallo, J.D. 1996-06-01 CMAS is an intensive user of electricity for lighting because of its size, lack of daylight, and 24-hour operating schedule. Argonne National Laboratory recently conducted a lighting energy conservation evaluation at CMAS. The evaluation included inspection and characterization of existing lighting systems, analysis of energy-efficient retrofit options, and investigation of the environmental effects that these lighting system retrofits could have when they are ready to be disposed of as waste. Argonne devised three retrofit options for the existing lighting systems at various buildings: (1) minimal retrofit--limited fixture replacement; (2) moderate retrofit--more extensive fixture replacement and limited application of motion detectors; and (3) advanced retrofit--fixture replacement, reduction in the number of lamps, expansion of task lighting, and more extensive application of motion detectors. Argonne used data on electricity consumption to analyze the economic and energy effects of these three retrofit options. It performed a cost analysis for each retrofit option in terms of payback. The analysis showed that lighting retrofits result in savings because they reduce electricity consumption, cooling load, and maintenance costs. The payback period for all retrofit options was found to be less than 2 years, with the payback period decreasing for more aggressive retrofits. These short payback periods derived largely from the intensive (24-hours-per-day) use of electric lighting at the facility. Maintenance savings accounted for more than half of the annual energy-related savings under the minimal and moderate retrofit options and slightly less than half of these savings under the advanced retrofit option. Even if maintenance savings were excluded, the payback periods would still be impressive: about 4.4 years for the minimal retrofit option and 2 years for the advanced option. The local and regional environmental impacts of the three retrofit options were minimal. 6. Vehicle technologies program Government Performance and Results Act (GPA) report for fiscal year 2012 Energy Technology Data Exchange (ETDEWEB) Ward, J.; Stephens, T. S.; Birky, A. K. (Energy Systems); (DOE-EERE); (TA Engineering) 2012-08-10 The U.S. 
Department of Energy's Office of Energy Efficiency and Renewable Energy has defined milestones for its Vehicle Technologies Program (VTP). This report provides estimates of the benefits that would accrue from achieving these milestones relative to a base case that represents a future in which there is no VTP-supported vehicle technology development. Improvements in the fuel economy and reductions in the cost of light- and heavy-duty vehicles were estimated by using Argonne National Laboratory's Autonomie powertrain simulation software and doing some additional analysis. Argonne also estimated the fraction of the fuel economy improvements that were attributable to VTP-supported development in four 'subsystem' technology areas: batteries and electric drives, advanced combustion engines, fuels and lubricants, and materials (i.e., reducing vehicle mass, called 'lightweighting'). Oak Ridge National Laboratory's MA{sup 3}T (Market Acceptance of Advanced Automotive Technologies) tool was used to project the market penetration of light-duty vehicles, and TA Engineering's TRUCK tool was used to project the penetrations of medium- and heavy-duty trucks. Argonne's VISION transportation energy accounting model was used to estimate total fuel savings, reductions in primary energy consumption, and reductions in greenhouse gas emissions that would result from achieving VTP milestones. These projections indicate that by 2030, the on-road fuel economy of both light- and heavy-duty vehicles would improve by more than 20%, and that this positive impact would be accompanied by a reduction in oil consumption of nearly 2 million barrels per day and a reduction in greenhouse gas emissions of more than 300 million metric tons of CO{sub 2} equivalent per year. These benefits would have a significant economic value in the U.S. transportation sector and reduce its dependency on oil and its vulnerability to oil price shocks. 7. 317/319 Phytoremediation site monitoring report - 2009 growing season : final report. Energy Technology Data Exchange (ETDEWEB) Negri, C .N.; Benda, P. L.; Gopalakrishnan, G.; Energy Systems 2010-02-10 In 1999, Argonne National Laboratory (Argonne) designed and installed a series of engineered plantings consisting of a vegetative cover system and approximately 800 hybrid poplars and willows rooting at various predetermined depths. The plants were installed using various methods including Applied Natural Science's TreeWell{reg_sign} system. The goal of the installation was to protect downgradient surface and groundwater by intercepting the contaminated groundwater with the tree roots, removing moisture from the upgradient soil area, reducing water infiltration, preventing soil erosion, degrading and/or transpiring the residual volatile organic compounds (VOCs), and removing tritium from the subsoil and groundwater. This report presents the results of the monitoring activities conducted by Argonne's Energy Systems (ES) Division in the growing season of 2009. Monitoring of the planted trees began soon after the trees were installed in 1999 and has been conducted every summer since then. As the trees grew and consolidated their growth into the contaminated soil and groundwater, their exposure to the contaminants was progressively shown through tissue sampling. During the 2009 sampling campaign, VOC concentrations found in the French Drain area were in general consistent with or slightly lower than the 2008 results. 
Additionally, closely repeated, stand-wide analyses showed contaminant fluctuations that may indicate short-term contaminant depletion in the area of interest of the roots. These data will be useful for determining the short-term removal rate by the trees. As in previous years, levels in the Hydraulic Control Area were close to background levels except for a few exceptions. 8. ENergy and Power Evaluation Program Energy Technology Data Exchange (ETDEWEB) NONE 1996-11-01 In the late 1970s, national and international attention began to focus on energy issues. Efforts were initiated to design and test analytical tools that could be used to assist energy planners in evaluating energy systems, particularly in developing countries. In 1984, the United States Department of Energy (DOE) commissioned Argonne National Laboratory's Decision and Information Sciences Division (DIS) to incorporate a set of analytical tools into a personal computer-based package for distribution in developing countries. The package developed by DIS staff, the ENergy and Power Evaluation Program (ENPEP), covers the range of issues that energy planners must face: economic development, energy demand projections, supply-and-demand balancing, energy system expansion, and environmental impact analysis. Following the original DOE-supported development effort, the International Atomic Energy Agency (IAEA), with assistance from the US Department of State (DOS) and the US Department of Energy (DOE), provided ENPEP training, distribution, and technical support to many countries. ENPEP is now in use in over 60 countries and is an international standard for energy planning tools. More than 500 energy experts have been trained in the use of the entire ENPEP package or some of its modules during the international training courses organized by the IAEA in collaboration with Argonne's Decision and Information Sciences (DIS) Division and the Division of Educational Programs (DEP). This report contains the ENPEP program, which can be downloaded from the internet. The report also describes the ENPEP program, news, forums, online support, and contacts. 9. Preliminary assessment report for Camp Carroll Training Center, Installation 02045, Anchorage, Alaska. Installation Restoration Program Energy Technology Data Exchange (ETDEWEB) Krokosz, M.; Sefano, J. 1993-08-01 This report presents the results of the preliminary assessment (PA) conducted by Argonne National Laboratory at the Alaska Army National Guard property known as Camp Carroll Training Center, located on the Fort Richardson Army facility near Anchorage, Alaska. Preliminary assessments of federal facilities are being conducted to compile the information necessary for the completion of preremedial activities and to provide a basis for establishing corrective actions in response to releases of hazardous substances. The principal objective of the PA is to characterize the site accurately and determine the need for further action by examining site activities, types and quantities of hazardous substances used, the nature and amounts of wastes generated or stored at the facility, and potential pathways by which contamination could affect public health and the environment. 
The primary environmentally significant operations (ESOs) associated with the property are (1) the Alaska Air National Guard storage area behind Building S57112 (Organizational Maintenance Shop [OMS] 6); (2) the state of Alaska maintenance facility and the soil/tar-type spill north of the state of Alaska maintenance facility; (3) the waste storage area adjacent to OMS 6; (4) the contaminated area from leaking underground storage tanks (USTs) and the oil-water separator; and (5) soil staining in the parking area at the Camp Carroll Headquarters Building. Camp Carroll appears to be in excellent condition from an environmental standpoint, and current practices are satisfactory. Argonne recommends that the Alaska Department of Military Affairs consider remediation of soil contamination associated with all storage areas, as well as reviewing the practices of other residents of the facility. Argonne also recommends that the current methods of storing waste material behind Building S57112 (OMS 6) be reviewed for alternatives. 10. A mouse model of severe acute pancreatitis induced with caerulein and lipopolysaccharide Institute of Scientific and Technical Information of China (English) Shi-Ping Ding; Ji-Cheng Li; Chang Jin 2003-01-01 cells was seriously damaged in the Cn+LPS group. Chromatin margination of nuclei was present, and the number and volume of vacuoles greatly increased. Zymogen granules (ZGs) were greatly decreased in number, and the endoplasmic reticulum exhibited whorls. Swollen mitochondria appeared, the cristae of which were decreased in number or had disappeared. (3) Pancreatic weight and serum amylase levels in the Cn+LPS group were significantly higher than those of the NS group and the LPS group, respectively (P<0.01 or P<0.05). However, the pancreatic wet weight and serum amylase concentration showed no significant difference between the Cn+LPS group and the Cn group. (4) NO concentration in the Cn+LPS group was significantly higher than that of the NS, LPS, and Cn groups (P<0.05 or P<0.01). (5) The SOD and MDA concentrations of the pancreas in the Cn+LPS group were significantly higher than those of the NS, LPS, and Cn groups (P<0.05 or P<0.01). CONCLUSION: The mouse model of severe acute pancreatitis could be induced with caerulein and LPS; it is non-traumatic, easy to induce, and reproducible, with the same pathological characteristics as human SAP, and it could be used in research on the mechanism of human SAP. 11. Impact of mental representational systems on design interface. Energy Technology Data Exchange (ETDEWEB) Brown-VanHoozer, S. A. 1998-02-25 The purpose of the studies conducted at Argonne National Laboratory is to understand the impact that mental representational systems have on identifying how user comfort parameters influence how information is best presented. By understanding how each individual perceives information based on the three representational systems (visual, auditory, and kinesthetic modalities), it has been found that a different approach must be taken in the design of interfaces, resulting in an outcome that is much more effective and representative of the user's mental model. This paper will present current findings and future theories to be explored. 12. The use of a centrifugal contactor for component concentration by solvent extraction Energy Technology Data Exchange (ETDEWEB) Leonard, R.A.; Wygmans, D.G.; McElwee, M.J.; Wasserman, M.O.; Vandegrift, G.F. 
1992-07-01 Theoretical and experimental work was undertaken to explore the use of the Argonne-design centrifugal contactor as a concentrating device for metal ions in solutions such as transuranic-containing waste streams and contaminated groundwater. First, the theoretical basis for operating the contactor as a concentrator was developed. Then, the ability of the contactor to act as a concentrating device was experimentally demonstrated with neodymium over a wide range of organic-to-aqueous (O/A) flow ratios (0.01 to 33). These data were also used to derive a correlation for the effect of O/A flow ratio on extraction efficiency. 13. Observation of transverse space charge effects in a multi-beamlet electron bunch produced in a photo-emission electron source Energy Technology Data Exchange (ETDEWEB) Rihaoui, M.; /Northern Illinois U. /NICADD, DeKalb; Gai, W.; /Argonne; Piot, P.; /Northern Illinois U. /NICADD, DeKalb /FERMILAB; Power, J.G.; /Argonne; Ysof, Z.; /Argonne 2008-09-01 A 'multiple beamlet' experiment aimed at investigating the transverse space charge effect was recently conducted at the Argonne Wakefield Accelerator. The experiment generated a symmetric pattern of 5 beamlets on the photocathode of the RF gun with the drive laser. We explored the evolution of the resulting 5 MeV, space-charge-dominated electron beamlets in the 2 m drift following the RF photocathode gun for various external focusing settings. Two important effects were observed and benchmarked using the particle-in-cell beam dynamics code IMPACT-T. In this paper, we present our experimental observations and their benchmarking with IMPACT-T. 14. TSO at AMD: the Applied Mathematics Division's implementation of the Time Sharing Option. [IBM 360/75 at ANL] Energy Technology Data Exchange (ETDEWEB) Burger, A.J. 1975-05-26 This memorandum describes some of the ways in which the Time Sharing Option (TSO), as implemented on the 360/75 in the Applied Mathematics Division of Argonne National Laboratory, differs from standard IBM TSO under OS/MVT. Differences pertaining to internal modifications to improve performance, reliability, and efficient use of direct access storage, not resulting in any changes to the syntax of any command, are not discussed. This document assumes a basic familiarity with TSO, as may be gained from the TSO Terminal User's Guide. (RWR) 15. Onsite and Electric Backup Capabilities at Critical Infrastructure Facilities in the United States Energy Technology Data Exchange (ETDEWEB) Phillips, Julia A. [Argonne National Lab. (ANL), Argonne, IL (United States); Wallace, Kelly E. [Argonne National Lab. (ANL), Argonne, IL (United States); Kudo, Terence Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Eto, Joseph H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States) 2016-04-01 The following analysis, conducted by Argonne National Laboratory's (Argonne's) Risk and Infrastructure Science Center (RISC), details the electric power backup capabilities of national critical infrastructure as captured through the Department of Homeland Security's (DHS's) Enhanced Critical Infrastructure Program (ECIP) Initiative. Between January 1, 2011, and September 2014, 3,174 ECIP facility surveys were conducted. This study focused first on backup capabilities by infrastructure type and then expanded to infrastructure type by census region. 16. 
Tribology and coatings Energy Technology Data Exchange (ETDEWEB) NONE 1995-06-01 The future use of fuel-efficient, low-emission, advanced transportation systems (for example, those using low-heat-rejection diesel engines or advanced gas turbines) presents new challenges to tribologists and materials scientists. High service temperatures, corrosive environments, and extreme contact pressures are among the concerns that make necessary new tribological designs, novel materials, and effective lubrication concepts. Argonne is working on methods to reduce friction, wear and corrosion, such as soft metal coatings on ceramics, layered compounds, diamond coatings, and hard surfaces. 17. Extended phonon collapse in the charge-density-wave compound NbSe{sub 2} Energy Technology Data Exchange (ETDEWEB) Weber, Frank [Karlsruhe Institute of Technology, Institute of Solid State Physics, Karlsruhe (Germany); Materials Science Division, Argonne National Laboratory, Argonne, Illinois (United States); Rosenkranz, Stephan; Castellan, John-Paul; Osborn, Raymond [Materials Science Division, Argonne National Laboratory, Argonne, Illinois (United States); Hott, Roland; Heid, Rolf; Bohnen, Klaus-Peter [Karlsruhe Institute of Technology, Institute of Solid State Physics, Karlsruhe (Germany); Egami, Takeshi [Department of Materials Science and Engineering, University of Tennessee, Knoxville, Tennessee (United States); Said, Ayman [Advanced Photon Source, Argonne National Laboratory, Illinois (United States); Reznik, Dmitry [Karlsruhe Institute of Technology, Institute of Solid State Physics, Karlsruhe (Germany); Department of Physics, University of Colorado at Boulder, Boulder, Colorado (United States) 2011-07-01 We investigated the phonon softening in the charge density wave compound NbSe{sub 2} using the high-resolution hard inelastic X-ray scattering beamline 30-ID-C at the Advanced Photon Source, Argonne National Laboratory. The acoustic {sigma}{sub 1} phonon branch was measured from the zone center {gamma} to the M point at temperatures between 250 K and 8 K across the CDW transition at T{sub CDW}=33 K. Density functional theory calculations for the lattice dynamical properties which predict an extended phonon breakdown are used to analyze the detailed nature of the softening phonon branch. 18. Separating the Minor Actinides Through Advances in Selective Coordination Chemistry Energy Technology Data Exchange (ETDEWEB) Lumetta, Gregg J.; Braley, Jenifer C.; Sinkov, Sergey I.; Carter, Jennifer C. 2012-08-22 This report describes work conducted at the Pacific Northwest National Laboratory (PNNL) in Fiscal Year (FY) 2012 under the auspices of the Sigma Team for Minor Actinide Separation, funded by the U.S. Department of Energy Office of Nuclear Energy. Researchers at PNNL and Argonne National Laboratory (ANL) are investigating a simplified solvent extraction system for providing a single-step process to separate the minor actinide elements from acidic high-level liquid waste (HLW), including separating the minor actinides from the lanthanide fission products. 19. Compilation of current high-energy-physics experiments Energy Technology Data Exchange (ETDEWEB) Wohl, C.G.; Kelly, R.L.; Armstrong, F.E. 1980-04-01 This is the third edition of a compilation of current high energy physics experiments. 
It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and ten participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Rutherford (RHEL), Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about January 1980, and (2) had not completed taking of data by 1 January 1976. 20. Proceedings of the 1988 International Meeting on Reduced Enrichment for Research and Test Reactors Energy Technology Data Exchange (ETDEWEB) 1993-07-01 The international effort to develop and implement new research reactor fuels utilizing low-enriched uranium, instead of highly- enriched uranium, continues to make solid progress. This effort is the cornerstone of a widely shared policy aimed at reducing, and possibly eliminating, international traffic in highly-enriched uranium and the nuclear weapon proliferation concerns associated with this traffic. To foster direct communication and exchange of ideas among the specialists in this area, the Reduced Enrichment Research and Test Reactor (RERTR) Program, at Argonne National Laboratory, sponsored this meeting as the eleventh of a series which began 1978. Individual papers presented at the meeting have been cataloged separately. 1. Environmental Research Division technical progress report, January 1984-December 1985 Energy Technology Data Exchange (ETDEWEB) 1986-05-01 Technical progress in the various research and assessment activities of Argonne National Laboratory's Environmental Research Division is reported for the period 1984 to 1985. Textual, graphic, and tabular information is used to briefly summarize (in separate chapters) the work of the Division's Atmospheric Physics, Environmental Effects Research, Environmental Impacts, Fundamental Molecular Physics and Chemistry, and Waste Management Programs. Information on professional qualifications, awards, and outstanding professional activities of staff members, as well as lists of publications, oral presentations, special events organized, and participants in educational programs, are provided in appendices at the end of each chapter. 2. Ground State Correlations Using exp(S) Method for the ^16O Nucleus. Science.gov (United States) Mihaila, Bogdan; Heisenberg, Jochen 1998-04-01 We use the Argonne-v18 potential together with a phenomenological three-nucleon interaction to do the calculation of the mean-field single particle wave functions and the correlation operator describing the ground state of the ^16O nucleus. Our correlation operator includes the contributions from up to 4p4h terms. We present a breakdown of the contributions to the binding from the two- and the three-body interactions. The one- and the two-body densities for ^16O are presented. Effects of the center-of-mass correction on the charge density and form factor are also discussed. 3. Center-of-mass corrections revisited a many-body expansion approach CERN Document Server Mihaila, B; Mihaila, Bogdan; Heisenberg, Jochen H. 1999-01-01 A many-body expansion for the computation of the charge form factor in the center-of-mass system is proposed. For convergence testing purposes, we apply our formalism to the case of the harmonic oscillator shell model, where an exact solution exists. We also work out the details of the calculation involving realistic nuclear wave functions. 
Results obtained for the Argonne v18 two-nucleon and Urbana-IX three-nucleon interactions are reported. No corrections due to the meson-exchange charge density are taken into account. 4. Center-of-mass corrections reexamined: A many-body expansion approach Science.gov (United States) Mihaila, Bogdan; Heisenberg, Jochen H. 1999-11-01 A many-body expansion for the computation of the charge form factor in the center-of-mass system is proposed. For convergence testing purposes, we apply our formalism to the case of the harmonic oscillator shell model, where an exact solution exists. We also work out the details of the calculation involving realistic nuclear wave functions. Results obtained for the Argonne v18 two-nucleon and Urbana-IX three-nucleon interactions are reported. No corrections due to the meson-exchange charge density are taken into account. 5. Users' guide to CACECO containment analysis code. [LMFBR] Energy Technology Data Exchange (ETDEWEB) Peak, R.D. 1979-06-01 The CACECO containment analysis code was developed to predict the thermodynamic responses of LMFBR containment facilities to a variety of accidents. The code is included in the National Energy Software Center Library at Argonne National Laboratory as Program No. 762. This users' guide describes the CACECO code and its data input requirements. The code description covers the many mathematical models used and the approximations used in their solution. The descriptions are detailed to the extent that the user can modify the code to suit his unique needs, and, indeed, the reader is urged to consider code modification acceptable. 6. First-principles calculations for c-coefficients of the isobaric mass multiplet equation in the 1p0f shell CERN Document Server Ormand, W E; Jensen, M Hjorth 2016-01-01 We present the first calculations for the c-coefficients of the isobaric mass multiplet equation (IMME) for nuclei from A=42 to A=54 based on input from several realistic nucleon-nucleon interactions. We show that there is clear dependence on the short-ranged charge-symmetry breaking (CSB) part of the strong interaction. There is a significant variation in the CSB part between the commonly used CD-Bonn, N3LO and Argonne V18 nucleon-nucleon interactions. All of them give a CSB contribution that is too large when compared to experiment. 7. History of fast reactor fuel development Energy Technology Data Exchange (ETDEWEB) Kittel, J.H.; Frost, B.R.T. (Argonne National Lab., IL (United States)); Mustelier, J.P. (COGEMA, Velizy-Villacoublay (France)) 1992-01-01 Most of the first generation of fast reactors that were operated at significant power levels employed solid metal fuels. They were constructed in the United States and United Kingdom in the 1950s and included Experimental Breeder Reactor (EBR)-I and -II operated by Argonne National Laboratory, United States, the Enrico Fermi Reactor operated by the Atomic Power Development Associates, United States, and DFR operated by the U.K. Atomic Energy Authority (UKAEA). Their paper traces the development of fast reactor fuel from these early days through the 1980s, including ceramic fuels. 8. Friction reduction and heat transfer enhancement in turbulent pipe flow of non-Newtonian liquid-solid mixtures Science.gov (United States) Choi, U. S.; Liu, K. V. 1988-02-01 Argonne National Laboratory (ANL) has identified two concepts for developing advanced energy transmission fluids for thermal systems, in particular district heating and cooling systems. 
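Returning to entry 6 above: for reference, the isobaric mass multiplet equation in its standard quadratic form is (a textbook relation, not an equation quoted from that paper)

\[ \mathrm{ME}(A,T,T_z) \;=\; a(A,T) \;+\; b(A,T)\,T_z \;+\; c(A,T)\,T_z^{2}, \]

where ME is the mass excess of a multiplet member with isospin projection T_z; the c-coefficients discussed in that entry are the coefficients of the quadratic term.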
A test series was conducted at ANL to prove these concepts. This paper presents experimental results and discusses the degradation behavior of linear polymer additives and the flow and heat transfer characteristics of non-melting slurry flows. The test results furnished strong evidence that the use of friction-reducing additives and slurries can yield improved thermal-hydraulic performance of thermal systems. 9. First results from the microwave air yield beam experiment (MAYBE): Measurement of GHz radiation for ultra-high energy cosmic ray detection Directory of Open Access Journals (Sweden) Verzi V. 2013-06-01 We present measurements of microwave emission from an electron-beam-induced air plasma performed at the 3 MeV electron Van de Graaff facility of the Argonne National Laboratory. Results include the emission spectrum between 1 and 15 GHz, the polarization of the microwave radiation, and the scaling of the emitted power with respect to beam intensity. MAYBE measurements provide further insight into microwave emission from extensive air showers as a novel detection technique for Ultra-High Energy Cosmic Rays. 10. Data acquisition system for the neutron scattering instruments at the intense pulsed neutron source Energy Technology Data Exchange (ETDEWEB) Crawford, R.K.; Daly, R.T.; Haumann, J.R.; Hitterman, R.L.; Morgan, C.B.; Ostrowski, G.E.; Worlton, T.G. 1981-01-01 The Intense Pulsed Neutron Source (IPNS) at Argonne National Laboratory is a major new user-oriented facility which is now coming on line for basic research in neutron scattering and neutron radiation damage. This paper describes the data-acquisition system which will handle data acquisition and instrument control for the time-of-flight neutron-scattering instruments at IPNS. This discussion covers the scientific and operational requirements for this system, and the system architecture that was chosen to satisfy these requirements. It also provides an overview of the current system implementation including brief descriptions of the hardware and software which have been developed. 11. Extreme Performance Scalable Operating Systems Final Progress Report (July 1, 2008 - October 31, 2011) Energy Technology Data Exchange (ETDEWEB) Malony, Allen D; Shende, Sameer 2011-10-31 This is the final progress report for the FastOS (Phase 2) (FastOS-2) project with Argonne National Laboratory and the University of Oregon (UO). The project started at UO on July 1, 2008 and ran until April 30, 2010, at which time a six-month no-cost extension began. The FastOS-2 work at UO delivered excellent results in all research work areas: * scalable parallel monitoring * kernel-level performance measurement * parallel I/O system measurement * large-scale and hybrid application performance measurement * online scalable performance data reduction and analysis * binary instrumentation 12. Microscopic calculations of elastic scattering between light nuclei based on a realistic nuclear interaction Energy Technology Data Exchange (ETDEWEB) Dohet-Eraly, Jeremy [F.R.S.-FNRS (Belgium); Sparenberg, Jean-Marc; Baye, Daniel, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Physique Nucleaire et Physique Quantique, CP229, Universite Libre de Bruxelles (ULB), B-1050 Brussels (Belgium) 2011-09-16 The elastic phase shifts for the {alpha} + {alpha} and {alpha} + {sup 3}He collisions are calculated in a cluster approach by the Generator Coordinate Method coupled with the Microscopic R-matrix Method. 
Two interactions are derived from the realistic Argonne potentials AV8' and AV18 with the Unitary Correlation Operator Method. With a specific adjustment of correlations on the {alpha} + {alpha} collision, the phase shifts for the {alpha} + {alpha} and {alpha} + {sup 3}He collisions agree rather well with experimental data. 13. Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model. Energy Technology Data Exchange (ETDEWEB) Weirs, V. Gregory 2014-03-01 This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000. 14. Actinide and fission product separation and transmutation Energy Technology Data Exchange (ETDEWEB) NONE 1993-07-01 The second international information exchange meeting on actinide and fission product separation and transmutation, took place in Argonne National Laboratory in Illinois United States, on 11-13 November 1992. The proceedings are presented in four sessions: Current strategic system of actinide and fission product separation and transmutation, progress in R and D on partitioning processes wet and dry, progress in R and D on transmutation and refinements of neutronic and other data, development of the fuel cycle processes fuel types and targets. (A.L.B.) 15. Preliminary screening of alternative technologies to incineration for treatment of chemical-agent-contaminated soil, Rocky Mountain Arsenal Energy Technology Data Exchange (ETDEWEB) Shem, L.M.; Rosenblatt, D.H.; Smits, M.P.; Wilkey, P.L.; Ballou, S.W. 1995-12-01 In support of the U.S. Armys efforts to determine the best technologies for remediation of soils, water, and structures contaminated with pesticides and chemical agents, Argonne National Laboratory has reviewed technologies for treating soils contaminated with mustard, lewisite, sarin, o-ethyl s-(2- (diisopropylamino)ethyl)methyl-phosphonothioate (VX), and their breakdown products. This report focuses on assessing alternatives to incineration for dealing with these contaminants. For each technology, a brief description is provided, its suitability and constraints on its use are identified, and its overall applicability for treating the agents of concern is summarized. Technologies that merit further investigation are identified. 16. Dynamic virtual AliEn Grid sites on Nimbus with CernVM Science.gov (United States) Harutyunyan, A.; Buncic, P.; Freeman, T.; Keahey, K. 2010-04-01 We describe the work on enabling one click deployment of Grid sites of AliEn Grid framework on the Nimbus 'science cloud' at the University of Chicago. The integration of computing resources of the cloud with the resource pool of AliEn Grid is achieved by leveraging two mechanisms: the Nimbus Context Broker developed at Argonne National Laboratory and the University of Chicago, and CernVM - a baseline virtual software appliance for LHC experiments developed at CERN. 
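As an aside on entry 13 above (Dakota uncertainty quantification applied to a NEK-5000 model), the following is a minimal sketch of the kind of sampling-based uncertainty propagation such a coupling can drive. The stand-in model, input distributions, and failure threshold are hypothetical illustrations only; this is neither NEK-5000 physics nor Dakota input syntax.

```python
# Illustrative Monte Carlo estimate of a failure probability for a black-box model.
import random

def simulate_peak_temperature(inlet_temp_c, power_kw):
    """Hypothetical stand-in for an expensive thermal-hydraulics simulation."""
    return inlet_temp_c + 0.35 * power_kw + random.gauss(0.0, 2.0)

def failure_probability(n_samples=10_000, limit_c=650.0):
    failures = 0
    for _ in range(n_samples):
        # Sample uncertain inputs from assumed distributions.
        inlet = random.gauss(290.0, 5.0)
        power = random.uniform(900.0, 1100.0)
        if simulate_peak_temperature(inlet, power) > limit_c:
            failures += 1
    return failures / n_samples

if __name__ == "__main__":
    print(f"Estimated failure probability: {failure_probability():.4f}")
```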
Two approaches to dynamic virtual AliEn Grid site deployment are presented. 17. Early experiences with the IBM SP1 and the high-performance switch Energy Technology Data Exchange (ETDEWEB) Gropp, W. [ed. 1993-11-01 The IBM SP1 is IBM's newest parallel distributed-memory computer. As part of a joint project with IBM, Argonne took delivery of an early system in order to evaluate the software environment and to begin porting programming packages and applications to this machine. This report discusses the results of those efforts once the high-performance switch was installed. An earlier report (ANL/MCS-TM-177) emphasized software usability and the initial ports to the SP1. This report contains performance results and discusses some applications and tools not covered in TM 177. 18. Experimental Test of 7.8 GHz Power Extractor Using Dielectric Loaded Rectangular Waveguide Structures Institute of Scientific and Technical Information of China (English) LU Zhi-Gang; GONG Yu-Bin; GAI Wei; GAO Peng; GAO Feng; WEI Yan-Yu; WANG Wen-Xiang 2009-01-01 We report on an experimental test of a 7.8 GHz power extractor using a dielectric-loaded rectangular waveguide structure. This work was conducted at the Argonne Wakefield Accelerator (AWA) facility. The wakefield is excited by an electron beam travelling through a dielectric-loaded rectangular waveguide, and the generated rf power is then extracted with a properly designed rf coupler. In the experiment, 30 MW of output power is excited by a 66 nC single electron bunch, and wakefield superposition by a train consisting of four bunches is also demonstrated. Both results agree well with theoretical predictions. 19. Vehicle Modeling for use in the CAFE model: Process description and modeling assumptions Energy Technology Data Exchange (ETDEWEB) Moawad, Ayman [Argonne National Lab. (ANL), Argonne, IL (United States); Kim, Namdoo [Argonne National Lab. (ANL), Argonne, IL (United States); Rousseau, Aymeric [Argonne National Lab. (ANL), Argonne, IL (United States) 2016-06-01 The objective of this project is to develop and demonstrate a process that, at a minimum, provides more robust information that can be used to calibrate inputs applicable under the CAFE model's existing structure. The project will be more fully successful if a process can be developed that minimizes the need for decision trees and replaces the synergy factors with inputs provided directly from a vehicle simulation tool. The report provides a description of the process that was developed by Argonne National Laboratory and implemented in Autonomie. 20. Large Area Pico-second Photodetectors (LAPPD) in Liquid Argon Science.gov (United States) Dharmapalan, Ranjan; Lappd Collaboration 2015-04-01 The Large Area Pico-second Photodetector (LAPPD) project has recently produced the first working devices with a small form factor and pico-second timing resolution. A number of current and proposed neutrino and dark matter experiments use liquid argon as a detector medium. A flat photodetector with excellent timing resolution will help with background suppression and improve the overall sensitivity of the experiment. We present the research done and some preliminary results to customize the LAPPD devices to work in a cryogenic environment. Argonne National Laboratory (LDRD) and DOE. 1. Review of past experiments at the FELIX facility and future plans for ITER applications Energy Technology Data Exchange (ETDEWEB) Hua, T.Q.; Turner, L.R. 
1993-10-01 FELIX is an experimental test facility at Argonne National Laboratory (ANL) for the study of electromagnetic effects in first-wall, blanket, and shield systems of fusion reactors. From 1983 to 1986 five major test series, including static and dynamic tests, were conducted and are reviewed in this paper. The dynamic tests demonstrated an important coupling effect between eddy currents and motion in a conducting structure. Recently the US has proposed to the ITER Joint Central Team to use FELIX for testing mock-up components to study electromagnetic effects encountered during plasma disruptions and other off-normal events. The near- and long-term plans for ITER applications are discussed. 2. National coal utilization assessment. An integrated assessment of increased coal use in the Midwest: impacts and constraints Energy Technology Data Exchange (ETDEWEB) Hoover, L. John 1977-10-01 This study was performed as a part of the Argonne National Laboratory Regional Studies program, which is sponsored by the Department of Energy. The purpose is to assess the impacts and consequences associated with alternative energy options on a regional basis, and to identify and analyze alternative mitigation and solution strategies for increasing the acceptability of these options. The National Coal Utilization Assessment is being conducted as a part of the Regional Studies Program. This particular study is focusing on impacts and constraints on increased coal utilization. In addition, a major focal point for the study is the identification and analysis of alternative solution strategies applicable to these constraints and problems. 3. Probabilistic safety assessment of WWER440 reactors: prediction, quantification and management of the risk CERN Document Server Kovacs, Zoltan 2014-01-01 The aim of this book is to summarize probabilistic safety assessment (PSA) of nuclear power plants with WWER440 reactors and demonstrate that the plants are safe enough for producing energy even in light of the Fukushima accident. The book examines level 1 and 2 full power, low power and shutdown PSA, and summarizes the author's experience gained during the last 35 years in this area. It provides useful examples taken from PSA training courses that the author has lectured in, organized by the International Atomic Energy Agency. Such training courses were organised at Argonne National Laboratory ( 4. Update of identification and estimation of socioeconomic impacts resulting from perceived risks and changing images: An annotated bibliography Energy Technology Data Exchange (ETDEWEB) Nieves, L.A.; Clark, D.E.; Wernette, D. 1991-08-01 This annotated bibliography reviews selected literature published through August 1991 on the identification of perceived risks and methods for estimating the economic impacts of risk perception. It updates the literature review found in Argonne National Laboratory report ANL/EAIS/TM-24 (February 1990). Included in this update are (1) a literature review of the risk perception process, of the relationship between risk perception and economic impacts, of economic methods and empirical applications, and of interregional market interactions and adjustments; (2) a working bibliography (that includes the documents abstracted in the 1990 report); (3) a topical index to the abstracts found in both reports; and (4) abstracts of selected articles found in this update. 5. 
Decision analysis to support development of the Glen Canyon Dam long-term experimental and management plan Science.gov (United States) Runge, Michael C.; LaGory, Kirk E.; Russell, Kendra; Balsom, Janet R.; Butler, R. Alan; Coggins, Lewis G.; Grantz, Katrina A.; Hayse, John; Hlohowskyj, Ihor; Korman, Josh; May, James E.; O'Rourke, Daniel J.; Poch, Leslie A.; Prairie, James R.; VanKuiken, Jack C.; Van Lonkhuyzen, Robert A.; Varyu, David R.; Verhaaren, Bruce T.; Veselka, Thomas D.; Williams, Nicholas T.; Wuthrich, Kelsey K.; Yackulic, Charles B.; Billerbeck, Robert P.; Knowles, Glen W. 2016-01-07 The U.S. Geological Survey, in cooperation with the Bureau of Reclamation, National Park Service, and Argonne National Laboratory, completed a decision analysis to use in the evaluation of alternatives in the Environmental Impact Statement concerning the long-term management of water releases from Glen Canyon Dam and associated management activities. Two primary decision analysis methods, multicriteria decision analysis and the expected value of information, were used to evaluate the alternative strategies against the resource goals and to evaluate the influence of uncertainty. 6. Dynamic virtual AliEn Grid sites on Nimbus with CernVM Energy Technology Data Exchange (ETDEWEB) Harutyunyan, A [Armenian e-Science Foundation, Yerevan (Armenia); Buncic, P [CERN, Geneva (Switzerland); Freeman, T; Keahey, K, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [University of Chicago, Chicago IL (United States) 2010-04-01 We describe the work on enabling one click deployment of Grid sites of AliEn Grid framework on the Nimbus 'science cloud' at the University of Chicago. The integration of computing resources of the cloud with the resource pool of AliEn Grid is achieved by leveraging two mechanisms: the Nimbus Context Broker developed at Argonne National Laboratory and the University of Chicago, and CernVM - a baseline virtual software appliance for LHC experiments developed at CERN. 7. UbiWorld: An environment integrating virtual reality, supercomputing, and design Energy Technology Data Exchange (ETDEWEB) Disz, T.; Papka, M.E.; Stevens, R. [Argonne National Lab., IL (United States). Mathematics and Computer Science Div. 1997-07-01 UbiWorld is a concept being developed by the Futures Laboratory group at Argonne National Laboratory that ties together the notion of ubiquitous computing (Ubicomp) with that of using virtual reality for rapid prototyping. The goal is to develop an environment where one can explore Ubicomp-type concepts without having to build real Ubicomp hardware. The basic notion is to extend object models in a virtual world by using distributed wide area heterogeneous computing technology to provide complex networking and processing capabilities to virtual reality objects. 8. Toward formal analysis of ultra-reliable computers: A total systems approach Energy Technology Data Exchange (ETDEWEB) Chisholm, G.H.; Kljaich, J.; Smith, B.T.; Wojcik, A.S. 1986-01-01 This paper describes the application of modeling and analysis techniques to software that is designed to execute on a four-channel version of the Charles Stark Draper Laboratory (CSDL) Fault-Tolerant Processor, referred to as the Draper FTP. 
The software performs sensor validation of four independent measures (signals) from the primary pumps of the Experimental Breeder Reactor-II operated by Argonne National Laboratory-West, and from the validated signals formulates a flow trip signal for the reactor safety system. 11 refs., 4 figs. 9. Results of thermal test of metallic molybdenum disk target and fast-acting valve testing Energy Technology Data Exchange (ETDEWEB) Virgo, M. [Argonne National Lab. (ANL), Argonne, IL (United States); Chemerisov, S. [Argonne National Lab. (ANL), Argonne, IL (United States); Gromov, R. [Argonne National Lab. (ANL), Argonne, IL (United States); Jonah, C. [Argonne National Lab. (ANL), Argonne, IL (United States); Vandegrift, G. F. [Argonne National Lab. (ANL), Argonne, IL (United States) 2016-12-01 This report describes the irradiation conditions for thermal testing of helium-cooled metallic disk targets that was conducted on March 9, 2016, at the Argonne National Laboratory electron linac. The four disks in this irradiation were pressed and sintered by Oak Ridge National Laboratory from molybdenum metal powder. Two of those disks were instrumented with thermocouples. Also reported are results of testing a fast-acting-valve system, which was designed to protect the accelerator in case of a target-window failure. 10. Nuclear Physics Research at the University of Richmond progress report, November 1, 1992--October 31, 1993 Energy Technology Data Exchange (ETDEWEB) Vineyard, M.F.; Gilfoyle, G.P.; Major, R.W. 1993-12-31 Summarized in this report is the progress achieved during the period from November 1, 1992 to October 31, 1993 under Contract Number DE-FG05-88ER40459. The experimental work described in this report is in electromagnetic and heavy-ion nuclear physics. The effort in electromagnetic nuclear physics is in preparation for the research program at the Continuous Electron Beam Accelerator Facility (CEBAF) and is focused on the construction and use of the CEBAF Large Acceptance Spectrometer (CLAS). The heavy-ion experiments were performed at the Argonne National Laboratory ATLAS facility and the University of Pennsylvania. 11. A correlated basis-function description of 16O with realistic interactions Science.gov (United States) Boscá, M. C. 1994-01-01 The correlated basis-function theory is applied at the lowest order to analyze the ground state and low-energy spectrum of the 16O nucleus. Results are quoted for both the Urbana and the Argonne v14 nucleon-nucleon interactions. The work includes state-dependent correlations, and their radial components are determined by solving a set of Euler-Lagrange equations. The matrix elements are computed by using a cluster expansion and the sequential condition is imposed in order to ensure convergence. The results clearly disagree with the experimental values. 12. Report of the US Department of Energy's team analyses of the Chernobyl-4 Atomic Energy Station accident sequence Energy Technology Data Exchange (ETDEWEB) 1986-11-01 In an effort to better understand the Chernobyl-4 accident of April 26, 1986, the US Department of Energy (DOE) formed a team of experts from the National Laboratories including Argonne National Laboratory, Brookhaven National Laboratory, Oak Ridge National Laboratory, and Pacific Northwest Laboratory. The DOE Team provided the analytical support to the US delegation for the August meeting of the International Atomic Energy Agency (IAEA), and to subsequent international meetings. 
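As an aside on entry 8 above (sensor validation on the Draper FTP), the following is a schematic sketch of validating redundant signals and deriving a trip decision. The median-based voting scheme, thresholds, and values here are assumptions for illustration only, not the logic analyzed in that report.

```python
# Illustrative redundant-signal validation and low-flow trip decision.
from statistics import median

def validate(signals, max_deviation=5.0):
    """Return a validated estimate (the median) and per-channel health flags."""
    estimate = median(signals)
    healthy = [abs(s - estimate) <= max_deviation for s in signals]
    return estimate, healthy

def flow_trip(signals, low_flow_limit=80.0):
    estimate, healthy = validate(signals)
    # Trip only if the validated flow estimate is below the limit and at least
    # three of the four channels agree with that estimate.
    return estimate < low_flow_limit and sum(healthy) >= 3

print(flow_trip([95.2, 94.8, 95.5, 40.0]))   # one failed channel, no trip -> False
print(flow_trip([72.1, 71.8, 73.0, 72.4]))   # genuine low flow -> True
```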
The DOE Team has analyzed the accident in detail, assessed the plausibility and completeness of the information provided by the Soviets, and performed studies relevant to understanding the accident. The results of these studies are presented in this report. 13. Beam chopper For the Low-Energy Undulator Test Line (LEUTL) in the APS Energy Technology Data Exchange (ETDEWEB) Kang, Y.; Wang, J.; Milton, S.; Teng, L. [and others 1997-08-01 The low-energy undulator test line (LEUTL) is being built and will be tested with a short beam pulse from an rf gun in the Advanced Photon Source (APS) at the Argonne National Laboratory. In the LEUTL a beam chopper is used after the rf gun to deflect the unwanted beam to a beam dump. The beam chopper consists of a permanent magnet and an electric deflector that can compensate for the magnetic deflection. A 30-kV pulsed power supply is used for the electric deflector. The chopper subsystem was assembled and tested for beamline installation. The electrical and beam properties of the chopper assembly are presented. 14. Pyrochemical and Dry Processing Methods Program. A selected bibliography Energy Technology Data Exchange (ETDEWEB) McDuffie, H.F.; Smith, D.H.; Owen, P.T. 1979-03-01 This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA--DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles. 15. An integrated bioconversion process for the production of L-lactic acid from starchy feedstocks Energy Technology Data Exchange (ETDEWEB) Tsai, S.P.; Moon, S.H. 1997-07-01 The potential market for lactic acid as the feedstock for biodegradable polymers, oxygenated chemicals, and specialty chemicals is significant. L-lactic acid is often the desired enantiomer for such applications. However, stereospecific lactobacilli do not metabolize starch efficiently. In this work, Argonne researchers have developed a process to convert starchy feedstocks into L-lactic acid. The processing steps include starch recovery, continuous liquefaction, and simultaneous saccharification and fermentation. Over 100 g/L of lactic acid was produced in less than 48 h. The optical purity of the product was greater than 95%. This process has potential economical advantages over the conventional process. 16. The Advanced Photon Source: Performance and results from early operation Energy Technology Data Exchange (ETDEWEB) Moncton, D.E. [Argonne National Lab., IL (United States). Advanced Photon Source 1997-10-01 The Advanced Photon Source at Argonne National Laboratory is now providing researchers with extreme-brilliance undulator radiation in the hard x-ray region of the spectrum. All technical facilities and components are operational and have met design specifications. Fourteen research teams, occupying 20 sectors on the APS experiment hall floor, are currently installing beamline instrumentation or actively taking data. An overview is presented for the first operational years of the Advanced Photon Source. 
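As an aside on entry 13 above (the LEUTL beam chopper), the condition under which an electric deflector cancels the deflection of a static magnetic field is the familiar crossed-field (Wien) balance; it is stated here as a textbook relation, not as a design value from that report:

\[ qE = q\,v\,B \quad\Longrightarrow\quad E = \beta c B . \]

For a relativistic beam (β ≈ 1), a hypothetical field of B = 10 mT would therefore require an electric field of roughly E ≈ 3 MV/m to compensate.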
Emphasis is on the performance of accelerators and insertion devices, as well as early scientific results and future plans. 17. Cloning, expression, purification, crystallization and preliminary X-ray diffraction analysis of macrophage growth locus A (MglA) protein from Francisella tularensis Energy Technology Data Exchange (ETDEWEB) Subburaman, P.; Austin, B.P.; Shaw, G.X.; Waugh, D.S.; Ji, X. (NCI) 2010-11-03 Francisella tularensis, a potential bioweapon, causes a rare infectious disease called tularemia in humans and animals. The macrophage growth locus A (MglA) protein from F. tularensis associates with RNA polymerase to positively regulate the expression of multiple virulence factors that are required for its survival and replication within macrophages. The MglA protein was overproduced in Escherichia coli, purified and crystallized. The crystals diffracted to 7.5 {angstrom} resolution at the Advanced Photon Source, Argonne National Laboratory and belonged to the hexagonal space group P6{sub 1} or P6{sub 5}, with unit-cell parameters a = b = 125, c = 54 {angstrom}. 18. Generation of Homogeneous and Patterned Electron Beams using a Microlens Array Laser-Shaping Technique Energy Technology Data Exchange (ETDEWEB) Halavanau, Aliaksei [NICADD, DeKalb; Edstrom, Dean [Fermilab; Gai, Wei [Argonne, HEP; Ha, Gwanghui [Argonne, HEP; Piot, Philippe [NICADD, DeKalb; Power, John [Argonne, HEP; Qiang, Gao [Unlisted, CN; Ruan, Jinhao [Fermilab; Santucci, James [Fermilab; Wisniewski, Eric [Argonne, HEP 2016-06-01 In photocathodes the achievable electron-beam parameters are controlled by the laser used to trigger the photoemission process. A non-ideal laser distribution hampers the final beam quality. Laser inhomogeneities, for instance, can be "amplified" by space-charge force and result in fragmented electron beams. To overcome this limitation, laser-shaping methods are routinely employed. In the present paper we demonstrate the use of simple microlens arrays to dramatically improve the transverse uniformity. We also show that this arrangement can be used to produce transversely-patterned electron beams. Our experiments are carried out at the Argonne Wakefield Accelerator facility. 19. 2016 American Conference on Neutron Scattering (ACNS) Energy Technology Data Exchange (ETDEWEB) Woodward, Patrick [Materials Research Society, Warrendale, PA (United States) 2017-02-09 The 8th American Conference on Neutron Scattering (ACNS) was held July 10-14, 2016 in Long Beach, California, marking the first time the meeting has been held on the West Coast. The meeting was coordinated by the Neutron Scattering Society of America (NSSA) and attracted 285 attendees. The meeting was chaired by NSSA vice president Patrick Woodward (the Ohio State University) assisted by NSSA president Stephan Rosenkranz (Argonne National Laboratory) together with the local organizing chair, Brent Fultz (California Institute of Technology). As in past years, the Materials Research Society assisted with planning, logistics, and operation of the conference. 20. 
High-Pressure Experimental Studies on Geo-Liquids Using Synchrotron Radiation at the Advanced Photon Source Institute of Scientific and Technical Information of China (English) Yanbin Wang; Guoyin Shen 2014-01-01 We review recent progress in studying silicate, carbonate, and metallic liquids of geological and geophysical importance at high pressure and temperature, using the large-volume high-pressure devices at the third-generation synchrotron facility of the Advanced Photon Source, Argonne National Laboratory. These integrated high-pressure facilities now offer a unique combination of experimental techniques that allow researchers to investigate structure, density, elasticity, viscosity, and interfacial tension of geo-liquids under high pressure, in a coordinated and systematic fashion. Experimental techniques are described, along with scientific highlights. Future developments are also discussed. 1. ANL analysis of ZPPR-13A Energy Technology Data Exchange (ETDEWEB) Collins, P.J.; Brumbach, S.B. [comps. 1984-08-09 The ZPPR-13 experiments provide basic physics data for radial heterogeneous LMFBR cores of approximately 700 MWe size. Assemblies ZPPR-13A, ZPPR-13B and ZPPR-13C comprised the JUPITER-II cooperative program between the U.S. Department of Energy (US DOE) and PNC of Japan. The measurements were made between August 1982 and April 1984. The core designs and the measurements were planned jointly by the two parties with substantial input from U.S. industrial interests to ensure coverage of the design requirements. This report describes in detail the results of the Argonne National Laboratory (ANL) analyses of phase 13A. 2. Spectral theory of Sturm-Liouville differential operators: proceedings of the 1984 workshop Energy Technology Data Exchange (ETDEWEB) Kaper, H.G.; Zettl, A. (eds.) 1984-12-01 This report contains the proceedings of the workshop which was held at Argonne during the period May 14 through June 15, 1984. The report contains 22 articles, authored or co-authored by the participants in the workshop. Topics covered at the workshop included the asymptotics of eigenvalues and eigenfunctions; qualitative and quantitative aspects of Sturm-Liouville eigenvalue problems with discrete and continuous spectra; polar, indefinite, and nonselfadjoint Sturm-Liouville eigenvalue problems; and systems of differential equations of Sturm-Liouville type. 3. Results of groundwater monitoring at Everest, Kansas, in April 2008. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2008-11-05 On September 7, 2005, the Commodity Credit Corporation of the U.S. Department of Agriculture (CCC/USDA) presented a Scoping Memo (Argonne 2005) for preliminary consideration by the Kansas Department of Health and Environment (KDHE), suggesting possible remedial options for the carbon tetrachloride contamination in groundwater at Everest, Kansas. The suggested approaches were discussed by representatives of the KDHE, the CCC/USDA, and Argonne at the KDHE office in Topeka on September 8-9, 2005, along with other technical and logistic issues related to the Everest site. In response to these discussions, the KDHE recommended (KDHE 2005) evaluation of several remedial processes, either alone or in combination, as part of a Corrective Action Study (CAS) for Everest. 
The primary remedial processes suggested by the KDHE were the following: Hydraulic control by groundwater extraction with aboveground treatment; Air sparging (AS) coupled with soil vapor extraction (SVE) in large-diameter boreholes (LDBs); and Phytoremediation. As a further outcome of the 2005 meeting and as a precursor to development of a possible CAS, the CCC/USDA completed the following supplemental investigations at Everest to address several specific technical concerns discussed with the KDHE: (1) Construction of interpretive cross sections at strategic locations selected by the KDHE along the main plume migration pathway, to depict the hydrogeologic characteristics affecting groundwater flow and contaminant movement (Argonne 2006a). (2) A field investigation in early 2006 (Argonne 2006b), as follows: (a) Installation and testing of a production well and associated observation points, at locations approved by the KDHE, to determine the response of the Everest aquifer to groundwater extraction near the Nigh property. (b) Groundwater sampling for the analysis of volatile organic compounds (VOCs) and the installation of additional permanent monitoring points at locations selected by the KDHE, to further 4. Physics Division annual review, April 1, 1992--March 31, 1993 Energy Technology Data Exchange (ETDEWEB) Thayer, K.J. [ed. 1993-08-01 This document is the annual review of the Argonne National Laboratory Physics Division for the period April 1, 1992--March 31, 1993. Work on the ATLAS device is covered, as well as work on a number of others in lab, as well as collaborative projects. Heavy ion nuclear physics research looked at quasi-elastic, and deep-inelastic reactions, cluster states, superdeformed nuclei, and nuclear shape effects. There were programs on accelerator mass spectroscopy, and accelerator and linac development. There were efforts in medium energy nuclear physics, weak interactions, theoretical nuclear and atomic physics, and experimental atomic and molecular physics based on accelerators and synchrotron radiation. 5. Stabilization and Solidification of Nitric Acid Effluent Waste at Y-12 Energy Technology Data Exchange (ETDEWEB) Singh, Dileep [Argonne National Lab. (ANL), Argonne, IL (United States); Lorenzo-Martin, Cinta [Argonne National Lab. (ANL), Argonne, IL (United States) 2016-12-16 Consolidated Nuclear Security, LLC (CNS) at the Y-12 plant is investigating approaches for the treatment (stabilization and solidification) of a nitric acid waste effluent that contains uranium. Because the pH of the waste stream is 1-2, it is a difficult waste stream to treat and stabilize by a standard cement-based process. Alternative waste forms are being considered. In this regard, Ceramicrete technology, developed at Argonne National Laboratory, is being explored as an option to solidify and stabilize the nitric acid effluent wastes. 6. Summary of the Geographic Information System workshop, held in Chicago, Illinois, May 29--30, 1991. Final report, December 1989--December 1991 Energy Technology Data Exchange (ETDEWEB) Thompson, P J; Sullivan, R G; Sundell, R C; Messersmith, J [Argonne National Lab., IL (United States) 1991-12-01 The Gas Research Institute, in conjunction with Argonne National Laboratory, sponsored a workshop on May 29--30, 1991, in Chicago, Illinois, to give gas utilities the opportunity to learn about the availability, applications, and benefits of Geographic Information Systems (GISs). 
This report is a synopsis of that workshop and contains brief discussions, followed by copies of the viewgraphs shown at the workshop, for the following GIS topics: (1) introduction to GIS, (2) data development, (3) analytical functions, (4) use for gas pipeline right-of-way applications, and (5)video imaging and simulation. 7. COMPILATION OF CURRENT HIGH ENERGY PHYSICS EXPERIMENTS Energy Technology Data Exchange (ETDEWEB) Wohl, C.G.; Kelly, R.L.; Armstrong, F.E.; Horne, C.P.; Hutchinson, M.S.; Rittenberg, A.; Trippe, T.G.; Yost, G.P.; Addis, L.; Ward, C.E.W.; Baggett, N.; Goldschmidt-Clermong, Y.; Joos, P.; Gelfand, N.; Oyanagi, Y.; Grudtsin, S.N.; Ryabov, Yu.G. 1981-05-01 This is the fourth edition of our compilation of current high energy physics experiments. It is a collaborative effort of the Berkeley Particle Data Group, the SLAC library, and nine participating laboratories: Argonne (ANL), Brookhaven (BNL), CERN, DESY, Fermilab (FNAL), the Institute for Nuclear Study, Tokyo (INS), KEK, Serpukhov (SERP), and SLAC. The compilation includes summaries of all high energy physics experiments at the above laboratories that (1) were approved (and not subsequently withdrawn) before about April 1981, and (2) had not completed taking of data by 1 January 1977. We emphasize that only approved experiments are included. 8. Final work plan : phase I investigation of potential contamination at the former CCC/USDA grain storage facility in Montgomery City, Missouri. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2010-08-16 former grain storage facility, the CCC/USDA will conduct investigations to (1) characterize the source(s), extent, and factors controlling the possible subsurface distribution and movement of carbon tetrachloride at the Montgomery City site and (2) evaluate the health and environmental threats potentially represented by the contamination. This work will be performed in accord with the Intergovernmental Agreement established between the Farm Service Agency of the USDA and the MoDNR, to address carbon tetrachloride contamination potentially associated with a number of former CCC/USDA grain storage facilities in Missouri. The investigations at Montgomery City will be conducted on behalf of the CCC/USDA by the Environmental Science Division of Argonne National Laboratory. Argonne is a nonprofit, multidisciplinary research center operated by UChicago Argonne, LLC, for the U.S. Department of Energy (DOE). The CCC/USDA has entered into an agreement with DOE, under which Argonne provides technical assistance to the CCC/USDA with environmental site characterization and remediation at its former grain storage facilities. The site characterization at Montgomery City will take place in phases. This approach is recommended by the CCC/USDA and Argonne, so that information obtained and interpretations developed during each incremental stage of the investigation can be used most effectively to guide subsequent phases of the program. This site-specific Work Plan outlines the specific technical objectives and scope of work proposed for Phase I of the Montgomery City investigation. This Work Plan also includes the community relations plan to be followed throughout the CCC/USDA program at the Montgomery City site. Argonne is developing a Master Work Plan specific to operations in the state of Missouri. In the meantime, Argonne has issued a Provisional Master Work Plan (PMWP; Argonne 2007) that has been reviewed and approved by the MoDNR for current use. 
The PMWP (Argonne 2007) provides 9. Research in mathematics and computer science, March 1, 1991--September 30, 1992 Energy Technology Data Exchange (ETDEWEB) Pieper, G.W. 1992-10-01 This report discusses the following topics in mathematics and computer science at Argonne National Laboratory: Harnessing the Power; Modeling Piezoelectric Crystals; A Two-Way Street; The Challenge Is On; A True Molecular Engineering Capability; CHAMMPions Attack Climate Issues; Studying Vortex Dynamics; Studying Vortex Structure; Providing Reliable and Fast Derivatives; Automating Reasoning for Scientific Problem Solving; Optimization and Mathematical Programming; Scalable Algorithms for Linear Algebra; Reliable Core Software; Computing Phylogenetic Trees; Managing Life-Critical Systems; Interacting with Data through Visualization; New Tools for New Technologies. 10. HGMF of 10-L solutions. Revision 1 Energy Technology Data Exchange (ETDEWEB) Larkin, K.A. 1995-04-27 This test plan describes the activities associated with the High Gradient Magnetic Filtration (HGMF) of plutonium-bearing solutions (10-L). The 10-L solutions were received from Argonne National Laboratories in 1972, are acidic, and are considered unstable. Tests on a HGMF were conducted in the 1980s for N-reactor. These tests suggested that this system would be useful for the removal of transuranic elements from solutions. The purpose of this testing activity is to show that HGMF is an applicable method of removing plutonium precipitates from solution. 11. Hot Fuel Examination Facility/South Energy Technology Data Exchange (ETDEWEB) 1990-05-01 This document describes the potential environmental impacts associated with proposed modifications to the Hot Fuel Examination Facility/South (HFEF/S). The proposed action, to modify the existing HFEF/S at the Argonne National Laboratory-West (ANL-W) on the Idaho National Engineering Laboratory (INEL) in southeastern Idaho, would allow important aspects of the Integral Fast Reactor (IFR) concept, offering potential advantages in nuclear safety and economics, to be demonstrated. It would support fuel cycle experiments and would supply fresh fuel to the Experimental Breeder Reactor-II (EBR-II) at the INEL. 35 refs., 12 figs., 13 tabs. 12. Practical superconductor development for electrical power applications: Quarterly report for the period ending December 31, 1999 Energy Technology Data Exchange (ETDEWEB) NONE 2000-02-02 This is a multiyear experimental research program focused on improving relevant material properties of high-T{sub c} superconductors (HTSS) and on development of fabrication methods that can be transferred to industry for production of commercial conductors. The development of teaming relationships through agreements with industrial partners is a key element of the Argonne (ANL) program. Recent results on substrate deposition for coated conductors, vortex studies, development of hardened Ag-alloy sheaths for powder-in-tube conductors, and sol-gel processing of NdBa{sub 2}Cu{sub 3}O{sub x} (Nd-123) are presented. 13. Practical superconductor development for electrical power applications quarterly report for the period ending March 31, 2000 Energy Technology Data Exchange (ETDEWEB) NONE 2000-04-19 This is a multiyear experimental research program focused on improving relevant material properties of high-T{sub c} superconductors (HTSs) and on development of fabrication methods that can be transferred to industry for production of commercial conductors. 
The development of teaming relationships through agreements with industrial partners is a key element of the Argonne program. Recent results are presented on YBa{sub 2}Cu{sub 3}O{sub x} (Y-123) coated conductors, sheathed (Bi,Pb){sub 2}Sr{sub 2}Ca{sub 2}Cu{sub 3}O{sub x} (Bi-2223) tapes, and applications development. 14. Biotelemetry system for Epilepsy Seizure Control Energy Technology Data Exchange (ETDEWEB) Smith, LaCurtise; Bohnert, George W. 2009-07-02 The Biotelemetry System for Epilepsy Seizure Control Project developed and tested an automated telemetry system for use in an epileptic seizure prevention device that precisely controls localized brain temperature. This project was a result of a Department of Energy (DOE) Global Initiatives for Proliferation Prevention (GIPP) grant to the Kansas City Plant (KCP), Argonne National Laboratory (ANL), and Pacific Northwest National Laboratory (PNNL) to partner with Flint Hills Scientific, LLC, Lawrence, KS and Biophysical Laboratory Ltd (BIOFIL), Sarov, Russia to develop a method to help control epileptic seizures. 15. Reactivity-worth estimates of the OSMOSE samples in the MINERVE reactor R1-UO2 configuration. Energy Technology Data Exchange (ETDEWEB) Klann, R. T.; Perret, G.; Nuclear Engineering Division 2007-10-03 An initial series of calculations of the reactivity-worth of the OSMOSE samples in the MINERVE reactor with the R1-UO2 core configuration was completed. The reactor model was generated using the REBUS code developed at Argonne National Laboratory. The calculations are based on the specifications for fabrication, so they are considered preliminary until sampling and analysis have been completed on the fabricated samples. The estimates indicate a range of reactivity effect from -22 pcm to +25 pcm compared to the natural U sample. 16. A user's guide to the national labs Energy Technology Data Exchange (ETDEWEB) Baron, S.; Marcuse, W. (Brookhaven National Lab., Upton, NY (US)) 1988-09-01 Recent initiatives by the Congress and the Administration have been directed to improving American industrial competitiveness. One of these initiatives is directed to encouraging industrial users to avail themselves of special facilities at the Federal Laboratories. The facilities available at the National Bureau of Standards (NBS) and seven Department of Energy (DOE) laboratories are presented here. One facility at each Laboratory is described in detail; the remainder are listed with the names of individuals to contact for further information. The seven laboratories are: Argonne, Brookhaven, Lawrence Berkeley, Los Alamos, Oak Ridge, Sandia and Lawrence Livermore. 17. A scaleable architecture for the modeling and simulation of intelligent transportation systems. Energy Technology Data Exchange (ETDEWEB) Ewing, T.; Tentner, A. 1999-03-17 A distributed, scaleable architecture for the modeling and simulation of Intelligent Transportation Systems on a network of workstations or a parallel computer has been developed at Argonne National Laboratory. The resulting capability provides a modular framework supporting plug-in models, hardware, and live data sources; visually realistic graphics displays to support training and human factors studies; and a set of basic ITS models. The models and capabilities are described, along with a typical scenario involving dynamic rerouting of smart vehicles which send probe reports to and receive traffic advisories from a traffic management center capable of incident detection. 18. 
Single-Particle Spectrum of Pure Neutron Matter CERN Document Server Gad, Khalaf 2015-01-01 We have calculated the self-consistent auxiliary potential effects on the binding energy of neutron matter using the Brueckner-Hartree-Fock approach by adopting the Argonne V18 and CD-Bonn potentials. The binding energy with the four different choices for the self-consistent auxiliary potential is discussed. Also, the binding energy of neutron matter has been computed within the framework of the self-consistent Green's function approach. We also compare the binding energies obtained in this study with those obtained by various microscopic approaches. 19. Influence of emittance on transverse dynamics of accelerated bunches in the plasma–dielectric wakefield accelerator Energy Technology Data Exchange (ETDEWEB) Kniaziev, R.R., E-mail: [email protected] [V.N. Karazin Kharkov National University, Kharkov (Ukraine); NSC Kharkov Institute of Physics and Technology, Kharkov (Ukraine); Sotnikov, G.V. [NSC Kharkov Institute of Physics and Technology, Kharkov (Ukraine)] 2016-09-01 We theoretically study the transverse dynamics of a bunch of charged particles with finite emittance in the plasma–dielectric wakefield accelerator. Parameters of the bunches are chosen to match those available from the 15 MeV Argonne Wakefield Accelerator beamline. The goal of the paper is to study the behavior of bunches of charged particles with different emittances while accelerating these bunches by wakefields in plasma–dielectric structures. The obtained results allow us to determine the limits of the bunch emittance within which the dynamics of the accelerated particles remains stable. 20. Status of health and environmental research relative to coal gasification 1976 to the present Energy Technology Data Exchange (ETDEWEB) Wilzbach, K.E.; Reilly, C.A. Jr. (comps.) 1982-10-01 Health and environmental research relative to coal gasification conducted by Argonne National Laboratory, the Inhalation Toxicology Research Institute, and Oak Ridge National Laboratory under DOE sponsorship is summarized. The studies have focused on the chemical and toxicological characterization of materials from a range of process streams in five bench-scale, pilot-plant and industrial gasifiers. They also address ecological effects, industrial hygiene, environmental control technology performance, and risk assessment. Following an overview of coal gasification technology and related environmental concerns, integrated summaries of the studies and results in each area are presented and conclusions are drawn. Needed health and environmental research relative to coal gasification is identified. 1. Cooperative Remote Monitoring, Arms control and nonproliferation technologies: Fourth quarter 1995 Energy Technology Data Exchange (ETDEWEB) Alonzo, G M [ed.] 1995-01-01 The DOE's Cooperative Remote Monitoring programs integrate elements from research, development, and implementation to achieve the DOE's objectives in arms control and nonproliferation. 
The contents of this issue are: cooperative remote monitoring--trends in arms control and nonproliferation; Modular Integrated Monitoring System (MIMS); Authenticated Tracking and Monitoring Systems (ATMS); Tracking and Nuclear Materials by Wide-Area Nuclear Detection (WAND); Cooperative Monitoring Center; the International Remote Monitoring Project; international US and IAEA remote monitoring field trials; Project Dustcloud: monitoring the test stands in Iraq; bilateral remote monitoring: Kurchatov-Argonne-West Demonstration; INSENS Sensor System Project. 2. UV-Vis Spectroscopy as a Tool for Safeguards; Instrumentation installation and fundamental data collection Energy Technology Data Exchange (ETDEWEB) Smith, Nicholas A. [Argonne National Lab. (ANL), Argonne, IL (United States); Krebs, John F. [Argonne National Lab. (ANL), Argonne, IL (United States); Hebden, Andrew S. [Argonne National Lab. (ANL), Argonne, IL (United States)] 2015-09-20 Two spectrophotometric process monitors, one optimized for high concentration (approximately 10 g/L) and one for trace levels (approximately 10 ppm), were developed at Argonne and installed at the SRS H-Canyon facility for field testing. These systems were built of Commercial-Off-The-Shelf components utilizing a custom, facility-specific hardware interface. The systems directly provide a qualitative measurement of process chemistry (i.e., valence state). With appropriate calibrations the systems could provide quantitative data. Laboratory tests were performed to determine the spectrophotometric molar absorptivity coefficients for relevant actinide and transition metals of interest. 3. Comparison of mechanical properties of glass-bonded sodalite and borosilicate glass high-level waste forms Energy Technology Data Exchange (ETDEWEB) O'Holleran, T. P.; DiSanto, T.; Johnson, S. G.; Goff, K. M. 2000-05-09 Argonne National Laboratory has developed a glass-bonded sodalite waste form to immobilize the salt waste stream from electrometallurgical treatment of spent nuclear fuel. The waste form consists of 75 vol.% crystalline sodalite and 25 vol.% glass. Microindentation fracture toughness measurements were performed on this material and borosilicate glass from the Defense Waste Processing Facility using a Vickers indenter. Palmqvist cracking was confined for the glass-bonded sodalite waste form, while median-radial cracking occurred in the borosilicate glass. The elastic modulus was measured by an acoustic technique. Fracture toughness, microhardness, and elastic modulus values are reported for both waste forms. 4. Status of US Maglev Program Energy Technology Data Exchange (ETDEWEB) Rote, D.M. 1993-11-01 Factors that have led to a reawakening of national interest in maglev technology in the United States are discussed. The development of the National Maglev program, its findings, and the four maglev design concepts resulting from the System Concept Definition study are reviewed. Technical requirements for the SCD contractors and for the Prototype Development Program are compared. Some legislative background information is given, with a review of the most important maglev legislation. Plans for the National Maglev Prototype Development Program are discussed, and activities related to maglev at Argonne National Laboratory are summarized. 5. Annual report of monitoring at Morrill, Kansas, in 2009. Energy Technology Data Exchange (ETDEWEB) LaFreniere, L. M.; Environmental Science Division 2010-08-05 In September 2005, the Commodity Credit Corporation of the U.S. 
Department of Agriculture (CCC/USDA) initiated periodic sampling of groundwater in the vicinity of a grain storage facility formerly operated by the CCC/USDA at Morrill, Kansas. The sampling at Morrill is being performed on behalf of the CCC/USDA by Argonne National Laboratory, in accord with a monitoring program approved by the Kansas Department of Health and Environment (KDHE 2005), to monitor levels of carbon tetrachloride contamination identified in the groundwater at this site (Argonne 2004, 2005a). This report provides results for monitoring events in April and September 2009. Under the KDHE-approved monitoring plan (Argonne 2005b), groundwater was initially sampled twice yearly for a period of two years (in fall 2005, in spring and fall 2006, and in spring and fall 2007). The samples were analyzed for volatile organic compounds (VOCs), as well as for selected geochemical parameters to aid in the evaluation of possible natural contaminant degradation (reductive dechlorination) processes in the subsurface environment. The analytical results for groundwater sampling events at Morrill from September 2005 to October 2008 were documented previously (Argonne 2006a,b, 2007, 2008a,b, 2009). Those results consistently demonstrated the presence of carbon tetrachloride contamination, at levels exceeding the KDHE Tier 2 risk-based screening level of 5.0 {micro}g/L for this compound, in a groundwater plume extending generally south-southeastward from the former CCC/USDA facility, toward Terrapin Creek at the south edge of the town. Low levels ({le} 1.3 {micro}g/L) of carbon tetrachloride were persistently detected at monitoring well MW8S, on the bank of an intermittent tributary to Terrapin Creek. This observation suggested a possible risk of contamination of the surface waters of the creek. That concern is the regulatory driver for ongoing monitoring. In light of the early findings, in 2006 the CCC 6. A new horizon in secondary neutral mass spectrometry: post-ionization using a VUV free electron laser Energy Technology Data Exchange (ETDEWEB) Veryovkin, Igor V.; Calaway, Wallis F.; Moore, Jerry F.; Pellin, Michael J.; Lewellen, John W.; Li, Yuelin; Milton, Stephen V.; King, Bruce V.; Petravic, Mladen 2004-06-15 A new time-of-flight (TOF) mass spectrometer incorporating post-ionization of sputtered neutral species with tunable vacuum ultraviolet (VUV) light generated by a free electron laser (FEL) has been developed. Capabilities of this instrument, called SPIRIT, were demonstrated by experiments with photoionization of sputtered neutral gold atoms with 125 nm light generated by the VUV FEL located at Argonne National Laboratory (ANL). In a separate series of experiments with a fixed wavelength VUV light source, a 157 nm F{sub 2} laser, a useful yield (atoms detected per atoms sputtered) of about 12% and a mass resolution better than 1500 were demonstrated for molybdenum. 7. Internal polarized targets Energy Technology Data Exchange (ETDEWEB) Kinney, E.R.; Coulter, K.; Gilman, R.; Holt, R.J.; Kowalczyk, R.S.; Napolitano, J.; Potterveld, D.H.; Young, L. (Argonne National Lab., IL (USA)); Mishnev, S.I.; Nikolenko, D.M.; Popov, S.G.; Rachek, I.A.; Temnykh, A.B.; Toporkov, D.K.; Tsentalovich, E.P.; Wojtsekhowski, B.B. (AN SSSR, Novosibirsk (USSR). Inst. Yadernoj Fiziki) 1989-01-01 Internal polarized targets offer a number of advantages over external targets. 
After a brief review of the basic motivation and principles behind internal polarized targets, the technical aspects of the atomic storage cell will be discussed in particular. Sources of depolarization and the means by which their effects can be ameliorated will be described, especially depolarization by the intense magnetic fields arising from the circulating particle beam. The experience of the Argonne-Novosibirsk collaboration with the use of a storage cell in a 2 GeV electron storage ring will be the focus of this technical discussion. 17 refs., 11 figs. 8. Optimizing floating guard ring designs for FASPAX N-in-P silicon sensors CERN Document Server Shin, Kyung-Wook; Lipton, Ronald; Deptuch, Gregory; Fahim, Farah; Madden, Tim; Zimmerman, Tom 2016-01-01 FASPAX (Fermi-Argonne Semiconducting Pixel Array X-ray detector) is being developed as a fast integrating area detector with wide dynamic range for time-resolved applications at the upgraded Advanced Photon Source (APS). A burst mode detector with an intended 13 MHz image rate, FASPAX will also incorporate a novel integration circuit to achieve wide dynamic range, from single photon sensitivity to $10^5$ x-rays/pixel/pulse. To achieve these ambitious goals, a novel silicon sensor design is required. This paper will detail the early design of the FASPAX sensor. Results from TCAD optimization studies and characterization of prototype sensors will be presented. 9. Well-to-wheels analysis of energy use and greenhouse gas emissions of plug-in hybrid electric vehicles. Energy Technology Data Exchange (ETDEWEB) Elgowainy, A.; Han, J.; Poch, L.; Wang, M.; Vyas, A.; Mahalik, M.; Rousseau, A. 2010-06-14 Plug-in hybrid electric vehicles (PHEVs) are being developed for mass production by the automotive industry. PHEVs have been touted for their potential to reduce the US transportation sector's dependence on petroleum and cut greenhouse gas (GHG) emissions by (1) using off-peak excess electric generation capacity and (2) increasing vehicle energy efficiency. A well-to-wheels (WTW) analysis - which examines energy use and emissions from primary energy source through vehicle operation - can help researchers better understand the impact of the upstream mix of electricity generation technologies for PHEV recharging, as well as the powertrain technology and fuel sources for PHEVs. For the WTW analysis, Argonne National Laboratory researchers used the Greenhouse gases, Regulated Emissions, and Energy use in Transportation (GREET) model developed by Argonne to compare the WTW energy use and GHG emissions associated with various transportation technologies to those associated with PHEVs. Argonne researchers estimated the fuel economy and electricity use of PHEVs and alternative fuel/vehicle systems by using the Powertrain System Analysis Toolkit (PSAT) model. They examined two PHEV designs: the power-split configuration and the series configuration. The first is a parallel hybrid configuration in which the engine and the electric motor are connected to a single mechanical transmission that incorporates a power-split device that allows for parallel power paths - mechanical and electrical - from the engine to the wheels, allowing the engine and the electric motor to share the power during acceleration. In the second configuration, the engine powers a generator, which charges a battery that is used by the electric motor to propel the vehicle; thus, the engine never directly powers the vehicle's transmission. 
The power-split configuration was adopted for PHEVs with a 10- and 20-mile electric range because they require frequent use of the engine for acceleration and 10. EMERGE - ESnet/MREN Regional Science Grid Experimental NGI Testbed Energy Technology Data Exchange (ETDEWEB) Mambretti, Joe; DeFanti, Tom; Brown, Maxine 2001-07-31 This document is the final report on the EMERGE Science Grid testbed research project from the perspective of the International Center for Advanced Internet Research (iCAIR) at Northwestern University, which was a subcontractor to this UIC project. This report is a compilation of information gathered from a variety of materials related to this project produced by multiple EMERGE participants, especially those at the Electronic Visualization Lab (EVL) at the University of Illinois at Chicago (UIC), Argonne National Lab and iCAIR. The EMERGE Science Grid project was managed by Tom DeFanti, PI from EVL at UIC. 11. Theoretical studies of chemical reaction dynamics Energy Technology Data Exchange (ETDEWEB) Schatz, G.C. [Argonne National Laboratory, IL (United States)] 1993-12-01 This collaborative program with the Theoretical Chemistry Group at Argonne involves theoretical studies of gas phase chemical reactions and related energy transfer and photodissociation processes. Many of the reactions studied are of direct relevance to combustion; others are selected because they provide important examples of special dynamical processes, or are of relevance to experimental measurements. Both classical trajectory and quantum reactive scattering methods are used for these studies, and the types of information determined range from thermal rate constants to state-to-state differential cross sections. 12. The Charge Form Factors of the Three- and Four-Body Nuclei Energy Technology Data Exchange (ETDEWEB) R. Schiavilla; V.R. Pandharipande; D.O. Riska 1990-01-01 The charge form factors of 3H, 3He, and 4He are calculated using the Monte Carlo method and variational ground-state wave functions obtained for the Argonne two-nucleon and Urbana-VII three-nucleon interactions. The model for the charge density operator contains the two-body exchange contributions of longest range. With some spread due to the uncertainty in the electromagnetic form factors of the nucleon, the calculated charge form factors are in good agreement with the empirical values over the whole experimentally covered range of momentum transfer. 13. Rotating target wheel system for super-heavy element production at ATLAS CERN Document Server Greene, J P; Falout, J; Janssens, R V F 2004-01-01 A new scattering chamber housing a large-diameter rotating target wheel has been designed and constructed in front of the Fragment Mass Analyzer (FMA) for the production of very heavy nuclei (Z greater than 100) using beams from the Argonne Tandem Linear Accelerator System (ATLAS). In addition to the target and drive system, the chamber is extensively instrumented in order to monitor target performance and deterioration. Capabilities also exist to install rotating entrance and exit windows for gas cooling of the target within the scattering chamber. The design and initial tests are described. 14. Performance evaluation of high-temperature superconducting current leads for micro-SMES systems Science.gov (United States) Niemann, R. C.; Cha, Y. S.; Hull, J. R.; Buckles, W. E.; Weber, B. R.; Yang, S. T. 
As part of the US Department of Energy's Superconductivity Technology Program, Argonne National Laboratory and Superconductivity, Inc., are developing high-temperature superconductor (HTS) current leads for application to micro-superconducting magnetic energy storage systems. Two 1500-A HTS leads have been designed and constructed. The performance of the current lead assemblies is being evaluated in a zero-magnetic-field test program that includes assembly procedures, tooling, and quality assurance; thermal and electrical performance; and flow and mechanical characteristics. Results of evaluations performed to date are presented. 15. Current leads and magnetic bearings Science.gov (United States) Hull, J. R. 1993-10-01 Since the discovery of high temperature superconductors (HTS's), Argonne National Laboratory (ANL) has been active in a broad spectrum of activities in developing these materials for applications. Work at every stage of development has involved industrial collaboration in order to accelerate commercialization. While most of the development work has been devoted to improving the properties of current-carrying wires, some effort has been devoted to applications that can utilize HTS's with properties available now or in the near future. In this paper, advances made in the area of current leads and magnetic bearings are discussed. 16. Proceedings of the fourth users meeting for the advanced photon source Energy Technology Data Exchange (ETDEWEB) 1992-02-01 The Fourth Users Meeting for the Advanced Photon Source (APS) was held on May 7--8, 1991 at Argonne National Laboratory. Scientists and engineers from universities, industry, and national laboratories came to review the status of the facility and to look ahead to the types of forefront science that will be possible when the APS is completed. The presentations at the meeting included an overview of the project; critical issues for APS operation; advances in synchrotron radiation applications; users' perspectives; and funding perspectives. The actions taken at the 1991 Business Meeting of the Advanced Photon Source Users Organization are also documented. 17. Microscopic calculation of the spin-dependent neutron scattering lengths on 3He CERN Document Server Hofmann, H M 2003-01-01 We report on the spin-dependent neutron scattering length on 3He from a microscopic calculation of p-3H, n-3He, and d-2H scattering employing the Argonne v18 nucleon-nucleon potential with and without additional three-nucleon force. The results, and those of a comprehensive R-matrix analysis, are compared to a recent measurement. The overall agreement for the scattering lengths is quite good. The imaginary parts of the scattering lengths are very sensitive to the inclusion of three-nucleon forces, whereas the real parts are almost insensitive. 18. Adapting the serial Alpgen event generator to simulate LHC collisions on millions of parallel threads CERN Document Server Childers, J T; LeCompte, T J; Papka, M E; Benjamin, D P 2015-01-01 As the LHC moves to higher energies and luminosity, the demand for computing resources increases accordingly and will soon outpace the growth of the Worldwide LHC Computing Grid. To meet this greater demand, event generation Monte Carlo was targeted for adaptation to run on Mira, the supercomputer at the Argonne Leadership Computing Facility. Alpgen is a Monte Carlo event generation application that is used by LHC experiments in the simulation of collisions that take place in the Large Hadron Collider. 
This paper details the process by which Alpgen was adapted from a single-processor serial application to a large-scale parallel application and the performance that was achieved. 19. Letter to the editor: Impartial review is key. Energy Technology Data Exchange (ETDEWEB) Crabtree, G. W.; Materials Science Division 2002-08-22 The News Feature, 'Misconduct in physics: Time to wise up?' [Nature 418, 120-121; 2002], raises important issues that the physical-science community must face. Argonne National Laboratory's code of ethics calls for a response very similar to that of Bell Labs, namely: 'The Laboratory director may appoint an ad-hoc scientific review committee to investigate internal or external charges of scientific misconduct, fraud, falsification of data, misinterpretation of data, or other activities involving scientific or technical matters.' 20. Astrophysics experiments with radioactive beams at ATLAS Directory of Open Access Journals (Sweden) B. B. Back 2014-02-01 Full Text Available Reactions involving short-lived nuclei play an important role in nuclear astrophysics, especially in explosive scenarios which occur in novae, supernovae or X-ray bursts. This article describes the nuclear astrophysics program with radioactive ion beams at the ATLAS accelerator at Argonne National Laboratory. The CARIBU facility as well as recent improvements for the in-flight technique are discussed. New detectors which are important for studies of the rapid proton- or rapid neutron-capture processes are described. At the end we briefly mention plans for future upgrades to enhance the intensity, purity and the range of in-flight and CARIBU beams.
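The reactivity-worth range quoted above for the OSMOSE samples (entry 15, -22 pcm to +25 pcm) is expressed in pcm (per cent mille). As a brief illustrative aside using only the standard textbook definition of reactivity, and not taken from the cited Argonne report, the conversion from an effective multiplication factor $k_{\mathrm{eff}}$ is

$$\rho = \frac{k_{\mathrm{eff}}-1}{k_{\mathrm{eff}}}, \qquad 1\ \text{pcm} = 10^{-5}\ \frac{\Delta k}{k},$$

so a sample insertion that shifts $k_{\mathrm{eff}}$ from 1.00000 to about 1.00025 corresponds to a worth of roughly $+25$ pcm, the upper end of the quoted range.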
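The well-to-wheels discussion for plug-in hybrids above (entry 9) combines an electricity pathway and a gasoline pathway. The sketch below shows, in schematic form, how such a per-mile GHG figure can be assembled; it is only a minimal illustration of the accounting idea, not GREET or PSAT code, and every parameter name and numerical value in it is a hypothetical placeholder.

```python
# Minimal well-to-wheels (WTW) greenhouse-gas sketch for a PHEV.
# Illustrative only: parameter names and values are hypothetical placeholders,
# not outputs of the GREET or PSAT models referenced above.

def phev_wtw_ghg_per_mile(
    utility_factor: float,             # fraction of miles driven on grid electricity
    elec_kwh_per_mile: float,          # electricity drawn from the wall per mile
    grid_wtw_g_per_kwh: float,         # WTW GHG intensity of the recharging electricity
    gasoline_wtw_g_per_gallon: float,  # WTW GHG per gallon of gasoline burned
    mpg_on_engine: float,              # fuel economy when the engine provides the power
) -> float:
    """Return grams of CO2-equivalent per mile for a blended electric/engine duty cycle."""
    electric_part = utility_factor * elec_kwh_per_mile * grid_wtw_g_per_kwh
    engine_part = (1.0 - utility_factor) * gasoline_wtw_g_per_gallon / mpg_on_engine
    return electric_part + engine_part

# Example with made-up numbers: 60% of miles on electricity, 0.30 kWh/mile,
# a 600 g/kWh grid mix, 11,500 g/gallon gasoline WTW, and 45 mpg on the engine.
print(round(phev_wtw_ghg_per_mile(0.60, 0.30, 600.0, 11_500.0, 45.0), 1))  # ~210.2 g/mile
```

Varying grid_wtw_g_per_kwh in this sketch is the quickest way to see how the upstream generation mix drives the electric share of the total, which is the sensitivity the Argonne WTW study examines.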
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5556110739707947, "perplexity": 5259.425691406364}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823309.55/warc/CC-MAIN-20171019141046-20171019161046-00520.warc.gz"}
https://arxiv.org/abs/0808.1586
cond-mat.str-el # Title: Glassy states in fermionic systems with strong disorder and interactions Abstract: We study the competition between interactions and disorder in two dimensions. Whereas a noninteracting system is always Anderson localized by disorder in two dimensions, a pure system can develop a Mott gap for sufficiently strong interactions. Within a simple model, with short-ranged repulsive interactions, we show that, even in the limit of strong interaction, the Mott gap is completely washed out by disorder for an infinite system for dimensions $D\le 2$. The probability of a nonzero gap falls onto a universal curve, leading to a glassy state for which we provide a scaling function for the frequency dependent susceptibility. Comments: 8 pages, 5 figures, expanded to contain some analytical results for one dimension Subjects: Strongly Correlated Electrons (cond-mat.str-el); Disordered Systems and Neural Networks (cond-mat.dis-nn) Journal reference: Phys. Rev. B 79, 125102 (2009) DOI: 10.1103/PhysRevB.79.125102 Cite as: arXiv:0808.1586 [cond-mat.str-el] (or arXiv:0808.1586v3 [cond-mat.str-el] for this version) ## Submission history From: Sudip Chakravarty [view email] [v1] Mon, 11 Aug 2008 21:17:31 GMT (461kb,D) [v2] Wed, 10 Sep 2008 22:23:36 GMT (458kb,D) [v3] Tue, 3 Feb 2009 22:44:58 GMT (131kb,D)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.714242696762085, "perplexity": 2808.5126007933954}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320545.67/warc/CC-MAIN-20170625170634-20170625190634-00302.warc.gz"}