https://brilliant.org/problems/does-light-have-momentum/
# Does light have momentum?

Find the momentum of a photon with wavelength 1000 nm. Submit your answer in kg·m/s.
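The photon momentum follows from the de Broglie relation $p = h/\lambda$; a quick numerical check (not part of the original problem statement):

```python
# Photon momentum via the de Broglie relation p = h / wavelength.
h = 6.62607015e-34          # Planck constant, J*s (exact in SI since 2019)
wavelength = 1000e-9        # 1000 nm in metres

p = h / wavelength
print(f"p = {p:.3e} kg m/s")  # on the order of 6.6e-28 kg m/s
```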
https://proofwiki.org/wiki/Definition:Fundamental_Circuit_(Matroid)
# Definition:Fundamental Circuit (Matroid)

## Definition

Let $M = \struct {S, \mathscr I}$ be a matroid. Let $B$ be a base of $M$. Let $x \in S \setminus B$. The fundamental circuit of $x$ in the base $B$, denoted $\map C {x, B}$, is the unique circuit such that: $x \in \map C {x, B} \subseteq B \cup \set x$
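For a graphic matroid (where the independent sets are forests), the fundamental circuit of an edge $x \notin B$ is the unique cycle created when $x$ is added to the spanning tree $B$. A small illustrative sketch (the graph and the helper names are made up for this example):

```python
# Fundamental circuit in a graphic matroid: adding an edge x = (u, v)
# to a spanning tree B creates exactly one cycle, namely x together
# with the unique u-v path inside the tree.

def tree_path(tree_edges, u, v):
    """Unique path from u to v in a tree given as a list of edges."""
    adj = {}
    for a, b in tree_edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    parent = {u: None}
    stack = [u]
    while stack:                      # iterative DFS recording parents
        node = stack.pop()
        if node == v:
            break
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                stack.append(nxt)
    path = [v]
    while parent[path[-1]] is not None:
        path.append(parent[path[-1]])
    return path[::-1]

def fundamental_circuit(tree_edges, x):
    u, v = x
    path = tree_path(tree_edges, u, v)
    path_edges = list(zip(path, path[1:]))
    return path_edges + [x]           # C(x, B) = tree path plus x itself

# Example: spanning tree of the square graph 1-2-3-4, extra edge (4, 1)
B = [(1, 2), (2, 3), (3, 4)]
print(fundamental_circuit(B, (4, 1)))
```

Uniqueness of the circuit corresponds to the fact that a tree contains exactly one path between any two vertices.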
http://mathoverflow.net/questions/138885/applications-of-non-separable-hilbert-spaces/138887
# Applications of non-separable Hilbert spaces

In applications, Hilbert spaces of interest are often assumed to be separable. In addition to being extremely convenient mathematically, this assumption can often be justified on computational or physical grounds. Are there applications where non-separable Hilbert spaces naturally arise?

## 1 Answer

The main example of a non-separable Hilbert space is the Besicovitch space of almost periodic functions. Almost periodic functions play a significant role in analysis, from differential equations to operator algebras, and this space is quite useful.

• Here's Besicovitch's book (I was giving the same answer...) plouffe.fr/simon/math/… – Pietro Majer Aug 8 '13 at 7:19
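The non-separability can be seen directly (a standard argument, not part of the original answer): the exponentials $e^{i\lambda x}$ for real $\lambda$ form an uncountable orthonormal family with respect to the Besicovitch inner product,

```latex
\langle f, g \rangle
  = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} f(x)\,\overline{g(x)}\,dx,
\qquad
\langle e^{i\lambda x}, e^{i\mu x} \rangle =
\begin{cases}
1 & \lambda = \mu,\\
0 & \lambda \neq \mu,
\end{cases}
```

so these uncountably many unit vectors are pairwise at distance $\sqrt{2}$, and no countable subset can be dense.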
https://www.physicsforums.com/threads/quick-question-about-continuity-at-a-point.528908/
# Homework Help: Quick Question about continuity at a point

1. Sep 10, 2011

### tylerc1991

1. The problem statement, all variables and given/known data

I have always been comfortable with proving continuity of a function on an interval, but I have been running into problems proving that a function is continuous at a point in its domain. For example: Prove $f(x) = x^2$ is continuous at $x = 7$.

2. Relevant equations

We will be using the delta-epsilon definition of continuity here.

3. The attempt at a solution

Let $f(x) = x^2$ and $\varepsilon > 0$. Choose $\delta =$ ________ (usually we choose $\delta$ last, so I am just leaving it blank right now). Now, if $|x - y| = |7 - y| = |y - 7| < \delta$, then $|f(x) - f(y)| = |49 - y^2| = |y^2 - 49| = |y + 7||y - 7|.$ This is where it gets a little awkward for me. I know that I may say $|y - 7| < \delta$, but what do I do with the $|y + 7|$? Could I say that $|y + 7| < \delta + 14$? Then I would have to choose a $\delta$ such that $\delta (\delta + 14) = \varepsilon$. Thank you for your help, anyone!

2. Sep 12, 2011

### Stephen Tashi

There is probably a way to write the proof using mostly references to absolute values. However, it is useful to know how to "grunge it out" when no elegant way comes to mind. When you have to get down and dirty, it is best to write things like $|y-7| < \delta$ in the equivalent form of:

eq 1. $7 - \delta < y < 7 + \delta$

(For simplicity I'll label them "equations", but they are actually inequalities.) To square eq 1. and keep the inequality signs pointed the same way, we must make sure that all the terms are positive. We can make $7 - \delta > 0$ by choosing $\delta < 7$, so remember this condition. Squaring eq 1., we get:

eq 2. $49 - 14 \delta + \delta^2 < y^2 < 49 + 14\delta + \delta^2$

To keep $y^2$ within $\epsilon$ of $49$, we need eq 3. and eq 4. to hold:

eq 3. $49 - \epsilon < 49 - 14 \delta + \delta^2$

eq 4. $49 + 14\delta + \delta^2 < 49 + \epsilon$

Those inequalities simplify to eq 5. and eq 6. respectively:

eq 5. $-\epsilon < -14 \delta + \delta^2$

eq 6. $14 \delta +\delta^2 < \epsilon$

Multiplying eq 5. by $-1$ and reversing the inequality sign gives:

eq 7. $14 \delta - \delta^2 < \epsilon$

If eq 6. holds then eq 7. will also (since $14\delta - \delta^2 \le 14\delta + \delta^2$), so we only worry about eq 6. Rather than solving quadratic equations, it's simpler to take advantage of the fact that we are dealing with inequalities and trying to make $\delta$ small. So add the condition $0 < \delta < 1$, so that we can say $\delta^2 < \delta$. This and eq 6. imply that we want:

eq 8. $0 < 14 \delta + \delta^2 < 14\delta + \delta < \epsilon$

eq 9. $15 \delta < \epsilon$

So this implies we want:

eq 10. $\delta < \frac {\epsilon}{15}$

We can satisfy eq 10. by setting $\delta$ equal to various things, for example $\delta = (0.5)\frac{\epsilon}{15}$ or $\delta = \frac{\epsilon}{16}$, etc. We have to remember the previous assumptions we made on $\delta$. To incorporate all of them, it is sufficient to say:

eq 11. Let $\delta = \min\{ \frac{\epsilon}{16}, 1 \}$

To have a real proof you have to go through the reasoning in reverse order.
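A quick numerical sanity check of the final choice $\delta = \min\{\epsilon/16, 1\}$ (an illustration only, not a substitute for running the reasoning in reverse):

```python
# Spot-check: with delta = min(eps/16, 1), every y within delta of 7
# should satisfy |y^2 - 49| < eps.
def delta_for(eps):
    return min(eps / 16, 1.0)

for eps in (10.0, 1.0, 0.1, 0.001):
    d = delta_for(eps)
    # sample y across the open interval (7 - d, 7 + d)
    worst = max(abs((7 + t * d) ** 2 - 49)
                for t in [k / 1000 - 0.999 for k in range(1999)])
    assert worst < eps, (eps, worst)
print("delta = min(eps/16, 1) works for all sampled epsilons")
```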
https://www.physicsforums.com/threads/integration-by-substitution.208186/
# Integration by substitution?

1. Jan 11, 2008

### cabellos6

1. The problem statement, all variables and given/known data

I want to integrate (1+x)/(1-x).

2. Relevant equations

3. The attempt at a solution

I have looked at many examples of the substitution method - this one appears simple but I'm not finishing the last step. I know you must first take u = (1-x); then du = -dx. What happens with the numerator (1+x), as this would be the integral of -(1+x)du/u? I'd be very grateful if you could run me through the steps for this, please. Thanks.

2. Jan 11, 2008

### HallsofIvy

Staff Emeritus

You need to simplify the fraction first: dividing 1+x by 1-x gives -1 + 2/(1-x) = -1 - 2/(x-1). It's easy to integrate "-1", and to integrate -2/(x-1), let u = x-1.
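Following the answer, an antiderivative is $F(x) = -x - 2\ln|x-1| + C$. A finite-difference check that $F'(x)$ really equals $(1+x)/(1-x)$ (illustrative only; sample points are arbitrary, away from the singularity at $x=1$):

```python
import math

# Division gives (1+x)/(1-x) = -1 - 2/(x-1), which integrates to
# F(x) = -x - 2*ln|x - 1|.  Verify F'(x) == (1+x)/(1-x) numerically.
def f(x):
    return (1 + x) / (1 - x)

def F(x):
    return -x - 2 * math.log(abs(x - 1))

h = 1e-6
for x in (-2.0, -0.5, 0.0, 0.5, 3.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(deriv - f(x)) < 1e-5, (x, deriv, f(x))
print("F'(x) matches (1+x)/(1-x) at the sampled points")
```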
https://mathoverflow.net/questions/222616/whitney-sum-formula-for-pontryagin-classes-ii
# Whitney sum formula for Pontryagin classes II

I have read in several places that the total Pontryagin classes of real vector bundles satisfy a Whitney sum formula $p(E\oplus F) = p(E)\cdot p(F)$ modulo 2-torsion. I would like to understand the 2-torsion part better. Is there a reference which describes the difference between $p(E\oplus F)$ and $p(E)\cdot p(F)$, perhaps in terms of Bocksteins of Stiefel-Whitney classes of $E$ and $F$?

This question was previously part of Whitney sum formula for Pontryagin classes I; Qiaochu Yuan's answer to that question might be helpful.

Under Whitney sum, $p_q\mapsto \sum_j r_{2q-j}\otimes r_j$, where $r_{2s} = p_s$ and $r_{2s+1} = (\delta w_{2s})^2+ p_s\delta w_1$.
https://tex.stackexchange.com/questions/287755/selective-overlay-option-of-textblock-with-the-textpos-package/287902#287902
# Selective overlay option of textblock with the textpos package

The textpos package has an option called [overlay] that, when given at package loading, makes all the textblock boxes sit above (obscuring) other elements of the page. Is there a way to control whether or not a particular textblock overlays?

\documentclass{beamer}
\usepackage[overlay]{textpos}
\begin{document}
\begin{frame}{title}
Other elements
\begin{textblock}{6}(5,7.1) %is there an option to NOT overlay this particular one
Hello % or include a bulky image here.
\end{textblock}
\end{frame}
\end{document}

Since this is an emergency (my presentation is tomorrow) :) I will give one or two 100-point bounties for a solution or a workaround.

You can't do this in general: the [overlay] option works by adjusting the TeX \shipout command so that all of the {textblock} material on a page is output either before (non-overlay) or after (overlay) the non-{textblock} material. Since this is a presentation, however, you might be able to hack this on a per-page basis. Try setting \makeatletter\TP@overlayfalse before the page you want to hack, and then \TP@overlaytrue after it. That should result in all of the {textblock} environments on the affected page being non-overlay. You might have to play around with the precise positioning of those commands, but putting them before and after the {frame} environment should work. I haven't tested this – let us know how you get on.

• Hmm: I was fairly confident that would work – boo. I presume the rush is over for you now, but I'll look at this again. I have a \TPoptions macro implemented in a version 1.8b1 which might be relevant here, and this should prompt me to release that, if only to add a note about how to achieve this sort of thing. Thanks for letting me know. Jan 20, 2016 at 11:23

• I've tried using \TPoptions to overlay just one textbox on the same page as another that is underneath the main text, and haven't got it to work. Am I missing something? Oct 27, 2016 at 12:34

• @hertzsprung The \TPoptions macro allows you to change the in-play options on a per-page basis, but it can't change the effect of those options. On a particular page, the {textblock} material will appear either all before or all after the non-{textblock} material. So no, there's no current way to overlay just one textblock. Doing so would not be impossible, I don't think, but I suspect it would require major surgery to the package. Oct 28, 2016 at 12:55
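The per-page hack suggested in the answer can be sketched as follows (untested, as the answer itself warns; \TP@overlay is an internal switch of the textpos package and may change between versions):

```latex
% Toggle textpos's internal overlay switch around one frame.
\makeatletter
\TP@overlayfalse   % textblocks on the next page go *under* other material
\makeatother

\begin{frame}{title}
  Other elements
  \begin{textblock}{6}(5,7.1)
    Hello
  \end{textblock}
\end{frame}

\makeatletter
\TP@overlaytrue    % restore overlay behaviour for later pages
\makeatother
```

Note this switches all textblocks on the affected page at once; per-block control is exactly what the package cannot currently do.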
https://www.semanticscholar.org/paper/Double-Ramification-Cycles-and-Quantum-Integrable-Buryak-Rossi/0ecc55524422a309a13a1b31d545f64e576615e9
# Double Ramification Cycles and Quantum Integrable Systems

@article{Buryak2015DoubleRC, title={Double Ramification Cycles and Quantum Integrable Systems}, author={A. Buryak and P. Rossi}, journal={Letters in Mathematical Physics}, year={2015}, volume={106}, pages={289-317} }

• Published 2015
• Mathematics, Physics
• Letters in Mathematical Physics

In this paper, we define a quantization of the Double Ramification Hierarchies of Buryak (Commun Math Phys 336:1085–1107, 2015) and Buryak and Rossi (Commun Math Phys, 2014), using intersection numbers of the double ramification cycle, the full Chern class of the Hodge bundle and psi-classes with a given cohomological field theory. We provide effective recursion formulae which determine the full quantum hierarchy starting from just one Hamiltonian, the one associated with the first descendant…

Integrable systems of double ramification type (Mathematics, Physics, 2016). In this paper we study various aspects of the double ramification (DR) hierarchy, introduced by the first author, and its quantization. We extend the notion of tau-symmetry to quantum integrable…

Tau-Structure for the Double Ramification Hierarchies (Mathematics, Physics, 2016). In this paper we continue the study of the double ramification hierarchy of Buryak (Commun Math Phys 336(3):1085–1107, 2015). After showing that the DR hierarchy satisfies tau-symmetry we define its…

Tau-Structure for the Double Ramification Hierarchies (Dec 2018). In this paper we continue the study of the double ramification hierarchy of [Bur15]. After showing that the DR hierarchy satisfies tau-symmetry we define its partition function as the (logarithm of…

Deformation theory of Cohomological Field Theories (Mathematics, Physics, 2020). We develop the deformation theory of cohomological field theories (CohFTs), which is done as a special case of a general deformation theory of morphisms of modular operads. This leads us to introduce…

Quantum D4 Drinfeld–Sokolov hierarchy and quantum singularity theory (Mathematics, Physics, 2019). In this paper we compute explicitly the double ramification hierarchy and its quantization for the D4 Dubrovin–Saito cohomological field theory obtained applying the Givental–Teleman…

Integrability, Quantization and Moduli Spaces of Curves. This paper has the purpose of presenting in an organic way a new approach to integrable (1+1)-dimensional field systems and their systematic quantization emerging from intersection theory of the…

Towards a description of the double ramification hierarchy for Witten's $r$-spin class (Mathematics, Physics, 2015). The double ramification hierarchy is a new integrable hierarchy of hamiltonian PDEs introduced recently by the first author. It is associated to an arbitrary given cohomological field theory. In this…

The quantum Witten-Kontsevich series and one-part double Hurwitz numbers. We study the quantum Witten-Kontsevich series introduced by Buryak, Dubrovin, Guere and Rossi in [buryak2016integrable] as the logarithm of a quantum tau function for the quantum KdV hierarchy…

INTEGRABLE SYSTEMS AND MODULI SPACES OF CURVES. This document has the purpose of presenting in an organic way my research on integrable systems originating from the geometry of moduli spaces of curves, with applications to Gromov-Witten theory and…

Quantum hydrodynamics from large-n supersymmetric gauge theories (Physics, Mathematics, 2015). We study the connection between periodic finite-difference Intermediate Long Wave ($\Delta$ILW) hydrodynamical systems and integrable many-body models of Calogero and Ruijsenaars type…

#### References (showing 1–10 of 17)

Recursion Relations for Double Ramification Hierarchies (Mathematics, Physics, 2014). In this paper we study various properties of the double ramification hierarchy, an integrable hierarchy of hamiltonian PDEs introduced in Buryak (Commun Math Phys 336(3):1085–1107, 2015) using…

Integrable systems and holomorphic curves. In this paper we attempt a self-contained approach to infinite dimensional Hamiltonian systems appearing from holomorphic curve counting in Gromov-Witten theory. It consists of two parts. The first…

Normal forms of hierarchies of integrable PDEs, Frobenius manifolds and Gromov - Witten invariants (Mathematics, Physics, 2001). We present a project of classification of a certain class of bihamiltonian 1+1 PDEs depending on a small parameter. Our aim is to embed the theory of Gromov - Witten invariants of all genera into the…

String, dilaton and divisor equation in Symplectic Field Theory (Mathematics, Physics, 2010). Infinite dimensional Hamiltonian systems appear naturally in the rich algebraic structure of Symplectic Field Theory. Carefully defining a generalization of gravitational descendants and adding them…

Gromov–Witten invariants of target curves via Symplectic Field Theory. We compute the Gromov–Witten potential at all genera of target smooth Riemann surfaces using Symplectic Field Theory techniques and establish differential equations for the full descendant…

Integrals of psi-classes over double ramification cycles (Mathematics, 2012). DR-cycles are certain cycles on the moduli space of curves. Intuitively, they parametrize curves that allow a map to $\mathbb{P}^1$ with some specified ramification profile over two points. They are…

Double Ramification Cycles and Integrable Hierarchies. In this paper we present a new construction of a hamiltonian hierarchy associated to a cohomological field theory. We conjecture that in the semisimple case our hierarchy is related to the…

Polynomial families of tautological classes on $\mathcal M_{g,n}^{rt}$ (Mathematics, 2012). We study classes $P_{g,T}(\alpha;\beta)$ on $\mathcal M_{g,n}^{rt}$ defined by pushing forward the virtual fundamental classes of spaces of relative stable maps to an unparameterized $\mathbb{P}^1$ with prescribed ramification over 0 and ∞. A…

Integrals of ψ-classes over double ramification cycles (Mathematics, 2015). A double ramification cycle, or DR-cycle, is a codimension $g$ cycle in the moduli space $\overline{\mathcal M}_{g,n}$ of stable curves. Roughly speaking, given a list of integers $(a_1,\ldots,a_n)$,…

Dubrovin-Zhang hierarchy for the Hodge integrals. In this paper we prove that the generating series of the Hodge integrals over the moduli space of stable curves is a solution of a certain deformation of the KdV hierarchy. This hierarchy is…
https://chiefsfoundation.org/i-wanted-to-understand-the-best-way-to-uncover-distance-physics-as-well-as-the-answer-to-that-question-is-substantially-easier-than-you-feel-2/
# Let’s explore the idea. Distance is defined by two definitions.

The first is length, and the second is length/distance. If we define the length as the distance between two points, then we have the second definition, which is also called the light cone or angle of incidence.

So, how do we come up with a definition of weight in physics? For those who are not acquainted with the everyday term, let me explain. The speed of light is a concept that has a number of applications. In Newtonian physics, this speed is measured in units called meters per second. It describes the rate at which an object moves relative to some physical source such as the earth or a larger light source. It can also be described as the time interval over which a phenomenon occurs or changes. It is the same speed of light that we experience as we move through our everyday world, and it is also called the speed of light in space, meaning light traveling in the empty space around us.

What is the definition of weight in physics? Weight is defined as the force required to accelerate an object forward, and the difference between this force and the force of gravity is called its weight. To calculate the acceleration of an object, you simply multiply the mass times the acceleration.

How do we arrive at the definition of weight in physics? As a further refinement, it turns out that mass is defined as the sum of all the particles that make up the body. When an object is added to the system, it takes on a smaller role, which is inversely proportional to the mass used in the calculation. So, as the addition to the system goes away, the mass becomes slightly more substantial. The equation can be rewritten so that the acceleration is defined by the mass of the object divided by the square of the velocity of the object (this is the second definition of weight in physics). This is a very small piece of the story of how to find distance.

Now, the next question is: what does the direction of the angle of incidence mean? Well, this depends on the direction of the source of the light (which is the earth), but it is clear that the location of the source is where the light is reflected back from. To illustrate, let's look at a straight line passing directly in front of the sun and light entering from above. At this point, the angle of incidence would be positive, since the light was reflected off the surface of the sun.

Another way to express the principle of distance is to use a graphic representation. The terms distance and to define distance derive from the fact that the distance in a circle must be expressed in meters and the distance in an ellipse must be expressed in meters squared. The geometric point of view of the relationship between a point and a line has to be put into a system of equations, called the metric. We can visualize this as a system of equations that has a constant E, which is the gravitational constant. In physics, the constant E is referred to as the acceleration, the difference between the force of gravity and the acceleration.

How to find Distance Physics
http://link.springer.com/article/10.1007%2FJHEP03%282013%29108
Journal of High Energy Physics, 2013:108

# Competing orders in M-theory: superfluids, stripes and metamagnetism

• Aristomenis Donos
• Jerome P. Gauntlett
• Julian Sonner
• Benjamin Withers

Article DOI: 10.1007/JHEP03(2013)108

Donos, A., Gauntlett, J.P., Sonner, J. et al. J. High Energ. Phys. (2013) 2013: 108. doi:10.1007/JHEP03(2013)108

## Abstract

We analyse the infinite class of d = 3 CFTs dual to skew-whiffed AdS4 × SE7 solutions of D = 11 supergravity at finite temperature and charge density and in the presence of a magnetic field. We construct black hole solutions corresponding to the unbroken phase, and at zero temperature some of these become dyonic domain walls of an Einstein-Maxwell-pseudo-scalar theory interpolating between AdS4 in the UV and new families of dyonic $$Ad{S_2}\times {{\mathbb{R}}^2}$$ solutions in the IR. The black holes exhibit both diamagnetic and paramagnetic behaviour. We analyse superfluid and striped instabilities and show that for large enough values of the magnetic field the superfluid instability disappears while the striped instability remains. For larger values of the magnetic field there is also a first-order metamagnetic phase transition, and at zero temperature these black hole solutions exhibit hyperscaling violation in the IR with dynamical exponent z = 3/2 and θ = −2.

## Authors and Affiliations

1. Blackett Laboratory, Imperial College, London, U.K. (A. Donos, J. P. Gauntlett)
2. C.T.P., Massachusetts Institute of Technology, Cambridge, U.S.A. (J. Sonner)
3. Centre for Particle Theory and Department of Mathematical Sciences, University of Durham, Durham, U.K. (B. Withers)
https://www.physicsforums.com/threads/matlab-help-user-defined-function.650318/
# Matlab help! User defined function!

1. Nov 7, 2012

### qiyan31

I made a user-defined function for height:

function Ht=Height(t,V,Theta)
Ht=V*t*sin(Theta)-4.9*t.^2;
end

V is the initial velocity, and I keep getting the error that input "V" is undefined. Can someone help me please!

2. Nov 7, 2012

### coalquay404

Works fine for me. What is the precise command that you're using to test the function?
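Since the function itself runs fine, the usual cause of this error is calling Height with fewer than three inputs. A sketch of a correct test call (the numbers here are made up for illustration, not taken from the thread):

```matlab
% Define all three inputs first, then pass them to Height.
t = 0:0.1:2;        % time values, s
V = 20;             % initial velocity, m/s (illustrative value)
Theta = pi/4;       % launch angle, rad (illustrative value)

Ht = Height(t, V, Theta);   % all three arguments supplied
plot(t, Ht)

% Calling Height(t) with only one argument leaves V undefined
% inside the function, which produces exactly the reported error.
```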
http://math.stackexchange.com/questions/104956/why-does-int-00-5-frac1x2-0-1-doest-converge-and-int-00-5-f
Why doesn't $\int_{0}^{0.5}\frac{1}{x^2-0.1}$ converge while $\int_{0}^{0.5}\frac{1}{x^2-0.3}$ does?

In order to prove that $\int_{0}^{0.5}\frac{1}{x^2-1}$ converges I compared it to $\int_{0}^{0.5}\frac{1}{x}$, which converges, by checking that $\lim \frac{\frac{1}{x^2-1}}{\frac{1}{x}}=0$ when $x \to 0$. Then I went to Wolfram Alpha and tried to check $\int_{0}^{0.5}\frac{1}{x^2-0.1}$ (it is supposed to converge by the same test), but it said that it diverges, while $\int_{0}^{0.5}\frac{1}{x^2-0.3}$ doesn't. (With $0.2$ it just couldn't compute.) What's really going on there between $0.1$ and $0.3$? Is W.A. wrong?

Edit: Sorry! $\int_{0}^{0.5}\frac{1}{x}$ obviously doesn't converge. So, in addition to the rest of the question, how can I prove my original integral does converge?

Thank you very much.

- Short answer to the edited question: Because $x^2$ ranges from $0$ to $0.25$, and $0.1$ is in this range (making the denominator $0$) whereas $0.3$ is outside of this range. –  Eric Naslund Feb 2 '12 at 14:23

It should also be pointed out that there are missing $dx$ terms in most of your posts (at least when they originally are written). This is a bad habit to start! –  JavaMan Feb 2 '12 at 17:58

Between $x=0$ and $x=0.5$, the function $\frac{1}{x^2-1}$ is perfectly respectable! Note that the denominator is never $0$ in our interval. The largest absolute value is reached at $x=0.5$. So your function has no issues; it is continuous on a closed interval. For the problem you were initially considering, we are finished. But the question you were led to ask is more interesting, and shows a good effort to understand the situation.

Look first at $\frac{1}{x^2-0.3}$. The denominator is $0$ at $x=\pm\sqrt{0.3}$. The positive root is roughly $0.547722$, outside our interval, though not by much. Thus the function $\frac{1}{x^2-0.3}$ is well-behaved in the interval $[0,0.5]$.

Look now at $\frac{1}{x^2-0.1}$. The denominator is $0$ at $x=\pm\sqrt{0.1}$.
The positive root is about $0.3162278$, and this is inside our interval. So our function blows up inside our interval, and there may be a problem. Indeed there is.

You know that a function can blow up and yet its integral converge. A standard example is $\int_0^1 \frac{dx}{\sqrt{x}}$. We will show that $\int_0^{0.5}\frac{dx}{x^2-0.1}$ diverges.

As mentioned above, there is potential trouble at $\sqrt{0.1}$. To make typing easier, let $a=\sqrt{0.1}$. Our function is not defined at $x=a$, and blows up near $x=a$. Recall that $a$ is in our interval. When we are dealing with a singularity inside our interval, it is useful to break up the interval into two integrals, in this case from $0$ to $a$ and from $a$ to $0.5$. We will show that $\int_0^a \frac{dx}{x^2-0.1}$ diverges. (The integral from $a$ to $0.5$ also does, but showing that one of the integrals is bad is enough.)

So we are looking at the integral $\int_0^a\frac{dx}{x^2-a^2}$. For no good reason, except for a preference for the positive, we look instead at $$\int_0^a \frac{dx}{a^2-x^2}.$$ Make the change of variable $w=a-x$. Note that $a^2-x^2=(a-x)(a+x)=w(2a-w)$. Quickly we arrive at $$\int_{w=0}^a \frac{dw}{w(2a-w)}.$$ This integral diverges, by comparison with $\int_0^a\frac{dw}{w}$, which, as pointed out by anonymous, diverges.

- Thanks a lot! very helpful and clear. –  Jozef Feb 2 '12 at 15:28

You have a mistake. The integral $\int_{0}^{0.5} dx/x$ does not converge! In fact, we can easily calculate it, as the antiderivative of $1/x$ is $\ln x$, and $\lim_{x\to 0^+} \ln(x) = -\infty$.

- Right! so how can I prove that my original integral does converge? –  Jozef Feb 2 '12 at 14:12

Well, your integral is actually a definite integral. The function $\frac{1}{x^2-1}$ is continuous on the interval $[0,1/2]$, so there is no question of convergence at all. –  the L Feb 2 '12 at 14:13
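A numeric sanity check of the answer above (a sketch using composite Simpson's rule; the closed forms follow from $\int \frac{dx}{x^2-c} = -\frac{1}{\sqrt{c}}\operatorname{artanh}\!\big(\frac{x}{\sqrt{c}}\big)$ for $x^2 < c$):

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

def exact(c, t=0.5):
    # antiderivative of 1/(x^2 - c) for x^2 < c is -(1/sqrt(c)) artanh(x/sqrt(c))
    return -math.atanh(t / math.sqrt(c)) / math.sqrt(c)

# c = 1 and c = 0.3: sqrt(c) > 0.5, so the integrand is continuous on [0, 0.5]
for c in (1.0, 0.3):
    approx = simpson(lambda x: 1.0 / (x*x - c), 0.0, 0.5)
    assert abs(approx - exact(c)) < 1e-6

# c = 0.1: sqrt(0.1) ~ 0.316 lies INSIDE [0, 0.5], so the integrand blows up there
assert 0 < math.sqrt(0.1) < 0.5 < math.sqrt(0.3)
```

Refining the grid for $c=0.1$ instead produces estimates that never settle down, consistent with the divergence shown above.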
http://mathhelpforum.com/calculus/194677-calculate-volume-tetrahedron-print.html
# Calculate the volume of the tetrahedron

• December 26th 2011, 05:09 AM
kotsos
Calculate the volume of the tetrahedron

for this problem: Calculate the volume of the tetrahedron bounded by the planes y=0, z=0, x=0 and y-x+z=1.

is this integral the correct one?

$\int_{x=0}^{1}\int_{y=0}^{-y+1}\int_{z=0}^{z=x-y+1}dG=\int_{0}^{1}\int_{y=0}^{-y+1}(x-y+1) dydx$

• December 26th 2011, 05:15 AM
Prove It
Re: Calculate the volume of the tetrahedron

They both look fine to me...

• December 27th 2011, 12:51 AM
matheagle
Re: Calculate the volume of the tetrahedron

Are you sure of that plane? Because when you let z equal zero it makes an odd slice in the xy plane. It's y=1+x, which isn't going to give you a closed region with the x and y axes.
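For comparison, and assuming the intended bounding plane was $x+y+z=1$ (matheagle's point is that $y-x+z=1$ does not close off a region in the first octant), the standard computation would be:

```latex
V=\int_{0}^{1}\int_{0}^{1-x}\int_{0}^{1-x-y} dz\,dy\,dx
 =\int_{0}^{1}\int_{0}^{1-x}(1-x-y)\,dy\,dx
 =\int_{0}^{1}\frac{(1-x)^{2}}{2}\,dx=\frac{1}{6}.
```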
http://swmath.org/?term=fractional%20derivatives
• # FODE • Referenced in 217 articles [sw08377] • paper, firstly the time fractional, the sense of Riemann-Liouville derivative, Fokker-Planck equation ... time fractional ordinary differential equation (FODE) in the sense of Caputo derivative by discretizing ... properties of Riemann-Liouville derivative and Caputo derivative. Then combining the predictor-corrector approach with ... Planck equation, some numerical results for time fractional Fokker-Planck equation with several different fractional... • # SubIval • Referenced in 7 articles [sw22654] • numerical method for computations of the fractional derivative in IVPs (initial value problems ... backward differentiation formula) for a first order derivative. The formula resulting from SubIval is: t0Dαtx ... Liouville and Caputo definitions of the fractional derivative... • # differint • Referenced in 1 article [sw31175] • methods for the numerical computation of fractional derivatives and integrals have been defined. However, these ... numerical algorithms for the computation of fractional derivatives and integrals. This package is coded ... Letnikov, and Riemann-Liouville algorithms from the fractional calculus are included in this package... • Referenced in 1 article [sw17727] • with the Riemann-Liouville and Caputo fractional derivatives. As an additional information about the anomalous ... identification of two required parameters of the fractional diffusion equations by approximately known initial data ... fractional diffusivity, the order of fractional differentiation and the Laplace variable. Estimations of the upper ... error bound for this parameter are derived. A technique of optimal Laplace variable determination based... • # SymbMath • Referenced in 1 article [sw00936] • graphic computation, e.g. any order of derivative, fractional calculus, solve equation, plot data and user ... piecewise, recursive, multi-value functions and procedures, derivatives, integrals and rules... 
• # FOMNE • Referenced in 2 articles [sw22544] • analyzed. The fractional order memristor no equilibrium system is then derived from the integer order ... mode control algorithm is derived to globally synchronize the identical fractional order memristor systems... • # Algorithm 885 • Referenced in 8 articles [sw09118] • distribution. The first is a new algorithm derived from Algorithm 304’s calculation ... normal distribution via a series or continued fraction approximation, and it is good... • # COULCC • Referenced in 12 articles [sw11843] • COULCC: A continued-fraction algorithm for Coulomb functions of complex order with complex arguments ... varying Coulomb wave functions, and their radial derivatives, for complex η (Sommerfeld parameter), complex energies... • # FCC • Referenced in 1 article [sw24629] • methods. To accomplish this, a fractional differentiation matrix is derived at the Chebyshev Gauss-Lobatto ... order FDEs and a system of linear fractional-order delay-differential equations...
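As a concrete illustration of the Grünwald–Letnikov algorithm named in the differint entry above, here is a minimal sketch (my own simplified implementation, not the package's actual API):

```python
import math

def gl_fracderiv(f, x, alpha, h=1e-3):
    """Grunwald-Letnikov fractional derivative of f at x (lower terminal 0).

    D^alpha f(x) ~ h**(-alpha) * sum_k (-1)^k C(alpha, k) f(x - k h)
    """
    n = int(x / h)
    total, coeff = 0.0, 1.0              # coeff tracks (-1)^k C(alpha, k)
    for k in range(n + 1):
        total += coeff * f(x - k * h)
        coeff *= (k - alpha) / (k + 1)   # recurrence for the next coefficient
    return total / h**alpha

# Sanity checks: alpha = 1 reduces to the ordinary derivative, and the
# half-derivative of f(x) = x is known to be 2*sqrt(x/pi).
d1 = gl_fracderiv(lambda t: t**2, 1.0, 1.0)   # ~ d/dx x^2 at x=1, i.e. 2
dh = gl_fracderiv(lambda t: t, 1.0, 0.5)      # ~ 2/sqrt(pi)
assert abs(d1 - 2.0) < 1e-2
assert abs(dh - 2.0 / math.sqrt(math.pi)) < 1e-2
```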
https://www.arxiv-vanity.com/papers/hep-ph/0505200/
# Neutrino Masses and Mixings in a Minimal SO(10) Model

K.S. Babu (Department of Physics, Oklahoma Center for High Energy Physics, Oklahoma State University, Stillwater, OK 74078, USA)
C. Macesanu (Department of Physics, Oklahoma Center for High Energy Physics, Oklahoma State University, Stillwater, OK 74078, USA; Department of Physics, Syracuse University, Syracuse, NY 13244-1130, USA)

###### Abstract

We consider a minimal formulation of $SO(10)$ Grand Unified Theory wherein all the fermion masses arise from Yukawa couplings involving one $\overline{126}$ and one 10 of Higgs multiplets. It has recently been recognized that such theories can explain, via the type–II seesaw mechanism, the large atmospheric neutrino mixing as a consequence of $b$–$\tau$ unification at the GUT scale. In this picture, however, the CKM phase lies preferentially in the second quadrant, in contradiction with experimental measurements. We revisit this minimal model and show that the conventional type–I seesaw mechanism generates phenomenologically viable neutrino masses and mixings, while being consistent with CKM CP violation. We also present improved fits in the type–II seesaw scenario and suggest fully consistent fits in a mixed scenario.

OSU-HEP-05-7 SU-4252-806

## I Introduction

Grand Unified Theories (GUT) provide a natural framework to understand the properties of fundamental particles such as their charges and masses. GUT models based on $SO(10)$ gauge symmetry have a number of particularly appealing features. All the fermions in a family fit in a single 16–dimensional spinor multiplet of $SO(10)$. In order to complete this multiplet, a right–handed neutrino field is required, which would pave the way for the seesaw mechanism which explains the smallness of left–handed neutrino masses.
$SO(10)$ contains $SU(5)$ and the left–right symmetric Pati–Salam symmetry group as subgroups, both with very interesting properties from a phenomenological perspective. With low energy supersymmetry, $SU(5)$ and $SO(10)$ models also lead remarkably to the unification of the three Standard Model gauge couplings at a scale $\sim 2\times 10^{16}$ GeV.

In grand unified theories, the gauge sector and the fermionic matter sector are generally quite simple. However, the same is not true of the Higgs sector. Since the larger symmetry needs to be broken down to the Standard Model, generally one needs to introduce a large number of Higgs multiplets, with different symmetry properties under gauge transformations. If all of these Higgs fields couple to the fermion sector, one would lose much of the predictive power of the theory in the masses and mixings of quarks and leptons, and so also one of the attractive aspects of GUTs. Of interest then are the so–called minimal unification theories, in which only a small number of Higgs multiplets couple to the fermionic sector. One such realization is the minimal $SO(10)$ GUT babu in which only one 10 and one $\overline{126}$ of Higgs fields couple to the fermions. These two Higgs fields are responsible for giving masses to all the fermions of the theory, including large Majorana masses to the right–handed neutrinos. This model is minimal in the following sense. The fermions belong to the 16 of $SO(10)$, and the fermion bilinears are given by $16\times 16 = 10_s + 120_a + \overline{126}_s$. Thus 10, 120 and $\overline{126}$ Higgs fields can have renormalizable Yukawa couplings. If only one of these Higgs fields is employed, there would be no family mixings, so two is the minimal set. The $\overline{126}$ has certain advantages. It contains a Standard Model singlet field and so can break $SO(10)$ down to $SU(5)$, changing the rank of the group. Its Yukawa couplings to the fermions also provide large Majorana masses to the right–handed neutrinos leading to the seesaw mechanism. It was noted in Ref.
babu that due to the cross couplings between the $\overline{126}$ and the 10 Higgs fields, the Standard Model doublet fields contained in the $\overline{126}$ will acquire vacuum expectation values (VEVs) along with the VEVs of the Higgs doublets from the 10. The $\overline{126}$ Yukawa coupling matrix will then contribute both to the Dirac masses of quarks and leptons, as well as to the Majorana masses of the right–handed neutrinos.

It is not difficult to realize that this minimal model is highly constrained in explaining the fermion masses and mixings. There are two complex symmetric Yukawa coupling matrices, one of which can be taken to be real and diagonal without loss of generality. These matrices have 9 real parameters and six phases. The mass matrices also depend on two ratios of VEVs, leading to 11 magnitudes and six phases in total in the quark and lepton sector, to be compared with the 13 observables (9 masses, 3 CKM mixings and one CP phase). Since the phases are constrained to be between $-\pi$ and $\pi$, this system does provide restrictions. More importantly, once a fit is found for the charged fermions, the neutrino sector is fixed in this model. It is not obvious at all that including the neutrino sector the model can be phenomenologically viable.

Early analyses babu ; lavoura found that just fitting the lepton-quark sector is highly constraining. Also, this fitting has been found to be highly nontrivial (in terms of complexity); therefore these analyses were done in the limit when the phases involved are either zero or $\pi$. In such a framework, one finds that the parameters of the models are more or less determined by the fit to the lepton-quark sector (the quark masses themselves are not known with great precision, so there is still some room for small variations of the parameters). As a consequence, one could more or less predict the neutrino masses and mixings; however, since neutrino data was rather scarce at the time, one could not impose meaningful constraints on the minimal model from these predictions.
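The parameter counting quoted above can be tallied explicitly (a bookkeeping sketch; which parameters are removable is taken from the text, not re-derived here):

```python
# One complex symmetric 3x3 Yukawa matrix has 6 independent complex entries.
entries = 6

# Basis freedom lets one matrix be made real and diagonal: 3 magnitudes, 0 phases.
mags_diag, phases_diag = 3, 0

# The second symmetric matrix keeps all 6 magnitudes and 6 phases.
mags_full, phases_full = entries, entries

# Two ratios of Higgs VEVs enter the mass matrices as well.
vev_ratios = 2

magnitudes = mags_diag + mags_full + vev_ratios   # 3 + 6 + 2 = 11
phases = phases_diag + phases_full                # 6

# Observables in the charged-fermion + CKM sector:
observables = 9 + 3 + 1   # 9 masses, 3 mixing angles, 1 CP phase

assert (magnitudes, phases) == (11, 6)
assert observables == 13
```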
In view of the new information on the neutrino sector gathered in the past few years solar ; atm ; chooz , one should ask if this model is still consistent with experimental data. Interest in the study of this model has also been reawakened by the observation that $b$–$\tau$ unification at the GUT scale implies large (even close to maximal) mixing in the 2-3 sector of the neutrino mass matrix bajc , provided that the dominant contribution to the neutrino mass is from type–II seesaw. There have been a number of recent papers studying the minimal $SO(10)$ model using varying approaches: some analytical, concentrating on the 23 neutrino sector bajc ; Bajc2 , some numerical, either in the approximation that the phases involved in reconstructing the lepton sector are zero fukuyama01 ; fukuyama02 ; fukuyama ; moha1 , or taking these phases into account moha2 ; Dutta1 . The conclusions of these analyses seem to be that the minimal $SO(10)$ cannot account by itself for the observed neutrino sector (although it comes pretty close). However, one might restore agreement with the neutrino data if one slightly modifies the minimal $SO(10)$; for example, one can set the quark sector CKM phase to lie in the second quadrant, and rely on new contributions from the SUSY breaking sector in order to explain data on quark CP violation moha2 ; or one might add higher dimensional operators to the theory moha2 ; Dutta1 , or even another Higgs multiplet (a 120) which will serve as a small perturbation to fermion masses Dutta2 ; Bertolini ; Bertolini:2005 .

In this paper we propose to revisit the analysis for the minimal model, with no extra fields added. The argument for this endeavor is that our approach is different in two significant ways from previous analyses. First, we use a different method than moha2 ; Dutta1 in fitting for the lepton–quark sector.
Since this fit is technically rather difficult, and moreover, since the results of this fit define the parameter space in which one can search for an acceptable prediction for the neutrino sector, we think that it is important to have an alternative approach. Second, rather than relying on precomputed values of quark sector parameters at GUT scale, we use as inputs $M_Z$-scale values, and run them up to the unification scale. This allows for more flexibility and we think more reliable predictions for the parameter values at GUT scale. With these modifications in our approach, we find that we agree with some results obtained in moha2 ; Dutta1 (in particular, the fact that type–II seesaw does not work well when the CKM phase is in the first quadrant), but not with others. Most interesting, we find that it is possible to fit the neutrino sector in the minimal model, in the case when the type–I seesaw contribution to the neutrino mass dominates. We also present a mixed scenario which gives excellent agreement with the neutrino data.

The paper is organized as follows. In the next section we give a quick overview of the features of the minimal model relevant for our purpose. In section III we address the problem of fitting the lepton–quark sector in this framework. We also define the experimentally allowed range in which the input parameters (quark and lepton masses at $M_Z$ scale) are allowed to vary. We start section IV with a quick overview of the phenomenological constraints on the neutrino sector. There we provide a very good fit to all the fermion masses and mixings using type–I seesaw. We follow by analyzing the predictions of the minimal model in the case when type–II seesaw is the dominant contribution to neutrino masses. We then analyze the predictions in a type–I seesaw dominance scenario, and in a scenario when both contributions (type–I and type–II) have roughly the same magnitude. We end with our conclusions in Sec. V.
## II The minimal SO(10) model

The model we consider in this paper is an $SO(10)$ supersymmetric model where the masses of the fermions are given by coupling with only two Higgs multiplets: a 10 and a $\overline{126}$ babu . Both the 10 and the $\overline{126}$ contain Higgs multiplets which are (2,2) under the $SU(2)_L \times SU(2)_R$ subgroup. Most of these (2,2) Higgses acquire mass at the GUT scale. However, one pair of Higgs doublets $H_u$ and $H_d$ (which generally are linear combinations of the original ones) will stay light. (Details about the Higgs multiplet decomposition and symmetry breaking can be found, for example, in Bajc_sb ; Fukuyama_sb ; Aulakh_sb ; nasri1 ). Upon breaking of the $SO(10)$ symmetry down to the Standard Model, the vacuum expectation value of the $H_u$ doublet will give mass to the up-type quarks and will generate a Dirac mass term for the neutrinos, while the vacuum expectation value of the $H_d$ doublet will give mass to the down-type quarks and the charged leptons. The mass matrices for quarks and leptons will then have the following form:

$$\begin{aligned} M_u &= \kappa_u Y_{10} + \kappa'_u Y_{126} \\ M_d &= \kappa_d Y_{10} + \kappa'_d Y_{126} \\ M^D_\nu &= \kappa_u Y_{10} - 3\kappa'_u Y_{126} \\ M_l &= \kappa_d Y_{10} - 3\kappa'_d Y_{126} \end{aligned} \qquad (1)$$

where $Y_{10}$, $Y_{126}$ are the Yukawa coupling matrices of the fermions to the 10 and $\overline{126}$ multiplets respectively. Note that in the above equations the $\kappa$ parameters as well as the Yukawa matrices are in general complex, thus insuring that the fermion mass matrices will contain CP violating phases.

The $\overline{126}$ multiplet also contains $(10,1,3)$ and $(\overline{10},3,1)$ Pati-Salam multiplets. The Higgs fields which are color singlets and $SU(2)_R$/$SU(2)_L$ triplets (denoted by $\Delta_R$ and $\Delta_L$) may provide Majorana mass terms for the right–handed and the left–handed neutrinos. One then has:

$$M_{\nu_R} = \langle\Delta_R\rangle\, Y_{126}\,, \qquad M_{\nu_L} = \langle\Delta_L\rangle\, Y_{126}\,. \qquad (2)$$

If the vacuum expectation value of the $\Delta_R$ triplet is around $10^{14}$ GeV then the Majorana mass term for the right–handed neutrinos will give rise, through the seesaw mechanism, to left–handed neutrino masses of order eV.
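The quoted orders of magnitude can be checked with illustrative numbers (a back-of-the-envelope sketch; the electroweak-scale Dirac mass and the $10^{14}$ GeV Majorana scale are assumptions for the estimate, not fitted values):

```python
# Rough type-I seesaw scale estimate: m_nu ~ m_D^2 / M_R
GeV = 1.0e9                 # in eV
m_D = 174 * GeV             # Dirac neutrino mass, taken at the electroweak VEV scale
M_R = 1.0e14 * GeV          # right-handed Majorana scale, taken as <Delta_R>

m_nu = m_D**2 / M_R         # light neutrino mass, in eV
assert 0.25 < m_nu < 0.35   # comes out ~ 0.3 eV: "of order eV"
```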
On the other hand, the VEV of $\Delta_L$ contributes directly to the left–handed neutrino mass matrix (this contribution is called type–II seesaw), so this requires that $\langle\Delta_L\rangle$ is either zero or at most of order eV. This requirement is satisfied naturally in such models, since $\Delta_L$ generally acquires a VEV of order $v_{wk}^2/M_{GUT}$ seesaw .

## III Lepton and quark masses and mixings

Our first task is to account for the observed lepton and quark masses, and for the measured values of the CKM matrix elements. By expressing the Yukawa matrices $Y_{10}$ and $Y_{126}$ in Eqs. (1) in favor of $M_u$ and $M_d$, we get a linear relation between the lepton and quark mass matrices; at GUT scale:

$$M_l = a\, M_u + b\, M_d\,, \qquad (3)$$

where $a$ and $b$ are combinations of the $\kappa$ parameters in Eqs. (1). For simplicity let's work in a basis where $M_u$ is diagonal (this can be done without loss of generality). If we allow the entries in the diagonal quark mass matrices to be complex: $\hat M_u = {\rm diag}(m_u e^{i a_u}, m_c e^{i a_c}, m_t e^{i a_t})$, $\hat M_d = {\rm diag}(m_d e^{i b_d}, m_s e^{i b_s}, m_b e^{i b_b})$, then the CKM matrix can be written in its standard form as a function of three real angles and a phase:

$$V_{CKM} = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{pmatrix}. \qquad (4)$$

Since their phases can be absorbed in the definitions of the quark mass phases, we will take the coefficients $a$ and $b$ to be real, too. One of the quark mass phases can be set to zero without loss of generality. It should be noted that a common phase of the quark mass matrices will appear in the Dirac and Majorana mass matrices of the neutrinos, and will be relevant to the study of neutrino oscillations.

The relation (3) will generally impose some constraints on the masses of the quarks and leptons. For example, if we take all the phases to be zero (or $\pi$), then on the right-hand side of the equation there are just two unknowns, the coefficients $a$ and $b$. On the other hand, the eigenvalues of the lepton mass matrix are known, which will give us 3 equations.
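The linear relation (3) can be verified directly from Eqs. (1): solving the first two equations for $Y_{10}$ and $Y_{126}$ and substituting into $M_l$ gives $a = 4\kappa_d\kappa'_d/D$ and $b = -(\kappa_d\kappa'_u + 3\kappa_u\kappa'_d)/D$ with $D = \kappa_u\kappa'_d - \kappa_d\kappa'_u$. These explicit coefficients are my own algebra rather than quoted from the text, but they are easy to check numerically with random inputs:

```python
import random

random.seed(0)

def rand_sym():
    """Random complex symmetric 3x3 matrix (nested lists)."""
    m = [[0j] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(i, 3):
            m[i][j] = m[j][i] = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    return m

def lin(c1, m1, c2, m2):
    """Linear combination c1*m1 + c2*m2 of 3x3 matrices."""
    return [[c1 * m1[i][j] + c2 * m2[i][j] for j in range(3)] for i in range(3)]

Y10, Y126 = rand_sym(), rand_sym()
ku, kpu = 1.2 + 0.3j, 0.7 - 0.5j     # illustrative kappa_u, kappa'_u
kd, kpd = 0.9 + 0.1j, 1.5 + 0.8j     # illustrative kappa_d, kappa'_d

Mu = lin(ku, Y10, kpu, Y126)         # Eq. (1)
Md = lin(kd, Y10, kpd, Y126)
Ml = lin(kd, Y10, -3 * kpd, Y126)

D = ku * kpd - kd * kpu              # determinant of the 2x2 coefficient system
a = 4 * kd * kpd / D
b = -(kd * kpu + 3 * ku * kpd) / D

Ml_pred = lin(a, Mu, b, Md)          # Eq. (3)
err = max(abs(Ml[i][j] - Ml_pred[i][j]) for i in range(3) for j in range(3))
assert err < 1e-12
```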
It is not obvious, then, that this system can be solved; however, early analysis lavoura ; babu shows that solutions exist, in the range of experimentally allowed values, provided that the quark masses satisfy some constraints. Newer studies fukuyama ; moha1 ; moha2 allow for (some) phases to be non-zero, and thus relax somewhat the constraints on the quark masses. However, it is interesting to note that these solutions are not very different from the purely real case. That is, most of the phases involved have to be close to zero (or $\pi$), and the values of the parameters do not change by much. We shall explain this in the following.

The algebraic problem of solving for the lepton masses in the case when the elements of the matrices are complex is quite difficult. This would involve solving a system of 3 polynomial equations of degree six in the unknown quantities $a$, $b$ and the phases. Most of the analysis so far has been done by numerical simulations (some analytical results are obtained for the case of the 2nd and 3rd families only bajc ; Bajc2 ). In this section we attempt to solve the full problem (with all the phases nonzero) in a semi-analytical manner, that is, by identifying the dominant terms in the equations and obtaining an approximate solution in the first step, which can then be made more accurate by successive iterations.

Due to the hierarchy between the eigenvalues of the lepton mass matrix one can suspect that the mass matrix itself has a hierarchical form. This assumption is supported by the observation that the off-diagonal elements of $M_l$ are indeed hierarchical. (Below, $L$ is a short–hand notation for $M_l$, and $L_{ij}$ are the elements of $M_l$.) Then, the three equations for the invariants of the Hermitian matrix $LL^\dagger$ (the trace, the sum of its $2\times2$ principal minors and the determinant) become:

$$\begin{aligned} |L_{33}|^2 + 2|L_{23}|^2 &\simeq m_\tau^2\,, \\ |L_{22}L_{33} - L_{23}^2|^2 &\simeq m_\mu^2\, m_\tau^2\,, \\ {\rm Det}\big[LL^\dagger\big] &= m_e^2\, m_\mu^2\, m_\tau^2\,. \end{aligned} \qquad (5)$$

We find it convenient to work in terms of the dimensionless parameters $\tilde a$, $\tilde b$ and the quark mass ratios $r_c$, $r_s$, $r_d$.
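The leading-order structure of Eqs. (5) is easy to check numerically on a hierarchical matrix (the entries below are illustrative, chosen only to mimic the hierarchy $L_{11},L_{12},L_{13}\ll L_{22},L_{23}\ll L_{33}$, not fitted values):

```python
def gram(L):
    """H = L L^T for a real matrix L (stands in for L L^dagger)."""
    n = len(L)
    return [[sum(L[i][k] * L[j][k] for k in range(n)) for j in range(n)] for i in range(n)]

# A hierarchical, symmetric "lepton mass matrix" (illustrative numbers)
L = [[5e-4, 1e-3, 5e-3],
     [1e-3, 0.06, 0.05],
     [5e-3, 0.05, 1.00]]

H = gram(L)

# Exact invariants of H: trace and sum of 2x2 principal minors
I1 = H[0][0] + H[1][1] + H[2][2]
I2 = sum(H[i][i] * H[j][j] - H[i][j] * H[j][i]
         for i in range(3) for j in range(3) if i < j)

# Leading-order approximations from Eqs. (5)
mtau2_approx = L[2][2]**2 + 2 * L[1][2]**2
mmu2mtau2_approx = (L[1][1] * L[2][2] - L[1][2]**2)**2

assert abs(I1 - mtau2_approx) / I1 < 0.01
assert abs(I2 - mmu2mtau2_approx) / I2 < 0.01
```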
Explicitly, from the equations above in terms of these parameters, we obtain:

$$\begin{aligned} \tilde L_{33} = e^{i\alpha_1} &= \tilde a\, V_{33}^2 + \tilde b\, e^{-iz_3}\,, \\ \tilde\Delta_{23} = \tilde m_\mu e^{i\alpha_2} &= \big(\tilde b\, r_s + \tilde a\, r_c\, e^{iz_2}\big)\tilde L_{33} + \tilde a\, \tilde b\, V_{32}^2\, e^{i(b_b-b_s)}\,, \\ \tilde\Delta = \tilde m_e \tilde m_\mu e^{i\alpha_3} &= \tilde b\, r_d\, \tilde\Delta_{23} - \tilde a^2\, \tilde b\, e^{i(a_t-b_d)}\Big( r_s\, (V_{31}V_{33})^2 \\ &\qquad + 2\, r_c\, V_{31}V_{32}V_{21}V_{22}\, e^{i(z_2-z_3)} + r_c\, V_{31}^2 V_{22} V_{33}\, e^{iz_2} \Big) \end{aligned} \qquad (6)$$

Here we have kept only the leading terms. Moreover, note that only phase differences like $b_b - b_s$ can be determined from Eq. (3); therefore, by multiplying with overall phases, we have written Eqs. (6) in terms of such differences (with $z_2$, $z_3$ denoting two of these combinations).

The key to solving this system is to recognize that there is some tuning involved. Analyzing the first two equations leads to the conclusion that the terms on their right-hand sides are individually larger than the left-hand sides. Then the phase $z_3$ in the first equation should be close to $\pi$, so that the two terms almost cancel each other. Similar cancellations happen in the second and the third equations, which require the corresponding phase combinations to lie close to $\pi$ (or zero) as well [1]. Also, in the third equation, neglecting the small electron mass on the left hand side results in:

$$\tilde a^2 \simeq \frac{r_d\; \tilde m_\mu}{r_s\, |V_{31}V_{33}|^2}\,. \qquad (7)$$

For values of the parameters in the experimentally feasible region, this is consistent with the estimate above. Analytically solving Eqs. (6) with the approximations discussed will provide solutions for the phases and parameters accurate to the 10% level. Using these first order results, one can compute the neglected terms and put them back in Eqs. (5) and (6), which can be solved again, thus defining an iterative procedure which can be implemented numerically and brings us arbitrarily close to the exact solution. We find that 5 to 10 iterations are usually sufficient to recover the muon and electron masses with better than 0.1% accuracy ($m_\tau$ can be brought to a fixed value by multiplying with an overall coefficient).

[1] Note that taking these phase differences to $\pi$ or zero results in exactly the mass signs which the analysis in fukuyama found to work for the real masses case.
We end this section with some comments on the range of input parameters (masses and phases) which allow for a solution to Eq. (3). As we discussed above, the phases are either close to $\pi$ or to zero. This is required by the necessity to almost cancel two large terms on the right-hand side of Eqs. (6). One can see that the larger the absolute magnitude of these terms (for example $\tilde a\, V_{33}^2$ and $\tilde b$ in the first equation), the more stringent are the constraints on the phases. The opposite is also true; the smaller the $\tilde a$ and $\tilde b$ parameters, the more the phases can deviate from $\pi$, and generally the easier it is to solve the system. This means that lower values of $\tilde a$, $\tilde b$ are preferred; from Eq. (7), this implies a preference for low values of the ratio $m_d/m_s$ [2] (there is not much scope to vary $\tilde m_\mu$). It turns out that lowering the absolute magnitude of the larger term on the right-hand side of the equation for $\tilde\Delta$ in (6) can also help. Previous analyses found indeed that fitting the lepton masses requires a low value for one of the quark mass inputs Dutta1 .

[2] This also means higher values for $|V_{31}|$ are preferred. Since $V_{31} = s_{12}s_{23} - c_{12}c_{23}s_{13}e^{i\delta}$, this implies a preference for values of the CKM phase close to $\pi$ (as noted in moha2 ).

### III.1 Low scale values and RGE running

As was discussed in the above section, the relation (3) implies some constraints on the quark masses (the lepton masses being taken as input). That is, not all values of quark masses consistent with the experimental results are also consistent with the model we use. Our purpose first is to identify these points in the parameter space defined by the experimentally allowed values for quark masses. Let us then define what this parameter space is. Although the relations in the previous section hold at GUT scale, one must necessarily start with the low energy values for our parameters. We choose to use as input the values of the quark masses and the CKM angles at the $M_Z$ scale. Estimates of these quantities can be found for example in koide .
However, we consider some of their numbers rather too precise (for example, their errors in estimating the masses of the $s$ and $c$ quarks are only 25%, respectively 15%, while the corresponding errors in PDG pdg are much larger). Therefore, in the interest of making the parameter space as large as possible, we use the following values:

• for the second family: 70 MeV $\le m_s \le$ 95 MeV [3]; 650 MeV $\le m_c \le$ 850 MeV. With a running factor from $M_Z$ to 2 GeV of around 1.7, these limits would translate to values at the 2 GeV scale of: 120 MeV $\le m_s \le$ 160 MeV; 1.1 GeV $\le m_c \le$ 1.44 GeV. Lattice estimations Gupta would indicate a value in the lower part of the range for $m_s$, and in the upper part for $m_c$.

• for the light quarks: here generally the ratios of quark masses are more trustworthy than limits on the masses themselves; we therefore use the ratio $m_s/m_d$ as input (as noted in the previous section, high values of this ratio are preferred), together with $m_u/m_d$. We note here that $m_u$ is a parameter which does not affect the results much.

• for the heavy quarks: 2.9 GeV $\le m_b(M_Z) \le$ 3.11 GeV (or 4.23 GeV $\le m_b(m_b) \le$ 4.54 GeV), and for the pole top mass 171 GeV $\le m_t \le$ 181 GeV (the corresponding $\overline{\rm MS}$ mass is evaluated using the three-loop relation, and comes out about 10 GeV smaller).

• the CKM angles at the $M_Z$ scale:
$$s_{12}=0.222\pm0.003\,, \qquad s_{23}=0.04\pm0.004\,, \qquad s_{13}=0.0035\pm0.0015\,.$$

[3] Note that the lower limit for $m_s$ is rather low compared with koide ; however, the corresponding value at the 2 GeV scale is well within the limits cited in pdg . Lattice results also seem to favor smaller values of $m_s$(2 GeV) hashimoto .

For the gauge coupling constants we take their measured values at the $M_Z$ scale. With these values at low scale one can get unification of the coupling constants at the scale $M_{GUT} \simeq 2\times 10^{16}$ GeV. The exact value of $M_{GUT}$, as well as the values of the fermion Yukawas at the unification scale, will depend also on the supersymmetry breaking scale ($M_{SUSY}$) and $\tan\beta$, the ratio between the up-type and down-type SUSY Higgs VEVs. We generally consider values of $M_{SUSY}$ between 200 GeV and 1 TeV, and $\tan\beta$ between 5 and 60.
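The preference for a CKM phase near $\pi$ noted in the previous section can be illustrated numerically with the central values of the CKM angles quoted above: the magnitude of $V_{31}$ in Eq. (4) grows as $\delta$ moves toward $\pi$. A sketch:

```python
import cmath, math

s12, s23, s13 = 0.222, 0.04, 0.0035     # central CKM angle values from the text
c12 = math.sqrt(1 - s12**2)
c23 = math.sqrt(1 - s23**2)

def V31(delta):
    # (3,1) element of the standard CKM parametrization, Eq. (4)
    return s12*s23 - c12*c23*s13*cmath.exp(1j*delta)

# |V31| is smallest at delta = 0 and largest at delta = pi
assert abs(V31(math.pi)) > abs(V31(math.pi/2)) > abs(V31(0.0))
```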
Having chosen specific values of the parameters described above, we then run the fermion Yukawa couplings and the quark-sector mixing angles, first from M_Z to M_SUSY using two-loop Standard Model renormalization group equations; then we run from the SUSY scale to the GUT scale using two-loop SUSY RGEs barger (footnote: more precisely, we use the two-loop RGEs for the running of the gauge coupling constants and the third-family fermions. To evaluate the light fermion masses, we use the one-loop equations for the mass ratios. This approximation is justified, since the leading two-loop effect on the fermion masses comes from the change in the values of the gauge coupling constants at two loops; however, the contributions due to the gauge terms are family-independent and will not affect these ratios). After computing the neutrino mass matrix at the GUT scale, we run its elements back to the M_Z scale babuleung ; Chankowski and evaluate the resulting masses and mixing angles.

## IV Neutrino masses and mixings

In the present framework, there are two contributions to neutrino masses. First one has the canonical seesaw term:

(M_ν)_seesaw I = M_Dν M_R^{-1} M_Dν   (8)

with M_Dν and M_R given by (II). However, the existence in this model of the (,3,1) Higgs multiplet implies the possibility of a direct left-handed neutrino mass term when the Higgs triplet from this multiplet acquires a VEV (as can generally be expected to happen). The neutrino mass contribution of such a term would be

(M_ν)_seesaw II = v_L Y_126 = λ M_R   (9)

where λ is a factor depending on the specific form of the Higgs potential seesaw . The scale of the canonical seesaw contribution Eq. (8) (which we call type–I seesaw in the following) to the left-handed neutrino mass matrix is given by . The contribution of the type–II seesaw term (Eq. (9)) is of order . One cannot know a priori how the factor λ compares with unity; therefore one cannot say which type of seesaw dominates (or whether they are of the same order of magnitude).
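The seesaw suppression in Eq. (8) is easy to illustrate numerically. The matrices below are illustrative toy inputs (a hierarchical Dirac mass of electroweak size and a right-handed scale of order 10^14 GeV), not the fitted matrices of the text:

```python
import numpy as np

# Toy illustration of the type-I seesaw of Eq. (8): (M_nu)_I = M_Dnu M_R^{-1} M_Dnu.
# All entries are illustrative choices, not the fitted values of the text.

M_Dnu = np.diag([0.1, 3.0, 100.0])   # GeV: hierarchical Dirac-type masses
M_R   = np.diag([1e12, 1e13, 1e14])  # GeV: hierarchical right-handed masses

M_nu = M_Dnu @ np.linalg.inv(M_R) @ M_Dnu  # light-neutrino mass matrix, GeV

masses_eV = np.sort(np.abs(np.linalg.eigvals(M_nu))) * 1e9  # GeV -> eV
print(masses_eV)  # heaviest ~ (100 GeV)^2 / 10^14 GeV = 0.1 eV
```

Even with electroweak-size Dirac entries, the light eigenvalues come out at the sub-eV scale, which is the point of the seesaw suppression.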
Therefore, in the following each case will be analyzed separately. However, let us first review the current experimental data on the neutrino mixing angles and mass splittings. The latest analysis Maltoni sets the following bounds:

• from oscillations: 1.4×10⁻³ eV² ≤ Δm²₂₃ ≤ 3.3×10⁻³ eV² ; 0.34 ≤ sin²θ₂₃ ≤ 0.66 ; with the best fit for and (from atmospheric and K2K data).

• from oscillations: 7.3×10⁻⁵ eV² ≤ Δm²₁₂ ≤ 9.1×10⁻⁵ eV² ; 0.23 ≤ sin²θ₁₂ ≤ 0.37 ; with the best fit for and (from solar and KamLAND data). Note also that a previously acceptable region with a somewhat higher mass splitting (the LMA II solution fogli ) is now excluded at about by the latest KamLAND data.

• finally, by using direct constraints from the CHOOZ reactor experiment as well as combined three-neutrino fits of the atmospheric and solar oscillations, one can set the following upper limit on the θ₁₃ mixing angle: sin²θ₁₃ ≤ 0.022 .

The procedure we use in searching for a fit to the neutrino-sector parameters is as follows. First, the low-scale values of the quark and lepton masses and of the CKM matrix angles and phase are chosen. (Generally we take fixed values for and , while the other parameters are chosen randomly from a predefined range; however, and can also be chosen randomly.) Next we pick a value for and , and compute the quark-lepton sector quantities at the GUT scale. Here we determine the relation between the lepton Yukawa couplings and the quark Yukawa couplings, which amounts to determining the parameters and phases in Eq. (3). The phase combinations are chosen as input (that is, they are picked randomly), while , and the remaining two phases are obtained by the procedure of fitting the lepton eigenvalues described in Section III.
Finally, we scan over the parameters which appear in the neutrino sector (if the neutrino mass matrix is of pure type–I or pure type–II seesaw form, there is only one phase; if both types appear, there will be two extra parameters, the relative magnitude and phase of the two contributions). The rest of this section is devoted to a detailed analysis of the predictions of the minimal model for the neutrino sector, in the type–I, type–II and mixed scenarios. (Due to its relative simplicity, we will start with the type–II case.) However, let us first summarize our results. We find that in the type–II scenario there is no good fit to the neutrino sector if the CKM phase is consistent with the experimental measurements (around 60 deg). This is in agreement with previous analyses moha2 ; Dutta1 ; however, our results are a bit more encouraging, in that for we find reasonably good fits, which improve significantly with not very large increases in the CKM phase. We can obtain marginal fits for as low as 80 deg. More interesting are the results for the type–I case; here we can find good fits to the neutrino sector for values of as low as , certainly consistent with the experimental limits. As such fits have not been found before, one might consider this to be the main result of our paper. Also, we find that in the mixed case it is possible to obtain a good neutrino-sector fit when the contributions coming from type–I and type–II are roughly equal in magnitude and of opposite phase.

### IV.1 Example of Type–I Seesaw Fit

We give here a representative example of a fit obtained in a type–I dominant case. This is obtained for GeV, GeV, GeV, 174 GeV, and GeV. The values of the quark and lepton masses at the GUT scale (in GeV) and the CKM angles are:

m_u = 0.0006745, m_c = 0.3308, m_t = 97.335
m_d = 0.0009726, m_s = 0.02167, m_b = 1.1475
m_e = 0.000344, m_μ = 0.0726, m_τ = 1.350
s12 = 0.2248, s23 = 0.03278, s13 = 0.00216, δ_CKM = 1.193 .
(10)

Here the masses are defined as Yukawa couplings times the Higgs vacuum expectation value (footnote: one can write Eqs. (3), (IV.2) in terms of either the Yukawa couplings of the leptons and quarks, or their masses (that is, Yukawa couplings times running Higgs VEVs). In this paper we use the Yukawa couplings, but we multiply by the Higgs VEVs at the SUSY scale for simplicity of presentation. One can easily check that, when going from one convention to the other, only the parameter rescales, while does not change). The values of the GUT-scale phases (in radians) and parameters are given by:

a_u = 0.881, a_c = 0.32678, a_t = 3.0382
b_d = 3.63235, b_s = 3.23784, b_b = 0.
a = 0.08136, b = 5.9797, σ = 3.244 .   (11)

With these inputs, one can evaluate all mass matrices at the GUT scale. In order to compute the neutrino mass matrix at the M_Z scale, we use the running factors

r22 = (Mν_ij / Mν_33)_MZ / (Mν_ij / Mν_33)_MGUT ,   r23 = (Mν_i3 / Mν_33)_MZ / (Mν_i3 / Mν_33)_MGUT ,

with . The elements of the neutrino matrix above are evaluated in a basis where the lepton mass matrix is diagonal. One then obtains for the neutrino parameters at low scale:

Δm²₂₃/Δm²₁₂ ≃ 24 , sin²θ₁₂ ≃ 0.27 , sin²2θ₂₃ ≃ 0.90 , sin²2θ₁₃ ≃ 0.08 .

Note here that only the atmospheric angle is close to the experimental limit, the solar angle and the mass splitting ratio being close to the preferred values. The elements of the diagonal neutrino mass matrix are

mν_i ≃ {0.0021 exp(0.11i) , 0.0098 exp(−3.06i) , 0.048}

in eV, with a normalization eV. The phases of the first two masses are the Majorana phases (in radians). Moreover, the Dirac phase appearing in the MNS matrix is rad, and one evaluates the effective neutrino mass for the neutrinoless double beta decay process to be

|∑ U²_ei mν_i| ≃ 0.009 eV .

### IV.2 Type–II seesaw

Much of the recent work on the neutrino sector in the minimal model has concentrated on the scenario in which the type–II seesaw contribution to the neutrino masses is dominant. The reason for the interest in this case is that, with

Mν ∼ MR ∼ Ml − Md ,
b–τ unification at the GUT scale naturally leads to a small 33 element and hence to large mixing in the 2-3 sector bajc . However, while the general argument holds, it has been difficult (or impossible) to fit both large mixing and the hierarchy between the solar and atmospheric mass splittings at once. In this section we will try to show why this is so, and under which conditions it might be achievable. We will use the same conventions as in Section III (that is, we work in a basis where is diagonal, and the parameters and are real and positive). However, in the construction of the neutrino mass matrices there will be an extra phase besides those which were relevant for the quark-lepton mass matrices. This phase can be thought of as an overall phase of . One then has:

MR = y (e^{iσ} Ml − Md)
a MDν = −(b e^{iσ} + 2) Ml e^{iσ} + 3 Md .   (12)

Following the analysis in Sec. III one can write:

(Ml)22 ≃ |b| m_s e^{i b_2}
(Ml)23 ≃ a m_t e^{i a_3} V32 V33
(Ml)33 = a m_t e^{i a_3} + b m_b ≃ m_τ e^{iα} ,   (13)

with close to and close to zero. Then the neutrino mass matrix will be proportional to:

(Mν)[2,3] ∼ Ml − Md e^{−iσ} ∼ [ m_s e^{i(ϵ−α)}(b − e^{−iσ}) , m_23 ; m_23 , m_τ e^{iα} − m_b e^{−iσ} ]   (14)

Note also that m_23 is almost real and positive, and, due to the fact that and , the 22 and 23 elements of the neutrino mass matrix are roughly of the same order of magnitude (in practice, one gets the 23 element somewhat larger than the 22 one). One then sees that if the phase σ is chosen such that the two terms in the 33 mass-matrix element cancel each other (that is, ), then there will be large mixing in the 2-3 sector, with:

tan(θν)23 ≃ |m_23 / m_33| .

However, this is not the whole story. One also needs some hierarchy between the atmospheric and solar neutrino mass splittings:

Δm²_sol / Δm²_atm = r ≲ 1/20

(based on the experimental measurements of the neutrino parameters reviewed in the previous section). In terms of the eigenvalues of the mass matrix (14), one then has:

m²₂/m²₃ ≃ ( |m_22 m_33 − m²_23| / (|m_22|² + |m_33|² + 2|m_23|²) )² ≲ 1/20 .
(15)

In order for this to hold, one needs a cancellation between the m_22 m_33 and m²_23 terms in the numerator of the above fraction. This in turn imposes a constraint on the phases involved:

ϕ = Arg(m_τ − m_b e^{−i(σ−α)}) ≃ 0 .   (16)

More detailed analysis shows that it is not possible (or very difficult) to get larger than 1 while satisfying the relation (15) between the eigenvalues. However, this will create problems with the atmospheric mixing angle. The PMNS matrix is U_PMNS = U_l† U_ν, where U_l and U_ν are the matrices which diagonalize the lepton and neutrino mass matrices, respectively. Since the lepton mass matrix has a hierarchical form, the matrix U_l is close to unity, with , where . The atmospheric mixing angle will then be:

tan θ_atm ≃ | [ (m_23/m_33)*_ν − (m_23/m_33)*_l ] / [ 1 + (m_23/m_33)*_ν (m_23/m_33)_l ] |

where the ν and l lower indices make clear that we are discussing elements of the neutrino and lepton mass matrices. Note, however, that Eq. (16) implies that and have the same phase ; then, since , the net effect of the rotation coming from the lepton sector is to reduce the 2-3 mixing angle. In practice, since , even if one has a value of the mixing close to one from the neutrino mass matrix, it will become of order 0.7 after the rotation in the lepton sector is taken into account. This situation is represented graphically in Fig. 1. As discussed above, corresponds to the case most favorable for obtaining the right solar-atmospheric mass splitting ratio, while corresponds to the case of maximal mixing angle. In practice this means that most reasonable fits are actually obtained when the angle is close to around (otherwise generally either the angle or the mass ratio is too small) (footnote: contributions from the phases in the lepton mass matrix can also improve the goodness of the fit (for example, if is significantly different from zero, or different from ). However, this generally requires that the parameters have low values, as explained in Section III. Hence we see that the neutrino sector also prefers in the second quadrant and low values).
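The reduction of the atmospheric angle by the charged-lepton rotation can be made concrete with a two-line computation; the value 0.15 used below for the lepton-sector ratio is an illustrative guess, not the paper's number.

```python
# Illustration of the 2-3 angle reduction discussed above.  When the
# neutrino-sector and lepton-sector ratios (m23/m33) carry the same phase,
# the charged-lepton rotation subtracts from the neutrino mixing
# (see the tan(theta_atm) formula in the text).

def tan_theta_atm(r_nu, r_l):
    """tan(theta_atm) ~ |(r_nu* - r_l*) / (1 + r_nu* r_l)|."""
    num = r_nu.conjugate() - r_l.conjugate()
    den = 1.0 + r_nu.conjugate() * r_l
    return abs(num / den)

r_nu = 1.0 + 0j   # maximal 2-3 mixing from the neutrino mass matrix (tan = 1)
r_l = 0.15 + 0j   # same-phase lepton-sector ratio (illustrative value)

print(round(tan_theta_atm(r_nu, r_l), 3))  # ~0.74: mixing pushed below maximal
```

This reproduces the "order 0.7" reduction quoted in the text for same-phase ratios; with opposite phases the two rotations would instead add.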
Note however that (or any value greater than about ) would require that at the GUT scale . We may infer that in order to obtain large mixing in the 2-3 sector one needs that at the GUT scale should be close to . This in turn can be ensured by requiring large and/or . For example, in Fig. 2 we present the results obtained for the values GeV, GeV, GeV and (with these values, the ratio ). Also, here we set the CKM phase , and let the other quark-sector parameters vary between the limits discussed in Section III.1. The left panel shows the maximum atmospheric/solar mass splitting ratio as a function of the atmospheric mixing angle . The three different lines correspond to different cuts on the solar mixing angle: (dotted), (dashed) and no cut (solid). One can observe here the correlation between large atmospheric mixing and a small atmospheric/solar mass ratio. Conversely, the right panel shows the maximum of the ratio as a function of the solar mixing angle . The three different lines correspond to cuts on the atmospheric mixing angle: (dotted), (dashed) and (solid line). We note here that the correlation between the solar angle and the mass ratio has the form of a step function (an abrupt decrease in the ratio once the solar angle goes over a certain threshold), while there seems to be a close-to-linear correlation between the maximal solar and atmospheric angles. It is interesting to consider how these results change if the parameters and/or are modified. One finds that the neutrino-sector results have a strong dependence on the parameter . For example, if one keeps the parameters used in Fig. 2 fixed but increases , one finds that the fit for the atmospheric angle and the atmospheric/solar mass ratio improves to a certain extent. That can be traced to the fact that the ratio increases with . However, one also finds that the solar angle generally gets smaller. This happens because there is a correlation between the solar angle and the value of the quark mass at the GUT scale; namely, increases with .
On the other hand, the ratio decreases with increasing . Fig. 3 exemplifies this behaviour. The three lines correspond to the maximum value of the mass splitting ratio , at values of (dotted), (solid) and (dashed line). A cut on the solar angle is also imposed in the left panel, and a cut on the atmospheric angle in the right panel. One can see that at larger values of one might potentially get better fits for the atmospheric angle and the atmospheric/solar mass ratio; however, the constraint on the solar angle becomes more restrictive. Smaller variations of the neutrino-sector results follow from modifications of the parameters and . However, these variations follow the same pattern as above: that is, an improvement in the fit for the atmospheric angle due to the increase of the ratio (which can be due to an increase in , or a decrease in ) coincides with a worsening of the fit for the solar angle. As a consequence, the results presented in Figs. 2, 3 can be improved only marginally. Scanning over a range of parameter space (footnote: in practice we find that the best results are obtained for large , large and large , such that is between 0.96 and 1), we find the best fit to the neutrino sector to be , and the atmospheric/solar mass splitting ratio . We note that although these numbers provide a somewhat marginal fit to the experimental results (the mixing angles are close to the exclusion limits, while the value for the mass ratio is central), they are still allowed. However, the results discussed above were obtained for a value of the CKM phase which is too large compared with the measured value ( from PDG pdg ). As argued in Section III (and indeed noted by previous analyses), there is a strong dependence of the goodness of the fit on the value of , with larger values giving better fits. We show this dependence in Fig. 4.
The parameters are the same as in Fig. 2 ( GeV, GeV, GeV and ), but the three lines correspond to different values of : (dotted line), (solid) and (dashed line). One can notice a rapid deterioration in the goodness of the fit with decreasing . Thus, for , the best fit to the neutrino sector we find (after scanning over the SUSY parameter space) is , and . For purposes of illustration, we give a fit obtained for a type–II dominant case for , GeV, 181 GeV, and TeV. The quark masses at low scale are GeV, GeV. The values of the quark and lepton masses at the GUT scale are then (in GeV):

m_u = 0.0008185, m_c = 0.3772, m_t = 139.876
m_d = 0.0015588, m_s = 0.03554, m_b = 2.3547
m_e = 0.000525, m_μ = 0.1107, m_τ = 2.420
s12 = 0.225, s23 = 0.0297, s13 = 0.00384, δ_CKM = 1.4 .   (17)

The values of the GUT-scale phases (in radians) and parameters are given by:

a_u = −0.4689, a_c = −1.0869, a_t = 3.0928
b_d = 2.6063, b_s = 2.2916, b_b = 0.
a = 0.09093, b = 4.423, σ = 3.577 .   (18)

The running factors for the neutrino mass matrix are . One then obtains for the neutrino parameters at low scale:

Δm²₂₃/Δm²₁₂ ≃ 18 , sin²2θ₁₂ ≃ 0.7 , sin²2θ₂₃ ≃ 0.88 , sin²2θ₁₃ ≃ 0.094 .

The elements of the diagonal neutrino mass matrix (masses and Majorana phases) are

mν_i ≃ {0.0016 exp(0.27i) , 0.011 exp(−2.86i) , 0.048}

in eV. The Dirac phase appearing in the MNS matrix is rad, and one evaluates the effective neutrino mass for the neutrinoless double beta decay process to be

|∑ U²_ei mν_i| ≃ 0.01 eV .

### IV.3 Type–I seesaw

The fact that in the type–II seesaw one can obtain large mixing in the 2-3 sector is due to a lucky coincidence: the type–II neutrino mass matrix being written as a sum of two hierarchical matrices ( and ), the most natural form for the neutrino mass matrix is also hierarchical. However, since the 33 elements of both matrices and are roughly of the same magnitude, by choosing the relative phase between the two to be close to , one can get a neutrino mass matrix of the form suited to explaining large mixing in the 2-3 sector.
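The cancellation mechanism just described can be seen in a two-by-two toy example: two hierarchical matrices whose 33 entries nearly cancel leave a difference dominated by the 23 entry. The numbers below are purely illustrative, not fitted values.

```python
import numpy as np

# Toy 2-3 block of the type-II relation M_nu ~ M_l - M_d: both inputs are
# hierarchical, but their 33 entries nearly cancel, so the difference has
# large 2-3 mixing.  All entries are illustrative choices.

M_l = np.array([[0.06, 0.20],
                [0.20, 1.00]])  # "lepton-like" hierarchical block
M_d = np.array([[0.02, 0.04],
                [0.04, 0.95]])  # "down-quark-like" hierarchical block

M_nu = M_l - M_d                       # 33 entry: 1.00 - 0.95 (near-cancellation)
tan_23 = abs(M_nu[0, 1] / M_nu[1, 1])  # ~ |m23 / m33|

print(M_nu)
print(tan_23)  # ~3.2: large 2-3 mixing once the 33 entries cancel
```

Without the near-cancellation (say, 0.50 instead of 0.95 in the second matrix), the same ratio would be small and the difference would stay hierarchical.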
The question then arises whether such a coincidence happens for the type–I seesaw neutrino mass matrix. To see this, let us write the Dirac neutrino mass matrix in the following form:

MDν = (b e^{iσ} + 2)/a [ M̃R + (b − e^{−iσ})/(b e^{iσ} + 2) Md ] ∼ M̃R + M̃d

where M̃R is the scaled right-handed neutrino mass matrix and M̃d is a rescaled down-type quark diagonal mass matrix (the scaling factor in this latter case is close to unity, since is roughly of order 10). Then the type–I seesaw neutrino mass matrix would be:

MνI = MDν MR^{-1} MDν ∼ M̃R + 2 M̃d + M̃d M̃R^{-1} M̃d .   (19)

Now, for most values of the phase σ, M̃R is hierarchical, and therefore so is its inverse; the type–I neutrino mass matrix is thus the sum of three hierarchical matrices (M̃d being diagonal). So it is not surprising that for most values of the phase σ it is also hierarchical. What is remarkable is that there are some values of σ for which the type–I seesaw mass matrix has large mixing in the 2-3 sector, and moreover, this happens for the same values of σ as in the case when the type–II mass matrix is non-hierarchical (that is, close to ). In order to see this, let us consider the magnitude and the phase of the 33 elements (the largest ones) in the three terms on the right-hand side of Eq. (19). If is not close to , the magnitude of is of order , with varying phase ( in Fig. 1 in the first quadrant); the magnitude of is also of order , and the phase . For the last term, we make use of the fact that, M̃R being hierarchical, ; then , with a phase close to . We see then that for most values of the 33 element is of order , while the off-diagonal elements are small. However, for , the cancellation in the 33 element of M̃R is matched by a cancellation between the 33 elements of the other two terms of Eq. (19) (since the relative phase between these is also ), thus leading to a non-hierarchical form for the type–I seesaw neutrino mass matrix.
The fine-tuning between the different contributions to the neutrino mass matrix is thus a little more involved in the type–I seesaw case than in the type–II case, but it can still lead to large mixing in the 2-3 sector. Moreover, since the correlations between the input parameters and the neutrino mass matrix elements are not as strong, most of the constraints discussed in the above section do not hold (for example, does not necessarily have to be very close to ). This may lead one to believe that it is possible to obtain a better fit for the neutrino sector in type–I models, and we find that this is in fact the case. For example, we show in Fig. 5 (left) the maximum atmospheric/solar mass splitting ratio as a function of the atmospheric mixing angle, with cuts on the solar mixing angle: (dotted), (dashed) and no cut (solid). In the right panel we show the maximum ratio as a function of the solar angle for (dotted), (dashed) and (solid line). This figure is obtained for the values GeV, GeV, GeV and , while the CKM phase is allowed to vary between 60 and 70 deg. We see that it is possible to obtain a large atmospheric/solar mass splitting ratio for values of the atmospheric and solar mixings consistent with the experimental constraints. How do these results change if we modify the SUSY parameters and
# Concentrations of Solutions

## Presentation on theme: "Concentrations of Solutions"— Presentation transcript:

Concentrations of Solutions, Prentice-Hall Chapter 16.2, Dr. Yager

Objectives: Solve problems involving the molarity of a solution. Describe the effect of dilution on the total moles of solute in solution. Define percent by volume and percent by mass.

Molarity: To make a 0.5 molar (0.5M) solution, first add 0.5 mol of solute to a 1-L volumetric flask half filled with distilled water. Swirl the flask carefully to dissolve the solute. Fill the flask with water exactly to the 1-liter mark.

A solution has a volume of 2.0 L and contains 36.0 g of glucose (C6H12O6). If the molar mass of glucose is 180 g/mol, what is the molarity of the solution?

Household laundry bleach is a dilute aqueous solution of sodium hypochlorite (NaClO). How many moles of solute are present in 1.5 L of 0.07 M NaClO?

How many moles of solute are in 250 ml of 2.0M CaCl2? How many grams of CaCl2 is this?

The concentration of a solution is a measure of the amount of solute that is dissolved in a given quantity of solvent. A dilute solution is one that contains a small amount of solute. A concentrated solution contains a large amount of solute.

Making Dilutions. Key Idea: Diluting a solution reduces the number of moles of solute per unit volume, but the total number of moles of solute in solution does not change. The total number of moles of solute remains unchanged upon dilution, so you can write the equation M1 x V1 = M2 x V2, where M1 and V1 are the molarity and volume of the initial solution, and M2 and V2 are the molarity and volume of the diluted solution.

Making a Dilute Solution:
To prepare 100 ml of 0.40M MgSO4 from a stock solution of 2.0M MgSO4, a student first measures 20 mL of the stock solution with a 20-mL pipet. She then transfers the 20 mL to a 100-mL volumetric flask. Finally she carefully adds water to the mark to make 100 mL of solution.

Volume-Measuring Devices

How many milliliters of a solution of 4.00 M KI are needed to prepare ml of M KI? Using M1 x V1 = M2 x V2: 4.00 M x V1 = M x ml, so V1 = 47.5 ml.

Percent Solutions: The concentration of a solution in percent can be expressed in two ways: as the ratio of the volume of the solute to the volume of the solution, or as the ratio of the mass of the solute to the mass of the solution.

Concentration in Percent (Volume/Volume): Isopropyl alcohol (2-propanol) is sold as a 91% solution. This solution consists of 91 mL of isopropyl alcohol mixed with enough water to make 100 mL of solution.

A bottle of the antiseptic hydrogen peroxide (H2O2) is labeled 3.0% (v/v). How many milliliters of H2O2 are in a ml bottle of this solution?

Concentration in Percent (Mass/Mass): A bottle of glucose is labeled 2.8% (m/m). How many grams of glucose are in g of solution?

1. To make a 1.00M aqueous solution of NaCl, 58.4 g of NaCl are dissolved in: 1.00 liter of water; enough water to make 1.00 liter solution; 1.00 kg of water; 100 mL of water.

2. What mass of sodium iodide (NaI) is contained in 250 mL of a 0.500M solution?
150 g; 75.0 g; 18.7 g; 0.50 g.

3. Diluting a solution does NOT change which of the following? concentration; volume; milliliters of solvent; moles of solute.

4. In a 2000 g solution of glucose that is labeled 5.0% (m/m), the mass of water is: 2000 g; 100 g; 1995 g; 1900 g.
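The worked problems above all reduce to three formulas: molarity = moles/liters, the dilution relation M1 x V1 = M2 x V2, and percent composition. A short script (the molar masses of glucose, CaCl2 and NaI are standard values, assumed here):

```python
# Worked versions of the molarity, dilution and percent examples from the slides.
# Molar masses (g/mol) assumed: glucose 180, CaCl2 ~111.0, NaI ~149.9.

# Molarity: 36.0 g of glucose (180 g/mol) in 2.0 L of solution
M_glucose = (36.0 / 180.0) / 2.0
print(M_glucose)              # 0.1 M

# Moles of NaClO in 1.5 L of 0.07 M bleach
print(0.07 * 1.5)             # ~0.105 mol

# Moles and grams of CaCl2 in 250 mL of 2.0 M solution
mol_cacl2 = 2.0 * 0.250
g_cacl2 = mol_cacl2 * 111.0
print(mol_cacl2, g_cacl2)     # 0.5 mol, ~55.5 g

# Quiz 2: mass of NaI in 250 mL of 0.500 M solution
g_nai = 0.500 * 0.250 * 149.9
print(g_nai)                  # ~18.7 g

# Dilution M1*V1 = M2*V2: stock volume of 2.0 M MgSO4 for 100 mL of 0.40 M
V1 = 0.40 * 100.0 / 2.0
print(V1)                     # 20.0 mL, matching the pipet example

# Quiz 4: mass of water in 2000 g of a 5.0% (m/m) glucose solution
print(2000 * (1 - 0.050))     # ~1900 g
```

The NaI result (about 18.7 g) and the water mass (about 1900 g) match the corresponding quiz options above.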
COMPOSITION OPERATORS ON THE PRIVALOV SPACES OF THE UNIT BALL OF ℂn

Title & Authors: COMPOSITION OPERATORS ON THE PRIVALOV SPACES OF THE UNIT BALL OF ℂn; UEKI SEI-ICHIRO

Abstract: Let B and S be the unit ball and the unit sphere in $\mathbb{C}^n$, respectively. Let $\sigma$ be the normalized Lebesgue measure on S. Define the Privalov spaces $N^p(B)$ $(1 < p < \infty)$ by

$N^p(B) = \{ f \in H(B) : \sup_{0 \le r < 1} \int_S \{\log(1+|f(r\zeta)|)\}^p \, d\sigma(\zeta) < \infty \}$.

Let $\varphi$ be a holomorphic self-map of B. Let $\mu$ denote the pull-back measure $\sigma \circ (\varphi^{\ast})^{-1}$. In this paper, we prove that the composition operator $C_{\varphi}$ is metrically bounded on $N^p(B)$ if and only if $\mu(S(\zeta,\delta)) \le C \delta^n$ for some constant C, and that $C_{\varphi}$ is metrically compact on $N^p(B)$ if and only if $\mu(S(\zeta,\delta)) = o(\delta^n)$ as $\delta \downarrow 0$ uniformly in $\zeta \in S$. Our results are analogous to MacCluer's Carleson-measure criterion for the boundedness or compactness of $C_{\varphi}$ on the Hardy spaces $H^p(B)$.

Keywords: Hardy spaces; Privalov spaces; composition operators; unit ball of $\mathbb{C}^n$

Language: English
http://mathhelpforum.com/pre-calculus/3638-factor-theorem.html
# Math Help - factor theorem.

1. ## factor theorem.

Let f(x)=x^3-8x^2+17x-9. Use the factor theorem to find other solutions to f(x)-f(1)=0, besides x=1. My answer is 2, 5; could that be right? Thanks for looking.

2. Originally Posted by kwtolley

Let f(x)=x^3-8x^2+17x-9. Use the factor theorem to find other solutions to f(x)-f(1)=0, besides x=1.

$f(x)=x^3-8x^2+17x-9$

$f(1)=1-8+17-9=1$

so $f(x)-f(1)=0$ becomes

$x^3-8x^2+17x-9=1$

$x^3-8x^2+17x-10=0$

Yes, 2 and 5 are solutions of this equation besides 1.

KeepSmiling Malay

3. Hello, kwtolley! Another approach . . .

Let $f(x)=x^3 - 8x^2 + 17x - 9$. Use the factor theorem to find other solutions to $f(x)-f(1)=0$, besides $x=1.$ My answers are: $2,\;5.$

We are given: $f(x) - f(1) = 0$

$(x^3 - 8x^2 + 17x - 9) - (1^3 - 8\cdot1^2 + 17\cdot1 - 9) = 0$

$(x^3 - 1^3) - 8(x^2 - 1^2) + 17(x - 1) - 9 + 9 = 0$

$(x - 1)(x^2 + x + 1) - 8(x - 1)(x + 1) + 17(x - 1) = 0$

$(x - 1)(x^2 + x + 1 - 8[x+1] + 17) = 0$

$(x - 1)(x^2 - 7x + 10) = 0$

$(x - 1)(x - 2)(x - 5) = 0$

Therefore, the solutions are: $x = 1,\;2,\;5$

4. Originally Posted by Soroban

Great. Alternative approaches are always interesting.
KeepSmiling Malay

5. ## Factor theorem

Thanks to everyone for looking it over with me.
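A quick numeric check of the factored result (a sketch; the integer search range is arbitrary):

```python
# f(x) - f(1) = 0 should hold exactly at x = 1, 2, 5,
# since x^3 - 8x^2 + 17x - 10 = (x - 1)(x - 2)(x - 5).
def f(x):
    return x**3 - 8 * x**2 + 17 * x - 9

roots = [x for x in range(-20, 21) if f(x) - f(1) == 0]
print(roots)  # [1, 2, 5]
```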
https://math.stackexchange.com/questions/557709/find-a-permutation
# Find a permutation

For $x = (12)(34)$ and $y = (56)(13)$, find a permutation $a$ such that $a^{-1}xa = y$.

I wrote $a^{-1}xa = y$ as $xa = ay$, thus $(12)(34)a = a(56)(13)$, but I can't find the $a$.

Conjugation affects cycles like so: $\sigma(a_1~a_2~\cdots~a_r)\sigma^{-1}=(\sigma(a_1)~\sigma(a_2)~\cdots~\sigma(a_r))$.

• In particular: (1) do you understand what I've said so far? (2) can you extrapolate to figure out how conjugation explicitly affects a product of (disjoint) cycles? (3) so then (rearranging $a^{-1}xa=y$ $\Leftrightarrow x=aya^{-1}$) what does $a(56)(13)a^{-1}$ look like? and finally (4) what can you select $a$ to be to achieve $(12)(34)$? – anon Nov 13 '13 at 4:32

• $a^{-1}(12)(34)a = (56)(13)$. I am stuck in finding such an $a$. – user104235 Nov 13 '13 at 4:32
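Following the hint, $a\,y\,a^{-1}$ relabels the entries of $y$'s cycles, so $(a(5)~a(6))(a(1)~a(3))$ must equal $(12)(34)$; sending $5\to1$, $6\to2$, $1\to3$, $3\to4$ and the leftover $2,4$ to $5,6$ gives $a = (1~3~4~6~2~5)$, one of several valid answers. A small Python sketch (dict-based permutations; helper names are illustrative) verifies it:

```python
# Permutations on {1,...,6} as dicts; (f o g)(k) = f(g(k)).
def compose(f, g):
    return {k: f[g[k]] for k in g}

def inverse(f):
    return {v: k for k, v in f.items()}

def from_cycles(cycles, n=6):
    p = {k: k for k in range(1, n + 1)}
    for c in cycles:
        for i, k in enumerate(c):
            p[k] = c[(i + 1) % len(c)]
    return p

x = from_cycles([(1, 2), (3, 4)])
y = from_cycles([(5, 6), (1, 3)])
a = from_cycles([(1, 3, 4, 6, 2, 5)])  # 5->1, 6->2, 1->3, 3->4, 2->5, 4->6

assert compose(compose(inverse(a), x), a) == y  # a^-1 x a = y
```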
http://link.springer.com/article/10.1007%2FBF00911690
, Volume 20, Issue 3, pp 333-337

# A calculation of the parameters of the high-speed jet formed in the collapse of a bubble

## Abstract

As is known, the collapse of vapor bubbles in a liquid can cause the intensive destruction of solid boundary surfaces. Experimental and theoretical investigations of bubble collapse have led to the conclusion that the surface of a bubble can deform and a liquid jet directed toward the solid surface can form in the process [1, 2]. In the theoretical reports [3, 4], the jet velocities obtained were too low to explain the destruction of the surface in a single impact. In [5] it was found as a result of numerical calculations that the formation of jets possessing enormous velocities is possible. It was also found that two fundamentally different schemes of jet formation are possible in the collapse of a bubble near a wall. The transition from one scheme to the other occurs upon a relatively small change in the initial shape of the bubble. In the present report we investigate the case of sufficiently small initial deformations of a bubble, when the region occupied by the bubble remains simply connected during the formation of the jet; i.e., a small bubble does not separate from the main bubble. In the second scheme of bubble collapse near a wall, the connectedness of the free boundary is disrupted and a small bubble separates off during the formation of the jet.

Translated from Zhurnal Prikladnoi Mekhaniki i Tekhnicheskoi Fiziki, No. 3, pp. 94–99, May–June, 1979.
https://socratic.org/questions/an-isosceles-triangle-has-sides-that-are-sqrt125-sqrt125-and-10-units-what-is-it
# An isosceles triangle has sides that are sqrt125, sqrt125, and 10 units. What is its area?

Nov 21, 2015

$50 {\text{ units}}^{2}$

#### Explanation:

The area of a triangle is found through $A = \frac{1}{2} b h$. We have the base, but not the length of the height. Imagine that an altitude is drawn down the center of the triangle, perpendicularly bisecting the base and bisecting the vertex angle. This creates two congruent right triangles inside the original isosceles triangle. Each of these triangles has a hypotenuse of $\sqrt{125}$ and a base of $5$, half the original base. We can use the Pythagorean theorem to find the length of the missing side, which is the height of the triangle.

${\left(5\right)}^{2} + {h}^{2} = {\left(\sqrt{125}\right)}^{2}$

$25 + {h}^{2} = 125$

${h}^{2} = 100$

$h = 10$

So, we now know that the base is $10$ units and the height is $10$ units. Thus,

$A = \frac{1}{2} \left(10\right) \left(10\right) = 50 {\text{ units}}^{2}$
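The same computation as a quick Python check, mirroring the steps above:

```python
import math

# Altitude splits the triangle into two congruent right triangles
# with hypotenuse sqrt(125) and base 5 (half of 10).
base = 10.0
hyp_sq = 125.0                          # (sqrt(125))^2
h = math.sqrt(hyp_sq - (base / 2)**2)   # sqrt(125 - 25) = 10.0
area = 0.5 * base * h
print(area)  # 50.0
```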
https://www.physicsforums.com/threads/chain-rule-help.110398/
# Chain rule help

1. Feb 12, 2006

### ussjt

f '(8)=5 g '(8)=3 f(4)=8 g(4)=10 g(4)=10 g(8)=2 f(8)=5

find (g o f)'(4)

How do I go about setting up these types of problems?

2. Feb 12, 2006

### AKG

What have you tried; where are you getting stuck?

3. Feb 12, 2006

### ussjt

I have not tried, because I don't know how to set up the problem... I don't really care about the answer; I just want to know how you go about setting up these kinds of problems, because I have a quiz tomorrow. I figure the first step is g(f(x)), so

g'(f(x))*f'(x)
g'(8)*f'(x)
3*f'(x)

Last edited: Feb 12, 2006

4. Feb 12, 2006

### AKG

(g o f)'(4) = g'(f(4))*f'(4) by chain rule
= g'(8)*f'(4) since f(4) = 8
= 3*f'(4) since g'(8) = 3

And that's all you can do, since they don't tell you what f'(4) is. I suspect they do, and you just copied out the question wrong. Also, why have you given "g(4) = 10" twice? Anyways, the way to set up the problem is this: given a problem, "find X", write:

X = A (by some theorem, or given fact, or logical inference)
= B (again, give justification)
= C (justification)
= D (justification)

until you get some answer D that you think the teacher will like, like an actual numeral. In this case, your X is (g o f)'(4), and your C is something like 3f'(4). You want a numeral for your D, but you can't get it yet from C because they haven't given you enough information (or you copied the question wrong).

5. Feb 12, 2006

### ussjt

For a given function f, consider the composite function h(x) = f(2x^3). Suppose we know that h'(x) = 7x^5. Calculate f'(x). How do I go about setting up this type of problem?

6. Feb 13, 2006

### HallsofIvy

Staff Emeritus

You titled this thread "chain rule"! It ought to occur to you to use the chain rule! If h(x) = f(2x^3) then h'(x) = f'(2x^3)(6x^2). You are given that h'(x) = 7x^5. You can easily solve f'(2x^3)(6x^2) = 7x^5 for f'(2x^3). Now let y = 2x^3. What is f'(y)?

7.
Feb 14, 2006

### ussjt

The way our TA showed us, the answer ought to be (7x^3/6)*(y/2)^(1/3)... but my answer must be in terms of x... so could someone please tell me if I went wrong somewhere or how to make it all in terms of x (by x I mean I can't have that "y"). Here are my steps:

f '(2x^3)(6x^2) = 7x^5
f '(2x^3) = (7x^5)/(6x^2)
f '(2x^3) = (7x^3)/6
~~~~~~~~~~~~
2x^3 = y
x^3 = y/2
x = (y/2)^(1/3)
~~~~~~~~~~

8. Feb 15, 2006

### VietDao29

It's fine up to here. Now substitute what you get into the expression f '(2x^3) = (7x^3)/6; we have: f'(y) = 7y/12. So what's f'(x)? Can you go from here?

9. Feb 15, 2006

### ussjt

where did the f'(y) come from?

10. Feb 15, 2006

### NateTG

In VietDao's post y is a placeholder for $2x^3$. I don't know if this will make things any clearer for you: The problem gives you

$$h(x)=f(z(x))$$

So, let's say we have some $a$ so that $x=z^{-1}(a)$ (provided that $z^{-1}$ actually exists). Then we can substitute that in

$$h(z^{-1}(a))=f(z(z^{-1}(a)))$$

then simplify

$$h(z^{-1}(a))=f(a)$$

Now, we can take the derivative of both sides w.r.t. a

$$h'(z^{-1}(a)) \times \left(z^{-1}\right)' (a) = f'(a)$$

Now, since $h'$ and $z$ are both known, you should be able to work out what the left hand side of the equation is equal to.
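The thread's conclusion, f'(y) = 7y/12 (so f'(x) = 7x/12), can be sanity-checked numerically; the antiderivative below is one choice, with the constant of integration dropped:

```python
# If f'(y) = 7y/12, then f(y) = 7y^2/24 (up to a constant),
# and h(x) = f(2x^3) should satisfy h'(x) = 7x^5.
def f(y):
    return 7 * y**2 / 24

def h(x):
    return f(2 * x**3)

def deriv(fn, x, eps=1e-6):
    # central finite difference
    return (fn(x + eps) - fn(x - eps)) / (2 * eps)

for x in (0.5, 1.0, 2.0):
    assert abs(deriv(h, x) - 7 * x**5) < 1e-3
```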
https://www.physicsforums.com/threads/uniform-circular-motion-of-a-particle.93044/
# Uniform Circular Motion of a particle

1. Oct 8, 2005

### shawpeez

I'm new at physics and have been looking at this question all day; any help would be greatly appreciated. Here it is:

A particle moves along a circular path over a horizontal xy coordinate system, at constant speed. At time t1 = 4.00 s, it is at point (5.00 m, 6.00 m) with velocity (3.00 m/s)j and acceleration in the positive x direction. At time t2 = 10.0 s, it has velocity (-3.00 m/s)i and acceleration in the positive y direction. What are the x and y coordinates of the centre of the circular path?

2. Oct 8, 2005

### mezarashi

1. The center of the circular motion

Draw out a diagram of a particle under circular motion. At each quarter circle, draw out the vectors for velocity and acceleration. Compare it to your question's problem. The times give you a clue to finding the radius and subsequently the center of the motion. Within (10 - 4) seconds, the object moved from one part of the circle to another. You know d = vt. You also know that v^2/r defines acceleration in circular motion. A bit tricky, but I hope now you can start thinking about it.

3. Oct 8, 2005

### shawpeez

thanks for the post, but I'm still lost

4. Oct 8, 2005

### mezarashi

Draw out a diagram of a particle under circular motion. At each quarter circle, draw out the vectors for velocity and acceleration. Compare it to your question's problem.

5. Oct 8, 2005

### shawpeez

From the diagram I see that at the first point it's in the 2nd quarter and at the second point it's in the 4th quarter (moving in a clockwise direction). Would the radius be the square root of (5^2 + 6^2)? I still can't understand how this would relate to an xy centre coordinate (I thought the centre coordinate should be the origin of your plane).

6. Oct 8, 2005

### mezarashi

Aha, you're starting to understand a bit now. Your direction of motion is also correct. Not so fast about the radius just yet.
You were drawing your own diagram based on the center of the motion being at (0, 0). But in this case, the center isn't there, and you must find it. Can you find the corresponding point on the question's x-y coordinate for the first point? From there you can possibly estimate where the 2nd point would be, but you can't tell until you know the radius. Now to finding this radius. You know that points 1 and 2 are separated by 3/4 of a circle, do you agree? Refer back to the diagram you drew. The circumference of a circle can be described as $$2\pi r$$. Now we know it took (10 - 4) seconds to make its way across three quarters of it, where d = vt. Getting closer?

7. Oct 8, 2005

### HallsofIvy

Staff Emeritus

You know that the circle passes through (5, 6) and that the vector 3.00j is tangent to the circle there. Since 3j is vertical, you know that (5, 6) is at one end of a horizontal diameter. Further, you know that its speed, at any time, is 3 m/s. You also know that, 6 seconds later, "it has velocity (-3.00 m/s)i and acceleration in the positive y direction." So at that time, when it has moved 3(6) = 18 m, it is at the top of the vertical diameter. That appears to mean that it has moved 3/4 of the entire circle. What is the circumference of the circle? What is the diameter of the circle? Knowing the radius and that one end of a horizontal diameter is (5, 6), it should be easy to find the coordinates of the center. (Notice the word "appears". It is also possible that in those 6 seconds, it has gone completely around the circle and then one quarter of the circle.)

8. Oct 8, 2005

### shawpeez

So 3/4 of the trip was 18 m, then the total circumference would be 24 m, the diameter 7.6 and radius 3.8 m. Then the centre of the circle would be x = 8.8, y = 6.
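The arithmetic in the last two posts can be reproduced directly (assuming, as post 7 cautions, exactly three quarters of a revolution):

```python
import math

# In t2 - t1 = 6 s at 3 m/s the particle covers 3/4 of the circle;
# at (5, 6) the acceleration points in +x, i.e. toward the center.
v = 3.0
t = 10.0 - 4.0
arc = v * t            # 18 m traveled
C = arc / 0.75         # full circumference: 24 m
r = C / (2 * math.pi)  # radius, about 3.82 m
center = (5.0 + r, 6.0)
print(center)  # approximately (8.82, 6.0)
```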
https://en.wikipedia.org/wiki/Sociable_number
# Sociable number Sociable numbers are numbers whose aliquot sums form a cyclic sequence that begins and ends with the same number. They are generalizations of the concepts of amicable numbers and perfect numbers. The first two sociable sequences, or sociable chains, were discovered and named by the Belgian mathematician Paul Poulet in 1918.[1] In a set of sociable numbers, each number is the sum of the proper factors of the preceding number, i.e., the sum excludes the preceding number itself. For the sequence to be sociable, the sequence must be cyclic and return to its starting point. The period of the sequence, or order of the set of sociable numbers, is the number of numbers in this cycle. If the period of the sequence is 1, the number is a sociable number of order 1, or a perfect number—for example, the proper divisors of 6 are 1, 2, and 3, whose sum is again 6. A pair of amicable numbers is a set of sociable numbers of order 2. There are no known sociable numbers of order 3, and searches for them have been made up to ${\displaystyle 5\times 10^{7}}$ as of 1970 [2]. It is an open question whether all numbers end up at either a sociable number or at a prime (and hence 1), or, equivalently, whether there exist numbers whose aliquot sequence never terminates, and hence grows without bound. 
## Example

An example with period 4:

The sum of the proper divisors of ${\displaystyle 1264460}$ (${\displaystyle =2^{2}\cdot 5\cdot 17\cdot 3719}$) is:

1 + 2 + 4 + 5 + 10 + 17 + 20 + 34 + 68 + 85 + 170 + 340 + 3719 + 7438 + 14876 + 18595 + 37190 + 63223 + 74380 + 126446 + 252892 + 316115 + 632230 = 1547860

The sum of the proper divisors of ${\displaystyle 1547860}$ (${\displaystyle =2^{2}\cdot 5\cdot 193\cdot 401}$) is:

1 + 2 + 4 + 5 + 10 + 20 + 193 + 386 + 401 + 772 + 802 + 965 + 1604 + 1930 + 2005 + 3860 + 4010 + 8020 + 77393 + 154786 + 309572 + 386965 + 773930 = 1727636

The sum of the proper divisors of ${\displaystyle 1727636}$ (${\displaystyle =2^{2}\cdot 521\cdot 829}$) is:

1 + 2 + 4 + 521 + 829 + 1042 + 1658 + 2084 + 3316 + 431909 + 863818 = 1305184

The sum of the proper divisors of ${\displaystyle 1305184}$ (${\displaystyle =2^{5}\cdot 40787}$) is:

1 + 2 + 4 + 8 + 16 + 32 + 40787 + 81574 + 163148 + 326296 + 652592 = 1264460.

## List of known sociable numbers

The following categorizes all known sociable numbers as of July 2018 by the length of the corresponding aliquot sequence:

| Sequence length | Number of known sequences |
| --- | --- |
| 1 | 51 |
| 2 | 1225736919[3] |
| 4 | 5398 |
| 5 | 1 |
| 6 | 5 |
| 8 | 4 |
| 9 | 1 |
| 28 | 1 |

It is conjectured that if n mod 4 = 3, then there are no such sequences with length n. The smallest number of the only known 28-cycle is 14316.

## Searching for sociable numbers

The aliquot sequence can be represented as a directed graph, ${\displaystyle G_{n,s}}$, for a given integer ${\displaystyle n}$, where ${\displaystyle s(k)}$ denotes the sum of the proper divisors of ${\displaystyle k}$.[4] Cycles in ${\displaystyle G_{n,s}}$ represent sociable numbers within the interval ${\displaystyle [1,n]}$. Two special cases are loops that represent perfect numbers and cycles of length two that represent amicable pairs.
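The four sums in the example can be verified with a short proper-divisor routine (a sketch using trial division up to the square root):

```python
# Aliquot sum: sum of the proper divisors of n (divisors excluding n itself).
def aliquot(n):
    total = 1  # 1 divides every n > 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
        d += 1
    return total

cycle = [1264460, 1547860, 1727636, 1305184]
for a, b in zip(cycle, cycle[1:] + cycle[:1]):
    assert aliquot(a) == b   # each member maps to the next, closing the cycle
assert aliquot(6) == 6       # a perfect number is a sociable number of order 1
```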
## Conjecture of the sum of sociable number cycles As the number of sociable number cycles with length greater than 2 approaches infinity, the percentage of the sums of the sociable number cycles divisible by 10 approaches 100%. (sequence A292217 in the OEIS). ## References 1. ^ P. Poulet, #4865, L'Intermédiaire des Mathématiciens 25 (1918), pp. 100–101. (The full text can be found at ProofWiki: Catalan-Dickson Conjecture.) 2. ^ Bratley, Paul; Lunnon, Fred; McKay, John (1970). "Amicable numbers and their distribution". Mathematics of Computation. 24 (110): 431–432. doi:10.1090/S0025-5718-1970-0271005-8. ISSN 0025-5718. 3. ^ Sergei Chernykh Amicable pairs list 4. ^ Rocha, Rodrigo Caetano; Thatte, Bhalchandra (2015), Distributed cycle detection in large-scale sparse graphs, Simpósio Brasileiro de Pesquisa Operacional (SBPO), doi:10.13140/RG.2.1.1233.8640 • H. Cohen, On amicable and sociable numbers, Math. Comp. 24 (1970), pp. 423–429
http://mathhelpforum.com/differential-equations/164609-wave-equation-polar-coordinates-print.html
# wave equation in polar coordinates

The wave equation is given to me as:

$\frac{\partial^{2}u}{\partial t^{2}}=\nabla^{2}u$

and $c^2=1$.
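To pose this on a disk, the Laplacian is usually expanded in polar coordinates $(r,\theta)$; with $c^2 = 1$ the equation becomes (a standard identity, added here as a sketch since the original post stops short):

```latex
\frac{\partial^{2}u}{\partial t^{2}}
  = \frac{\partial^{2}u}{\partial r^{2}}
  + \frac{1}{r}\,\frac{\partial u}{\partial r}
  + \frac{1}{r^{2}}\,\frac{\partial^{2}u}{\partial \theta^{2}}
```

Separation of variables in $r$ and $\theta$ then leads to the usual Bessel-equation radial part.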
https://gitlab.math.univ-paris-diderot.fr/beppe/occurrence-typing/-/commit/70a43e940bd50b32fcee63428d9df92b19c57c55
Commit 70a43e94 by Giuseppe Castagna

### rewording

parent a06aa2ef

@@ -98,7 +98,7 @@
inference). Second, the result of our analysis can be used to infer intersection types for functions, even in the absence of precise type annotations such as the one in the definition of \code{foo} in~\eqref{foo2}: to put it simply, we are able to infer the type~\eqref{eq:inter} for the unannotated pure JavaScript code of \code{foo}. Third, we
-show how to combine occurrence typing with gradual typing, and in
+show how to combine occurrence typing with gradual typing, in
particular how the former can be used to optimize the compilation of the latter.

@@ -201,7 +201,7 @@
particular if the static type of $e'$ is an intersection of arrows). Additionally, we can repeat the reasoning for all subterms of $e'$ and $e''$ as long as they are applications, and deduce distinct types for all subexpressions of $e$ that form applications. How to do it precisely---not only for applications, but also for other terms such as pairs, projections, records etc---is explained in the rest of
-the paper but the key ideas are pretty simple and are explained next.
+the paper but the key ideas are pretty simple and are presented next.

\subsection{Key ideas}\label{sec:ideas}
https://www.researchgate.net/profile/Margareta-Halicka
# Margaréta Halická

Comenius University Bratislava · Department of Applied Mathematics and Statistics

25 Publications · 277 Citations

## Publications (25)

Article
In data envelopment analysis (DEA), non-radial graph models represent an important class characterized by the independent treatment of each input and output factor in the efficiency measurement. The extensive literature on this topic often analyses individual models in isolation, so much so that the same model may be known under different names due...

Article
The hyperbolic measure (HM) model is a radial, non-oriented model that is often used in Data Envelopment Analysis (DEA). It is formulated as a non-linear programming problem and hence the conventional linear programming methods, customarily used in DEA, cannot be applied to it in general. In this paper, we reformulate the hyperbolic measure model i...

Article Full-text available
In data envelopment analysis for environmental performance measurement the undesirable outputs are taken into account. Among the standard approaches for dealing with the undesirable outputs are the hyperbolic and the directional distance measures. They both allow a simultaneous expansion of desirable outputs and a contraction of undesirable outpu...

Chapter
A one-dimensional free terminal time optimal control problem stemming from mathematical finance is studied. To find the optimal solution and prove its optimality the standard maximum principle procedure including Arrow's sufficiency theorem is combined with specific properties of the problem. Certain unexpected features of the solution are pointed...

Article
Throughout its evolution, data envelopment analysis (DEA) has mostly relied on linear programming, particularly because of simple primal-dual relations and the existence of standard software for solving linear programs. Although also nonlinear models, such as Russell measure or hyperbolic measure models, have been introduced, their use in applicati...

Article
A model of sustainable economic growth in an economy with two types of exhaustible resources is analyzed. The resources are assumed to be perfect substitutes with marginal rate of substitution varying over time. The optimal control framework is used to characterize the optimal paths under the maximin criterion. It is shown that the resource with in...

Article
This paper studies limiting behaviour of infeasible weighted central paths in semidefinite programming under strict complementarity assumption. It is known that weighted central paths associated with the 'Cholesky factor' symmetrization of the μ-parameterized centring condition are well defined for some classes of weight matrices, and they are anal...

Article Full-text available
It was recently shown in [4] that, unlike in linear optimization, the central path in semidefinite optimization (SDO) does not converge to the analytic center of the optimal set in general. In this paper we analyze the limiting behavior of the central path to explain this unexpected phenomenon. This is done by deriving a new necessary and sufficien...

Article
In this paper we study the limiting behavior of the central path for semidefinite programming (SDP). We show that the central path is an analytic function of the barrier parameter even at the limit point, provided that the semidefinite program has a strictly complementary solution. A consequence of this property is that the derivatives – of any ord...

Article Full-text available
The central path in linear optimization always converges to the analytic center of the optimal set. This result was extended to semidefinite optimization in [D. Goldfarb and K. Scheinberg, SIAM J. Optim. 8, 871–886 (1998; Zbl 0914.90215)]. We show that this latter result is not correct in the absence of strict complementarity. We provide a countere...

Article
Several papers have appeared recently establishing the analyticity of the central path at the boundary point for both linear programming (LP) and linear complementarity problems (LCP). While the proofs for LP are long, proceeding from limiting properties of the corresponding derivatives, the proofs for LCP are very simple, consisting of an applicat...

Article
In this paper we discuss results of Data Envelopment Analysis for the assessment of efficiency of a large structured network of bank branches. We focus on the problem of a suitable choice of efficiency measures and we show how these measures can influence results. As an underlying model we make use of the so-called normalized weighted additive mode...

Article
This note shows the incorrectness of several results concerning robustness measures introduced by M. S. Mahmoud (1996: Some robustness measures for a class of discrete-time systems. IMA J. Math. Control & Info. 13, 117–128). Some confusing issues are discussed, and the correct forms of the corresponding results are provided.

Article
We study the properties of the weighted central paths in linear programming. We consider each path as the function of the parameter μ≥0 where the value at μ=0 corresponds to the limit point at the boundary of the feasible set. We calculate the recursive formulas for the central path derivatives of all orders valid at each μ≥0. We establish the geom...

Article Full-text available
In this paper a duality of transformation functions in the interior point method is treated. A dual pair of convex or linear programming problems is considered and the primal problem is transformed by the parametrized transformation function of a more general form than logarithmic is. The construction of the parametrized transformation function f...

Chapter
The stabilization problem of linear discrete-time large scale systems (LSS) is studied. Our recent results on stability robustness bound Halická and Rosinová (1992) are employed and a sufficient stability condition for LSS is developed which comprises different Lyapunov-type bounds as special cases. The obtained condition yields a decentralized s...

Article
The stabilization problem of linear discrete-time large scale systems (LSS) is studied. Our recent results on stability robustness bound Halická and Rosinova (1992) are employed and a sufficient stability condition for LSS is developed which comprises different Lyapunov-type bounds as special cases. The obtained condition yields a decentralized s...

Article
Robustness bound estimates, based on the direct Lyapunov method for discrete-time nominally linear systems, are analysed and compared. Although various robustness bound estimates were introduced recently, little effort has been made to compare them. We develop a scheme for obtaining the estimates, which brings a new robustness bound estimate and pr...

Article Full-text available
Monotonicity of the Lagrangian function corresponding to the general root quasibarrier as well as to the general inverse barrier function of convex programming is proved. It is shown that monotonicity generally need not take place. On the other hand for LP-problems with some special structure monotonicity is proved for a very general class of int...

Article
The problem of existence of a regular synthesis for the linear time-optimal control problem with convex control constraints is studied. A regular synthesis on the whole reachable set cannot be established for this problem by direct use of Brunovsky's general existence theorem. This is in accord with the example of a nonsubanalytic reachable set due...

Project (1)
https://www.physicsforums.com/threads/is-there-an-intermediate-particle-to-pair-production.323530/
# Is there an intermediate 'particle' to pair production?

1. Jul 6, 2009
### Rymer
This question was brought to mind while reading other threads. I didn't think it was appropriate to diverge those threads off their subjects, so I started a new one (hope this is right). I'm specifically thinking about the extremely rare event of the interaction of two gamma rays as the pair production event. It would seem that there should be an intermediate 'combined' particle, since the event can't be instantaneous. Does this particle have a name? What would its properties have to be? (Is it spin 2, like a graviton?) This is not my field, just interested. Thank you.

2. Jul 6, 2009
### malawi_glenn
In terms of Feynman diagrams, the intermediate particles are virtual electrons/positrons, but in the REAL interaction there need not be any such thing: Feynman diagrams are just mathematical remnants of doing perturbation theory (I don't think that Nature DOES perturbation theory), just middle steps in our calculations, much as when we calculate work etc. Now, ask yourself WHY the event CAN NOT be instantaneous; why this bias?

3. Jul 6, 2009
### Rymer
The process involves a finite, constrained interaction -- as all must to some degree -- so it cannot be instantaneous. The Uncertainty Principle, I thought. So what are the properties of the intermediate?

4. Jul 6, 2009
### malawi_glenn
You have confused what we can know with statistical certainty with how nature works. Also, when we approach nature, we can only apply our models and see if they make sense; we can never go "down there" and see how these things occur with our own eyes. In the language of QFT with perturbation theory, I don't think it makes sense to ask how long it took for the intermediate virtual particle to propagate, etc. Also, why is the forming of this intermediate state NON-instantaneous?
Last edited: Jul 6, 2009

5. Jul 6, 2009
### Bob_for_short
You can think of a superposition of positronium states (positronium is a neutral system which can have spin 2 in excited states).
Last edited: Jul 6, 2009

6. Jul 6, 2009
### malawi_glenn
Why??

7. Jul 6, 2009
### Rymer
I have not calculated it -- so I guess I don't understand it. To me, instantaneous and infinitesimal seem to be mathematical conveniences -- not scientific possibilities.

8. Jul 6, 2009
### malawi_glenn
But you argued that 2 gammas -> 1 electron + 1 positron cannot be an instantaneous process, but that it should go through an intermediate state [with spin 2?? why not spin 1 or 0? ;-)], thus 2 gammas -> intermediate state -> 1 electron + 1 positron. Why does this cancel the "instantaneous" "problem"?

9. Jul 6, 2009
### Rymer
Don't know. Only mentioned spin 2 because I understood spin 0 would be disallowed and spin 1 would not balance. I was wondering if this could be related to the pair production reported near super-massive objects in the galactic core.

10. Jul 6, 2009
### malawi_glenn
Why is spin 0 disallowed, and why does spin 1 not "balance"? Define "balance". We, as far as I know, do not know so much about that e+e- production yet, but what is related to it? The "intermediate step mechanism", or?? Why should it be?

11. Jul 6, 2009
### Rymer
No idea why ... that was why the question. I don't know the basics of how this interaction occurs. Are these photons both spin-up or down? How are they 'approaching' each other ... 'head-on' or at an angle? How is the interaction occurring?

12. Jul 6, 2009
### malawi_glenn
Do you even know ANY interaction in quantum field theory? "How" they occur, as I wrote, all depends on what model you are using. You can, to lowest order in perturbation theory, write it as 2 real photons exchanging virtual electrons/positrons with a spectator nucleus, giving rise to 1 real electron and 1 real positron. Then we can derive the angular- and spin-dependent cross section (a probability function that the process occurs - recall that we are in the quantum realm).

13. Jul 6, 2009
### Rymer
I know very little about it -- but have read that there is another reaction -- far more rare -- that does not require the presence of a 'spectator nucleus'. It was that interaction to which I was referring.

14. Jul 6, 2009
### malawi_glenn
But where did you read it? Maybe if you show us we can help you more :-) Saying things like "I have read/heard" is not so good :-(

15. Jul 6, 2009
### Rymer
More than one place -- sorry, I had thought it was 'common knowledge' in the field. But since I know little about this field, I guess I'm wrong. I'll do some checking and track it down.

16. Jul 6, 2009
### malawi_glenn
Well, since you said you had little knowledge about this field, you should not assume what is common knowledge and what is not; just an advice for the future ;-)

17. Jul 6, 2009
### Rymer
My knowledge -- what little there is of it -- is 40 years old. A quick internet search gave a reference to: http://prola.aps.org/abstract/PR/v155/i5/p1404_1 It seems to be from 'my era', so it may be passé now.

18. Jul 6, 2009
### malawi_glenn
OK, it should just be a virtual particle exchange of an electron or a positron in lowest order perturbation theory.
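For the two-photon reaction discussed in the thread (gamma + gamma -> e+ + e- with no spectator nucleus), the kinematic threshold follows from the invariant mass of the photon pair: for a head-on collision, s = 4·E1·E2, and pair production requires s ≥ (2·m_e·c²)², i.e. E1·E2 ≥ (m_e·c²)². A minimal numerical sketch of that threshold condition (added here as an illustration; only the standard electron rest energy is assumed):

```python
# Threshold for two-photon pair production, gamma + gamma -> e+ + e-.
# Head-on collision: invariant mass squared s = 4*E1*E2, and the process is
# kinematically allowed when s >= (2*m_e*c^2)**2, i.e. E1*E2 >= (m_e*c^2)**2.

M_E_C2_EV = 0.511e6  # electron rest energy m_e*c^2 in eV

def partner_threshold_ev(e1_ev: float) -> float:
    """Minimum energy of the second photon (head-on) for pair production."""
    return M_E_C2_EV**2 / e1_ev

# Two equal-energy photons: each must carry at least the electron rest energy.
print(partner_threshold_ev(M_E_C2_EV) / 1e6)   # 0.511 MeV

# A 1 TeV gamma ray can pair-produce even on an infrared background photon:
print(partner_threshold_ev(1e12))              # ~0.26 eV
```

This makes the rarity plausible: for MeV-scale gamma rays the second photon must itself be MeV-scale, so dense counter-propagating gamma fluxes are needed.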
https://socratic.org/questions/how-does-intensity-affect-the-photoelectric-effect
# How does intensity affect the photoelectric effect?

##### 1 Answer
May 9, 2015

If the frequency of the electromagnetic waves is higher than the extraction threshold of the metal, so that electrons are emitted from the metal surface, then an increase of light intensity results in a proportional increase of the electrical current in the circuit that collects the emitted electrons.

This is simply explained by the photon model of light. In that model, an electromagnetic wave carries its energy not as a continuum, but as a multitude of grains of energy, each with the same indivisible amount of energy. The indivisible grain of light energy, also called a "light dart" by Einstein, and later an electromagnetic energy quantum, or photon, carries the energy given by the Planck-Einstein law:

$E_{\text{photon}} = h f$

where $f$ is the frequency of the light. This energy is the same for every photon of the same frequency.

From the point of view of photons, a more intense light is not made of "higher waves", but of a greater number of photons. It is then obvious that, if each photon is capable of expelling one electron, the more intense the light, the more electrons will be expelled.

If the frequency is below the threshold, then even the most intense light will be equally incapable of ejecting even a single electron. It is impossible to explain these experimental outcomes with the wave model of light.
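As a rough numerical illustration of the two relations in the answer (the wavelength and beam powers below are made-up examples, not from the answer itself):

```python
# Photon energy E = h*f = h*c/lambda, and photon arrival rate N = P/E for a
# beam of power P. Doubling the intensity doubles N (and hence the
# photocurrent), but leaves the energy of each individual photon unchanged.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_m: float) -> float:
    return H * C / wavelength_m

def photons_per_second(power_watts: float, wavelength_m: float) -> float:
    return power_watts / photon_energy_joules(wavelength_m)

E = photon_energy_joules(500e-9)        # green light: ~4e-19 J per photon
n1 = photons_per_second(1e-3, 500e-9)   # 1 mW beam
n2 = photons_per_second(2e-3, 500e-9)   # doubled intensity
print(E, n2 / n1)                       # photon energy unchanged; rate doubles
```

If $h f$ is below the work function, increasing $P$ only increases the number of sub-threshold photons; none of them can eject an electron.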
https://www.physicsforums.com/threads/putting-a-satellite-into-orbit.129342/
# Putting a satellite into orbit

1. Aug 21, 2006
### wizzart
Geez, it's been ages since I last was here. Anyway, I was thinking a little on the subject below and remembered this place. It's probably ridiculously trivial, but still I wonder:

A satellite stays in orbit around the earth because it's falling at the same pace as the earth's surface curves, simply put. But that's a satellite in stationary orbit. What happens if I release a satellite in an orbit that's too high for its speed? My guess: it falls back to earth and, because its angular momentum needs to be preserved, it gradually picks up more tangential speed until it lands in a stationary orbit, which would be somewhere between the orbit where it should've been and where it was released... Correct or rubbish?

2. Aug 21, 2006
### DaveC426913
You've put it in an elliptical orbit. Its apogee (maximum altitude) is where you released it; its perigee (minimum altitude) will be where it is closest to the Earth (but moving fastest). Provided the perigee is still above Earth's atmospheric drag, it will remain in that stable, elliptical orbit.

BTW, ALL orbits are elliptical, though some have an eccentricity of nearly zero (meaning the orbit is nearly circular).

Rubbish. See above. There is no such thing as a "stationary orbit".
Last edited: Aug 21, 2006

3. Aug 21, 2006
### wizzart
Hm... sounds plausible too. Well, quasi-stationary then... slowly spiralling down to earth.

4. Aug 21, 2006
### Farsight
The thing to note, wizzart, is that if you're in an orbiting spaceship and you want to get into a higher orbit, you put your foot on the gas and go faster. You don't point the nose up and fire the rockets, because that doesn't give you the forward velocity that stretches your arc of fall into a bigger circle. It would just give you an eccentric orbit, like Dave says.

5. Aug 21, 2006
### BobG
I think your term, stationary, is wrong, making readers wonder what you mean. Do you mean stable orbit? Circular orbit?

Any object with any tangential velocity at all travels in an elliptical orbit. Some of those orbits just happen to intersect the Earth (a Roger Clemens fastball, for example). If the satellite's trajectory misses the Earth and is high enough that it isn't affected by the Earth's atmosphere, the satellite's mechanical energy stays constant through the entire orbit. The balance between potential and kinetic energy just changes. As DaveC explained, by the time the satellite reaches perigee, it has enough kinetic energy that it starts gaining altitude again and returns to the same point it started from. The orbit is just as stable as a circular orbit. The orbit remains elliptical and never becomes circular.

The only way the satellite spirals down to Earth is if the satellite is low enough to be affected by the Earth's atmosphere. Generally, only satellites with an altitude lower than 1000 km are affected, and circular orbits are affected more than elliptical orbits. In this case, atmospheric drag is decreasing the total mechanical energy rather than just changing the balance between potential and kinetic.

Edit: For an elliptical orbit where perigee is low enough to be affected by atmospheric drag, your assumption would be correct. If you slow a satellite at perigee due to atmospheric drag, it's not the perigee altitude that changes; it's the apogee altitude. As soon as the satellite was slowed, a new orbit was created, and the satellite's current location has to be part of that new orbital ellipse. So atmospheric drag would slowly decrease the apogee height until you had a circular orbit, at which point the satellite would 'spiral' into the Earth's atmosphere.
Last edited: Aug 21, 2006

6. Aug 21, 2006
### WhyIsItSo
Did you mean Geo-Stationary?

7. Aug 21, 2006
### DaveC426913
There is no such thing as a quasi-stationary orbit. The words stationary and orbit used in conjunction with each other are very nearly an oxymoron. (The only combination that makes sense is geo-stationary orbit, which is something completely different and has nothing to do with this thread.)

An orbit that is "slowly spiralling down to earth" is an orbit that is decaying. The only thing (within reason) that could be causing this is drag, sapping forward motion from the satellite.

I think the term you are looking for is stable orbit. Unless acted upon by an outside force (such as another gravitational body, or friction) (or an inside force, such as rockets), a satellite will remain in a stable elliptical orbit. Period. This is critical to your understanding of orbits.

Now, if you play with the adjustments carefully, you can get the satellite's orbit to have virtually zero eccentricity, meaning its orbit is circular and it will move around while maintaining a constant altitude above the Earth.
Last edited: Aug 21, 2006

8. Aug 21, 2006
### DaveC426913
Look at the attached illo.
Attached: PF060821orbits.gif (26.5 KB)

9. Aug 21, 2006
### wizzart
Dang, it's right here in my 1st year Relativity book... need to critically review some of the old stuff, I guess. As for stationarity: stationary to me means nothing changes. For the orbiting object everything changes constantly, but without drag the orbit itself (circle or ellipse) doesn't change shape; hence my use of the word stationary. If you prefer stable, I can see why.

10. Aug 21, 2006
### DaveC426913
:surprised :surprised You're in post-2ndary??:surprised :surprised

11. Aug 21, 2006
### wizzart
Dude, I'll make it a lil worse... I'm one subject away from my bachelor's degree :P Did research on noise in Quantum Dots and all that. The mechanics course was 5 years ago, and back then I didn't really get the whole orbit stuff (and the big point in the course was relativity, so I got away with it)... read through it again now and it makes perfect sense. It's just one of those questions that pops into your head and you go "I'm pretty sure I should know this... but I don't".
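The release scenario from the opening post can be checked numerically with the vis-viva relation v² = μ(2/r − 1/a) for a simple two-body orbit. The sketch below assumes a purely tangential release below circular speed (so the release point is apogee); the release radius and speed are made-up illustrative numbers, not from the thread:

```python
import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

def orbit_from_tangential_release(r0: float, v0: float):
    """Apogee and perigee radii (from Earth's center) for a tangential release.

    From the vis-viva equation, the semi-major axis satisfies
    1/a = 2/r0 - v0**2/MU. If v0 is below circular speed, r0 is the apogee
    and the perigee radius is 2*a - r0.
    """
    a = 1.0 / (2.0 / r0 - v0**2 / MU)
    apogee, perigee = r0, 2.0 * a - r0
    return apogee, perigee

r0 = 42164e3                  # released at geostationary radius...
v_circ = math.sqrt(MU / r0)   # ~3075 m/s would give a circular orbit
ap, pe = orbit_from_tangential_release(r0, 2500.0)  # ...but too slow
print(pe > R_EARTH)           # perigee still clears the Earth: stable ellipse
```

The satellite keeps returning to the release radius every revolution; it never "settles" into a circle unless something like drag removes energy at perigee.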
https://blackhole12.blogspot.de/2013/05/
## May 30, 2013

### How To Complain About Men And Be Sexist At The Same Time

What if I told you about an article that complained about how social media and instant gratification have eroded away at our social fabric? How we don't take the time to pause and reflect on how we live our lives? How we fail to really work at making the world a better place, and instead waste time building websites to share cat photos on? I'd think that it raised several important issues about modern society.

What if I then told you that the entire article only applied this to men? What if it was titled "Why Men Aren't Really Men Anymore"? Suddenly, things just got a lot more sexist. In fact, that entire article is built almost entirely out of gender stereotypes. It's that subtle, classy kind of sexism; bigotry that hides behind delicate prose, hiding its true nature.

If the article were rewritten so that it described these issues in gender-neutral terms, I'd be liable to agree with it. Trolling, aggression, lack of human interaction, billions of dollars being spent on worthless startups that solve first-world problems - these are all real issues. Yet, to imply that men are the ones with these problems, to imply that any class of problems belongs to only one gender, is sexist.

The article is wonderfully subtle in its sexism. Just look at how it claims "real" men should treat women:

> Real men are not selfish. Real men are just as concerned for the feelings, needs and minds of women as they are for their own — not just women's bodies and their sexual usefulness. Real men have a well-defined code of ethics and respect that they follow.

Isn't that sweet? Except, oh wait, we have this disturbing sentence in the middle of the article:

> Men have become lazy pussies. I don't even want to use the word pussy because it brings to mind women, who nowadays have much more character than men.
Not to be outdone, the article's last paragraph contains this gem:

> Some great women are settling for these fools and then finding that they themselves have no choice but to wear the pants in the family because their "man" is PMSing.

This is horribly sexist. These three sentences enforce multiple gender stereotypes and tell us a number of things:

1. Women want manly men. If they think they don't want a manly man, they just haven't found one manly enough yet.
2. Men are always the ones who should be wearing the pants in the family, because men always have to be the manly ones.
3. Women shouldn't have more character than men.
4. Women are allowed to be PMSing because they're women, and everyone knows women get emotional, but a man should never be emotional, because he's a man.

See, the reasoning behind all of these is just "because men are men and women are women." That isn't a reason, it's sexism. It's bigotry. It's enforcing stereotypes and trying to tell people how they should live their lives based on a set of gender roles that society arbitrarily decided on. It simply assumes it knows what women want, and goes so far as to imply that the only reason a woman wouldn't want a "real" man is that she hasn't seen one yet.

This is exactly like those annoying little assholes who tell you "Oh, you just haven't heard good dubstep! You'll like it then!" Inevitably, after you still don't like it, they just tell you that something is clearly wrong with you and you have no taste. At no point do they entertain the notion that, maybe, just maybe, you actually don't like dubstep.

Our society suffers from the same tunnel vision. We assume that when a woman is working overtime and the man is doing laundry, it's the man's fault for not being manly enough, and the woman has been forced to become the head of the household. If she had just married a man manly enough, she wouldn't have had to do that!
It never crosses their minds that maybe, just maybe, the woman actually likes it this way. Maybe some men just don't want to be manly. Maybe some women like men who aren't manly. Maybe you can't fit every single human being into nice neat little gender boxes.

It is not a man's job to be manly simply because he is a man. It is not a woman's job to be PMSing or making you a sandwich. It is not society's place to tell anyone how they should live their lives. You do not know what they want, and you should never pretend you do. We can make certain observations about genders, like men tend to be more aggressive, and women tend to be more emotional, but we should never assume that men should be more aggressive, or women should be more emotional. That is for the individual to decide, not society.

A human being is something precious, something complicated, something that can't be easily categorized. Stop trying to stuff them into square boxes.

## May 25, 2013

### Course Notes

It's standard procedure at the University of Washington to allow a single sheet of handwritten notes during a Mathematics exam. I started collecting these sheets after I realized how useful it was to have a reference that basically summarized all the useful parts of the course on a single sheet of paper. Now that I've graduated, it's easy for me to quickly forget all the things I'm not using. The problem is that, when I need to, say, develop an algorithm for simulating turbulent airflow, I need to go back and re-learn vector calculus, differential equations and nonlinear dynamics. So I've decided to digitize all my notes and put them in one place where I can reference them. I've uploaded them here in case they come in handy to anyone else. The earlier courses listed here had to be reconstructed from my class notes because I'd thrown my final notesheet away or otherwise lost it. The classes are not listed in the order I took them, but rather organized into related groups.
This post may be updated later with expanded explanations for some concepts, but these are highly condensed notes for reference, and a lot of it won't make sense to someone who hasn't taken a related course.

Math 124 - Calculus I (lost)
Math 125 - Calculus II (lost)
Math 126 - Calculus III (lost)

Math 324 - Multivariable Calculus I

$r^2 = x^2 + y^2$, $x= r\cos\theta$, $y=r\sin\theta$

$\iint\limits_R f(x,y)\,dA = \int_\alpha^\beta\int_a^b f(r\cos\theta,r\sin\theta)\,r\,dr\,d\theta=\int_\alpha^\beta\int_{h_1(\theta)}^{h_2(\theta)} f(r\cos\theta,r\sin\theta)\,r\,dr\,d\theta$

$m=\iint\limits_D p(x,y)\,dA \begin{cases} M_x=\iint\limits_D y\, p(x,y)\,dA & \bar{x}=\frac{M_y}{m}=\frac{1}{m}\iint x\, p(x,y)\,dA \\ M_y=\iint\limits_D x\, p(x,y)\,dA & \bar{y}=\frac{M_x}{m}=\frac{1}{m}\iint y\, p(x,y)\,dA \end{cases}$

$Q = \iint\limits_D \sigma(x,y)\,dA$
$I_x = \iint\limits_D y^2 p(x,y)\,dA$
$I_y = \iint\limits_D x^2 p(x,y)\,dA$
$I_0 = \iint\limits_D (x^2 + y^2) p(x,y)\,dA$

$\iiint f(x,y,z)\, dV = \lim_{l,m,n\to\infty}\sum_{i=1}^l\sum_{j=1}^m\sum_{k=1}^n f(x_i,y_j,z_k)\, \Delta V$

$\iiint\limits_B f(x,y,z)\,dV=\int_r^s\int_c^d\int_a^b f(x,y,z)\,dx\,dy\,dz = \int_a^b\int_r^s\int_c^d f(x,y,z)\,dy\,dz\,dx$

$E$ = general bounded region.
Type 1: $E$ is between the graphs of two continuous functions of $x$ and $y$.
$E=\{(x,y,z)\mid(x,y)\in D,\; u_1(x,y) \le z \le u_2(x,y)\}$
where $D$ is the projection of $E$ onto the $xy$-plane, and
$\iiint\limits_E f(x,y,z)\,dV = \iint\limits_D\left[\int_{u_1(x,y)}^{u_2(x,y)} f(x,y,z)\,dz \right]\,dA$

If $D$ is a type 1 planar region:
$\iiint\limits_E f(x,y,z)\,dV = \int_a^b \int_{g_1(x)}^{g_2(x)} \int_{u_1(x,y)}^{u_2(x,y)} f(x,y,z)\,dz\,dy\,dx$

If $D$ is a type 2 planar region:
$\iiint\limits_E f(x,y,z)\,dV = \int_c^d \int_{h_1(y)}^{h_2(y)} \int_{u_1(x,y)}^{u_2(x,y)} f(x,y,z)\,dz\,dx\,dy$

Type 2: $E$ is between two functions of $y$ and $z$; $D$ is the projection onto the $yz$-plane:
$E=\{(x,y,z)\mid(y,z)\in D,\; u_1(y,z) \le x \le u_2(y,z)\}$
$\iiint\limits_E f(x,y,z)\,dV = \iint\limits_D\left[\int_{u_1(y,z)}^{u_2(y,z)} f(x,y,z)\,dx \right]\,dA$

Type 3: $E$ is between two functions of $x$ and $z$; $D$ is the projection onto the $xz$-plane:
$E=\{(x,y,z)\mid(x,z)\in D,\; u_1(x,z) \le y \le u_2(x,z)\}$
$\iiint\limits_E f(x,y,z)\,dV = \iint\limits_D\left[\int_{u_1(x,z)}^{u_2(x,z)} f(x,y,z)\,dy \right]\,dA$

Mass: $m = \iiint\limits_E p(x,y,z)\,dV$
$\bar{x} = \frac{1}{m}\iiint\limits_E x\, p(x,y,z)\,dV$, $\bar{y} = \frac{1}{m}\iiint\limits_E y\, p(x,y,z)\,dV$, $\bar{z} = \frac{1}{m}\iiint\limits_E z\, p(x,y,z)\,dV$
Center of mass: $(\bar{x},\bar{y},\bar{z})$

$Q = \iiint\limits_E \sigma(x,y,z)\,dV$
$I_x = \iiint\limits_E (y^2 + z^2)\, p(x,y,z)\,dV$
$I_y = \iiint\limits_E (x^2 + z^2)\, p(x,y,z)\,dV$
$I_z = \iiint\limits_E (x^2 + y^2)\, p(x,y,z)\,dV$

Spherical coordinates (with $p$ used for the radial coordinate $\rho$):
$z=p\cos\phi$, $r=p\sin\phi$, $x=p\sin\phi\cos\theta$, $y=p\sin\phi\sin\theta$, $p^2 = x^2 + y^2 + z^2$, $dV=p^2\sin\phi\,dp\,d\theta\,d\phi$
$\iiint\limits_E f(x,y,z)\,dV = \int_c^d\int_\alpha^\beta\int_a^b f(p\sin\phi\cos\theta,p\sin\phi\sin\theta,p\cos\phi)\, p^2\sin\phi\,dp\,d\theta\,d\phi$

The Jacobian of a transformation $T$ given by $x=g(u,v)$ and $y=h(u,v)$ is:
$\frac{\partial (x,y)}{\partial (u,v)} = \begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} \end{vmatrix} = \frac{\partial x}{\partial u}\frac{\partial y}{\partial v} - \frac{\partial x}{\partial v} \frac{\partial y}{\partial u}$

Given a transformation $T$ whose Jacobian is nonzero, and which is one-to-one:
$\iint\limits_R f(x,y)\,dA = \iint\limits_S f\left(x(u,v),y(u,v)\right)\left|\frac{\partial (x,y)}{\partial (u,v)}\right|\,du\,dv$

Polar coordinates are just a special case: $x = g(r,\theta)=r\cos\theta$, $y = h(r,\theta)=r\sin\theta$
$\frac{\partial (x,y)}{\partial (r,\theta)} = \begin{vmatrix} \frac{\partial x}{\partial r} & \frac{\partial x}{\partial \theta} \\ \frac{\partial y}{\partial r} & \frac{\partial y}{\partial \theta} \end{vmatrix} = \begin{vmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & r\cos\theta \end{vmatrix} = r\cos^2\theta + r\sin^2\theta=r(\cos^2\theta + \sin^2\theta) = r$
$\iint\limits_R f(x,y)\,dx\,dy = \iint\limits_S f(r\cos\theta, r\sin\theta)\left|\frac{\partial (x,y)}{\partial (r,\theta)}\right|\,dr\,d\theta=\int_\alpha^\beta\int_a^b f(r\cos\theta,r\sin\theta)\,|r|\,dr\,d\theta$

For 3 variables this expands as you would expect: $x=g(u,v,w)$, $y=h(u,v,w)$, $z=k(u,v,w)$
$\frac{\partial (x,y,z)}{\partial (u,v,w)}=\begin{vmatrix} \frac{\partial x}{\partial u} & \frac{\partial x}{\partial v} & \frac{\partial x}{\partial w} \\ \frac{\partial y}{\partial u} & \frac{\partial y}{\partial v} & \frac{\partial y}{\partial w} \\ \frac{\partial z}{\partial u} & \frac{\partial z}{\partial v} & \frac{\partial z}{\partial w} \end{vmatrix}$
$\iiint\limits_R f(x,y,z)\,dV = \iiint\limits_S f(g(u,v,w),h(u,v,w),k(u,v,w)) \left|\frac{\partial (x,y,z)}{\partial (u,v,w)} \right| \,du\,dv\,dw$

Line Integrals

Parameterize: $r(t)=\langle x(t),y(t),z(t) \rangle$, with $r'(t)=\langle x'(t),y'(t),z'(t) \rangle$ and $|r'(t)|=\sqrt{x'(t)^2 + y'(t)^2 + z'(t)^2}$
$\int_C f(x,y,z)\,ds = \int_a^b f(r(t))\, |r'(t)|\,dt = \int_a^b f(x(t),y(t),z(t))\sqrt{x'(t)^2 + y'(t)^2 + z'(t)^2}\,dt,\;\;\;a<t<b$

For a vector function $\mathbf{F}$:
$\int_C \mathbf{F}\cdot dr = \int_a^b \mathbf{F}(r(t))\cdot r'(t)\,dt$

Surface Integrals
Parameterize: $r(u,v) = \langle x(u,v),y(u,v),z(u,v) \rangle$ $\begin{matrix} r_u=\frac{\partial x}{\partial u}\vec{\imath} + \frac{\partial y}{\partial u}\vec{\jmath} + \frac{\partial z}{\partial u}\vec{k} \\ r_v=\frac{\partial x}{\partial v}\vec{\imath} + \frac{\partial y}{\partial v}\vec{\jmath} + \frac{\partial z}{\partial v}\vec{k} \end{matrix}$ $r_u \times r_v = \begin{vmatrix} \vec{\imath} & \vec{\jmath} & \vec{k} \\ \frac{\partial x}{\partial u} & \frac{\partial y}{\partial u} & \frac{\partial z}{\partial u} \\ \frac{\partial x}{\partial v} & \frac{\partial y}{\partial v} & \frac{\partial z}{\partial v} \end{vmatrix}$ $\iint\limits_S f(x,y,z)\,dS = \iint\limits_D f(r(u,v))|r_u \times r_v|\,dA$ For a vector function $\mathbf{F}$: $\iint\limits_S \mathbf{F}\cdot d\mathbf{S} = \iint\limits_D \mathbf{F}(r(u,v))\cdot (r_u \times r_v)\,dA$ Any surface $S$ with $z=g(x,y)$ is equivalent to $x=x$, $y=y$, and $z=g(x,y)$, so $xy$ plane: $\iint\limits_S f(x,y,z)\,dS = \iint\limits_D f(x,y,g(x,y))\sqrt{\left(\frac{\partial z}{\partial x}\right)^2+\left(\frac{\partial z}{\partial y}\right)^2+1}\,dA$ $xz$ plane ($y=h(x,z)$): $\iint\limits_S f(x,y,z)\,dS = \iint\limits_D f(x,h(x,z),z)\sqrt{\left(\frac{\partial y}{\partial x}\right)^2+\left(\frac{\partial y}{\partial z}\right)^2+1}\,dA$ $yz$ plane ($x=g(y,z)$): $\iint\limits_S f(x,y,z)\,dS = \iint\limits_D f(g(y,z),y,z)\sqrt{\left(\frac{\partial x}{\partial y}\right)^2+\left(\frac{\partial x}{\partial z}\right)^2+1}\,dA$ Flux: $\iint\limits_S\mathbf{F}\cdot dS = \iint\limits_D\mathbf{F}\cdot (r_u \times r_v)\,dA$ The gradient of $f$ is the vector function $\nabla f$ defined by: $\nabla f(x,y)=\langle f_x(x,y),f_y(x,y)\rangle = \frac{\partial f}{\partial x}\vec{\imath} + \frac{\partial f}{\partial y}\vec{\jmath}$ Directional Derivative: $D_u\,f(x,y) = f_x(x,y)a + f_y(x,y)b = \nabla f(x,y)\cdot u \text{ where } u = \langle a,b \rangle$ Arc length: $\int_C\,ds=\int_a^b |r'(t)|\,dt=L$ Since $\nabla f$ is conservative: $\int_{c_1} \nabla f\,dr=\int_{c_2} \nabla f\,dr$ This
means that the line integral between two points will always be the same, no matter what curve is used to go between the two points - the integrals are path-independent and consequently only depend on the starting and ending positions in the conservative vector field. A vector function is conservative if it can be expressed as the gradient of some potential function $\psi$: $\nabla \psi = \mathbf{F}$ $\text{curl}\,\mathbf{F} =\nabla\times\mathbf{F}$ $\text{div}\,\mathbf{F} =\nabla\cdot\mathbf{F}$ Math 326 - Multivariable Calculus II $f(x,y)$ is continuous at a point $(x_0,y_0)$ if $\lim\limits_{(x,y)\to(x_0,y_0)} f(x,y) = f(x_0,y_0)$ $f+g$ is continuous if $f$ and $g$ are continuous, as is $\frac{f}{g}$ if $g \neq 0$ A composite function of a continuous function is continuous $\frac{\partial f}{\partial x} = f_x(x,y)$ $\frac{\partial f}{\partial x}\bigg|_{(x_0,y_0)}=\left(\frac{\partial f}{\partial x}\right)_{(x_0,y_0)}=f_x(x_0,y_0)$ To find $\frac{\partial z}{\partial x}$ of $F(x,y,z)$, differentiate $x$ as normal, hold $y$ constant, and differentiate $z$ as a function of $x$ (so $z^2$ differentiates to $2z \frac{\partial z}{\partial x}$ and $2z$ differentiates to $2 \frac{\partial z}{\partial x}$) Ex: $F(x,y,z) = \frac{x^2}{16} + \frac{y^2}{12} + \frac{z^2}{9} = 1$ $\frac{\partial}{\partial x} F = \frac{2x}{16} + \frac{2z}{9}\frac{\partial z}{\partial x} = 0$ The tangent plane of $S$ at $(a,b,c)$: $z-c = f_x(a,b)(x-a) + f_y(a,b)(y-b)$ where $z=f(x,y)$, or $z =f(a,b) + f_x(a,b)(x-a) + f_y(a,b)(y-b)$ Note that $f_x(a,b)=\frac{\partial z}{\partial x}\bigg|_{(a,b)}$ which enables you to find tangent planes implicitly. Set $z=f(x,y)$. $f_x=f_y=0$ at a relative extreme $(a,b)$. Distance from origin: $D^2 = z^2 + y^2 + x^2 = f(x,y)^2 + y^2 + x^2$ Minimize $D$ to get point closest to the origin.
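The tangent-plane formula above is easy to sanity-check numerically. This sketch uses an illustrative function $f(x,y)=x^2+y^2$ and point $(1,1)$, with central differences standing in for the exact partials:

```python
# Numerical check of the tangent plane formula
# z = f(a,b) + f_x(a,b)(x - a) + f_y(a,b)(y - b).
# The function f and the point (a,b) are illustrative choices.

def f(x, y):
    return x * x + y * y

def partials(g, a, b, h=1e-6):
    """Central-difference estimates of g_x and g_y at (a, b)."""
    gx = (g(a + h, b) - g(a - h, b)) / (2 * h)
    gy = (g(a, b + h) - g(a, b - h)) / (2 * h)
    return gx, gy

a, b = 1.0, 1.0
fx, fy = partials(f, a, b)  # exact values: f_x = 2x = 2, f_y = 2y = 2

def tangent_plane(x, y):
    return f(a, b) + fx * (x - a) + fy * (y - b)

# Near (a, b) the plane matches f to first order, so the gap is tiny.
gap = abs(f(1.01, 1.02) - tangent_plane(1.01, 1.02))
```

The leftover gap is second order in the displacement, which is exactly what "tangent plane" promises.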
The differential of $f$ at $(x,y)$: $df(x,y;dx,dy)=f_x(x,y)\,dx + f_y(x,y)\,dy$ $dz=\frac{\partial f}{\partial x}\,dx + \frac{\partial f}{\partial y}\,dy$ $f$ is called differentiable at $(x,y)$ if it is defined for all points near $(x,y)$ and if there exist numbers $A$,$B$ such that $\lim_{(h,k)\to(0,0)}\frac{|f(x+h,y+k) - f(x,y) - Ah - Bk|}{\sqrt{h^2 + k^2}}=0$ If $f$ is differentiable at $(x,y)$ it is continuous there. $A = f_x(x,y)$ and $B=f_y(x,y)$ If $F_x(x,y)$ and $F_y(x,y)$ are continuous at a point $(x_0,y_0)$ where $F$ is defined, then $F$ is differentiable there. Ex: The differential of $f(x,y)=3x^2y+2xy^2+1$ at $(1,2)$ is $df(1,2;h,k)=20h+11k$ $d(u+v)=du+dv$ $d(uv)=u\,dv + v\,du$ $d\left(\frac{u}{v}\right)=\frac{v\,du -u\,dv}{v^2}$ Taylor Series: $f^{(n)}(t)=\left[\left(h\frac{\partial}{\partial x} + k\frac{\partial}{\partial y}\right)^n F(x,y) \right]_{x=a+th,\,y=b+tk}$ Note: $f''(t)=h^2 F_{xx} + 2hk F_{xy} + k^2 F_{yy}$ $\begin{matrix} x=f(u,v) \\ y=g(u,v) \end{matrix}$ $J=\frac{\partial(f,g)}{\partial(u,v)}$ $\begin{matrix} u=F(x,y) \\ v=G(x,y) \end{matrix}$ $j = J^{-1} = \frac{1}{\frac{\partial(f,g)}{\partial(u,v)}}$ $\begin{matrix} x = u-uv \\ y=uv \end{matrix}$ $\iint\limits_R\frac{dx\,dy}{x+y}$ R bounded by $\begin{matrix} x+y=1 & x+y=4 \\ y=0 & x=0 \end{matrix}$ Since $x+y=u$, this becomes $\int_1^4\int_0^1 \frac{1}{u}\left|\frac{\partial(x,y)}{\partial(u,v)}\right|\,dv\,du$ $\frac{\partial(F,G)}{\partial(u,v)}=\begin{vmatrix} \frac{\partial F}{\partial u} & \frac{\partial F}{\partial v} \\ \frac{\partial G}{\partial u} & \frac{\partial G}{\partial v} \end{vmatrix}$ $\nabla f = \langle f_x, f_y, f_z \rangle$ $G_x(s_0,t_0) =F_x(a,b)f_x(s_0,t_0) + F_y(a,b)g_x(s_0,t_0)$ $U\times V = U_xV_y - U_yV_x \text{ or } A\times B = \left\langle \begin{vmatrix} a_y & a_z \\ b_y & b_z \end{vmatrix}, -\begin{vmatrix} a_x & a_z \\ b_x & b_z \end{vmatrix}, \begin{vmatrix} a_x & a_y \\ b_x & b_y \end{vmatrix} \right\rangle$ Given $G(s,t)=F(f(s,t),g(s,t))$, then: $\begin{matrix} \frac{\partial G}{\partial s} = \frac{\partial
F}{\partial x}\frac{\partial f}{\partial s} + \frac{\partial F}{\partial y}\frac{\partial g}{\partial s} \\ \frac{\partial G}{\partial t} = \frac{\partial F}{\partial x}\frac{\partial f}{\partial t} + \frac{\partial F}{\partial y}\frac{\partial g}{\partial t} \end{matrix}$ Alternatively, $u=F(x,y,z)=F(f(t),g(t),h(t))$ yields $\frac{du}{dt}=\frac{\partial u}{\partial x}\frac{dx}{dt} + \frac{\partial u}{\partial y}\frac{dy}{dt} + \frac{\partial u}{\partial z}\frac{dz}{dt}$ Examine limit along line $y=mx$: $\lim_{x\to 0} f(x,mx)$ If $g_x$ and $g_y$ are continuous, then $g$ is differentiable at that point (usually $(0,0)$). Notice that if $f_x(0,0)=0$ and $f_y(0,0)=0$ then $df(0,0;h,k)=0h+0k=0$ Saying the graph of $y(x,z)$ lies on a level surface $F(x,y,z)=c$ means $F(x,y(x,z),z)=c$. So then use the chain rule to figure out the result in terms of $F$ partials by considering $F$ a composite function $F(x,y(x,z),z)$. Fundamental implicit function theorem: Let $F(x,y,z)$ be a function defined on an open set $S$ containing the point $(x_0,y_0,z_0)$. Suppose $F$ has continuous partial derivatives in $S$. Furthermore assume that: $F(x_0,y_0,z_0)=0, F_z(x_0,y_0,z_0)\neq 0$. Then $z=f(x,y)$ exists, is continuous, and has continuous first partial derivatives. $f_x = -\frac{F_x}{F_z}$ $f_y = -\frac{F_y}{F_z}$ Alternatively, if $\begin{vmatrix} F_x & F_y \\ G_x & G_y \end{vmatrix} \neq 0$ , then we can solve $x$ and $y$ as functions of $z$. Since the cross-product is made of these determinants, if the $x$ component is nonzero, you can solve $y,z$ as functions of $x$, and therefore parameterize by $x$. To solve level surface equations, let $F(x,y,z)=c$ and $G(x,y,z)=d$, then use the chain rule, differentiating by the remaining variable (e.g.
$\frac{dy}{dx}$ and $\frac{dz}{dx}$ mean differentiating with respect to $x$) $\begin{matrix} F_x + F_y y_x + F_z z_x = 0 \\ G_x + G_y y_x + G_z z_x = 0 \end{matrix}$ if you solve for $y_x$,$z_x$, you get $\left[ \begin{matrix} F_y & F_z \\ G_y & G_z \end{matrix} \right] \left[ \begin{matrix} y_x \\ z_x \end{matrix} \right] = -\left[\begin{matrix}F_x \\ G_x \end{matrix} \right]$ Mean value theorem: $f(b) - f(a) = (b-a)f'(X)$ $a < X < b$ or: $f(a+h)=f(a)+hf'(a + \theta h)$ $0 < \theta < 1$ xy-plane version: $F(a+h,b+k)-F(a,b)=h F_x(a+\theta h,b+\theta k)+k F_y(a+\theta h, b+\theta k)$ $0 < \theta < 1$ Lagrange Multipliers: $\nabla f = \lambda\nabla g$ for some scalar $\lambda$ if $(x,y,z)$ is an extremum: $f_x=\lambda g_x$ $f_y=\lambda g_y$ $f_z=\lambda g_z$ Set $f=x^2 + y^2 + z^2$ for distance and let $g$ be given. Math 307 - Introduction to Differential Equations lost Math 308 - Matrix Algebra lost Math 309 - Linear Analysis lost AMath 353 - Partial Differential Equations Fourier Series: $f(x)=b_0 + \sum_{n=1}^{\infty} \left(a_n \sin\frac{n\pi x}{L} + b_n\cos\frac{n\pi x}{L} \right)$ $b_0 = \frac{1}{2L}\int_{-L}^L f(y)\,dy$ $a_m = \frac{1}{L}\int_{-L}^L f(y) \sin\frac{m\pi y}{L}\,dy$ $b_m = \frac{1}{L}\int_{-L}^L f(y)\cos\frac{m\pi y}{L}\,dy$ $m \ge 1$ $u_t=\alpha^2 u_{xx}$ $\alpha^2 = \frac{k}{pc}$ $u(x,t)=F(x)G(t)$ Dirichlet: $u(0,t)=u(L,t)=0$ Neumann: $u_x(0,t)=u_x(L,t)=0$ Robin: $a_1 u(0,t)+b_1 u_x(0,t) = a_2 u(L,t) + b_2 u_x(L,t) = 0$ Dirichlet: $\lambda_n=\frac{n\pi}{L}\;\;\; n=1,2,...$ $u(x,t)=\sum_{n=1}^{\infty} A_n \sin\left(\frac{n\pi x}{L} \right)\exp\left(-\frac{n^2\alpha^2\pi^2 t}{L^2}\right)$ $A_n = \frac{2}{L} \int_0^L f(y) \sin\frac{n\pi y}{L}\,dy = 2 a_m\text{ for } 0\text{ to } L$ Neumann: $\lambda_n=\frac{n\pi}{L}\;\;\; n=1,2,...$ $u(x,t)=B_0 + \sum_{n=1}^{\infty} B_n \cos\left(\frac{n\pi x}{L} \right)\exp\left(-\frac{n^2\alpha^2\pi^2 t}{L^2}\right)$ $B_0 = \frac{1}{L} \int_0^L f(y)\,dy$ $B_n = \frac{2}{L} \int_0^L f(y) \cos\frac{n\pi y}{L}\,dy = 2 b_m\text{ for } 0\text{ to } L$
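As a sanity check on the Dirichlet solution above, the coefficients $A_n$ can be computed with a simple midpoint rule. The initial condition $f(x)=\sin(\pi x/L)$ below is an illustrative choice: it gives $A_1=1$ and $A_n=0$ otherwise, so the series collapses to a single decaying mode.

```python
import math

# Midpoint-rule computation of A_n = (2/L) ∫₀ᴸ f(y) sin(nπy/L) dy, then the
# partial sum of u(x,t) = Σ A_n sin(nπx/L) exp(-n²α²π²t/L²).
# L, alpha, and f are illustrative choices.

L = 2.0
alpha = 1.0

def f(x):
    return math.sin(math.pi * x / L)

def A(n, samples=2000):
    dx = L / samples
    return (2.0 / L) * sum(
        f((i + 0.5) * dx) * math.sin(n * math.pi * (i + 0.5) * dx / L) * dx
        for i in range(samples))

def u(x, t, terms=5):
    """Partial sum of the Dirichlet heat-equation solution."""
    return sum(A(n) * math.sin(n * math.pi * x / L)
               * math.exp(-(n * math.pi * alpha / L) ** 2 * t)
               for n in range(1, terms + 1))
```

At $t=0$ the partial sum reproduces $f$, and for $t>0$ each mode decays at rate $n^2\alpha^2\pi^2/L^2$, which is why only the lowest mode survives at large times.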
Dirichlet/Neumann: $\lambda_n=\frac{\pi}{2L}(2n + 1)\;\;\; n=0,1,2,...$ $u(x,t)=\sum_{n=0}^{\infty} A_n \sin\left(\lambda_n x\right) \exp\left(-\alpha^2 \lambda_n^2 t\right)$ $A_n = \frac{2}{L} \int_0^L f(y) \sin\left(\lambda_n y\right)\,dy$ Neumann/Dirichlet: $\lambda_n=\frac{\pi}{2L}(2n + 1)\;\;\; n=0,1,2,...$ $u(x,t)=\sum_{n=0}^{\infty} B_n \cos\left(\lambda_n x\right) \exp\left(-\alpha^2 \lambda_n^2 t\right)$ $B_n = \frac{2}{L}\int_0^L f(y)\cos(\lambda_n y)\,dy$ $v_2(x,t)=\sum_{n=1}^{\infty} C_n(t)[\sin/\cos](\lambda_n x)$ Replace $[\sin/\cos]$ with whatever was used in $u(x,t)$ $C_n(t)=\int_0^t p_n(s) e^{\lambda_n^2\alpha^2 (s-t)}\,ds$ $p_n(t)=\frac{2}{L}\int_0^L p(y,t)[\sin/\cos](\lambda_n y)\,dy$ Replace $[\sin/\cos]$ with whatever was used in $u(x,t)$. Note that $\lambda_n$ for $C_n$ and $p_n(t)$ is the same as used for $u_1(x,t)$ Sturm-Liouville: $p(x)\frac{d^2}{dx^2}+p'(x)\frac{d}{dx}+q(x)$ $a_2(x)\frac{d^2}{dx^2} + a_1(x)\frac{d}{dx} + a_0(x) \rightarrow p(x)=e^{\int\frac{a_1(x)}{a_2(x)}\,dx}$ $q(x)=p(x)\frac{a_0(x)}{a_2(x)}$ Laplace's Equation: $\nabla^2 u=0$ $u=F(x)G(y)$ $\frac{F''(x)}{F(x)} = -\frac{G''(y)}{G(y)} = c$ for rectangular regions $F_?(x)=A\sinh(\lambda x) + B \cosh(\lambda x)$ use odd $\lambda_n$ if $F$ or $G$ equal a single $\cos$ term. $G_?(y)=C\sinh(\lambda y) + D \cosh(\lambda y)$ $u(x,?)=G(?)$ $u(?,y)=F(y)$ Part 1: All $u_?(x,?)=0$ $\lambda_n=\frac{n\pi}{L_y}\text{ or }\frac{(2n+1)\pi}{2L_y}$ $u_1(x,y)=\sum_{n=0}^{\infty}A_n F_1(x)G_1(y)$ $A_n = \frac{2}{L_y F_1(?)}\int_0^{L_y} u(?,y)G_1(y)\,dy$ $u(?,y)=h(y)$ $? = L_x\text{ or }0$ Part 2: All $u_?(?,y)=0$ $\lambda_n=\frac{n\pi}{L_x}\text{ or }\frac{(2n+1)\pi}{2L_x}$ $u_2(x,y)=\sum_{n=1}^{\infty}B_n F_2(x)G_2(y)$ $B_n = \frac{2}{L_x G_2(?)}\int_0^{L_x} u(x,?)F_2(x)\,dx$ $u(x,?)=q(x)$ $?
= L_y\text{ or }0$ $u(x,y)=u_1(x,y)+u_2(x,y)$ Circular $\nabla^2 u$: $u_{rr} + \frac{1}{r} u_r + \frac{1}{r^2}u_{\theta \theta} = 0$ $\frac{G''(\theta)}{G(\theta)}= - \frac{r^2 F''(r) + r F'(r)}{F(r)} = c$ $\left\langle f,g \right\rangle = \int_a^b f(x) g(x)\,dx$ $\mathcal{L}_s=-p(x)\frac{d^2}{dx^2}-p'(x)\frac{d}{dx} + q(x)$ $\mathcal{L}_s\phi(x)=\lambda r(x) \phi(x)$ $\left\langle\mathcal{L}_s y_1,y_2 \right\rangle =\int_0^L \left(-[p y_1']'+q y_1\right)y_2\,dx$ If $\mathcal{L}_s u(x) = f(x)$, consider $\mathcal{L}_s^{\dagger} v(x) = 0$: if $v=0$ is the only solution, $u(x)$ is unique; otherwise a solution exists only when the forcing is orthogonal to all nontrivial solutions $v(x)$. if $c=0$, $G(\theta)=B$ $F(r)=C\ln r + D$ if $c=-\lambda^2 <0$, $G(\theta)=A\sin(\lambda\theta) + B\cos(\lambda\theta)$ $F(r) = C\left(\frac{r}{R} \right)^n + D\left( \frac{r}{R} \right)^{-n}$ $u(r,\theta)=B_0 + \sum_{n=1}^{\infty} F(r) G(\theta) = B_0 + \sum_{n=1}^{\infty}F(r)[A_n\sin(\lambda\theta) + B_n\cos(\lambda\theta)]$ $A_n = \frac{1}{\pi}\int_0^{2\pi} f(\theta)\sin(n\theta)\,d\theta$ $B_n = \frac{1}{\pi}\int_0^{2\pi} f(\theta)\cos(n\theta)\,d\theta$ $B_0 = \frac{1}{2\pi}\int_0^{2\pi} f(\theta)\,d\theta$ Wave Equation: $u_{tt}=c^2 u_{xx}$ $u = F(x)G(t)$ $\frac{F''(x)}{F(x)}=\frac{G''(t)}{c^2 G(t)}=k \text{ where } u(0,t)=F(0)=u(L,t)=F(L)=0$ $F(x) = A\sin(\lambda x) + B\cos(\lambda x)$ $u_t(x,0)=g(x)=\sum_{n=1}^{\infty} A_n\lambda_n c \sin(\lambda_n x)$ $A_n = \frac{2}{\lambda_n c L}\int_0^L g(y)\sin(\lambda_n y)\,dy$ $G(t) = C\sin(\lambda c t) + D\cos(\lambda c t)$ $u(x,0)=f(x)=\sum_{n=1}^{\infty}B_n\sin(\lambda_n x)$ $B_n = \frac{2}{L}\int_0^L f(y)\sin(\lambda_n y)\,dy$ $u(x,t)=\sum_{n=1}^{\infty}F(x)G(t)=\sum_{n=1}^{\infty}F(x)\left[C\sin(\lambda_n c t) + D\cos(\lambda_n c t)\right]$ $\lambda_n=\frac{n\pi}{L}\text{ or }\frac{(2n+1)\pi}{2L}$ Inhomogeneous: $u(0,t)=F(0)=p(t)$ $u(L,t)=F(L)=q(t)$ $u(x,t)=v(x,t) + \phi(x)p(t) + \psi(x)q(t)$ Transform to forced: $v(x,t) = \begin{cases} v_{tt}=c^2 v_{xx} - \phi(x)p''(t) - \psi(x)q''(t) = c^2
v_{xx} + R(x,t) & \phi(x)=1 - \frac{x}{L}\;\;\;\psi(x)=\frac{x}{L}\\ v(0,t)=v(L,t)=0 & t>0 \\ v(x,0)=f(x)-\phi(x)p(0) - \psi(x)q(0) & f(x)=u(x,0) \\ v_t(x,0)=g(x)-\phi(x)p'(0) - \psi(x)q'(0) & g(x)=u_t(x,0) \end{cases}$ Then solve as a forced equation. Forced: $u_{tt}=c^2 u_{xx} + R(x,t)$ $u(x,t)=u_1(x,t)+u_2(x,t)$ $u_1$ found by $R(x,t)=0$ and solving. $u_2(x,t)=\sum_{n=1}^{\infty} C_n(t)\sin\left(\frac{n\pi x}{L}\right)$ where $\sin\frac{n\pi x}{L}$ are the eigenfunctions from solving $R(x,t)=0$ $R(x,t)=\sum_{n=1}^{\infty} R_n(t)\sin\left(\frac{n\pi x}{L}\right)$ $R_n(t)=\frac{2}{L}\int_0^L R(y,t)\sin\left(\frac{n\pi y}{L}\right)\,dy$ $C_n''(t) + k^2 C_n(t)=R_n(t)$ $C_n(t)=\alpha\sin(k t) + \beta\cos(k t) + \frac{1}{k}\left[\sin(k t)\int_0^t R_n(s)\cos(k s)\,ds - \cos(k t)\int_0^t R_n(s)\sin(k s)\,ds\right]$ where $C_n(0)=0$ and $C_n'(0)=0$ Fourier Transform: $\mathcal{F}(f)=\hat{f}(k)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} f(\xi) e^{-i k \xi}\,d\xi$ $\mathcal{F}^{-1}(f)=f(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \hat{f}(k) e^{ikx}\,dk$ $\mathcal{F}(f+g)=\widehat{f+g}=\hat{f} + \hat{g}$ $\widehat{fg}\neq\hat{f}\hat{g}$ $\widehat{f'}=ik\hat{f}\text{ or }\widehat{f^{(n)}}=(ik)^n\hat{f}$ $\widehat{u_t} = \frac{\partial \hat{u}}{\partial t} = \hat{u}_t$ $\widehat{u_{tt}}=\frac{\partial^2\hat{u}}{\partial t^2}=\hat{u}_{tt}$ $\widehat{u_{xx}}=(ik)^2\hat{u}=-k^2\hat{u}$ $u(x,t)=\mathcal{F}^{-1}(\hat{u})=\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat{f}(k) e^{-\alpha^2 k^2 t} e^{ikx}\,dk$ $u(x,t)=\mathcal{F}^{-1}\left(\hat{f}(k) e^{-\alpha^2 k^2 t}\right)$ Semi-infinite: $\frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \hat{F}(k) e^{-\alpha^2 k^2 t} e^{ikx}\,dk$ where $\hat{F}(k)=-i\sqrt{\frac{2}{\pi}}\int_0^{\infty} f(\xi)\sin(k\xi)\,d\xi$ $f(x)$ $\hat{f}(k)$ $1\text{ if } -s < x < s, 0\text{ otherwise }$ $\sqrt{\frac{2}{\pi}}\frac{\sin(ks)}{k}$ $\frac{1}{x^2 + s^2}$ $\frac{1}{s}\sqrt{\frac{\pi}{2}}e^{-s|k|}$ $e^{-sx^2}$ $\frac{1}{\sqrt{2s}}e^{-\frac{k^2}{4s}}$ $\frac{\sin(sx)}{x}$
$\sqrt{\frac{\pi}{2}}\text{ if } |k| < s,\ 0\text{ otherwise}$ AMath 402 - Dynamical Systems and Chaos Dimensionless time: $\tau = \frac{t}{T}$ where $T$ gets picked so that the rescaled derivatives are of order 1. Fixed points of $x'=f(x)$: set $f(x)=0$ and solve for roots Bifurcation: Given $x'=f(x,r)$, set $f(x,r)=0$ and solve for $r$, then plot $r$ on the $x$-axis and $x$ on the $y$-axis. Bifurcation types: transcritical, subcritical pitchfork, supercritical pitchfork. $\begin{matrix} x'=ax+by \\ y'=cx+dy \end{matrix}$ $A=\begin{bmatrix} a & b \\ c & d \end{bmatrix}$ $\begin{vmatrix} a-\lambda & b \\ c & d-\lambda \end{vmatrix}=0$ $\begin{matrix} \lambda^2 - \tau\lambda + \delta=0 & \lambda = \frac{\tau \pm \sqrt{\tau^2 - 4\delta}}{2} \\ \tau = a+d & \delta=ad-bc \end{matrix}$ $x(t)=c_1 e^{\lambda_1 t}v_1 + c_2 e^{\lambda_2 t}v_2$ $v_1,v_2$ are eigenvectors. Given $\tau = \lambda_1 + \lambda_2$ and $\delta=\lambda_1\lambda_2$: if $\delta < 0$, the eigenvalues are real with opposite signs, so the fixed point is a saddle point. otherwise, if $\tau^2-4\delta > 0$, it's a node. This node is stable if $\tau < 0$ and unstable if $\tau > 0$ if $\tau^2-4\delta < 0$, it's a spiral, which is stable if $\tau < 0$, unstable if $\tau > 0$, or a center if $\tau=0$ if $\tau^2-4\delta = 0$, it's degenerate. $\begin{bmatrix} x'=f(x,y) \\ y'=g(x,y) \end{bmatrix}$ Fixed points are found by solving for $x'=0$ and $y'=0$ at the same time. Nullclines are curves where either $x'=0$ or $y'=0$ and are drawn on the phase plane. $\begin{bmatrix} \frac{\partial f}{\partial x} & \frac{\partial f}{\partial y} \\ \frac{\partial g}{\partial x} & \frac{\partial g}{\partial y} \end{bmatrix}$ $\leftarrow$ For nonlinear equations, evaluate this matrix at each fixed point, then use the above linear classification scheme to classify the point. A basin for a given fixed point is the area of all trajectories that eventually terminate at that fixed point.
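The trace-determinant classification above can be wrapped in a small helper (the function name and returned strings are just illustrative):

```python
def classify(a, b, c, d):
    """Classify the fixed point of x' = ax + by, y' = cx + dy
    using the trace-determinant scheme."""
    tau = a + d            # trace
    delta = a * d - b * c  # determinant
    disc = tau * tau - 4 * delta
    if delta < 0:
        return "saddle"
    if disc > 0:
        return "stable node" if tau < 0 else "unstable node"
    if disc < 0:
        if tau == 0:
            return "center"
        return "stable spiral" if tau < 0 else "unstable spiral"
    return "degenerate"
```

For a nonlinear system, evaluate the Jacobian entries at a fixed point and pass them in the same way.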
Given $x'=f(x)$, $E(x)$ is a conserved quantity if $\frac{dE}{dt}=0$ A limit cycle is an isolated closed orbit in a nonlinear system. Limit cycles can only exist in nonlinear systems. If a system can be written as $\vec{x}'=-\nabla V$, then it's a gradient system and can't have closed orbits. The Liapunov function $V(x)$ for a fixed point $x^*$ satisfies $V(x) > 0\;\forall x \neq x^*$, $V(x^*)=0$, $V' < 0\;\forall x \neq x^*$ A Hopf bifurcation occurs as the fixed point's eigenvalues (in terms of $\mu$) cross the imaginary axis. Math 300 - Introduction to Mathematical Reasoning
$P$  $Q$  $P \Rightarrow Q$  $\neg P \vee Q$
T  T  T  T
T  F  F  F
F  T  T  T
F  F  T  T
All valid values of $x$ constitute the domain. $f(x)=y$ The range in which $y$ must fall is the codomain The image is the set of $y$ values that are possible given all valid values of $x$ So, $\frac{1}{x}$ has a domain $\mathbb{R} - \{0\}$ and a codomain of $\mathbb{R}$. However, no value of $x$ can ever produce $f(x)=0$, so the image is $\mathbb{R}-\{0\}$ Injective: No two values of $x$ yield the same result. $f(x_1)\neq f(x_2)$ if $x_1 \neq x_2$ for all $x$ Surjective: All values of $y$ in the codomain can be produced by $f(x)$. In other words, the codomain equals the image. Bijective: A bijective function is both injective and surjective - all values of $y$ are mapped to exactly one value of $x$. A simple way to prove this is to solve $f(x)$ for $x$. If this can be done without creating multiple solutions (a square root, for example, yields $\pm$, not a single answer), then it's a bijective function. Any set that can be put in bijection with the natural numbers is denumerable. $\forall x \in \mathbb{R}$ means "for all $x$ in $\mathbb{R}$", where $\mathbb{R}$ can be replaced by any set. $\exists y \in \mathbb{R}$ means "there exists a $y$ in $\mathbb{R}$", where $\mathbb{R}$ can be replaced by any set.
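On finite sets, the injective/surjective/bijective definitions can be checked by brute force. The functions over $\mathbb{Z}_5=\{0,1,2,3,4\}$ below are illustrative examples:

```python
# Brute-force checks of injectivity and surjectivity on finite sets.

def is_injective(f, domain):
    images = [f(x) for x in domain]
    return len(set(images)) == len(images)  # no two inputs share an output

def is_surjective(f, domain, codomain):
    return {f(x) for x in domain} == set(codomain)  # image == codomain

def is_bijective(f, domain, codomain):
    return is_injective(f, domain) and is_surjective(f, domain, codomain)

Z5 = range(5)
shift = lambda x: (x + 2) % 5   # a bijection on Z_5
square = lambda x: (x * x) % 5  # images are {0,1,4}: neither injective nor surjective
```

This is exactly the definitions translated into set operations; it only works when the domain is small enough to enumerate.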
$A \vee B = A\text{ or }B$ $A \wedge B = A\text{ and }B$ $P \Leftrightarrow Q\text{ means }P \Rightarrow Q \wedge Q\Rightarrow P \text{ or } P\text{ iff }Q \text{ (if and only if)}$ $A \cup B$ = Union - All elements that are in either A or B, or both. $\{x|x \in A \text{ or } x \in B \}$ $A \cap B$ = Intersection - All elements that are in both A and B $\{x|x \in A \text{ and } x \in B \}$ $A\subseteq B$ = Subset - Indicates A is a subset of B, meaning that all elements in A are also in B. $\{x \in A \Rightarrow x \in B \}$ $A \subset B$ = Strict Subset - Same as above, but does not allow A to be equal to B (which happens if A has all the elements of B, because then they are the exact same set). $A-B$ = Difference - All elements in A that aren't also in B $\{x|x \in A \text{ and } x \not\in B \}$ $A\times B$ = Cartesian Product - All possible ordered pairs of the elements in both sets: $\{(x,y)|x \in A\text{ and }y\in B\}$ Proof: We use induction on $n$ Base Case: [Prove $P(n_0)$] Inductive Step: Suppose now as inductive hypothesis that [$P(k)$ is true] for some integer $k$ such that $k \ge n_0$. Then [deduce that $P(k+1)$ is true]. This proves the inductive step. Conclusion: Hence, by induction, [$P(n)$ is true] for all integers $n \ge n_0$. Proof: We use strong induction on $n$ Base Case: [Prove $P(n_0)$, $P(n_1)$, ...] Inductive Step: Suppose now as inductive hypothesis that [$P(n)$ is true for all $n \le k$] for some positive integer $k$, then [deduce that $P(k+1)$ is true]. This proves the inductive step. Conclusion: Hence, by induction [$P(n)$ is true] for all positive integers $n$. Proof: Suppose, for contradiction, that the statement $P$ is false. Then, [create a contradiction]. Hence our assumption that $P$ is false must be false. Thus, $P$ is true as required. The composite of $f:X\rightarrow Y$ and $g:Y\rightarrow Z$ is $g\circ f: X\rightarrow Z$ or just $gf: X\rightarrow Z$.
$g\circ f = g(f(x))\;\forall x \in X$ $a\equiv b \bmod m \Leftrightarrow b\equiv a \bmod m$ If $a \equiv b \bmod m\text{ and }b \equiv c \bmod m,\text{ then }a \equiv c \bmod m$ Negation of $P \Rightarrow Q$ is $P \wedge (\neg Q)$ or $P$ and (not $Q$) If $m$ is divisible by $a$, then $a b_1 \equiv a b_2 \bmod m \Leftrightarrow b_1 \equiv b_2 \bmod\left(\frac{m}{a} \right)$ Fermat's little theorem: If $p$ is a prime, and $a \in \mathbb{Z}^+$ which is not a multiple of $p$, then $a^{p-1}\equiv 1 \bmod p$. Corollary 1: If $p$ is prime, then $\forall a \in \mathbb{Z}, a^p \equiv a \bmod p$ Corollary 2: If $p$ is prime, then $(p-1)! \equiv -1 \bmod p$ Division theorem: $a,b \in \mathbb{Z}$, $b > 0$, then $a = bq+r$ where $q,r$ are unique integers, and $0 \le r < b$. Thus, for $a=bq+r$, $\gcd(a,b)=\gcd(b,r)$. Furthermore, $b$ divides $a$ if and only if $r=0$, and if $b$ divides $a$, $\gcd(b,a)=b$ Euclidean Algorithm: $\gcd(136,96)$: $136=96\cdot 1 + 40$ $96=40\cdot 2 + 16$ $40=16\cdot 2 + 8$ $16=8\cdot 2 + 0 \leftarrow$ stop here, since the remainder is now 0, so $\gcd(136,96)=8$ Converting congruences to equations: $ax\equiv b \bmod c \rightarrow ax+cy=b$, e.g. $290x\equiv 5\bmod 357 \rightarrow 290x + 357y = 5$ Diophantine Equations (or linear combinations): $140m + 63n = 35$ $m,n \in \mathbb{Z}$ exist for $am+bn=c$ iff $\gcd(a,b)$ divides $c$ $140=140\cdot 1 + 63\cdot 0$ $63=140\cdot 0 + 63\cdot 1$ dividing $am+bn=c$ by $\gcd(a,b)$ always yields coprime coefficients.
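The back-substitution bookkeeping in the Euclidean algorithm is exactly the extended Euclidean algorithm; a minimal recursive sketch (the function names are illustrative):

```python
def extended_gcd(a, b):
    """Return (g, m, n) with a*m + b*n == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, m, n = extended_gcd(b, a % b)
    # unwind: gcd(a, b) = gcd(b, a mod b), and a mod b = a - (a//b)*b
    return g, n, m - (a // b) * n

def particular_solution(a, b, c):
    """One solution of a*m + b*n = c, or None when gcd(a,b) does not divide c."""
    g, m, n = extended_gcd(a, b)
    if c % g != 0:
        return None
    return m * (c // g), n * (c // g)
```

Running it on $140m+63n=35$ reproduces the hand computation: the gcd is 7 and scaling by 5 gives the specific solution $(m,n)=(-20,45)$.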
$14=140\cdot 1 - 63\cdot 2$ $7=63\cdot 1 - 14\cdot 4 = 63 - (140\cdot 1 - 63\cdot 2)\cdot 4 = 63 - 140\cdot 4 + 63\cdot 8 = 63\cdot 9 - 140\cdot 4$ $0=14 - 7\cdot 2 =140\cdot 1 - 63\cdot 2 - 2(63\cdot 9 - 140\cdot 4) = 140\cdot 9-63\cdot 20$ So $m=9$ and $n=-20$ Specific: $7=63\cdot 9 - 140\cdot 4 \rightarrow 140\cdot(-4)+63\cdot 9=7\rightarrow 140\cdot(-20)+63\cdot45=35$ So for $140m+63n=35, m=-20, n=45$ Homogeneous: $140m+63n=0 \; \gcd(140,63)=7 \; 20m + 9n = 0 \rightarrow 20m=-9n$ $m=9q, n=-20q$ for $q\in \mathbb{Z}$ or $am+bn=0 \Leftrightarrow (m,n)=(bq,-aq)$ General: To find all solutions to $140m+63n=35$, use the specific solution $m=-20, n=45$ $140(m+20) + 63(n-45) = 0$ $(m+20,n-45)=(9q,-20q)$ use the homogeneous result and set them equal. $(m,n)=(9q-20,-20q+45)$ So $m=9q-20$ and $n=-20q+45$ $[a]_m = \{x\in \mathbb{Z}| x\equiv a \bmod m \} = \{ mq + a|q \in \mathbb{Z} \}$ $ax\equiv b \bmod m$ has a unique solution $\pmod{m}$ if $a$ and $m$ are coprime. If $\gcd(a,b)=1$, then $a$ and $b$ are coprime. $[a]_m$ is invertible if $\gcd(a,m)=1$. Math 327 - Introduction to Real Analysis I Cauchy Sequence: For all $\epsilon > 0$ there exists an integer $N$ such that if $n,m \ge N$, then $|a_n-a_m|<\epsilon$ Alternating Series: Suppose the terms of the series $\sum u_n$ are alternately positive and negative, that $|u_{n+1}| \le |u_n|$ for all $n$, and that $u_n \to 0$ as $n\to\infty$. Then the series $\sum u_n$ is convergent. Bolzano-Weierstrass Theorem: If $S$ is a bounded, infinite set, then there is at least one point of accumulation of $S$. $\sum u_n$ is absolutely convergent if $\sum|u_n|$ is convergent If $\sum u_n$ is absolutely convergent, then $\sum u_n = \sum a_n - \sum b_n$ where both $\sum a_n$ and $\sum b_n$ are convergent. If $\sum u_n$ is conditionally convergent, both $\sum a_n$ and $\sum b_n$ are divergent.
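The alternating series test above applies to the alternating harmonic series $\sum(-1)^{n+1}/n$, which converges (to $\ln 2$) even though the series of absolute values diverges, making it conditionally convergent. A numerical illustration:

```python
import math

# Partial sums of the alternating harmonic series 1 - 1/2 + 1/3 - ...
# The terms alternate in sign, shrink in magnitude, and tend to 0, so the
# alternating series test guarantees convergence; the limit is ln 2.

def partial_sum(terms):
    return sum((-1) ** (n + 1) / n for n in range(1, terms + 1))

# Consecutive partial sums bracket the limit: odd-length sums overshoot,
# even-length sums undershoot, and the error is at most the first omitted term.
s_n = partial_sum(10000)
```

The bracketing property is a handy practical consequence of the test: it gives a computable error bound for free.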
$\sum u_n = U = u_0 + u_1 + u_2 + ...$ $w_0 = u_0 v_0$ $w_1 = u_0 v_1 + u_1 v_0$ $\sum v_n = V = v_0 + v_1 + v_2 + ...$ $w_n = u_0 v_n + u_1 v_{n-1} + ... + u_n v_0$ $\sum w_n = UV = w_0 + w_1 + w_2 + ...$ Provided $\sum u_n, \sum v_n$ are absolutely convergent. $\sum a_n x^n$ is either absolutely convergent for all $x$, divergent for all $x\neq 0$, or absolutely convergent if $-R < x < R$ (it may or may not converge for $x=\pm R$ ). $-R < x < R$ is the interval of convergence. $R$ is the radius of convergence. $R=\lim_{n \to \infty}\left|\frac{a_n}{a_{n+1}}\right|$ if the limit exists or is $+\infty$ Let the functions of the sequence $f_n(x)$ be defined on the interval $[a,b]$. If for each $\epsilon > 0$ there is an integer $N$ independent of $x$ such that $|f_n(x) - f_m(x)| < \epsilon$ whenever $m,n \ge N$, then the sequence converges uniformly on $[a,b]$. You can't converge uniformly on any interval containing a discontinuity. $|a+b|\le|a|+|b|$ $\lim_{n \to \infty}(1+\frac{k}{n})^n = e^k$ $\lim_{n \to \infty} n^{\frac{1}{n}}=1$ $\sum_{k=0}^{\infty}ar^k=\frac{a}{1-r}$ if $|r| < 1$ $\sum a_n x^n$ $R = \lim_{n \to \infty}\left|\frac{a_n}{a_{n+1}} \right|$ if it exists or is $+\infty$.
The interval of convergence is $-R < x < R$ $\sum^{\infty}\frac{x^2}{(1+x^2)^n} = x^2\sum\left(\frac{1}{1+x^2}\right)^n = x^2\left(\frac{1}{1-\frac{1}{1+x^2}}\right) = x^2\left( \frac{1+x^2}{1+x^2-1}\right) = x^2\left( \frac{1+x^2}{x^2} \right) = 1+x^2$ Math 427 - Complex Analysis
$\theta$: $0$, $\frac{\pi}{6}$, $\frac{\pi}{4}$, $\frac{\pi}{3}$, $\frac{\pi}{2}$
$\sin$: $0$, $\frac{1}{2}$, $\frac{\sqrt{2}}{2}$, $\frac{\sqrt{3}}{2}$, $1$
$\cos$: $1$, $\frac{\sqrt{3}}{2}$, $\frac{\sqrt{2}}{2}$, $\frac{1}{2}$, $0$
$\tan$: $0$, $\frac{1}{\sqrt{3}}$, $1$, $\sqrt{3}$, $\varnothing$
$\frac{z_1}{z_2} = \frac{x_1 x_2 + y_1 y_2}{x_2^2 + y_2^2} + i\frac{y_1 x_2 - x_1 y_2}{x_2^2 + y_2^2}$ $|z| = \sqrt{x^2 + y^2}$ $\bar{z} = x-i y$ $|\bar{z}|=|z|$ $\text{Re}\,z=\frac{z+\bar{z}}{2}$ $\text{Im}\,z=\frac{z-\bar{z}}{2i}$ $\text{Arg}\,z=\theta\text{ for } -\pi < \theta < \pi$ $z=re^{i\theta}=r(\cos(\theta) + i\sin(\theta))$ $\lim_{z \to z_0} f(z) = w_0 \iff \lim_{x,y \to x_0,y_0} u(x,y) = u_0 \text{ and }\lim_{x,y \to x_0,y_0} v(x,y)=v_0 \text{ where } w_0=u_0+iv_0 \text{ and } z_0=x_0+iy_0$ $\lim_{z \to z_0} f(z) = \infty \iff \lim_{z \to z_0} \frac{1}{f(z)}=0$ $\lim_{z \to \infty} f(z) = w_0 \iff \lim_{z \to 0} f\left(\frac{1}{z}\right)=w_0$ $\text{Re}\,z\le |\text{Re}\,z|\le |z|$ $\text{Im}\,z\le |\text{Im}\,z| \le |z|$ $\lim_{z \to \infty} f(z) = \infty \iff \lim_{z\to 0}\frac{1}{f\left(\frac{1}{z}\right)}=0$ $\left| |z_1| - |z_2| \right| \le |z_1 + z_2| \le |z_1| + |z_2|$ Roots: $z = \sqrt[n]{r_0} \exp\left[i\left(\frac{\theta_0}{n} + \frac{2k\pi}{n} \right)\right]$ Harmonic: $f_{xx} + f_{yy} = 0 \text{ or } f_{xx}=-f_{yy}$ Cauchy-Riemann: $f(z) = u(x,y)+iv(x,y)$ $\sinh(z)=\frac{e^z-e^{-z}}{2}=-i\sin(iz)$ $\frac{d}{dz}\sinh(z)=\cosh(z)$ $u_x = v_y$ $u_y = -v_x$ $\cosh(z) = \frac{e^z + e^{-z}}{2} = \cos(iz)$ $\frac{d}{dz}\cosh(z)=\sinh(z)$ $f(z) = u(r,\theta)+iv(r,\theta)$ $\sin(z)=\frac{e^{iz}-e^{-iz}}{2i}=-i\sinh(iz)=\sin(x)\cosh(y) + i\cos(x)\sinh(y)$ $r u_r = v_{\theta}$ $u_{\theta}=-r v_r$
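The Cauchy-Riemann equations can be checked numerically at a point with central differences; $f(z)=z^2$ satisfies them while $f(z)=\bar{z}$ does not. The helper below is an illustrative sketch:

```python
def cr_residual(f, z, h=1e-6):
    """Largest violation of u_x = v_y and u_y = -v_x at z, where
    f(z) = u + iv, estimated with central differences."""
    x, y = z.real, z.imag
    fx = (f(complex(x + h, y)) - f(complex(x - h, y))) / (2 * h)
    fy = (f(complex(x, y + h)) - f(complex(x, y - h))) / (2 * h)
    ux, vx = fx.real, fx.imag
    uy, vy = fy.real, fy.imag
    return max(abs(ux - vy), abs(uy + vx))
```

A residual near zero at a point is consistent with analyticity there; a large residual, as with $\bar{z}$, rules it out.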
$\cos(z)=\frac{e^{iz}+e^{-iz}}{2}=\cosh(iz)=\cos(x)\cosh(y) - i \sin(x)\sinh(y)$ $\sin(z)=0$ only where $\sin(x)^2 + \sinh(y)^2 = 0$ $|\sin(z)|^2 = \sin(x)^2 + \sinh(y)^2$ $|\cos(z)|^2 = \cos(x)^2 + \sinh(y)^2$ $e^z$ and Log: $e^z=e^{x+iy}=e^x e^{iy}=e^x(\cos(y) + i\sin(y))$ $e^x e^{iy} = \sqrt{2} e^{i\frac{\pi}{4}}$ $e^x = \sqrt{2}$ $y = \frac{\pi}{4} + 2k\pi$ $x = \ln\sqrt{2}$ So, $z = \ln\sqrt{2} + i\pi\left(\frac{1}{4} + 2k\right)$ $\frac{d}{dz} \log(f(z)) = \frac{f'(z)}{f(z)}$ Cauchy-Goursat: If $f(z)$ is analytic at all points interior to and on a simple closed contour $C$, then $\int_C f(z) \,dz=0$ $\int_C f(z) \,dz = \int_a^b f[z(t)] z'(t) \,dt$ $a \le t \le b$ $z(t)$ is a parameterization. Use $e^{i\theta}$ for circle! Maximum Modulus Principle: If $f(z)$ is analytic and not constant in $D$, then $|f(z)|$ has no maximum value in $D$. That is, there is no point $z_0$ in $D$ such that $|f(z)| \le |f(z_0)|$ for all $z$ in $D$. Corollary: If $f(z)$ is continuous on a closed bounded region $R$ and analytic and not constant through the interior of $R$, then the maximum value of $|f(z)|$ in $R$ always exists on the boundary, never the interior. Taylor's Theorem: Suppose $f(z)$ is analytic throughout a disk $|z-z_0|<R_0$.
Then $f(z)$ has a power series of the form $f(z) = \sum_{n=0}^{\infty}\frac{f^{(n)}(z_0)}{n!}(z-z_0)^n$ where $|z-z_0| < R_0$ and $n=0,1,2,...$ Alternatively, $\sum_{n=0}^{\infty} a_n(z-z_0)^n \text{ where } a_n=\frac{f^{(n)}(z_0)}{n!}$ For $|z|<\infty$: $e^z = \sum_{n=0}^{\infty}\frac{z^n}{n!}$ $\sin(z) = \sum_{n=0}^{\infty}(-1)^n\frac{z^{2n+1}}{(2n+1)!}$ $\cos(z) = \sum_{n=0}^{\infty}(-1)^n\frac{z^{2n}}{(2n)!}$ Note that $\frac{1}{1 - \frac{1}{z}}=\sum_{n=0}^{\infty}\left(\frac{1}{z}\right)^n \text{ for } \left|\frac{1}{z}\right| < 1$ , which is really $1 < |z|$ or $|z| > 1$ For $|z| < 1$: $\frac{1}{1-z} = \sum_{n=0}^{\infty} z^n$ $\frac{1}{1+z} = \sum_{n=0}^{\infty} (-1)^n z^n$ For $|z-1|<1$: $\frac{1}{z} = \sum_{n=0}^{\infty} (-1)^n (z-1)^n$ Analytic: $f(z)$ is analytic at point $z_0$ if it has a derivative at each point in some neighborhood of $z_0$. $f(z)=u(x,y)+iv(x,y)$ is analytic if and only if $v$ is a harmonic conjugate of $u$. If $u(x,y)$ and $v(x,y)$ are harmonic functions such that $u_{xx}=-u_{yy}$ and $v_{xx}=-v_{yy}$, and they satisfy the Cauchy-Riemann conditions, then $v$ is a harmonic conjugate of $u$. Differentiable: $f(z)=u(x,y)+iv(x,y)$ is differentiable at a point $z_0$ if $f(z)$ is defined within some neighborhood of $z_0=x_0+iy_0$, $u_x$, $u_y$, $v_x$, and $v_y$ exist everywhere in the neighborhood, and those first-order partial derivatives are continuous at $(x_0,y_0)$ and satisfy the Cauchy-Riemann conditions at $(x_0,y_0)$. Cauchy Integral Formula: $\int_C \frac{f(z)}{z-z_0}\,dz = 2\pi i f(z_0)$ where $z_0$ is any point interior to $C$ and $f(z)$ is analytic everywhere inside and on $C$ taken in the positive sense.
Note that $f(z)$ here refers to the actual function $f(z)$, not $\frac{f(z)}{z-z_0}$ Generalized Cauchy Integral Formula: $\int_C \frac{f(z)}{(z-z_0)^{n+1}}\,dz = \frac{2\pi i}{n!} f^{(n)}(z_0)$ (same conditions as above) Remember, discontinuities that are roots can be calculated and factored: $\frac{1}{z^2-w_0}=\frac{1}{(z-z_1)(z-z_2)}$ Residue at infinity: $\def\res{\mathop{Res}\limits} \int_C f(z)\,dz = 2\pi i \,\res_{z=0}\left[\frac{1}{z^2} f\left(\frac{1}{z}\right) \right]$ A residue at $z_0$ is the coefficient of the $n=-1$ term of the Laurent series centered at $z_0$. A point $z_0$ is an isolated singular point if $f$ fails to be analytic at $z_0$, but is analytic at some point in every neighborhood of $z_0$, and there's a deleted neighborhood $0 < |z-z_0| < \epsilon$ of $z_0$ throughout which $f$ is analytic. $\res_{z=z_0}\,f(z)=\frac{p(z_0)}{q'(z_0)}\text{ where }f(z) = \frac{p(z)}{q(z)}\text{ and } p(z_0)\neq 0$ $\int_C f(z)\,dz = 2\pi i \sum_{k=1}^n\,\res_{z=z_k}\,f(z)$ where $z_0$, $z_1$, $z_2$,... $z_k$ are all the singular points of $f(z)$ (which includes poles). If a series has a finite number of $(z-z_0)^{-n}$ terms, it's a pole ($\frac{1}{z^4}$ is a pole of order 4). If a function, when put into a series, has no $z^{-n}$ terms, it's a removable singularity. If it has an infinite number of $z^{-n}$ terms, it's an essential singularity (meaning, you can't get rid of it).
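The residue theorem is easy to sanity-check numerically on the unit circle using the $z=e^{i\theta}$ parameterization mentioned above:

```python
import cmath, math

# Midpoint-rule evaluation of ∮_C f(z) dz over the unit circle z = e^{it}.
# Sanity checks: ∮ dz/z = 2πi (residue 1 at z = 0), while ∮ z² dz = 0
# because z² is analytic everywhere inside C (Cauchy-Goursat).

def contour_integral(f, samples=4096):
    dt = 2 * math.pi / samples
    total = 0j
    for i in range(samples):
        z = cmath.exp(1j * (i + 0.5) * dt)
        total += f(z) * 1j * z * dt  # dz = i e^{it} dt
    return total
```

Equally spaced samples on a smooth periodic integrand converge extremely fast, so even modest sample counts reproduce $2\pi i$ and $0$ to near machine precision.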
AMath 401 - Vector Calculus and Complex Variables $||\vec{x}||=\sqrt{x_1^2+x_2^2+ ...}$ $\vec{u}\cdot\vec{v} = ||\vec{u}||\cdot||\vec{v}|| \cos(\theta)=u_1v_1+u_2v_2+ ...$ $||\vec{u}\times\vec{v}|| = ||\vec{u}||\cdot||\vec{v}|| \sin(\theta) = \text{ area of a parallelogram}$ $\vec{u}\times\vec{v} = \begin{vmatrix} \vec{\imath} & \vec{\jmath} & \vec{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix}$ $\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad-bc$ $\vec{u}\cdot\vec{v}\times\vec{w} = \vec{u}\cdot(\vec{v}\times\vec{w})=\text{ volume of a parallelepiped}$ $h_?\equiv\left|\left|\frac{\partial\vec{r}}{\partial ?}\right|\right|$ $\hat{e}_r(\theta)=\frac{1}{h_r}\frac{\partial\vec{r}}{\partial r}$ $\hat{e}_r'(\theta)=\frac{d\hat{e}_r}{d\theta}=\hat{e}_{\theta}(\theta)$ $\vec{r}=r(t)\hat{e}_r(\theta(t))$ $\vec{r}'(t)=r'(t)\hat{e}_r + r(t)\frac{d\hat{e}_r}{dt}$ $\vec{r}'(t)=r'(t)\hat{e}_r + r(t)\frac{d\hat{e}_r}{d\theta}\frac{d\theta}{dt}=r'(t)\hat{e}_r + r(t)\hat{e}_{\theta} \theta'(t)$ Projection of $\vec{u}$ onto $\vec{v}$: $||\vec{u}|| \cos(\theta) = \frac{\vec{u}\cdot\vec{v}}{||\vec{v}||}$ Arc Length: $\int_a^b \left|\left| \frac{d\vec{r}}{dt} \right|\right| \,dt = \int_a^b ds$ Parametrization: $s(t) = \int_a^t \left|\left|\frac{d\vec{r}}{dt}\right|\right| \,dt$ $\frac{dx}{dt}=x'(t)$ $dx = x'(t)\,dt$ $\oint x \,dy - y \,dx = \oint\left(x \frac{dy}{dt} - y \frac{dx}{dt}\right)\,dt = \oint x y'(t) - y x'(t) \,dt$ $\nabla = \frac{\partial}{\partial x}\vec{\imath} + \frac{\partial}{\partial y}\vec{\jmath} + \frac{\partial}{\partial z}\vec{k}$ $\nabla f = \left\langle \frac{\partial f}{\partial x},\frac{\partial f}{\partial y},\frac{\partial f}{\partial z} \right\rangle$ $D_{\vec{u}} f(\vec{P})=f'_{\vec{u}}=\nabla f \cdot \vec{u}$ $\nabla(fg)=f\nabla g + g \nabla f$ $\vec{F}(x,y,z)=F_1(x,y,z)\vec{\imath} + F_2(x,y,z)\vec{\jmath} + F_3(x,y,z)\vec{k}$ $\text{div}\, \vec{F} = \nabla \cdot \vec{F} = \frac{\partial F_1}{\partial x} + \frac{\partial F_2}{\partial y} + 
\frac{\partial F_3}{\partial z}$ $\text{curl}\, \vec{F} = \nabla \times \vec{F} = \left\langle \frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}, \frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}, \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right\rangle$ Line integral: $\int_{\vec{r}(a)}^{\vec{r}(b)}\vec{F}\cdot \vec{r}'(t)\,dt = \int_{\vec{r}(a)}^{\vec{r}(b)}\vec{F}\cdot d\vec{r}$ Closed path: $\oint \vec{F}\cdot d\vec{r}$ $d\vec{r} = \nabla\vec{r}\cdot \langle du_1,du_2,du_3 \rangle = h_1 du_1 \hat{e}_1 + h_2 du_2 \hat{e}_2 + h_3 du_3 \hat{e}_3$ $h_?= \left|\left| \frac{\partial\vec{r}}{\partial ?} \right|\right| = \frac{1}{||\nabla ?||}$ $\hat{e}_?=\frac{1}{h_?}\frac{\partial\vec{r}}{\partial?}=h_?\nabla?(x,y,z)=\frac{\nabla?}{||\nabla?||}$ If $\vec{r}=\vec{r}(s)$ then $\int_a^b\vec{F}\cdot \frac{d\vec{r}}{ds} \,ds = \int_a^b\vec{F}\cdot\vec{T} \,ds$ $\nabla f \cdot \hat{e}_?=\frac{1}{h_?}\frac{\partial f}{\partial ?}$ $\nabla f = (\nabla f \cdot \hat{e}_r)\hat{e}_r + (\nabla f \cdot \hat{e}_{\theta})\hat{e}_{\theta} = \frac{\partial f}{\partial r}\hat{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\hat{e}_{\theta}$ $ds = \sqrt{d\vec{r}\cdot d\vec{r}} = \left|\left| \frac{d\vec{r}}{dt}dt \right|\right| = \sqrt{h_1^2 du_1^2 + h_2^2 du_2^2+h_3^2 du_3^2}$ Spherical: $h_r=1$, $h_{\theta}=r$, $h_{\phi}=r\sin(\theta)$ Unit normal to a level surface $f=\text{const}$: $\vec{n}=\frac{\nabla f}{||\nabla f||}$ For a parametrized surface $\vec{r}=\vec{r}(u,v)$: $\vec{n}= \frac{\frac{\partial\vec{r}}{\partial u}\times\frac{\partial\vec{r}}{\partial v}}{\left|\left| \frac{\partial\vec{r}}{\partial u}\times\frac{\partial\vec{r}}{\partial v} \right|\right|}$ $\vec{F}$ is conservative if: $\vec{F}=\nabla\phi$ $\nabla\times\vec{F}=0$ $\vec{F}=\nabla\phi \iff \nabla\times\vec{F}=0$ $\iint\vec{F}\cdot d\vec{S}$ $d\vec{S}=\left(\frac{\partial\vec{r}}{\partial u} \times \frac{\partial\vec{r}}{\partial v} \right)\,du\,dv$ $d\vec{S}=\vec{n}\,dS$ Surface Area: $\iint \,dS$ Flux: $\iint\vec{F}\cdot
d\vec{S}$ Shell mass: $\iint p\cdot dS$ Stokes: $\iint(\nabla\times\vec{F})\cdot d\vec{S} = \oint\vec{F}\cdot d\vec{r}$ Divergence: $\iiint\limits_V\nabla\cdot\vec{F}\,dV=\iint\limits_{\partial V} \vec{F}\cdot d\vec{S}$ If $z=f(x,y)$, then $\vec{r}(x,y) = \langle x,y,f(x,y) \rangle$ $\nabla\cdot\vec{F}=\frac{1}{h_1 h_2 h_3}\left[\frac{\partial}{\partial u_1}(F_1 h_2 h_3) + \frac{\partial}{\partial u_2}(F_2 h_1 h_3) + \frac{\partial}{\partial u_3}(F_3 h_1 h_2) \right]$ $dV = \left| \frac{\partial\vec{r}}{\partial u_1}\cdot\left(\frac{\partial\vec{r}}{\partial u_2}\times\frac{\partial\vec{r}}{\partial u_3} \right) \right| du_1 du_2 du_3$ If orthogonal, $I = \iiint\limits_V f(u_1,u_2,u_3)h_1 h_2 h_3\,du_1\,du_2\,du_3$ $(x-c_x)^2 + (y-c_y)^2 = r^2 \Rightarrow \vec{r}(t)=\langle c_x + r\cos(t),c_y+r\sin(t) \rangle$ Ellipse: $\Rightarrow \vec{r}(t)=\langle c_x+a\cos(t), c_y+b\sin(t) \rangle$ For spherical coordinates, the polar angle $\theta$ (measured from the $z$-axis) runs from $0$ to $\pi$. Unit sphere: $\vec{r}(\theta,\phi)=\langle \sin(\theta)\cos(\phi), \sin(\theta)\sin(\phi),\cos(\theta) \rangle$ Laplace Transform: $\mathcal{L}\left[f(t)\right]=F(s)=\int_0^{\infty}e^{-st}f(t)\,dt$ $\mathcal{L}[f'(t)] = s\mathcal{L}[f(t)] - f(0) = s\cdot F(s)-f(0)$ $\mathcal{L}[f''(t)] = s^2\mathcal{L}[f(t)] - s\cdot f(0) - f'(0) = s^2\cdot F(s) - s\cdot f(0) - f'(0)$ $\mathcal{L}[0] = 0$ $\mathcal{L}[1] = \frac{1}{s}$ $\mathcal{L}[k] = \frac{k}{s}$ $\mathcal{L}[e^{at}] = \frac{1}{s-a}$ $\mathcal{L}[\cos(\omega t)] = \frac{s}{s^2 + \omega^2}$ $\mathcal{L}[\sin(\omega t)] = \frac{\omega}{s^2 + \omega^2}$ Math 461 - Combinatorial Theory I General Pigeon: Let $n$,$m$, and $r$ be positive integers so that $n>rm$. Let us distribute $n$ balls into $m$ boxes. Then there will be at least 1 box in which we place at least $r+1$ balls. Base Case: Prove $P(0)$ or $P(1)$ Inductive Step: Show that the inductive hypothesis $P(k)$ implies that $P(k+1)$ must be true.
Strong Induction: Show that $P(k+1)$ is true if $P(n)$ for all $n < k+1$ is true. There are $n!$ permutations of an $n$-element set (or $n!$ linear orderings of $n$ objects) $n$ objects sorted into $a,b,c,...$ groups have $\frac{n!}{a!b!c!...}$ permutations. These are the permutations of a multiset (multinomial coefficients). Number of $k$-digit strings in an $n$-element alphabet: $n^k$. All subsets of an $n$-element set: $2^n$ Let $n$ and $k$ be positive integers such that $n \ge k$, then the number of $k$-digit strings over an $n$-element alphabet where no letter is used more than once is $\frac{n!}{(n-k)!}=(n)_k$ $\binom{n}{k} = \frac{n!}{k!(n-k)!} = \frac{(n)_k}{k!} \rightarrow$ This is the number of $k$-element subsets in $[n]$, where $[n]$ is an $n$-element set. $\binom{n}{k} = \binom{n}{n-k}$ $\binom{n}{0} = \binom{n}{n} = \binom{0}{0} = 1$ Number of $k$-element multisets in $[n]$: $\binom{n+k-1}{k}$ Binomial Theorem: $(x+y)^n = \sum_{k=0}^n\binom{n}{k}x^k y^{n-k} \text{ for } n \ge 0$ Multinomial Theorem: $(x_1 + x_2 + ... + x_k)^n = \sum_{a_1,a_2,...,a_k} \binom{n}{a_1,a_2,...,a_k} x_1^{a_1} x_2^{a_2} ... x_k^{a_k}$ $\binom{n}{k} + \binom{n}{k+1} = \binom{n+1}{k+1} \text{ for } n,k \ge 0$ $\binom{k}{k} + \binom{k+1}{k} + ... + \binom{n}{k} = \binom{n+1}{k+1} \text{ for } n,k \ge 0$ $\binom{n}{k} \le \binom{n}{k+1} \text{ if and only if } k \le \frac{n-1}{2}\text{, with equality if and only if } n=2k+1$ $\binom{n}{k} \ge \binom{n}{k+1} \text{ if and only if } k \ge \frac{n-1}{2}\text{, with equality if and only if } n=2k+1$ $\binom{n}{a_1,a_2,...,a_k} = \frac{n!}{a_1!a_2!...a_k!}$ $\binom{n}{a_1,a_2,...,a_k} = \binom{n}{a_1}\cdot \binom{n-a_1}{a_2} \cdot ...
\cdot \binom{n-a_1-a_2-...-a_{k-1}}{a_k} \text{ if and only if } n=\sum_{i=1}^k a_i$ $\binom{m}{k} = \frac{m(m-1)...(m-k+1)}{k!} \text{ for any real number } m$ $(1+x)^m = \sum_{n\ge 0} \binom{m}{n}x^n$ $\sum_{k=0}^n (-1)^k\binom{n}{k} = 0 \text{ for } n \ge 1$ $2^n = \sum_{k=0}^n \binom{n}{k} \text{ for } n \ge 0$ $\sum_{k=1}^n k\binom{n}{k} = n 2^{n-1} \text{ for } n \ge 0$ $\binom{n+m}{k} = \sum_{i=0}^k \binom{n}{i} \binom{m}{k-i} \text{ for } n,m,k \ge 0$ $|E| = \frac{1}{2}\sum_{v\in V} deg(v)$ or, the number of edges in a graph is half the sum of its vertex degrees. Compositions: $n$ identical objects $k$ distinct boxes $\binom{n-1}{k-1}$ $n$ identical objects any number of distinct boxes $2^{n-1}$ Weak Compositions: (empty boxes allowed) $n$ identical objects $k$ distinct boxes $\binom{n+k-1}{k-1}$ $n$ distinct objects $k$ distinct boxes $k^n$ Split $n$ people into $k$ groups of $\frac{n}{k}$: Unordered: $\frac{n!}{\left(\left(\frac{n}{k}\right)!\right)^k k!}$ Ordered: $\frac{n!}{\left(\left(\frac{n}{k}\right)!\right)^k}$ Steps from (0,0) to (n,k) on a lattice: $\binom{n+k}{k}$ Ways to roll $n$ dice so all numbers 1-6 show up at least once (inclusion-exclusion, signs alternate): $6^n - \binom{6}{5}5^n + \binom{6}{4}4^n - \binom{6}{3}3^n + \binom{6}{2}2^n - \binom{6}{1}$ The Sieve: $|A_1| + |A_2| - |A_1\cap A_2|$ or $|A_1| + |A_2| + |A_3| - |A_1\cap A_2| - |A_1\cap A_3| - |A_2\cap A_3| + |A_1\cap A_2\cap A_3|$ Also, $|S \setminus (S_A\cup S_B\cup S_C)| = |S| - |S_A| - |S_B| - |S_C| + |S_A\cap S_B| + |S_A\cap S_C| + |S_B\cap S_C| - |S_A\cap S_B\cap S_C|$ Graphs: A simple graph has no loops (edges connecting a vertex to itself) or multiple edges between the same vertices. A walk is a path through a series of connected edges. A trail is a walk where no edge is traveled on more than once. A closed trail starts and stops on the same vertex. An Eulerian trail uses all edges in a graph. A trail that doesn't touch a vertex more than once is a path. $K_n$ is the complete graph on $n$-vertices.
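The binomial identities above are easy to spot-check; a quick sketch (not from the notes) using Python's `math.comb`, with $n=7$, $m=5$, $k=4$ as arbitrary test values:

```python
# Sketch: spot-checking Vandermonde, the hockey stick, and the row sum.
from math import comb

n, m, k = 7, 5, 4

# Vandermonde: C(n+m, k) = sum_i C(n, i) * C(m, k-i)
assert comb(n + m, k) == sum(comb(n, i) * comb(m, k - i) for i in range(k + 1))

# Hockey stick: C(k,k) + C(k+1,k) + ... + C(n,k) = C(n+1, k+1)
assert sum(comb(j, k) for j in range(k, n + 1)) == comb(n + 1, k + 1)

# Row sum: sum_k C(n,k) = 2^n
assert sum(comb(n, i) for i in range(n + 1)) == 2**n

print("identities hold")
```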
If one can reach any vertex from any other on a graph $G$, then $G$ is a connected graph. A connected graph has a closed Eulerian trail if and only if all its vertices have even degree. Otherwise, it has an Eulerian trail starting on $S$ and ending on $T$ if exactly $S$ and $T$ have odd degree. In a graph without loops there are an even number of vertices of odd degree. A cycle is a closed walk that touches each of its vertices only once, except for the vertex it starts and ends on. A Hamiltonian cycle touches all vertices on a graph. Let $G$ be a graph on $n \ge 3$ vertices, then $G$ has a Hamiltonian cycle if all vertices in $G$ have degree at least $n/2$. A complete graph $K_n$ has $\binom{n}{k}\frac{(k-1)!}{2}$ cycles on $k$ vertices. Total Cycles: $\sum_{k=3}^n\binom{n}{k}\frac{(k-1)!}{2}$ Hamiltonian cycles: $\frac{(n-1)!}{2}$ Two graphs are the same if for any pair of vertices, the number of edges connecting them is the same in both graphs. If this holds when they are unlabeled, they are isomorphic. A Tree is a minimally connected graph: Removing any edge yields a disconnected graph. Trees have no cycles, so any connected graph with no cycles is a tree. All trees on $n$ vertices have $n-1$ edges, so any connected graph with $n-1$ edges is a tree. Let $T$ be a tree on $n \ge 2$ vertices, then $T$ has at least two vertices of degree 1. Let $F$ be a forest on $n$ vertices with $k$ trees. Then $F$ has $n-k$ edges. Cayley's formula: The number of all trees with vertex set $[n]$ is $A_n = n^{n-2}$ If a connected graph $G$ can be drawn on a plane so that no two edges intersect, then $G$ is a planar graph. A connected planar graph or convex polyhedron satisfies: Vertices + Faces = Edges + 2 $K_{3,3}$ and $K_5$ are not planar, nor is any graph with a subgraph that is edge-equivalent to them. A convex Polyhedron with V vertices, E edges, and F faces satisfies: $3F \le 2E$, $3V \le 2E$, $E \le 3V-6$, $E \le 3F-6$, at least one face has at most 5 edges, and at least one vertex has degree at most 5.
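A quick numeric check of Euler's formula and the planar edge bound (a sketch, not from the notes — the cube and $K_5$ are illustrative choices):

```python
# Sketch: Euler's formula V - E + F = 2 for the cube, and the bound
# E <= 3V - 6 ruling out K5 as a planar graph.
from math import comb

V, E, F = 8, 12, 6            # the cube
assert V - E + F == 2

# K5 has V = 5 vertices and E = C(5,2) = 10 edges, but 3V - 6 = 9:
V5, E5 = 5, comb(5, 2)
print(E5 <= 3 * V5 - 6)       # False -> K5 violates the planar bound
```

Note the converse fails, as the notes say: $K_{3,3}$ has $E = 9 \le 3\cdot 6 - 6 = 12$ yet is still non-planar.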
$K_{3,3}$ has 6 vertices and 9 edges A graph on $n$-vertices has $\binom{n}{2}$ possible edges. If a graph is planar, then $E \le 3V - 6$. However, some non-planar graphs, like $K_{3,3}$, satisfy this too. Prüfer code: Remove the smallest vertex from the graph, write down only its neighbor's value, repeat. Math 462 - Combinatorial Theory II All labeled graphs on $n$ vertices: $2^{\binom{n}{2}}$ There are $(n-1)!$ ways to arrange $n$ elements in a cycle. Given $h_n=3 h_{n-1} - 2 h_{n-2}$, change to $h_{n+2}=3 h_{n+1} - 2 h_n$ (valid for $n \ge 0$), then multiply both sides by $x^{n+2}$ and sum over $n \ge 0$: $\sum h_{n+2}x^{n+2} = 3 \sum h_{n+1}x^{n+2} - 2 \sum h_n x^{n+2}$ Then normalize to $\sum_{n=2}^{\infty}h_n x^n$ by subtracting $h_0$ and $h_1 x$ $\sum h_n x^n - x h_1 - h_0 = 3 x \sum h_{n+1}x^{n+1} - 2 x^2 \sum h_n x^n$ $G(x) - x h_1 - h_0 = 3 x (G(x)-h_0) - 2 x^2 G(x)$ Solve for $G(x)$: $G(x) = \frac{1-x}{2x^2-3x+1} = \frac{1}{1-2x} = \sum (2x)^n$ $G(x) = \sum(2x)^n = \sum 2^n x^n = \sum h_n x^n \rightarrow h_n=2^n$ $\sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x$ $\sum_{n=0}^{\infty} (cx)^n = \frac{1}{1-cx}$ Also, $2e_1 + 5e_2 = n \rightarrow \sum h_n x^n = \sum x^{2e_1+5e_2} = \sum x^{2e_1} x^{5e_2} = \left(\sum^{\infty} x^{2e_1}\right)\left(\sum^{\infty} x^{5e_2}\right)=\frac{1}{(1-x^2)(1-x^5)}$ $S(n,k)$: A Stirling number of the second kind is the number of nonempty partitions of $[n]$ into k blocks where the order of the blocks doesn't matter. $S(n,k)=S(n-1,k-1)+k S(n-1,k)$, $S(n,n-2) = \binom{n}{3} + \frac{1}{2}\binom{n}{2}\binom{n-2}{2}$ Bell Numbers: $B(n) = \sum_{i=0}^n S(n,i)$ , or the number of all partitions of $[n]$ into nonempty parts (order doesn't matter). Catalan Number $C_n$: $\frac{1}{n+1}\binom{2n}{n}$ derived from $\sum_{n \ge 1} c_{n-1} x^n = x C(x) \rightarrow C(x) - 1 = x C(x)\cdot C(x) \rightarrow C(x) = \frac{1 - \sqrt{1-4x}}{2x}$ Products: Let $A(x) = \sum a_n x^n$ and $B(x) = \sum b_n x^n$.
Then $A(x)B(x)=C(x)=\sum c_n x^n$ where $c_n = \sum_{i=0}^n a_i b_{n-i}$ Cycles: The number of $n$-permutations with $a_i$ cycles of length $i \in [n]$ is $\frac{n!}{a_1!a_2!...a_n!1^{a_1}2^{a_2}...n^{a_n}}$ The number of $n$-permutations with only one cycle is $(n-1)!$ $c(n,k)$: The number of $n$-permutations with $k$ cycles is called a signless Stirling number of the first kind. $c(n,k) = c(n-1,k-1) + (n-1) c(n-1,k)$ $c(n,n-2)= 2\binom{n}{3} + \frac{1}{2}\binom{n}{2}\binom{n-2}{2}$ $s(n,k) = (-1)^{n-k} c(n,k)$ and is called a signed Stirling number of the first kind. Let $i \in [n]$, then for all $k \in [n]$, there are $(n-1)!$ permutations that contain $i$ in a $k$-cycle. $T(n,k)=\frac{k-1}{2k}n^2 - \frac{r(k-r)}{2k}$ where $r \equiv n \pmod{k}$, $0 \le r < k$. Let Graph $G$ have $n$ vertices and more than $T(n,k)$ edges. Then $G$ contains a $K_{k+1}$ subgraph, and is therefore not $k$-colorable. $N(T)$ = all neighbors of the set of vertices $T$ $a_{s,d} = \{ s,s+d,s+2d,...,s+(n-1)d \}$ $\chi(H)$: Chromatic number of Graph $H$, or the smallest integer $k$ for which $H$ is $k$-colorable. A 2-colorable graph is bipartite and can divide its vertices into two disjoint sets. A graph is bipartite if and only if it does not contain a cycle of odd length. A bipartite graph has at most $\frac{n^2}{4}$ edges if $n$ is even, and at most $\frac{n^2 - 1}{4}$ edges if $n$ is odd. Brooks: if a connected graph $G$ is neither an odd cycle nor complete, then $\chi(G) \le \Delta(G)$, the largest vertex degree. A bipartite graph on $n$ vertices with a max degree of $k$ has at most $k\cdot (n-k)$ edges. A tree is always bipartite (2-colorable). Philip Hall's Theorem: Let a bipartite graph $G=(X,Y)$. Then $X$ has a perfect matching to $Y$ if and only if for all $T \subset X, |T| \le |N(T)|$ $R(k,l):$ The Ramsey Number $R(k,l)$ is the smallest integer such that any 2-coloring of a complete graph on $R(k,l)$ vertices will contain a red $k$-clique or blue $l$-clique. A $k$-clique is a complete subgraph on $k$ vertices.
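The Stirling recurrence $c(n,k) = c(n-1,k-1) + (n-1)\,c(n-1,k)$ is easy to implement directly; a sketch (not from the notes) with two sanity checks — each row sums to $n!$ since every permutation has some number of cycles, and $c(n,n-1) = \binom{n}{2}$ since a permutation with $n-1$ cycles is exactly one transposition:

```python
# Sketch: signless Stirling numbers of the first kind via the recurrence.
from functools import lru_cache
from math import factorial, comb

@lru_cache(maxsize=None)
def c(n, k):
    if n == 0:
        return 1 if k == 0 else 0   # empty permutation has zero cycles
    if k == 0:
        return 0                    # n >= 1 permutations have >= 1 cycle
    return c(n - 1, k - 1) + (n - 1) * c(n - 1, k)

n = 6
assert sum(c(n, k) for k in range(n + 1)) == factorial(n)
assert c(n, n - 1) == comb(n, 2)    # n-1 cycles = exactly one 2-cycle
print([c(n, k) for k in range(1, n + 1)])
```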
$R(k,l) \le R(k,l-1) + R(k-1,l)$ $R(k,k) \le 4^{k-1}$ $R(k,l) \le \binom{k+l-2}{l-1}$ Let $P$ be a partially ordered set (a "poset"), then: 1) $\le$ is reflexive, so $x \le x$ for all $x \in P$ 2) $\le$ is transitive, so that if $x \le y$ and $y \le z$ then $x \le z$ 3) $\le$ is anti-symmetric, such that if $x\le y$ and $y \le x$, then $x=y$ Let $P$ be the set of all subsets of $[n]$, and let $A \le B$ if $A \subset B$. Then this forms a partially ordered set $B_n$, or a Boolean Algebra of degree $n$. $E(X)=\sum_{i \in S} i\cdot P(X=i)$ A chain is a set with no two incomparable elements. An antichain has no comparable elements. Real numbers are a chain. $\{ (2,3), (1,3), (3,4), (2,4) \}$ is an antichain in $B_4$, since no set contains another. Dilworth: In a finite partially ordered set $P$, the size of any maximum antichain is equal to the number of chains in a minimum chain cover. Weak composition of $n$ into 4 parts: $a_1 + a_2 + a_3 + a_4 = n$ Applying these rules to the above equation: $a_1 \le 2, a_2\text{ mod }2 \equiv 0, a_3\text{ mod }2 \equiv 1, a_3 \le 7, a_4 \ge 1$ Yields the following: $a_1 + 2a_2 + (2a_3 + 1) + a_4 = n$ $(\sum_0^2 x^{a_1})(\sum_0^{\infty} x^{2a_2})(\sum_0^3 x^{2a_3 + 1})(\sum_1^{\infty} x^{a_4})=\frac{1+x+x^2}{1-x^2}(x+x^3+x^5+x^7)\left(\frac{1}{1-x} - 1\right)$ Math 394 - Probability I both E and F: $P(EF) = P(E\cap F)$ If E and F are independent, then $P(E\cap F) = P(E)P(F)$ $P(F) = P(EF) + P(E^c F)$ $P(E\cup F) = P(E) + P(F) - P(EF)$ $P(E\cup F \cup G) = P(E) + P(F) + P(G) - P(EF) - P(EG) - P(FG) + P(EFG)$ E occurs given F: $P(E|F)=\frac{P(EF)}{P(F)}$ $P(EF)=P(FE)=P(E)P(F|E)$ Bayes formula: $P(E)=P(EF)+P(EF^c)$ $P(E)=P(E|F)P(F) + P(E|F^c)P(F^c)$ $P(E)=P(E|F)P(F) + P(E|F^c)[1 - P(F)]$ $P(B|A)=P(A|B)\frac{P(B)}{P(A)}$ The odds of A: $\frac{P(A)}{P(A^c)} = \frac{P(A)}{1-P(A)}$ $P[\text{Exactly } k \text{ successes}]=\binom{n}{k}p^k(1-p)^{n-k}$ $P(\text{a run of } n \text{ successes occurs before a run of } m \text{ failures}) =
\frac{p^{n-1}(1-q^m)}{p^{n-1}+q^{m-1}-p^{n-1}q^{m-1}}$ where $p$ is the probability of success, and $q=1-p$ for failure. $\sum_{i=1}^{\infty}p(x_i)=1$ where $x_i$ is the $i^{\text{th}}$ value that $X$ can take on. $E[X] = \sum_{x:p(x) > 0}x p(x) \text{ or }\sum_{i=1}^{\infty} x_i p(x_i)$ $E[g(x)]=\sum_i g(x_i)p(x_i)$ $E[X^2] = \sum_i x_i^2 p(x_i)$ $Var(X) = E[X^2] - (E[X])^2$ $Var(aX+b) = a^2 Var(X)$ for constant $a,b$ $SD(X) = \sqrt{Var(X)}$ Binomial random variable $(n,p)$: $p(i)=\binom{n}{i}p^i(1-p)^{n-i}\; i=0,1,...,n$ where $p$ is the probability of success and $n$ is the number of trials. Poisson (with $\lambda = np$ in the binomial approximation): $E[X] = \lambda$ $E[X^2] = \lambda(\lambda + 1)$ $Var(X)=\lambda$ $P[N(t)=k] = e^{-\lambda t}\frac{(\lambda t)^k}{k!} \: k=0,1,2,...$ where $N(t)$ counts the events in an interval $[s,s+t]$ of length $t$ $E[X] = \int_{-\infty}^{\infty} x f(x) \,dx$ $P\{X \le a \} = F(a) = \int_{-\infty}^a f(x) \,dx$ $\frac{d}{da} F(g(a))=g'(a)f(g(a))$ $E[g(X)]=\int_{-\infty}^{\infty} g(x) f(x) \,dx$ $P\{ a \le X \le b \} = \int_a^b f(x) \,dx$ Uniform: $f(x) = \frac{1}{b-a} \text{ for } a\le x \le b$ $E[X] = \frac{a+b}{2}$ $Var(X) = \frac{(b-a)^2}{12}$ Normal: $f(x) = \frac{1}{\sqrt{2\pi} \sigma} e^{-\frac{(x-\mu)^2}{2\sigma^2}} \text{ for } -\infty \le x \le \infty$ $E[X] = \mu$ $Var(X) = \sigma^2$ $Z = \frac{X-\mu}{\sigma}$ $P\left[a \le X \le b\right] = P\left[ \frac{a - \mu}{\sigma} < \frac{X - \mu}{\sigma} < \frac{b - \mu}{\sigma}\right] = \phi\left(\frac{b-\mu}{\sigma}\right) - \phi\left(\frac{a-\mu}{\sigma}\right)$ where $\phi(x)$ is the standard normal CDF (graph omitted) $P[X \le a]=\phi\left(\frac{a-\mu}{\sigma}\right)$ $P[Z > x] = P[Z \le -x]$ $\phi(-x)=1-\phi(x)$ $Y=f(X)$ (for increasing $f$): $F_Y=P[Y\le a]= P[f(X) \le a] = P[X \le f^{-1}(a)]=F_x(f^{-1}(a))$ $f_Y=\frac{d}{da}(f^{-1}(a))f_x(f^{-1}(a))$ $Y = X^2$ $F_Y = P[Y \le a] = P[X^2 \le a] = P[-\sqrt{a} \le X \le \sqrt{a}] = \int_{-\sqrt{a}}^{\sqrt{a}} f_X(x) dx$ $f_Y = \frac{d}{da}(F_Y)$ $P(N(k) \ge x) = 1 - P(N(k) < x)$ $P(A \cap N(k) \ge x) = P(A) - P(A \cap N(k) < x)$ Discrete: $P(X=1) = P(X \le 1) - P(X < 1)$ Continuous:
$P(X \le 1) = P(X < 1)$ Exponential: $f(x) = \lambda e^{-\lambda x} \text{ for } x \ge 0$ $E[X] = \frac{1}{\lambda}$ $Var(X) = \frac{1}{\lambda^2}$ Gamma: $f(x) = \frac{\lambda e^{-\lambda x} (\lambda x)^{\alpha - 1}}{\Gamma(\alpha)}$ $\Gamma(\alpha)=\int_0^{\infty} e^{-x}x^{\alpha-1} \,dx$ $E[X] = \frac{\alpha}{\lambda}$ $Var(X) = \frac{\alpha}{\lambda^2}$ $E[X_iX_j] = P(X_i = k \cap X_j=k) = P(X_i = k)P(X_j = k|X_i=k)$ Stat 390 - Probability and Statistics for Engineers and Scientists $p(x) = \text{categorical (discrete)}$ $f(x) = \text{continuous}$ $\mu_x=E[x]$ $\sigma_x^2 = V[x]$ Mean and variance: Binomial: $n\pi$, $n\pi (1-\pi)$; Normal: $\mu$, $\sigma^2$; Poisson: $\lambda$, $\lambda$; Exponential: $\frac{1}{\lambda}$, $\frac{1}{\lambda^2}$; Uniform: $\frac{b+a}{2}$, $\frac{(b-a)^2}{12}$ Binomial $p(x) = \binom{n}{x} \pi^x (1 - \pi)^{n-x}$ Poisson $p(x) = \frac{e^{-\lambda} \lambda^x}{x!}$ Normal $f(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} e^{-\frac{1}{2} \left(\frac{x - \mu}{\sigma} \right)^2}$ Exponential $f(x) = \lambda e^{-\lambda x}$ Uniform $f(x) = \frac{1}{b-a}$ $\pi = p =$ proportion $n\pi = C = \lambda =$ mean $\mu = n \pi$ $\sigma^2 = n \pi (1 - \pi)$ Sample mean: $\bar{x} = \frac{1}{n} \sum_{i=1}^n x_i$ Sample median: $\tilde{x} = \text{if } n \text{ is odd, the } \left(\frac{n+1}{2}\right)^{\text{th}} \text{ value}$; $\text{if } n \text{ is even, average the } \frac{n}{2}^{\text{th}} \text{ and } \left(\frac{n}{2}+1\right)^{\text{th}} \text{ values}$ Sample variance: $s^2 = \frac{1}{n-1}\sum (x_i - \bar{x})^2 = \frac{S_{xx}}{n-1}$, where $S_{xx} = \sum x_i^2 - \frac{1}{n} \left( \sum x_i \right)^2$ Standard deviation: $s = \sqrt{s^2}$ low/high quartile: median of lower/upper half of data. If $n$ is odd, include median in both.
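The shortcut $s^2 = S_{xx}/(n-1)$ with $S_{xx} = \sum x_i^2 - \frac{1}{n}(\sum x_i)^2$ can be checked against the standard library; a sketch (the data values are arbitrary, not from the notes):

```python
# Sketch: sample-variance shortcut vs. statistics.variance.
import statistics

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(x)

Sxx = sum(v * v for v in x) - sum(x) ** 2 / n
s2 = Sxx / (n - 1)                    # sample variance via the shortcut
assert abs(s2 - statistics.variance(x)) < 1e-12

print(statistics.mean(x), statistics.median(x), s2)
```

`statistics.variance` uses the same $n-1$ denominator, which is why the two agree exactly.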
low = 1st quartile high = 3rd quartile median = 2nd quartile IQR: high - low quartiles range: max - min Total of something: $\bar{x} n$ An outlier is any data point outside the range defined by IQR $\cdot 1.5$ An extreme is any data point outside the range defined by IQR $\cdot 3$ $\mu_{\bar{x}} = \mu = \bar{x}$ $\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$ $\mu_p = p = \pi$ $\sigma_p = \sqrt{\frac{p(1-p)}{n}}$ $p(x)$ distribution [discrete] mean: $\mu_x = \sum x \cdot p(x) = E[x]$ variance: $\sigma_x^2 = \sum(x-\mu_x)^2 p(x) = V[x]$ $f(x)$ distribution [continuous] mean: $\mu_x = \int_{-\infty}^{\infty} x \cdot f(x)\,dx$ median: $\tilde{\mu}$ solves $\int_{-\infty}^{\tilde{\mu}} f(x)\,dx = 0.5$ variance: $\sigma^2 = \int_{-\infty}^{\infty} (x -\mu_x)^2 f(x)\,dx$ Normal: standardize with $\frac{x - \mu}{\sigma}$, so $x = \mu + k\sigma$ standardizes to $k$ and $x = \mu - k\sigma$ to $-k$; upper quartile: $\mu + 0.675\sigma$; lower quartile: $\mu - 0.675\sigma$ Exponential: $-\ln(c) \cdot \frac{1}{\lambda}$, where $c$ is the upper-tail probability (so $c = 0.75, 0.5, 0.25$ give the quartiles) $S_{xx} = \sum x_i^2 - \frac{1}{n}\left(\sum x_i\right)^2 = \sum(x_i - \bar{x})^2$ $S_{yy} = \sum y_i^2 - \frac{1}{n}\left(\sum y_i\right)^2 = \sum(y_i - \bar{y})^2$ $S_{xy} = \sum{x_i y_i} - \frac{1}{n}\left(\sum x_i\right)\left(\sum y_i\right)$ $\text{SSResid} = \text{SSE (error sum of squares)} = \sum(y_i - \hat{y}_i)^2 = S_{yy} - \hat{\beta} S_{xy}$ $\text{SSTo} = \text{SST} = S_{yy} = \text{Total sum of squares} = \text{SSRegr} + \text{SSE} = \text{SSTr} + \text{SSE} = \sum_i^k \sum_j^n (x_{ij} - \bar{\bar{x}})^2$ $\text{SSRegr} = \text{regression sum of squares}$ $r^2 = 1 - \frac{\text{SSE}}{\text{SST}} = \frac{\text{SSRegr}}{\text{SST}} = \text{coefficient of determination}$ $r = \frac{S_{xy}}{\sqrt{S_{xx}}\sqrt{S_{yy}}}$ $\hat{\beta} = \frac{S_{xy}}{S_{xx}}$ $\hat{\alpha} = \bar{y} - \hat{\beta}\bar{x}$ prediction: $\hat{y} = \hat{\alpha} + \hat{\beta}x$ Percentile ($\eta_p$): $\int_{-\infty}^{\eta_p} f(x)\,dx =
p$ MSE (Mean Square Error) = $\frac{1}{n} SSE = \frac{1}{n}\sum (y_i-\hat{y}_i)^2 = \frac{1}{n} \sum (y_i - \alpha - \beta x_i)^2$ MSTr = Mean Square for treatments. ANOVA: $SST = SS_{\text{explained}} + SSE$ $R^2 = 1 - \frac{SSE}{SST}$ $\sum(y_i - \bar{y})^2 = \sum(\hat{y}_i - \bar{y})^2 + \sum(y_i - \hat{y}_i)^2$ $s_e^2 = \frac{SSE}{n-2} \text{ or } \frac{SSE}{n-(k+1)}$ $R_{adj}^2 = 1-\frac{SSE(n-1)}{SST(n-(k+1))}$ $H_0: \mu_1 = \mu_2 = .. = \mu_k$ $\bar{\bar{x}} = \left(\frac{n_1}{n}\right)\bar{x}_1 + \left(\frac{n_2}{n}\right)\bar{x}_2 + ... + \left(\frac{n_k}{n}\right)\bar{x}_k$ $SSTr = n_1(\bar{x}_1 - \bar{\bar{x}})^2 + n_2(\bar{x}_2 - \bar{\bar{x}})^2 + ... + n_k(\bar{x}_k - \bar{\bar{x}})^2$ ANOVA table — Between samples: df $k-1$, SS = SSTr, MS = MSTr, $F = \frac{\text{MSTr}}{\text{MSE}}$; Within samples: df $n-k$, SS = SSE, MS = MSE; Total: df $n-1$, SS = SST. One-sample t interval: $\bar{x} \pm t^* \frac{s}{\sqrt{n}}$ (CI) Prediction interval: $\bar{x} \pm t^* s\sqrt{1+\frac{1}{n}}$ (PI) Tolerance interval: $\bar{x} \pm k^* s$ Difference t interval: $\bar{x}_1 - \bar{x}_2 \pm t^*\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$ $R_{adj}^2 = 1 - \left(\frac{n-1}{n-(k+1)}\right) \frac{SSE}{SST}$ Type I error: Reject $H_0$ when true. (If the F-test p-value is small, the model is useful.) Type II error: Don't reject $H_0$ when false. B(Type II) = $z^* \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$ Simple linear regression: $y = \alpha + \beta x$ General multiple regression: $y = \alpha + \beta_1 x_1 + ...
+ \beta_k x_k$ Prediction error: $\hat{y} - y^* = \sqrt{s_{\hat{y}}^2 + s_e^2} \cdot t$ $t = \frac{\hat{y}-y^*}{\sqrt{s_{\hat{y}}^2 + s_e^2}}$ $P(\hat{y} - y^* > 11) = P\left(\sqrt{s_{\hat{y}}^2 + s_e^2} \cdot t > 11\right) = P\left(t > \frac{11}{\sqrt{s_{\hat{y}}^2 + s_e^2}}\right)$ $\mu_{\hat{x}_1 - \hat{x}_2} = \mu_{\hat{x}_1} - \mu_{\hat{x}_2} = \mu_1 - \mu_2$ $\sigma_{\hat{x}_1 - \hat{x}_2} = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$ $\hat{x}_1 - \hat{x}_2 \pm z^* \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}$ To test if $\mu_1 = \mu_2$, use a two-sample t test with $\delta = 0$ If you're looking for the true average, it's a CI, not the standard deviation about the regression. A large $n$ may indicate a z-test, but you must consider whether the data are paired. A hypothesis is only valid if it tests a population parameter. Do not extrapolate outside of a regression analysis unless you use a future predictor. F-Test $F = \frac{MSRegr}{MSE}$ $MSRegr = \frac{SSRegr}{k}$ $MSResid = \frac{SSE}{n - (k+1)}$ $H_0: \beta_1=\beta_2=...=\beta_k=0$ Chi-Squared ($\chi^2$) $H_0: \pi_1 = \frac{\hat{n}_1}{n_1},\pi_2 = \frac{\hat{n}_2}{n_2}, ..., \pi_k = \frac{\hat{n}_k}{n_k}$ $\chi^2 = \sum_{i=1}^k \frac{(n_i - \hat{n}_i)^2}{\hat{n}_i} = \sum \frac{(\text{observed} - \text{expected})^2}{\text{expected}}$ Linear Association Test $H_0: \rho=0$ $t = \frac{r\sqrt{n-2}}{\sqrt{1-r^2}}$ $\sigma_{\hat{y}} = \sigma \sqrt{\frac{1}{n} + \frac{(x^* - \bar{x})^2}{S_{xx}} }$ $\mu_{\hat{y}} = \hat{y} = \alpha + \beta x^*$ $\hat{y} \pm t^* s_{\hat{y}}$, $df = n-2$, for a mean $y$ value (CI) $\hat{y} \pm t^* \sqrt{s_e^2 + s_{\hat{y}}^2}$, $df=n-2$, for single future $y$-values (PI) Paired T-test $\bar{d} = \bar{(x-y)}$ $t = \frac{\bar{d} - \delta}{\frac{s_d}{\sqrt{n}}}$ $\sigma_d = \sigma \text{ of x-y pairs } = s_d$ Large Sample: $z = \frac{\bar{x} - 0.5}{\frac{s}{\sqrt{n}}}$ $\text{P-value } \le \alpha$ Small Sample: $t = \frac{\bar{x} -
\mu}{\frac{s}{\sqrt{n}}}$ $df = n-1$ Difference: $H_0: \mu_1 - \mu_2 = \delta$ $t = \frac{\bar{x}_1 - \bar{x}_2 - \delta}{\sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}}}$ $df = \frac{\left(\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2} \right)^2}{\frac{1}{n_1-1} \left(\frac{s_1^2}{n_1}\right)^2 + \frac{1}{n_2-1} \left(\frac{s_2^2}{n_2}\right)^2 }$
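The Welch–Satterthwaite df formula above is easy to compute directly; a sketch (the sample values 3.0/10 etc. are illustrative, not from the notes):

```python
# Sketch: Welch-Satterthwaite df for the two-sample t test.
def welch_df(s1, n1, s2, n2):
    a, b = s1**2 / n1, s2**2 / n2
    return (a + b) ** 2 / (a**2 / (n1 - 1) + b**2 / (n2 - 1))

# Sanity check: equal variances and sample sizes collapse to df = 2(n-1).
assert abs(welch_df(3.0, 10, 3.0, 10) - 18.0) < 1e-12

# An unequal case, plus the matching t statistic for testing delta = 0:
t = (5.2 - 4.1) / (3.0**2 / 10 + 2.5**2 / 12) ** 0.5
print(t, welch_df(3.0, 10, 2.5, 12))
```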
V9 - Homogeneous systems Example 1 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} & & & & x_{3} & & &+& x_{5} &=& 0 \\-x_{1} &-& 5 \, x_{2} &+& 5 \, x_{3} &+& 2 \, x_{4} &+& 7 \, x_{5} &=& 0 \\x_{1} &+& 5 \, x_{2} &-& x_{3} &-& 2 \, x_{4} &-& 3 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 0 & 0 & 1 & 0 & 1 & 0 \\ -1 & -5 & 5 & 2 & 7 & 0 \\ 1 & 5 & -1 & -2 & -3 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 5 & 0 & -2 & -2 & 0 \\ 0 & 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -5 \, a + 2 \, b + 2 \, c \\ a \\ -c \\ b \\ c \end{array}\right] \middle|\,a\text{\texttt{,}}b\text{\texttt{,}}c\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 2 \\ 0 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 2 \\ 0 \\ -1 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 2 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} 3 \, x_{1} &+& 7 \, x_{2} &-& 9 \, x_{3} &+& 7 \, x_{4} &=& 0 \\4 \, x_{1} &+& 9 \, x_{2} &-& 10 \, x_{3} &+& 6 \, x_{4} &=& 0 \\-2 \, x_{1} &-& 5 \, x_{2} &+& 7 \, x_{3} &-& 6 \, x_{4} &=& 0 \\-x_{1} & & &+& x_{3} &-& 3 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 3 & 7 & -9 & 7 & 0 \\ 4 & 9 & -10 & 6 & 0 \\ -2 & -5 & 7 & -6 & 0 \\ -1 & 0 & 1 & -3 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -2 & 0 \\ 0 & 0 & 1 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ 2 \, a \\ 2 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. 
A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 2 \\ 2 \\ 1 \end{array}\right] \right\}$$. Example 3 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} &+& x_{2} &-& 2 \, x_{3} &+& 7 \, x_{4} &=& 0 \\x_{1} &-& x_{2} &-& 5 \, x_{3} &+& 9 \, x_{4} &=& 0 \\ &-& 3 \, x_{2} &-& 5 \, x_{3} &+& 4 \, x_{4} &=& 0 \\ & & 2 \, x_{2} &-& x_{3} &+& 6 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & 1 & -2 & 7 & 0 \\ 1 & -1 & -5 & 9 & 0 \\ 0 & -3 & -5 & 4 & 0 \\ 0 & 2 & -1 & 6 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 2 & 0 \\ 0 & 0 & 1 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ -2 \, a \\ 2 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ -2 \\ 2 \\ 1 \end{array}\right] \right\}$$. Example 4 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} 4 \, x_{1} &-& 6 \, x_{2} &-& 6 \, x_{3} &=& 0 \\4 \, x_{1} &-& 11 \, x_{2} &-& x_{3} &=& 0 \\ & & 4 \, x_{2} &-& 4 \, x_{3} &=& 0 \\x_{1} &-& x_{2} &-& 2 \, x_{3} &=& 0 \\-5 \, x_{1} &+& 12 \, x_{2} &+& 3 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 4 & -6 & -6 & 0 \\ 4 & -11 & -1 & 0 \\ 0 & 4 & -4 & 0 \\ 1 & -1 & -2 & 0 \\ -5 & 12 & 3 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -3 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 3 \, a \\ a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 3 \\ 1 \\ 1 \end{array}\right] \right\}$$. 
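These computations can be reproduced with sympy (a sketch, assuming sympy is available): `rref()` gives the reduced matrix and `nullspace()` gives a basis of the solution space. Using the coefficient matrix of Example 4:

```python
# Sketch: reproducing Example 4 with sympy's rref() and nullspace().
import sympy as sp

A = sp.Matrix([
    [ 4,  -6, -6],
    [ 4, -11, -1],
    [ 0,   4, -4],
    [ 1,  -1, -2],
    [-5,  12,  3],
])

R, pivots = A.rref()
print(R)            # nonzero rows [1, 0, -3] and [0, 1, -1], as in the example
print(pivots)       # pivot columns (0, 1)

basis = A.nullspace()
print(basis)        # one column vector (3, 1, 1) -- matches the stated basis
```

sympy normalizes each basis vector by setting one free variable to 1 and the others to 0, which is exactly the convention used in these examples.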
Example 5 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} x_{1} &-& 5 \, x_{2} &+& 6 \, x_{3} &=& 0 \\x_{1} & & &+& x_{3} &=& 0 \\4 \, x_{1} &+& x_{2} &+& 3 \, x_{3} &=& 0 \\-5 \, x_{1} &+& 4 \, x_{2} &-& 9 \, x_{3} &=& 0 \\ & & 2 \, x_{2} &-& 2 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 1 & -5 & 6 & 0 \\ 1 & 0 & 1 & 0 \\ 4 & 1 & 3 & 0 \\ -5 & 4 & -9 & 0 \\ 0 & 2 & -2 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 1 \\ 1 \end{array}\right] \right\}$$. Example 6 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} & & x_{2} & & &-& 2 \, x_{4} &+& 2 \, x_{5} &=& 0 \\-x_{1} &-& x_{2} &+& 3 \, x_{3} &+& 9 \, x_{4} &+& 7 \, x_{5} &=& 0 \\ & & & & x_{3} &+& 2 \, x_{4} &+& 3 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 0 & 1 & 0 & -2 & 2 & 0 \\ -1 & -1 & 3 & 9 & 7 & 0 \\ 0 & 0 & 1 & 2 & 3 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & -1 & 0 & 0 \\ 0 & 1 & 0 & -2 & 2 & 0 \\ 0 & 0 & 1 & 2 & 3 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} a \\ 2 \, a - 2 \, b \\ -2 \, a - 3 \, b \\ a \\ b \end{array}\right] \middle|\,a\text{\texttt{,}}b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 1 \\ 2 \\ -2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 0 \\ -2 \\ -3 \\ 0 \\ 1 \end{array}\right] \right\}$$. 
Example 7 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} &-& 2 \, x_{2} &-& 7 \, x_{3} &+& x_{4} &=& 0 \\-x_{1} &-& x_{2} &+& x_{3} &+& 5 \, x_{4} &=& 0 \\ & & 2 \, x_{2} &+& 4 \, x_{3} &-& 4 \, x_{4} &=& 0 \\2 \, x_{1} &+& 2 \, x_{2} &-& 2 \, x_{3} &-& 10 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & -2 & -7 & 1 & 0 \\ -1 & -1 & 1 & 5 & 0 \\ 0 & 2 & 4 & -4 & 0 \\ 2 & 2 & -2 & -10 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & -3 & -3 & 0 \\ 0 & 1 & 2 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 3 \, a + 3 \, b \\ -2 \, a + 2 \, b \\ a \\ b \end{array}\right] \middle|\,a\text{\texttt{,}}b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 3 \\ -2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 3 \\ 2 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 8 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} &-& x_{2} &+& 6 \, x_{3} &+& 6 \, x_{4} &=& 0 \\-x_{1} &-& 2 \, x_{2} &+& 7 \, x_{3} &+& 4 \, x_{4} &=& 0 \\ & & 2 \, x_{2} &-& 9 \, x_{3} &-& 7 \, x_{4} &=& 0 \\-x_{1} &-& x_{2} &+& 3 \, x_{3} &+& x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & -1 & 6 & 6 & 0 \\ -1 & -2 & 7 & 4 & 0 \\ 0 & 2 & -9 & -7 & 0 \\ -1 & -1 & 3 & 1 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ -a \\ -a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ -1 \\ -1 \\ 1 \end{array}\right] \right\}$$. 
Example 9 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} 3 \, x_{1} &-& 2 \, x_{2} &-& 3 \, x_{3} &+& 3 \, x_{4} &+& 8 \, x_{5} &=& 0 \\-2 \, x_{1} &-& 3 \, x_{2} &+& 2 \, x_{3} &+& 11 \, x_{4} &-& x_{5} &=& 0 \\-x_{1} &+& 2 \, x_{2} &+& x_{3} &-& 5 \, x_{4} &-& 4 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 3 & -2 & -3 & 3 & 8 & 0 \\ -2 & -3 & 2 & 11 & -1 & 0 \\ -1 & 2 & 1 & -5 & -4 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & -1 & -1 & 2 & 0 \\ 0 & 1 & 0 & -3 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} a + b - 2 \, c \\ 3 \, b + c \\ a \\ b \\ c \end{array}\right] \middle|\,a,b,c\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 1 \\ 0 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ 3 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -2 \\ 1 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 10 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} 2 \, x_{1} &+& 3 \, x_{2} &-& 6 \, x_{3} &=& 0 \\-x_{1} &-& 3 \, x_{2} &+& 6 \, x_{3} &=& 0 \\-2 \, x_{1} &-& 6 \, x_{2} &+& 12 \, x_{3} &=& 0 \\-x_{1} &+& x_{2} &-& 2 \, x_{3} &=& 0 \\-x_{1} &-& x_{2} &+& 2 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 2 & 3 & -6 & 0 \\ -1 & -3 & 6 & 0 \\ -2 & -6 & 12 & 0 \\ -1 & 1 & -2 & 0 \\ -1 & -1 & 2 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & -2 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 2 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. 
A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 2 \\ 1 \end{array}\right] \right\}$$. Example 11 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} &+& 4 \, x_{2} &-& 5 \, x_{3} &+& 5 \, x_{4} &-& 6 \, x_{5} &=& 0 \\ & & & & & & x_{4} &-& x_{5} &=& 0 \\x_{1} &+& 4 \, x_{2} &-& 5 \, x_{3} &-& x_{4} & & &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 4 & -5 & 5 & -6 & 0 \\ 0 & 0 & 0 & 1 & -1 & 0 \\ 1 & 4 & -5 & -1 & 0 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 4 & -5 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -4 \, a + 5 \, b + c \\ a \\ b \\ c \\ c \end{array}\right] \middle|\,a,b,c\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -4 \\ 1 \\ 0 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 5 \\ 0 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ 0 \\ 0 \\ 1 \\ 1 \end{array}\right] \right\}$$. Example 12 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} 7 \, x_{1} &+& 5 \, x_{2} &-& 5 \, x_{3} &-& 7 \, x_{4} &-& 3 \, x_{5} &=& 0 \\-3 \, x_{1} &-& 2 \, x_{2} &+& 2 \, x_{3} &+& 3 \, x_{4} &+& x_{5} &=& 0 \\-3 \, x_{1} &-& 5 \, x_{2} &+& 6 \, x_{3} & & &+& 10 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 7 & 5 & -5 & -7 & -3 & 0 \\ -3 & -2 & 2 & 3 & 1 & 0 \\ -3 & -5 & 6 & 0 & 10 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & -1 & 1 & 0 \\ 0 & 1 & 0 & -3 & 1 & 0 \\ 0 & 0 & 1 & -3 & 3 & 0 \end{array}\right]$ 1. 
The solution space is $$\left\{ \left[\begin{array}{c} a - b \\ 3 \, a - b \\ 3 \, a - 3 \, b \\ a \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 1 \\ 3 \\ 3 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ -1 \\ -3 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 13 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} & & &-& 3 \, x_{3} &-& 8 \, x_{4} &+& 4 \, x_{5} &=& 0 \\-x_{1} &+& x_{2} &+& 3 \, x_{3} &+& 6 \, x_{4} &-& 3 \, x_{5} &=& 0 \\ &-& x_{2} &+& x_{3} &+& 4 \, x_{4} &-& 3 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 0 & -3 & -8 & 4 & 0 \\ -1 & 1 & 3 & 6 & -3 & 0 \\ 0 & -1 & 1 & 4 & -3 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & -2 & -2 & 0 \\ 0 & 1 & 0 & -2 & 1 & 0 \\ 0 & 0 & 1 & 2 & -2 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 2 \, a + 2 \, b \\ 2 \, a - b \\ -2 \, a + 2 \, b \\ a \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 2 \\ 2 \\ -2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 2 \\ -1 \\ 2 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 14 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} -2 \, x_{1} &-& 5 \, x_{2} & & &=& 0 \\ & & x_{2} & & &=& 0 \\ & & 3 \, x_{2} & & &=& 0 \\-x_{1} &+& 8 \, x_{2} & & &=& 0 \\x_{1} & & & & &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} -2 & -5 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 3 & 0 & 0 \\ -1 & 8 & 0 & 0 \\ 1 & 0 & 0 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. 
The solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 0 \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 15 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} x_{1} &+& 3 \, x_{2} &+& 4 \, x_{3} &=& 0 \\-2 \, x_{1} &-& 7 \, x_{2} &-& 9 \, x_{3} &=& 0 \\2 \, x_{1} &+& 7 \, x_{2} &+& 9 \, x_{3} &=& 0 \\ &-& x_{2} &-& x_{3} &=& 0 \\ & & x_{2} &+& x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 1 & 3 & 4 & 0 \\ -2 & -7 & -9 & 0 \\ 2 & 7 & 9 & 0 \\ 0 & -1 & -1 & 0 \\ 0 & 1 & 1 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ -a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ -1 \\ 1 \end{array}\right] \right\}$$. Example 16 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} -x_{1} &-& 5 \, x_{2} &-& 3 \, x_{3} &-& 10 \, x_{4} &+& 8 \, x_{5} &=& 0 \\x_{1} &+& 5 \, x_{2} &+& 2 \, x_{3} &+& 7 \, x_{4} &-& 6 \, x_{5} &=& 0 \\2 \, x_{1} &+& 10 \, x_{2} &-& 2 \, x_{3} &-& 4 \, x_{4} & & &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} -1 & -5 & -3 & -10 & 8 & 0 \\ 1 & 5 & 2 & 7 & -6 & 0 \\ 2 & 10 & -2 & -4 & 0 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 5 & 0 & 1 & -2 & 0 \\ 0 & 0 & 1 & 3 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. 
The solution space is $$\left\{ \left[\begin{array}{c} -5 \, a - b + 2 \, c \\ a \\ -3 \, b + 2 \, c \\ b \\ c \end{array}\right] \middle|\,a,b,c\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -5 \\ 1 \\ 0 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 0 \\ -3 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 2 \\ 0 \\ 2 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 17 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} & & & & x_{3} &-& x_{4} &=& 0 \\-x_{1} &-& 2 \, x_{2} &+& 6 \, x_{3} &-& 3 \, x_{4} &=& 0 \\-3 \, x_{1} &-& 6 \, x_{2} &+& 10 \, x_{3} &-& x_{4} &=& 0 \\-2 \, x_{1} &-& 4 \, x_{2} &+& 12 \, x_{3} &-& 6 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 0 & 0 & 1 & -1 & 0 \\ -1 & -2 & 6 & -3 & 0 \\ -3 & -6 & 10 & -1 & 0 \\ -2 & -4 & 12 & -6 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 2 & 0 & -3 & 0 \\ 0 & 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -2 \, a + 3 \, b \\ a \\ b \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -2 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 3 \\ 0 \\ 1 \\ 1 \end{array}\right] \right\}$$. Example 18 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} -x_{1} &-& 4 \, x_{2} &-& 7 \, x_{3} &-& 6 \, x_{4} &=& 0 \\x_{1} &+& 4 \, x_{2} &+& 6 \, x_{3} &+& 5 \, x_{4} &=& 0 \\-2 \, x_{1} &-& 8 \, x_{2} &-& 12 \, x_{3} &-& 10 \, x_{4} &=& 0 \\3 \, x_{1} &+& 12 \, x_{2} &+& 10 \, x_{3} &+& 7 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. 
$\operatorname{RREF} \left[\begin{array}{cccc|c} -1 & -4 & -7 & -6 & 0 \\ 1 & 4 & 6 & 5 & 0 \\ -2 & -8 & -12 & -10 & 0 \\ 3 & 12 & 10 & 7 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 4 & 0 & -1 & 0 \\ 0 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -4 \, a + b \\ a \\ -b \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -4 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ 0 \\ -1 \\ 1 \end{array}\right] \right\}$$. Example 19 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} & & &+& 2 \, x_{3} &+& 2 \, x_{4} &=& 0 \\ & & x_{2} &+& 2 \, x_{3} &-& x_{4} &=& 0 \\ & & 5 \, x_{2} &+& 10 \, x_{3} &-& 5 \, x_{4} &=& 0 \\x_{1} &-& 7 \, x_{2} &-& 12 \, x_{3} &+& 9 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & 0 & 2 & 2 & 0 \\ 0 & 1 & 2 & -1 & 0 \\ 0 & 5 & 10 & -5 & 0 \\ 1 & -7 & -12 & 9 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 2 & 2 & 0 \\ 0 & 1 & 2 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -2 \, a - 2 \, b \\ -2 \, a + b \\ a \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -2 \\ -2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -2 \\ 1 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 20 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} &+& x_{2} &+& 2 \, x_{3} &+& x_{4} &-& x_{5} &=& 0 \\-x_{1} &-& 3 \, x_{2} &-& 4 \, x_{3} &-& 7 \, x_{4} &-& x_{5} &=& 0 \\ & & x_{2} &+& x_{3} &+& 3 \, x_{4} &+& x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. 
Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 1 & 2 & 1 & -1 & 0 \\ -1 & -3 & -4 & -7 & -1 & 0 \\ 0 & 1 & 1 & 3 & 1 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 1 & -2 & -2 & 0 \\ 0 & 1 & 1 & 3 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a + 2 \, b + 2 \, c \\ -a - 3 \, b - c \\ a \\ b \\ c \end{array}\right] \middle|\,a,b,c\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ -1 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 2 \\ -3 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 2 \\ -1 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 21 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} x_{1} &-& 3 \, x_{2} &-& 11 \, x_{3} &=& 0 \\x_{1} &-& 2 \, x_{2} &-& 8 \, x_{3} &=& 0 \\ &-& 4 \, x_{2} &-& 12 \, x_{3} &=& 0 \\ &-& 2 \, x_{2} &-& 6 \, x_{3} &=& 0 \\ &-& 3 \, x_{2} &-& 9 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 1 & -3 & -11 & 0 \\ 1 & -2 & -8 & 0 \\ 0 & -4 & -12 & 0 \\ 0 & -2 & -6 & 0 \\ 0 & -3 & -9 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -2 & 0 \\ 0 & 1 & 3 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 2 \, a \\ -3 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 2 \\ -3 \\ 1 \end{array}\right] \right\}$$. Example 22 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} 2 \, x_{1} &+& 2 \, x_{2} & & &=& 0 \\-4 \, x_{1} &-& 11 \, x_{2} & & &=& 0 \\-4 \, x_{1} &-& 9 \, x_{2} & & &=& 0 \\ & & 4 \, x_{2} & & &=& 0 \\-x_{1} &+& x_{2} & & &=& 0 \\ \end{alignat*} 1. 
Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 2 & 2 & 0 & 0 \\ -4 & -11 & 0 & 0 \\ -4 & -9 & 0 & 0 \\ 0 & 4 & 0 & 0 \\ -1 & 1 & 0 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 0 \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 23 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} x_{1} &-& x_{2} &+& 2 \, x_{3} &=& 0 \\-2 \, x_{1} &+& 3 \, x_{2} &-& 5 \, x_{3} &=& 0 \\ &-& 2 \, x_{2} &+& 2 \, x_{3} &=& 0 \\5 \, x_{1} &-& 6 \, x_{2} &+& 11 \, x_{3} &=& 0 \\3 \, x_{1} &-& 7 \, x_{2} &+& 10 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 1 & -1 & 2 & 0 \\ -2 & 3 & -5 & 0 \\ 0 & -2 & 2 & 0 \\ 5 & -6 & 11 & 0 \\ 3 & -7 & 10 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 1 \\ 1 \end{array}\right] \right\}$$. Example 24 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} x_{1} &-& 5 \, x_{2} &+& 10 \, x_{3} &=& 0 \\ & & x_{2} &-& 2 \, x_{3} &=& 0 \\ & & x_{2} &-& 2 \, x_{3} &=& 0 \\ & & x_{2} &-& 2 \, x_{3} &=& 0 \\x_{1} &-& 6 \, x_{2} &+& 12 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. 
$\operatorname{RREF} \left[\begin{array}{ccc|c} 1 & -5 & 10 & 0 \\ 0 & 1 & -2 & 0 \\ 0 & 1 & -2 & 0 \\ 0 & 1 & -2 & 0 \\ 1 & -6 & 12 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 0 & 0 \\ 0 & 1 & -2 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 2 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 2 \\ 1 \end{array}\right] \right\}$$. Example 25 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} &-& x_{2} &+& 5 \, x_{3} &-& 8 \, x_{4} &=& 0 \\2 \, x_{1} &-& x_{2} &+& 5 \, x_{3} &-& 8 \, x_{4} &=& 0 \\ &-& x_{2} &+& 6 \, x_{3} &-& 10 \, x_{4} &=& 0 \\ & & 2 \, x_{2} &-& 7 \, x_{3} &+& 10 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & -1 & 5 & -8 & 0 \\ 2 & -1 & 5 & -8 & 0 \\ 0 & -1 & 6 & -10 & 0 \\ 0 & 2 & -7 & 10 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & -2 & 0 \\ 0 & 0 & 1 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 2 \, a \\ 2 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 2 \\ 2 \\ 1 \end{array}\right] \right\}$$. Example 26 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} 4 \, x_{1} &-& 7 \, x_{2} &+& 11 \, x_{3} &=& 0 \\x_{1} &-& 3 \, x_{2} &+& 4 \, x_{3} &=& 0 \\ & & 4 \, x_{2} &-& 4 \, x_{3} &=& 0 \\-x_{1} &-& x_{2} & & &=& 0 \\-x_{1} &+& x_{2} &-& 2 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. 
$\operatorname{RREF} \left[\begin{array}{ccc|c} 4 & -7 & 11 & 0 \\ 1 & -3 & 4 & 0 \\ 0 & 4 & -4 & 0 \\ -1 & -1 & 0 & 0 \\ -1 & 1 & -2 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 1 \\ 1 \end{array}\right] \right\}$$. Example 27 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} -2 \, x_{1} &+& 6 \, x_{2} &+& 7 \, x_{3} &=& 0 \\3 \, x_{1} &-& 9 \, x_{2} &-& 11 \, x_{3} &=& 0 \\-2 \, x_{1} &+& 6 \, x_{2} &+& 3 \, x_{3} &=& 0 \\-2 \, x_{1} &+& 6 \, x_{2} &+& 8 \, x_{3} &=& 0 \\-2 \, x_{1} &+& 6 \, x_{2} &+& 3 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} -2 & 6 & 7 & 0 \\ 3 & -9 & -11 & 0 \\ -2 & 6 & 3 & 0 \\ -2 & 6 & 8 & 0 \\ -2 & 6 & 3 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & -3 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 3 \, a \\ a \\ 0 \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 3 \\ 1 \\ 0 \end{array}\right] \right\}$$. Example 28 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} -x_{1} &-& 3 \, x_{2} &+& 4 \, x_{3} &=& 0 \\ & & x_{2} &-& x_{3} &=& 0 \\x_{1} &-& x_{2} & & &=& 0 \\-x_{1} &+& 6 \, x_{2} &-& 5 \, x_{3} &=& 0 \\-2 \, x_{1} &+& 11 \, x_{2} &-& 9 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. 
$\operatorname{RREF} \left[\begin{array}{ccc|c} -1 & -3 & 4 & 0 \\ 0 & 1 & -1 & 0 \\ 1 & -1 & 0 & 0 \\ -1 & 6 & -5 & 0 \\ -2 & 11 & -9 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & -1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} a \\ a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 1 \\ 1 \\ 1 \end{array}\right] \right\}$$. Example 29 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} &-& 8 \, x_{2} &+& 4 \, x_{3} &+& 4 \, x_{4} &+& 9 \, x_{5} &=& 0 \\ & & x_{2} & & &-& x_{4} &-& x_{5} &=& 0 \\ & & 3 \, x_{2} &+& x_{3} &-& 4 \, x_{4} &-& 3 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & -8 & 4 & 4 & 9 & 0 \\ 0 & 1 & 0 & -1 & -1 & 0 \\ 0 & 3 & 1 & -4 & -3 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & -1 & -1 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -b \\ a + b \\ a \\ a \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 1 \\ 1 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 30 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} &+& x_{2} &-& 3 \, x_{3} &+& 7 \, x_{4} &=& 0 \\ & & x_{2} &-& 4 \, x_{3} &+& 7 \, x_{4} &=& 0 \\2 \, x_{1} &+& x_{2} &-& x_{3} &+& 5 \, x_{4} &=& 0 \\ &-& x_{2} &-& x_{3} &+& 3 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. 
$\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & 1 & -3 & 7 & 0 \\ 0 & 1 & -4 & 7 & 0 \\ 2 & 1 & -1 & 5 & 0 \\ 0 & -1 & -1 & 3 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 0 & 2 & 0 \\ 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 1 & -2 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -2 \, a \\ a \\ 2 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -2 \\ 1 \\ 2 \\ 1 \end{array}\right] \right\}$$. Example 31 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} -x_{1} &+& 4 \, x_{2} &-& 10 \, x_{3} &-& 6 \, x_{4} &+& 8 \, x_{5} &=& 0 \\-2 \, x_{1} &+& x_{2} &+& x_{3} &+& 2 \, x_{4} &+& 2 \, x_{5} &=& 0 \\x_{1} &-& 2 \, x_{2} &+& 4 \, x_{3} &+& 2 \, x_{4} &-& 4 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} -1 & 4 & -10 & -6 & 8 & 0 \\ -2 & 1 & 1 & 2 & 2 & 0 \\ 1 & -2 & 4 & 2 & -4 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & -2 & -2 & 0 & 0 \\ 0 & 1 & -3 & -2 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 2 \, a + 2 \, b \\ 3 \, a + 2 \, b - 2 \, c \\ a \\ b \\ c \end{array}\right] \middle|\,a,b,c\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 2 \\ 3 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 2 \\ 2 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 0 \\ -2 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 32 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} &-& 8 \, x_{2} &+& x_{3} &+& 7 \, x_{4} &=& 0 \\5 \, x_{1} &-& 9 \, x_{2} &+& 5 \, x_{3} &+& 4 \, x_{4} &=& 0 \\ & & 2 \, x_{2} & & &-& 2 \, x_{4} &=& 0 \\3 \, x_{1} &-& x_{2} &+& 3 \, x_{3} &-& 2 \, x_{4} &=& 0 \\ \end{alignat*} 1. 
Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & -8 & 1 & 7 & 0 \\ 5 & -9 & 5 & 4 & 0 \\ 0 & 2 & 0 & -2 & 0 \\ 3 & -1 & 3 & -2 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 1 & -1 & 0 \\ 0 & 1 & 0 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a + b \\ b \\ a \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ 1 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 33 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} & & &-& 3 \, x_{3} &-& 8 \, x_{4} &-& 2 \, x_{5} &=& 0 \\ & & x_{2} &+& x_{3} &+& 2 \, x_{4} &+& 4 \, x_{5} &=& 0 \\ &-& x_{2} & & & & &-& 3 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 0 & -3 & -8 & -2 & 0 \\ 0 & 1 & 1 & 2 & 4 & 0 \\ 0 & -1 & 0 & 0 & -3 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & -2 & 1 & 0 \\ 0 & 1 & 0 & 0 & 3 & 0 \\ 0 & 0 & 1 & 2 & 1 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 2 \, a - b \\ -3 \, b \\ -2 \, a - b \\ a \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 2 \\ 0 \\ -2 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ -3 \\ -1 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 34 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} x_{1} &+& 2 \, x_{2} &-& 3 \, x_{3} &=& 0 \\2 \, x_{1} &+& 11 \, x_{2} &-& 6 \, x_{3} &=& 0 \\x_{1} &+& 3 \, x_{2} &-& 3 \, x_{3} &=& 0 \\ & & 3 \, x_{2} & & &=& 0 \\3 \, x_{1} &+& 11 \, x_{2} &-& 9 \, x_{3} &=& 0 \\ \end{alignat*} 1. 
Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 1 & 2 & -3 & 0 \\ 2 & 11 & -6 & 0 \\ 1 & 3 & -3 & 0 \\ 0 & 3 & 0 & 0 \\ 3 & 11 & -9 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -3 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 3 \, a \\ 0 \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 3 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 35 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} &+& 2 \, x_{2} &+& 11 \, x_{3} &-& 11 \, x_{4} &-& 12 \, x_{5} &=& 0 \\-x_{1} &-& x_{2} &-& 8 \, x_{3} &+& 8 \, x_{4} &+& 8 \, x_{5} &=& 0 \\x_{1} &+& 2 \, x_{2} &+& 11 \, x_{3} &-& 10 \, x_{4} &-& 11 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 2 & 11 & -11 & -12 & 0 \\ -1 & -1 & -8 & 8 & 8 & 0 \\ 1 & 2 & 11 & -10 & -11 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 5 & 0 & 1 & 0 \\ 0 & 1 & 3 & 0 & -1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -5 \, a - b \\ -3 \, a + b \\ a \\ -b \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -5 \\ -3 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 0 \\ -1 \\ 1 \end{array}\right] \right\}$$. Example 36 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} -2 \, x_{1} &-& 5 \, x_{2} &-& 8 \, x_{3} &=& 0 \\x_{1} &-& 4 \, x_{2} &-& 9 \, x_{3} &=& 0 \\x_{1} & & &-& x_{3} &=& 0 \\-x_{1} &+& 2 \, x_{2} &+& 5 \, x_{3} &=& 0 \\-x_{1} & & &+& x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. 
Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} -2 & -5 & -8 & 0 \\ 1 & -4 & -9 & 0 \\ 1 & 0 & -1 & 0 \\ -1 & 2 & 5 & 0 \\ -1 & 0 & 1 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} a \\ -2 \, a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 1 \\ -2 \\ 1 \end{array}\right] \right\}$$. Example 37 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} -3 \, x_{1} &+& 4 \, x_{2} &-& 2 \, x_{3} &-& 8 \, x_{4} &+& 3 \, x_{5} &=& 0 \\x_{1} & & &+& 3 \, x_{3} &-& x_{4} &+& 11 \, x_{5} &=& 0 \\4 \, x_{1} &-& 5 \, x_{2} &+& 3 \, x_{3} &+& 10 \, x_{4} &-& 2 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} -3 & 4 & -2 & -8 & 3 & 0 \\ 1 & 0 & 3 & -1 & 11 & 0 \\ 4 & -5 & 3 & 10 & -2 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & 2 & -1 & 0 \\ 0 & 1 & 0 & -1 & 2 & 0 \\ 0 & 0 & 1 & -1 & 4 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -2 \, a + b \\ a - 2 \, b \\ a - 4 \, b \\ a \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -2 \\ 1 \\ 1 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ -2 \\ -4 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 38 🔗 Consider the homogeneous system of equations \begin{alignat*}{5} -x_{1} &-& 5 \, x_{2} &-& x_{3} &+& 7 \, x_{4} &=& 0 \\x_{1} &+& 5 \, x_{2} &-& 2 \, x_{3} &+& 5 \, x_{4} &=& 0 \\-x_{1} &-& 5 \, x_{2} &+& x_{3} &-& x_{4} &=& 0 \\ & & &-& 3 \, x_{3} &+& 12 \, x_{4} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. 
$\operatorname{RREF} \left[\begin{array}{cccc|c} -1 & -5 & -1 & 7 & 0 \\ 1 & 5 & -2 & 5 & 0 \\ -1 & -5 & 1 & -1 & 0 \\ 0 & 0 & -3 & 12 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 5 & 0 & -3 & 0 \\ 0 & 0 & 1 & -4 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -5 \, a + 3 \, b \\ a \\ 4 \, b \\ b \end{array}\right] \middle|\,a,b\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -5 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 3 \\ 0 \\ 4 \\ 1 \end{array}\right] \right\}$$. Example 39 🔗 Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} &-& 2 \, x_{2} &+& 5 \, x_{3} &+& x_{4} & & &=& 0 \\x_{1} &-& x_{2} &+& 3 \, x_{3} & & &+& x_{5} &=& 0 \\ & & 3 \, x_{2} &-& 6 \, x_{3} &-& 3 \, x_{4} &+& 3 \, x_{5} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & -2 & 5 & 1 & 0 & 0 \\ 1 & -1 & 3 & 0 & 1 & 0 \\ 0 & 3 & -6 & -3 & 3 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 1 & -1 & 2 & 0 \\ 0 & 1 & -2 & -1 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} -a + b - 2 \, c \\ 2 \, a + b - c \\ a \\ b \\ c \end{array}\right] \middle|\,a,b,c\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 2 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ 1 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -2 \\ -1 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\}$$. 
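A claimed basis can always be verified directly: every vector must satisfy the original system, and there must be one vector per free variable. A short SymPy sketch (assumed tooling, not part of the original example) checking the answer to Example 39:

```python
import sympy as sp

# Coefficient matrix of Example 39
A = sp.Matrix([
    [1, -2,  5,  1, 0],
    [1, -1,  3,  0, 1],
    [0,  3, -6, -3, 3],
])

# The claimed basis of the solution space
basis = [
    sp.Matrix([-1,  2, 1, 0, 0]),
    sp.Matrix([ 1,  1, 0, 1, 0]),
    sp.Matrix([-2, -1, 0, 0, 1]),
]

# Each vector must solve the homogeneous system
for v in basis:
    assert A * v == sp.zeros(3, 1)

# One basis vector per free variable: 5 columns minus 2 pivot columns
assert len(basis) == A.cols - len(A.rref()[1])
```

Both checks pass, confirming the three vectors span the solution space with the right dimension.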
Example 40 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} 3 \, x_{1} &+& 10 \, x_{2} &-& 6 \, x_{3} &=& 0 \\-x_{1} &-& 3 \, x_{2} &+& 2 \, x_{3} &=& 0 \\x_{1} &-& x_{2} &-& 2 \, x_{3} &=& 0 \\3 \, x_{1} &+& 10 \, x_{2} &-& 6 \, x_{3} &=& 0 \\ & & 5 \, x_{2} & & &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 3 & 10 & -6 & 0 \\ -1 & -3 & 2 & 0 \\ 1 & -1 & -2 & 0 \\ 3 & 10 & -6 & 0 \\ 0 & 5 & 0 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -2 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 2 \, a \\ 0 \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 2 \\ 0 \\ 1 \end{array}\right] \right\}$$. Example 41 🔗 Consider the homogeneous system of equations \begin{alignat*}{4} x_{1} &-& 4 \, x_{2} &-& 6 \, x_{3} &=& 0 \\x_{1} &-& 3 \, x_{2} &-& 5 \, x_{3} &=& 0 \\ & & & & 0 &=& 0 \\ &-& 5 \, x_{2} &-& 5 \, x_{3} &=& 0 \\x_{1} &-& x_{2} &-& 3 \, x_{3} &=& 0 \\ \end{alignat*} 1. Find the solution space of this system. 2. Find a basis of the solution space. $\operatorname{RREF} \left[\begin{array}{ccc|c} 1 & -4 & -6 & 0 \\ 1 & -3 & -5 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & -5 & -5 & 0 \\ 1 & -1 & -3 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & -2 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$ 1. The solution space is $$\left\{ \left[\begin{array}{c} 2 \, a \\ -a \\ a \end{array}\right] \middle|\,a\in\mathbb{R}\right\}$$ 2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 2 \\ -1 \\ 1 \end{array}\right] \right\}$$. 
Example 42

Consider the homogeneous system of equations \begin{alignat*}{5} x_{1} &+& x_{2} &+& x_{3} &-& 2 \, x_{4} &=& 0 \\ -x_{1} & & &-& 4 \, x_{3} &+& 4 \, x_{4} &=& 0 \\ & & x_{2} &-& 3 \, x_{3} &+& 3 \, x_{4} &=& 0 \\ 5 \, x_{1} &+& 3 \, x_{2} &+& 11 \, x_{3} &-& 12 \, x_{4} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{cccc|c} 1 & 1 & 1 & -2 & 0 \\ -1 & 0 & -4 & 4 & 0 \\ 0 & 1 & -3 & 3 & 0 \\ 5 & 3 & 11 & -12 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 4 & 0 & 0 \\ 0 & 1 & -3 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} -4 \, a \\ 3 \, a \\ a \\ 0 \end{array}\right] \middle|\, a \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -4 \\ 3 \\ 1 \\ 0 \end{array}\right] \right\}$$.

Example 43

Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} & & &-& 5 \, x_{3} &+& 9 \, x_{4} &-& 9 \, x_{5} &=& 0 \\ & & x_{2} &-& 5 \, x_{3} &+& 3 \, x_{4} &-& 11 \, x_{5} &=& 0 \\ -x_{1} & & &+& 6 \, x_{3} &-& 10 \, x_{4} &+& 11 \, x_{5} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 0 & -5 & 9 & -9 & 0 \\ 0 & 1 & -5 & 3 & -11 & 0 \\ -1 & 0 & 6 & -10 & 11 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & 4 & 1 & 0 \\ 0 & 1 & 0 & -2 & -1 & 0 \\ 0 & 0 & 1 & -1 & 2 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} -4 \, a - b \\ 2 \, a + b \\ a - 2 \, b \\ a \\ b \end{array}\right] \middle|\, a, b \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -4 \\ 2 \\ 1 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ -2 \\ 0 \\ 1 \end{array}\right] \right\}$$.
Example 44

Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} &-& x_{2} &-& 6 \, x_{3} &+& 4 \, x_{4} & & &=& 0 \\ & & x_{2} &+& 2 \, x_{3} &-& 2 \, x_{4} &-& x_{5} &=& 0 \\ -2 \, x_{1} &-& x_{2} &+& 6 \, x_{3} &-& 2 \, x_{4} &+& 3 \, x_{5} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & -1 & -6 & 4 & 0 & 0 \\ 0 & 1 & 2 & -2 & -1 & 0 \\ -2 & -1 & 6 & -2 & 3 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & -4 & 2 & -1 & 0 \\ 0 & 1 & 2 & -2 & -1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} 4 \, a - 2 \, b + c \\ -2 \, a + 2 \, b + c \\ a \\ b \\ c \end{array}\right] \middle|\, a, b, c \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 4 \\ -2 \\ 1 \\ 0 \\ 0 \end{array}\right] , \left[\begin{array}{c} -2 \\ 2 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ 1 \\ 0 \\ 0 \\ 1 \end{array}\right] \right\}$$.

Example 45

Consider the homogeneous system of equations \begin{alignat*}{4} & & 5 \, x_{2} &-& 10 \, x_{3} &=& 0 \\ x_{1} &+& 6 \, x_{2} &-& 11 \, x_{3} &=& 0 \\ &-& 4 \, x_{2} &+& 8 \, x_{3} &=& 0 \\ x_{1} &+& x_{2} &-& x_{3} &=& 0 \\ x_{1} & & &+& x_{3} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{ccc|c} 0 & 5 & -10 & 0 \\ 1 & 6 & -11 & 0 \\ 0 & -4 & 8 & 0 \\ 1 & 1 & -1 & 0 \\ 1 & 0 & 1 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & -2 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ 2 \, a \\ a \end{array}\right] \middle|\, a \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 2 \\ 1 \end{array}\right] \right\}$$.
Example 46

Consider the homogeneous system of equations \begin{alignat*}{6} & & x_{2} & & &+& 4 \, x_{4} &+& 3 \, x_{5} &=& 0 \\ -x_{1} &-& x_{2} &-& 2 \, x_{3} &-& x_{4} &-& 8 \, x_{5} &=& 0 \\ -2 \, x_{1} &-& x_{2} &-& 3 \, x_{3} &+& 2 \, x_{4} &-& 11 \, x_{5} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{ccccc|c} 0 & 1 & 0 & 4 & 3 & 0 \\ -1 & -1 & -2 & -1 & -8 & 0 \\ -2 & -1 & -3 & 2 & -11 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & -3 & 1 & 0 \\ 0 & 1 & 0 & 4 & 3 & 0 \\ 0 & 0 & 1 & 0 & 2 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} 3 \, a - b \\ -4 \, a - 3 \, b \\ -2 \, b \\ a \\ b \end{array}\right] \middle|\, a, b \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 3 \\ -4 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ -3 \\ -2 \\ 0 \\ 1 \end{array}\right] \right\}$$.

Example 47

Consider the homogeneous system of equations \begin{alignat*}{5} -2 \, x_{1} & & & & &+& 2 \, x_{4} &=& 0 \\ -x_{1} &-& x_{2} & & &-& 2 \, x_{4} &=& 0 \\ 3 \, x_{1} &+& 2 \, x_{2} & & &+& 3 \, x_{4} &=& 0 \\ -x_{1} & & & & &+& x_{4} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{cccc|c} -2 & 0 & 0 & 2 & 0 \\ -1 & -1 & 0 & -2 & 0 \\ 3 & 2 & 0 & 3 & 0 \\ -1 & 0 & 0 & 1 & 0 \end{array}\right] = \left[\begin{array}{cccc|c} 1 & 0 & 0 & -1 & 0 \\ 0 & 1 & 0 & 3 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} b \\ -3 \, b \\ a \\ b \end{array}\right] \middle|\, a, b \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ 0 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} 1 \\ -3 \\ 0 \\ 1 \end{array}\right] \right\}$$.
Example 48

Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} & & &+& 2 \, x_{3} &-& x_{4} &+& x_{5} &=& 0 \\ & & x_{2} &-& 3 \, x_{3} &+& 5 \, x_{4} & & &=& 0 \\ -4 \, x_{1} &-& 2 \, x_{2} &-& x_{3} &-& 7 \, x_{4} &-& 5 \, x_{5} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 0 & 2 & -1 & 1 & 0 \\ 0 & 1 & -3 & 5 & 0 & 0 \\ -4 & -2 & -1 & -7 & -5 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & 1 & 3 & 0 \\ 0 & 1 & 0 & 2 & -3 & 0 \\ 0 & 0 & 1 & -1 & -1 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} -a - 3 \, b \\ -2 \, a + 3 \, b \\ a + b \\ a \\ b \end{array}\right] \middle|\, a, b \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ -2 \\ 1 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -3 \\ 3 \\ 1 \\ 0 \\ 1 \end{array}\right] \right\}$$.

Example 49

Consider the homogeneous system of equations \begin{alignat*}{6} x_{1} &+& x_{2} &-& 9 \, x_{3} &+& 10 \, x_{4} &+& 9 \, x_{5} &=& 0 \\ & & x_{2} &-& 5 \, x_{3} &+& 6 \, x_{4} &+& 4 \, x_{5} &=& 0 \\ x_{1} & & &-& 3 \, x_{3} &+& 3 \, x_{4} &+& 4 \, x_{5} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{ccccc|c} 1 & 1 & -9 & 10 & 9 & 0 \\ 0 & 1 & -5 & 6 & 4 & 0 \\ 1 & 0 & -3 & 3 & 4 & 0 \end{array}\right] = \left[\begin{array}{ccccc|c} 1 & 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 & -1 & 0 \\ 0 & 0 & 1 & -1 & -1 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} -b \\ -a + b \\ a + b \\ a \\ b \end{array}\right] \middle|\, a, b \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} 0 \\ -1 \\ 1 \\ 1 \\ 0 \end{array}\right] , \left[\begin{array}{c} -1 \\ 1 \\ 1 \\ 0 \\ 1 \end{array}\right] \right\}$$.
Example 50

Consider the homogeneous system of equations \begin{alignat*}{4} 4 \, x_{1} &-& 12 \, x_{2} &+& 4 \, x_{3} &=& 0 \\ 3 \, x_{1} &-& 5 \, x_{2} &+& 3 \, x_{3} &=& 0 \\ x_{1} &-& 5 \, x_{2} &+& x_{3} &=& 0 \\ 3 \, x_{1} &-& 10 \, x_{2} &+& 3 \, x_{3} &=& 0 \\ x_{1} &-& 2 \, x_{2} &+& x_{3} &=& 0 \\ \end{alignat*}

1. Find the solution space of this system.
2. Find a basis of the solution space.

$\operatorname{RREF} \left[\begin{array}{ccc|c} 4 & -12 & 4 & 0 \\ 3 & -5 & 3 & 0 \\ 1 & -5 & 1 & 0 \\ 3 & -10 & 3 & 0 \\ 1 & -2 & 1 & 0 \end{array}\right] = \left[\begin{array}{ccc|c} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$

1. The solution space is $$\left\{ \left[\begin{array}{c} -a \\ 0 \\ a \end{array}\right] \middle|\, a \in \mathbb{R}\right\}$$
2. A basis of the solution space is $$\left\{ \left[\begin{array}{c} -1 \\ 0 \\ 1 \end{array}\right] \right\}$$.
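Computations like the ones in these examples are easy to check with a computer algebra system. As a sketch (assuming SymPy is available; this is not part of the exercises), the basis found in Example 50 is exactly the null-space basis SymPy reports:

```python
from sympy import Matrix

# Coefficient matrix of the homogeneous system in Example 50
A = Matrix([
    [4, -12, 4],
    [3,  -5, 3],
    [1,  -5, 1],
    [3, -10, 3],
    [1,  -2, 1],
])

# rref() returns the reduced row echelon form and the pivot columns
R, pivots = A.rref()

# nullspace() returns a basis of the solution space of A x = 0,
# with one vector per free column
basis = A.nullspace()
```

Here `pivots` is `(0, 1)`, so $x_3$ is the free variable, and `basis` consists of the single vector $(-1, 0, 1)^T$, matching the answer above.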
https://eprints.soton.ac.uk/424998/
University of Southampton Institutional Repository

# A note on a system theoretic approach to a conjecture by Peller-Khrushchev: The general case

Ober, Raimund (1990) A note on a system theoretic approach to a conjecture by Peller-Khrushchev: The general case. IMA Journal of Mathematical Control and Information, 7 (1), 35-45.

Record type: Article

## Abstract

Based on the construction of infinite-dimensional balanced realizations, an alternative solution to the following inverse spectral problem is presented. Given a decreasing sequence of positive numbers (σn)n≥1 (i.e. σ1 ≥ σ2 ≥ σ3 ≥ ... ≥ 0), does there exist a Hankel operator whose sequence of singular values is (σn)n≥1? This paper is an extension of a previously published paper in which the same approach was taken in the case of a monotonically decreasing sequence (σn)n≥1.

Full text not available from this repository.

Published date: 1 March 1990

## Identifiers

Local EPrints ID: 424998
URI: http://eprints.soton.ac.uk/id/eprint/424998
ISSN: 0265-0754
PURE UUID: 99c27f68-68ab-4df8-ac38-701e633d05e0
ORCID for Raimund Ober: orcid.org/0000-0002-1290-7430

## Catalogue record

Date deposited: 09 Oct 2018 16:30

## Contributors

Author: Raimund Ober
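The inverse spectral problem in the abstract has a simple finite-dimensional analogue that is easy to experiment with numerically. The sketch below (my own illustration using NumPy/SciPy, not taken from the paper) builds a finite Hankel matrix, which is constant along its anti-diagonals, and computes its singular values, which always come out as a non-increasing nonnegative sequence:

```python
import numpy as np
from scipy.linalg import hankel

# hankel(c, r) builds the matrix with first column c and last row r,
# so H[i, j] depends only on i + j (constant anti-diagonals)
H = hankel([1, 2, 3, 4], [4, 5, 6, 7])

# Singular values, returned by NumPy in non-increasing order
sigma = np.linalg.svd(H, compute_uv=False)
```

The operator-theoretic question — which infinite decreasing sequences arise as the singular values of a Hankel operator — is what the paper addresses via balanced realizations.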
https://arxiv.org/abs/1905.05094v1
# Orthogonal tensor decomposition and orbit closures from a linear algebraic perspective

Abstract: We study orthogonal decompositions of symmetric and ordinary tensors using methods from linear algebra. For the field of real numbers we show that the sets of decomposable tensors can be defined by equations of degree 2. This gives a new proof of some of the results of Robeva and Boralevi et al. Orthogonal decompositions over the field of complex numbers had not been studied previously; we give an explicit description of the set of decomposable tensors using polynomial equalities and inequalities, and we begin a study of their closures. The main open problem that arises from this work is to obtain a complete description of the closures. This question is akin to that of characterizing border rank of tensors in algebraic complexity. We give partial results using in particular a connection with approximate simultaneous diagonalization (the so-called "ASD property").

Subjects: Rings and Algebras (math.RA); Computational Complexity (cs.CC); Algebraic Geometry (math.AG)

Cite as: arXiv:1905.05094 [math.RA] (or arXiv:1905.05094v1 [math.RA] for this version)

## Submission history

From: Pascal Koiran [view email]
[v1] Mon, 13 May 2019 15:38:09 UTC (33 KB)
[v2] Mon, 10 Jun 2019 19:51:01 UTC (33 KB)
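For intuition (my own illustration, not from the paper): in the simplest case of a real symmetric matrix — an order-2 symmetric tensor — an orthogonal decomposition always exists and is just the spectral decomposition $A=\sum_i \lambda_i v_i v_i^T$ with orthonormal $v_i$. For tensors of order 3 and higher, only special tensors admit such a decomposition, which is why the sets studied in the paper are proper subsets cut out by polynomial conditions.

```python
import numpy as np

# Order-2 case: every real symmetric matrix is orthogonally decomposable
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh returns eigenvalues and a matrix V with orthonormal columns v_i
eigvals, V = np.linalg.eigh(A)

# Rebuild A as a sum of rank-one terms lambda_i * v_i v_i^T
A_rebuilt = sum(lam * np.outer(V[:, i], V[:, i])
                for i, lam in enumerate(eigvals))
```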
https://billy-inn.github.io/
Notes on Convex Optimization (5): Newton's Method

For $x\in\mathbf{dom}\ f$, the vector $$\Delta x_{nt} = -\nabla^2 f(x)^{-1} \nabla f(x)$$ is called the Newton step (for $f$, at $x$).

Minimizer of second-order approximation

The second-order Taylor approximation $\hat f$ of $f$ at $x$ is $$\hat f(x+v) = f(x) + \nabla f(x)^T v + \frac12 v^T \nabla^2 f(x) v, \tag{1} \label{eq:1}$$ which is a convex quadratic function of $v$, and is minimized when $v=\Delta x_{nt}$. Thus, the Newton step $\Delta x_{nt}$ is what should be added to the point $x$ to minimize the second-order approximation of $f$ at $x$.

Notes on Convex Optimization (4): Gradient Descent Method

Descent methods

A descent method produces iterates $x^{(k+1)} = x^{(k)} + t^{(k)} \Delta x^{(k)}$, where:

- $f(x^{(k+1)}) < f(x^{(k)})$
- $\Delta x$ is the step or search direction; $t$ is the step size or step length
- from convexity, $\nabla f(x)^T \Delta x < 0$

General descent method. Given a starting point $x \in \mathbf{dom}\enspace f$, repeat:

- Determine a descent direction $\Delta x$.
- Line search. Choose a step size $t > 0$.
- Update. $x := x + t\Delta x$.

until a stopping criterion is satisfied.

Notes on Convex Optimization (3): Unconstrained Minimization Problems

Unconstrained optimization problems are defined as follows: $$\text{minimize}\quad f(x) \tag{1} \label{eq:1}$$ where $f: \mathbf{R}^n \rightarrow \mathbf{R}$ is convex and twice continuously differentiable (which implies that $\mathbf{dom}\enspace f$ is open). We denote the optimal value $\inf_x f(x)=f(x^\ast)$ by $p^\ast$. Since $f$ is differentiable and convex, a necessary and sufficient condition for a point $x^\ast$ to be optimal is $$\nabla f(x^\ast)=0. \tag{2} \label{eq:2}$$ Thus, solving the unconstrained minimization problem \eqref{eq:1} is the same as finding a solution of \eqref{eq:2}, which is a set of $n$ equations in the $n$ variables $x_1, \dots, x_n$. Usually, the problem must be solved by an iterative algorithm.
By this we mean an algorithm that computes a sequence of points $x^{(0)}, x^{(1)}, \dots \in \mathbf{dom}\enspace f$ with $f(x^{(k)})\rightarrow p^\ast$ as $k\rightarrow\infty$. The algorithm is terminated when $f(x^{(k)}) - p^\ast \le \epsilon$, where $\epsilon>0$ is some specified tolerance.

[Notes on Mathematics for ESL] Chapter 10: Boosting and Additive Trees

10.5 Why Exponential Loss?

Derivation of Equation (10.16)

Since $Y\in\{-1,1\}$, we can expand the expectation as follows: $$E(e^{-Yf(x)}\mid x) = \Pr(Y=1\mid x)\,e^{-f(x)} + \Pr(Y=-1\mid x)\,e^{f(x)}.$$ In order to minimize the expectation, we set the derivative w.r.t. $f(x)$ equal to zero: $$-\Pr(Y=1\mid x)\,e^{-f(x)} + \Pr(Y=-1\mid x)\,e^{f(x)} = 0,$$ which gives: $$f^\ast(x) = \frac12 \log\frac{\Pr(Y=1\mid x)}{\Pr(Y=-1\mid x)}.$$

[Notes on Mathematics for ESL] Chapter 6: Kernel Smoothing Methods

6.1 One-Dimensional Kernel Smoothers

Notes on Local Linear Regression

Locally weighted regression solves a separate weighted least squares problem at each target point $x_0$: $$\min_{\alpha(x_0),\,\beta(x_0)} \sum_{i=1}^N K_\lambda(x_0, x_i)\left[y_i - \alpha(x_0) - \beta(x_0)x_i\right]^2.$$ The estimate is $\hat f(x_0)=\hat\alpha(x_0)+\hat\beta(x_0)x_0$. Define the vector-valued function $b(x)^T=(1,x)$. Let $\mathbf{B}$ be the $N \times 2$ regression matrix with $i$th row $b(x_i)^T$, $\mathbf{W}(x_0)$ the $N\times N$ diagonal matrix with $i$th diagonal element $K_\lambda (x_0, x_i)$, and $\theta=(\alpha(x_0), \beta(x_0))^T$. Then the above optimization problem can be rewritten as $$\min_\theta\ (\mathbf{y}-\mathbf{B}\theta)^T\mathbf{W}(x_0)(\mathbf{y}-\mathbf{B}\theta).$$ Setting the derivative w.r.t. $\theta$ to zero, we get $$\hat\theta = (\mathbf{B}^T\mathbf{W}(x_0)\mathbf{B})^{-1}\mathbf{B}^T\mathbf{W}(x_0)\mathbf{y}.$$

[Notes on Mathematics for ESL] Chapter 5: Basis Expansions and Regularization

5.4 Smoothing Splines

Derivation of Equation (5.12)

Setting the derivative of Equation (5.11) to zero, we get $$-2\mathbf{N}^T(\mathbf{y}-\mathbf{N}\theta) + 2\lambda\Omega_N\theta = 0.$$ Putting the terms related to $\theta$ on one side and the others on the other side, we get $$(\mathbf{N}^T\mathbf{N}+\lambda\Omega_N)\theta = \mathbf{N}^T\mathbf{y}.$$ Multiplying both sides by the inverse of $\mathbf{N}^T\mathbf{N}+\lambda\Omega_N$ completes the derivation of Equation (5.12).

Notes on Convex Optimization (2): Convex Functions 1.
Basic Properties and Examples

1.1 Definition

$f:\mathbb{R}^n \rightarrow \mathbb R$ is convex if $\mathbf{dom}\ f$ is a convex set and $$f(\theta x + (1-\theta)y) \le \theta f(x) + (1-\theta) f(y)$$ for all $x,y\in \mathbf{dom}\ f$, $0\le\theta\le1$.

- $f$ is concave if $-f$ is convex
- $f$ is strictly convex if $\mathbf{dom}\ f$ is convex and $f(\theta x + (1-\theta)y) < \theta f(x) + (1-\theta) f(y)$ for $x,y\in\mathbf{dom}\ f$, $x\ne y$, $0<\theta<1$

$f:\mathbb{R}^n \rightarrow \mathbb R$ is convex if and only if the function $g: \mathbb{R} \rightarrow \mathbb{R}$, $$g(t) = f(x+tv), \qquad \mathbf{dom}\ g = \{t \mid x+tv \in \mathbf{dom}\ f\},$$ is convex (in $t$) for any $x \in \mathbf{dom}\ f, v\in\mathbb R^n$.

Notes on Convex Optimization (1): Convex Sets

1. Affine and Convex Sets

Suppose $x_1\ne x_2$ are two points in $\mathbb{R}^n$.

1.1 Affine sets

Line through $x_1$, $x_2$: all points $$x = \theta x_1 + (1-\theta) x_2, \qquad \theta \in \mathbb{R}.$$ An affine set contains the line through any two distinct points in the set.

[Notes on Mathematics for ESL] Chapter 4: Linear Methods for Classification

4.3 Linear Discriminant Analysis

Derivation of Equation (4.9)

Suppose that each class's density follows a multivariate Gaussian: $$f_k(x) = \frac{1}{(2\pi)^{p/2}\lvert\Sigma\rvert^{1/2}} \exp\!\left(-\frac12 (x-\mu_k)^T\Sigma^{-1}(x-\mu_k)\right).$$ Taking the logarithm of $f_k(x)$, we get $$\log f_k(x) = c - \frac12 x^T\Sigma^{-1}x + x^T\Sigma^{-1}\mu_k - \frac12 \mu_k^T\Sigma^{-1}\mu_k,$$ where $c = -\log [(2\pi)^{p/2}\lvert\Sigma\rvert^{1/2}]$ and we have used $\mu_k^T\Sigma^{-1}x=x^T\Sigma^{-1}\mu_k$. Following the above formula, we can derive Equation (4.9) easily.

[Notes on Mathematics for ESL] Chapter 3: Linear Regression Models and Least Squares

3.2 Linear Regression Models and Least Squares

Derivation of Equation (3.8)

The least squares estimate of $\beta$ is given by the book's Equation (3.6): $$\hat \beta = (X^TX)^{-1}X^T\mathbf{y}.$$ From the previous post, we know that $\mathrm{E}(\mathbf{y})=X\beta$. As a result, we obtain $$\mathrm{E}(\hat \beta) = (X^TX)^{-1}X^T\mathrm{E}(\mathbf{y}) = (X^TX)^{-1}X^TX\beta.$$ Then, we get $$\mathrm{E}(\hat \beta) = \beta.$$ The variance of $\hat \beta$ is computed as $$\mathrm{Var}(\hat \beta) = (X^TX)^{-1}X^T\,\mathrm{Var}(\mathbf{y})\,X(X^TX)^{-1}.$$ If we assume that the entries of $\mathbf{y}$ are uncorrelated and all have the same variance of $\sigma^2$, then $\mathrm{Var}(\varepsilon)=\sigma^2I_N$ and the above equation becomes $$\mathrm{Var}(\hat \beta) = (X^TX)^{-1}\sigma^2.$$ This completes the derivation of Equation (3.8).
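As a footnote to the Chapter 3 derivation (my own sketch in NumPy, not from the book or the notes): with noiseless data the least squares estimate recovers $\beta$ exactly, and under $\mathrm{Var}(\varepsilon)=\sigma^2 I$ the covariance of $\hat\beta$ is the symmetric positive definite matrix $(X^TX)^{-1}\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix with an intercept column and a "true" coefficient vector
X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])
beta = np.array([1.0, -2.0, 0.5])

# With no noise, beta_hat = (X^T X)^{-1} X^T y recovers beta exactly
y = X @ beta
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The covariance matrix of beta_hat when Var(eps) = sigma^2 I
sigma2 = 4.0
cov_beta_hat = sigma2 * np.linalg.inv(X.T @ X)
```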
https://www.gradesaver.com/textbooks/math/calculus/thomas-calculus-13th-edition/chapter-14-partial-derivatives-section-14-7-extreme-values-and-saddle-points-exercises-14-7-page-843/16
## Thomas' Calculus 13th Edition

Two saddle points at $(0,0)$ and $(-2,2)$; local minimum value $f(0,2)=-12$ and local maximum value $f(-2,0)=-4$.

Given: $f_x(x,y)=3x^2+6x=0, \quad f_y(x,y)=3y^2-6y=0$

Solving these two equations gives $x=0,-2$ and $y=0,2$, so the critical points are $(0,0),(0,2),(-2,0),(-2,2)$.

In order to solve this problem we apply the second derivative test, which classifies a critical point $(a,b)$ of $f(x,y)$ as follows:

1. If $D(a,b)=f_{xx}(a,b)f_{yy}(a,b)-[f_{xy}(a,b)]^2 \gt 0$ and $f_{xx}(a,b)\gt 0$, then $f(a,b)$ is a local minimum.
2. If $D(a,b)=f_{xx}(a,b)f_{yy}(a,b)-[f_{xy}(a,b)]^2 \gt 0$ and $f_{xx}(a,b)\lt 0$, then $f(a,b)$ is a local maximum.
3. If $D(a,b)=f_{xx}(a,b)f_{yy}(a,b)-[f_{xy}(a,b)]^2 \lt 0$, then $(a,b)$ is neither a local minimum nor a local maximum but a saddle point.

Here $f_{xx}=6x+6$, $f_{yy}=6y-6$, and $f_{xy}=0$, so $D(x,y)=(6x+6)(6y-6)$.

$D(0,0)=f_{xx}f_{yy}-f^2_{xy}=-36 \lt 0$. Thus, saddle point at $(0,0)$.

$D(0,2)=f_{xx}f_{yy}-f^2_{xy}=36 \gt 0$ and $f_{xx}=6 \gt 0$. So, local minimum value $f(0,2)=-12$.

$D(-2,0)=f_{xx}f_{yy}-f^2_{xy}=36 \gt 0$ and $f_{xx}=-6 \lt 0$. Thus, local maximum value $f(-2,0)=-4$.

$D(-2,2)=f_{xx}f_{yy}-f^2_{xy}=-36 \lt 0$. Thus, saddle point at $(-2,2)$.

Hence, there are two saddle points at $(0,0)$ and $(-2,2)$, a local minimum value $f(0,2)=-12$, and a local maximum value $f(-2,0)=-4$.
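The second derivative test is easy to automate. The sketch below (assuming SymPy; the function $f$ is reconstructed from the given partials $f_x=3x^2+6x$ and $f_y=3y^2-6y$, with the constant chosen so that $f(0,2)=-12$, which is not stated in the problem) reproduces the classification:

```python
import sympy as sp

x, y = sp.symbols('x y')
# A function consistent with f_x = 3x^2 + 6x and f_y = 3y^2 - 6y;
# the constant -8 matches the values f(0,2) = -12 and f(-2,0) = -4
f = x**3 + 3*x**2 + y**3 - 3*y**2 - 8

fx, fy = sp.diff(f, x), sp.diff(f, y)
crit = sp.solve([fx, fy], [x, y], dict=True)   # the four critical points

# Discriminant D = f_xx f_yy - f_xy^2
D = sp.diff(f, x, 2) * sp.diff(f, y, 2) - sp.diff(f, x, y)**2

def classify(pt):
    d = D.subs(pt)
    if d < 0:
        return 'saddle'
    return 'min' if sp.diff(f, x, 2).subs(pt) > 0 else 'max'
```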
https://quantiki.org/wiki/bqp
# BQP

BQP, in computational complexity theory, stands for "Bounded error, Quantum, Polynomial time". It denotes the class of problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/4 for all instances. In other words, there is an algorithm for a quantum computer that is guaranteed to run in polynomial time. On any given run of the algorithm, it has a probability of at most 1/4 of giving the wrong answer, whether the answer is YES or NO.

The choice of 1/4 in the definition is arbitrary. Changing the constant to any real number k such that 0 < k < 1/2 does not change the set BQP. The idea is that there is a small probability of error, but running the algorithm many times produces an exponentially small chance that the majority of the runs are wrong.

The number of qubits in the computer is allowed to be a function of the instance size. For example, algorithms are known for factoring an n-bit integer using just over 2n qubits.

Quantum computers have gained widespread interest because some problems of practical interest are known to be in BQP, but suspected to be outside P. Currently, only three such problems are known: integer factorization, the discrete logarithm, and the simulation of quantum systems.

This class is defined for a quantum computer. The corresponding class for an ordinary Turing machine plus a source of randomness is BPP. BQP contains P and BPP and is contained in PP and PSPACE. In fact, BQP is low for PP, meaning that a PP machine achieves no benefit from being able to solve BQP problems instantly, an indication of the vast difference in power between these similar classes.
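The amplification argument behind the "arbitrary constant" remark can be made concrete (my own illustration): if each run errs with probability at most 1/4, the chance that the majority of $n$ independent runs is wrong is at most $\sum_{k \ge n/2}\binom{n}{k}(1/4)^k(3/4)^{n-k}$, which decays exponentially in $n$.

```python
from math import comb

def majority_error_bound(n, p=0.25):
    """Probability that at least n/2 of n independent runs err, each with prob p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

# The bound shrinks rapidly as the number of repetitions grows
errs = [majority_error_bound(n) for n in (1, 11, 51, 101)]
```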
http://mathhelpforum.com/algebra/170993-adding-subtracting-radicals.html
# Math Help - Adding and Subtracting Radicals

1. How would you simplify and do these problems?

(square root of) 2/5 + (square root of) 1/2

8 - (square root of) 5/7

I understand the concept... I just always get these wrong for some reason, so please be detailed so I can understand! Thanks!

2. Originally Posted by chandler

> How would you simplify and do these problems? (square root of) 2/5 + (square root of) 1/2; 8 - (square root of) 5/7. I understand the concept... I just always get these wrong for some reason, so please be detailed so I can understand! Thanks!

Are there two separate questions there? Is the first one $\sqrt {\dfrac{2}{5}} + \sqrt {\dfrac{1}{2}}~?$

3. ## Yes, two separate

Yes, these are two different problems. Also... how do you make the square root symbol?

4. Originally Posted by chandler

> How would you simplify and do these problems? (square root of) 2/5 + (square root of) 1/2; 8 - (square root of) 5/7. I understand the concept... I just always get these wrong for some reason, so please be detailed so I can understand! Thanks!

$\displaystyle \sqrt{\frac{2}{5}} + \sqrt{\frac{1}{2}} = \sqrt{\frac{4}{10}} + \sqrt{\frac{5}{10}}$

$\displaystyle = \frac{\sqrt{4}}{\sqrt{10}} + \frac{\sqrt{5}}{\sqrt{10}}$

$\displaystyle = \frac{\sqrt{4} + \sqrt{5}}{\sqrt{10}}$

$\displaystyle = \frac{2 + \sqrt{5}}{\sqrt{10}}$

$\displaystyle = \frac{\sqrt{10}(2 + \sqrt{5})}{10}$

$\displaystyle = \frac{2\sqrt{10} + \sqrt{50}}{10}$

$\displaystyle = \frac{2\sqrt{10} + \sqrt{25}\sqrt{2}}{10}$

$\displaystyle = \frac{2\sqrt{10} + 5\sqrt{2}}{10}$.

5. Originally Posted by chandler

> How do you make the square root symbol?

$$\sqrt {\dfrac{2}{5}} + \sqrt {\dfrac{1}{2}}$$ gives $\sqrt {\dfrac{2}{5}} + \sqrt {\dfrac{1}{2}}$

6. ## Thanks guys!

One more thing... Please do this one too: $\sqrt {\dfrac{1}{2}} + \sqrt {\dfrac{1}{2}}$ =

7. Originally Posted by chandler

> One more thing... Please do this one too: $\sqrt {\dfrac{1}{2}} + \sqrt {\dfrac{1}{2}}$ =

$\sqrt {\dfrac{1}{2}}={\dfrac{\sqrt2}{2}}$
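For anyone who wants to double-check simplifications like the one in post 4, a computer algebra system will confirm them. A sketch with SymPy (my own addition, not part of the original thread):

```python
import sympy as sp

# The first problem from the thread and the simplified form from post 4
expr = sp.sqrt(sp.Rational(2, 5)) + sp.sqrt(sp.Rational(1, 2))
target = (2 * sp.sqrt(10) + 5 * sp.sqrt(2)) / 10

# simplify(expr - target) collapsing to 0 proves the two forms are equal
difference = sp.simplify(expr - target)

# The sum from posts 6-7: sqrt(1/2) + sqrt(1/2) = sqrt(2)
total = sp.sqrt(sp.Rational(1, 2)) + sp.sqrt(sp.Rational(1, 2))
```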
https://www.physicsforums.com/threads/solubility-of-chlorophyll-and-carotenoids.412083/
# Homework Help: Solubility of chlorophyll and carotenoids

1. Jun 24, 2010

### Puchinita5

1. The problem statement, all variables and given/known data

Okay, so I did a chromatography experiment with spinach leaves. A line of pigments separated from the spinach leaves was put on the chromatography paper. Then the paper was put in petroleum ether. It showed carotene at the top, followed by xanthophylls, then chlorophyll A, then chlorophyll B. Therefore, I concluded that the carotenoids are less polar than the chlorophylls and are more soluble in petroleum ether than the chlorophylls.

THEN, and this is what makes no sense to me, we mixed a solution of ethanol and pigment with petroleum ether, shook it, and two layers formed: a dark dark layer and a yellowish layer. The p.ether was said to be at top cuz it was less dense. WHY does chlorophyll separate into the p.ether while the carotenoids separate into the ethanol, whereas in the first experiment it was the other way around?! I would expect that the chlorophyll would dissolve more easily in ethanol and the carotenoids would separate into the petroleum ether.

2. Relevant equations

3. The attempt at a solution

2. Jun 24, 2010

### Puchinita5

I guess I could also ask the question: why would a non-polar pigment such as carotene dissolve more readily in a more polar substance (ethanol) rather than in a more non-polar substance (petroleum ether)?

3. Jun 25, 2010

### chemisttree

Carotene is known to http://pubs.acs.org/doi/abs/10.1021/ja01326a056 [broken link] in plant tissues. Remember that these essential oils and oily pigments are present in plant tissues, which is an aqueous environment. Is it likely that water-insoluble compounds like carotene exist in pure form, or as microemulsions associated with natural surfactants in plant tissues? How about in your extraction? Compare the structure of chlorophyll with that of a typical surfactant. Do you see a non-polar tail and a polar head?

Last edited by a moderator: May 4, 2017
https://mathoverflow.net/questions/193469/schreiers-formula-and-supersolvable-groups
# Schreier's formula and supersolvable groups

A finitely generated profinite group $G$ is said to satisfy Schreier's formula if for every open subgroup $L \leq_o G$ we have $d(L) = (d(G)-1)[G:L] + 1$. Here $d$ stands for the smallest cardinality of a (topological) generating set of a group.

A group $G$ is called supersolvable if there exists a normal series:

$$\{1\} = H_0 \lhd H_1 \lhd \dots \lhd H_{n-1} \lhd H_{n} = G$$

such that each $H_{i+1}/H_i$ is cyclic and $H_i \lhd G$. A group is called prosupersolvable if it is an inverse limit of finite supersolvable groups.

Let $p$ be a prime number. As $p$-groups are supersolvable, finitely generated free pro-$p$ groups satisfy Schreier's formula. Is this essentially the only example?

• Is every finitely generated prosupersolvable group $G$ satisfying Schreier's formula virtually a free pro-$p$ group for some prime $p$?

• I studied this 10 years ago with Auinger. I believe we proved it has an open normal free pro-p subgroup such that the quotient is a finite abelian group of exponent dividing p-1, but I will double check – Benjamin Steinberg Jan 8 '15 at 18:17
• @BenjaminSteinberg I think that your arguments work only for varieties of groups. – Pablo Jan 8 '15 at 18:33
• The paper only gives the proof for relatively free groups because we were interested in that. My memory seems to be we had a more general proof that we left out because we had a slicker proof for the relatively free case. The problem is I don't recall it yet from 10 years ago! – Benjamin Steinberg Jan 8 '15 at 18:46
• At the moment I only remember the proof if there are only finitely many prime divisors of the order of the group. – Benjamin Steinberg Jan 8 '15 at 18:59
We were only interested in the case of relatively free groups, which appears in http://link.springer.com/article/10.1007%2Fs00208-006-0767-2 This case admits a number of simplifications and the published version is very different from the first proof we had, which I believe worked for what you want. I can't remember the details so let me outline a special case. Suppose first that $G$ is finitely generated (and not pro-cyclic), freely indexed and pro-supersolvable with order divisible by only finitely many primes. Then by an old result of Oltikar and Ribes http://projecteuclid.org/euclid.pjm/1102806646 the Frattini subgroup of $G$ is open. The Frattini subgroup is also pro-nilpotent. An open normal subgroup of a freely indexed group is again freely indexed. A theorem of Lubotzky says that a pro-nilpotent freely indexed (and not procyclic) group is free pro-p for some prime p. Thus a freely indexed pro-supersolvable group with order divisible by only finitely many primes has Frattini subgroup open and free pro-p for some prime p. So one wants to show in the general case that a freely indexed pro-supersolvable groups has only finitely many prime divisors of its order. We gave a geometric consequence for the Cayley graph of a freely indexed profinite group. Namely, any closed connected (in the sense of Ribes and Guldenhuys) subgraph of the Cayley graph contains each edge between two of its vertices. Using this, we essentially showed that such a profinite group is not a subdirect product of two profinite groups. This is how we showed a relatively free pro-supersolvable group which is freely indexed has only finitely many prime divisors. I am not sure if one can use this in general. I'd have to reread the paper more carefully or dig up some ancient versions of the paper on long lost computers to see if we really did do the general case at one point. 
• Thank you very much for the detailed explanation, I was aware of the reduction to the case of infinitely many primes dividing the order. There do exist freely indexed prosolvable groups with infinitely many prime divisors, constructed by Lubotzky and v.d. Dries. But their example is not prosupersolvable... – Pablo Jan 8 '15 at 19:06
• Supersolvable is crucial here. Every finite supersolvable group is a subdirect product of groups with a normal p-subgroup whose corresponding quotient is an abelian group of order dividing p-1. This is what we used for the relatively free case. – Benjamin Steinberg Jan 8 '15 at 19:21
• What is a relatively free profinite group? – Pablo Jan 8 '15 at 19:24
• It means it is the pro-C completion of a free group with respect to a variety of finite groups in the sense of the book of Ribes and Zalesskii. Or it is the free object in the class of profinite groups satisfying some profinite identity. – Benjamin Steinberg Jan 8 '15 at 20:03
• So in this case I suspect that every occurrence of prosolvable in your answer should be prosupersolvable. – Pablo Jan 8 '15 at 20:18
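As a numeric aside (my addition, not part of the thread), the rank bookkeeping in Schreier's formula from the question is easy to sketch in a few lines; the helper name below is illustrative:

```python
def schreier_rank(d, index):
    """Rank predicted by Schreier's formula, d(L) = (d(G) - 1)[G:L] + 1,
    for an open subgroup of index [G:L] in a group of rank d."""
    return (d - 1) * index + 1

# In a free (pro-p) group of rank 2, every open subgroup of index 3 has rank 4.
assert schreier_rank(2, 3) == 4

# The formula is consistent in towers: passing to index 6 directly agrees
# with passing to index 2 first and then index 3 inside that subgroup.
assert schreier_rank(2, 6) == schreier_rank(schreier_rank(2, 2), 3)
```

Note that rank-1 (procyclic) groups are fixed points of the formula, which is why the question excludes them.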
http://neutronstars.utk.edu/class/p616_s21/lect23.html
# Physics 616

• Prof. Andrew W. Steiner (or Andrew or "Dr. Steiner")
• Office hour: 103 South College, Thursday 11am
• Email: [email protected]
• Homework: Electronically as .pdf
• You may work with each other on the homework, but you must write the solution in your own words

## Outline

• Cosmology by Weinberg
• Cyburt et al. (2016) for BBN

## Distance Indicators

• Distance measurements are key in determining the Hubble parameter and its dependence on redshift
• Is the Hubble parameter larger or smaller at earlier times? How would you determine this?
• For nearby objects:
  • Trigonometric parallax
  • Photometric parallax
  • Red clump stars
  • RR Lyrae stars
  • Eclipsing binaries
  • Cepheid variables
• Confusion between "peculiar motion" and Hubble expansion

## Distance Indicators I - Tully-Fisher

Phenomenological correlation between the angular velocity of stars in a galaxy around its center and the luminosity of that galaxy
Standard correlation includes only stars
Slightly different correlations in different bands
Found tighter correlation when luminosity includes stars and gas
Mass $\propto v^{3-4}$
Originally applied to spiral galaxies; applied to ellipticals in the "Faber-Jackson" relation
Also evidence for dark matter

Tully and Fisher (1977), this figure from Karachentsev et al.
(2002)

## Distance Indicators II - Type Ia supernova

White dwarf always has a mass near the Chandrasekhar limit
Luminosity correlated with rise and decline time of the emitted light
Emitted light is from the decay of nickel-56
"Phillips relationship" — in detail,

$$M_{\mathrm{max}}(B) = -21.7 + 2.7 \Delta m_{15}(B)$$

Calibrate the correlation with other distance measurements
A "standardizable candle"

From Phillips

## More General Cosmological Models

• The Friedmann equations give
$$\rho = \frac{3 H_0^2}{8 \pi G} \left[ \Omega_{\Lambda} + \Omega_M \left( \frac{a_0}{a}\right)^3 + \Omega_R \left(\frac{a_0}{a}\right)^4\right]$$
with
$$\rho_{V0} = \frac{3 H_0^2 \Omega_{\Lambda}}{8 \pi G}, \quad \rho_{M0} = \frac{3 H_0^2 \Omega_{M}}{8 \pi G}, \quad \rho_{R0} = \frac{3 H_0^2 \Omega_{R}}{8 \pi G},$$
and
$$\Omega_{\Lambda} + \Omega_M + \Omega_R = 1-\Omega_K$$
where
$$\Omega_K = -\frac{K}{a_0^2 H_0^2}$$
• Using the Friedmann equation
$$\dot{a}^2 + K = \frac{8 \pi G \rho a^2}{3}$$

## More General Cosmological Models II

• ...and defining $x \equiv a/a_0$, we get
$$dt = \frac{dx}{H_0 x \sqrt{\Omega_{\Lambda} + \Omega_K x^{-2} + \Omega_M x^{-3}+ \Omega_R x^{-4}}}$$
• Define $t=0$ to be at $z=\infty$; then $a/a_0 = 1/(1+z)$, so
$$dt = \frac{-dz}{H_0 (1+z) \sqrt{\Omega_{\Lambda} + \Omega_K (1+z)^{2} + \Omega_M (1+z)^{3}+ \Omega_R (1+z)^{4}}}$$
and thus the age of the universe is
$$t = \frac{1}{H_0} \int_0^{1} \frac{dx}{x \sqrt{\Omega_{\Lambda} + \Omega_K x^{-2} + \Omega_M x^{-3}+ \Omega_R x^{-4}}}$$
• This can be integrated in closed form in the approximation that $\Omega_R \approx \Omega_K = 0$

## Cosmic Microwave Background

• The energy density in radiation is
$$U = \frac{8 \pi^5 k_B^4 T^4}{15 h^3 c^3}$$
which gives
$$\Omega_{\gamma} \equiv \frac{\rho_{\gamma 0}}{\rho_{0,\mathrm{crit}}} = 2.47 \times 10^{-5} h^{-2}$$
where
$$H = 100~h~\mathrm{km}/\mathrm{Mpc}/\mathrm{s}$$
which naturally implies that $\Omega_{\gamma}$ is small enough to be ignored in the current epoch.
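The age integral above can also be evaluated numerically. A minimal sketch, assuming illustrative parameter values ($H_0 = 70~\mathrm{km/s/Mpc}$, $\Omega_\Lambda = 0.7$, $\Omega_M = 0.3$, $\Omega_R = \Omega_K = 0$ — these are not taken from the slides):

```python
from scipy.integrate import quad

H0 = 70.0                                  # km/s/Mpc (assumed value)
km_per_Mpc = 3.0857e19
s_per_Gyr = 3.156e16
hubble_time = km_per_Mpc / H0 / s_per_Gyr  # 1/H0 in Gyr, about 14.0

OmL, OmM, OmR, OmK = 0.7, 0.3, 0.0, 0.0

def integrand(x):
    # dt/dx from the Friedmann equation, in units of 1/H0
    return 1.0 / (x * (OmL + OmK / x**2 + OmM / x**3 + OmR / x**4) ** 0.5)

t0, _ = quad(integrand, 0.0, 1.0)
age = hubble_time * t0
print(f"t = {age:.1f} Gyr")                # roughly 13.5 Gyr for these parameters
```

The integrand behaves like $x^{1/2}/\sqrt{\Omega_M}$ near $x = 0$, so the integral converges without special handling.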
## Cosmic Microwave Background

• Neutrinos increase this energy density by about a factor of two
• The present number density of photons is
$$n_{\gamma} = \frac{30 \zeta(3) a_B T^3}{\pi^4 k_B} = \frac{410~\mathrm{photons}}{\mathrm{cm}^3}$$
• Compare to baryons,
$$n_B = \frac{3 \Omega_B H_0^2}{8 \pi G m_N} = 1.1 \times 10^{-5} \Omega_B h^2 ~\mathrm{nucleons}/\mathrm{cm}^3$$
• Current values of the baryon-to-photon ratio are actually a few $\times 10^{-10}$

## Recombination

• Matter and radiation decoupled when electrons and protons combined at $z=1100$, $T=4000~\mathrm{K}$, and $t=360{,}000~\mathrm{yr}$.
• A rough estimate comes from the Saha equation for
$$p + e \leftrightarrow H + \gamma$$
which gives
$$\frac{n_p n_e}{n_{\mathrm{H}}} = \left( \frac{m_e k_B T}{2 \pi \hbar^2} \right)^{3/2} \exp \left( - \frac{E_I}{k_B T}\right)$$
where $E_I$ is the ionization energy of hydrogen, $13.6~\mathrm{eV}$.
• In reality, excited states (2p, 2s) make a contribution
• Deviations from equilibrium impact these results at the 10% level

## Dipole Anisotropy

• The CMB provides a frame of reference; the solar system moves relative to this
• The number of photons is Lorentz invariant, so boost the momenta:
$$| \mathbf{p} | = \left(\frac{1+\beta \cos \theta}{\sqrt{1-\beta^2}}\right) | \mathbf{p}^{\prime} |$$
and this results in
$$T^{\prime} = T \left(\frac{1+\beta \cos \theta}{\sqrt{1-\beta^2}}\right)$$
• The solar system "barycenter" has a velocity of 370 km/s (0.1% of $c$) relative to the CMB
• The Local Group of galaxies moves at 630 km/s relative to the CMB

## Sunyaev-Zel'dovich effect

• Are CMB photons completely unimpeded by the time they reach the observer?
• No, galaxies contain high-temperature (but low-density) electrons
• Inverse Compton scattering increases the photon energy
• Change in temperature (for $\hbar\omega/k_B T_{\gamma} \ll 1$):
$$\frac{\Delta T_{\gamma}}{T_{\gamma}} = \frac{\Delta N_{\gamma}}{N_{\gamma}} = - \frac{2 \sigma_T}{m_e c^2} \int~d\ell~n_e(\ell)~k_B T_e(\ell)$$
• Measure $n_e$ from X-ray observations, then use the S-Z effect to determine $H_0$

## Big Bang Nucleosynthesis

• The neutron-to-proton ratio is nearly equal to one until about 1 second
• Protons are slightly favored over neutrons by the Saha equation
• The Saha equation is a relationship between chemical potentials in thermal equilibrium, in this case
$$\mu_n + m_n = \mu_p + m_p + \mu_e$$
• Freeze-out (interactions slower than expansion) occurs at $T\sim 0.8~\mathrm{MeV}$
• Competition between fusion and neutron decay leads to about 25% helium-4 by mass, 75% protons, and trace amounts of helium-3, deuterium, lithium-7, etc.
• Uncertainty in the neutron lifetime caused difficulties for early models
• These abundances are strongly determined by the baryon-to-photon ratio

## Big Bang Nucleosynthesis II

There is more helium-4 in the universe than can be explained by stellar evolution
Deuterium is difficult to create (low binding energy)
Observational determinations generally agree with model predictions based on CMB determinations of $\eta$
Except for lithium-7: "no solution that is either not tuned or requires substantial departures from standard model physics" — Cyburt et al.
(2016)

Stellar depletion of lithium-7 may explain why the observed value is smaller than the BBN prediction

From NASA

## Big Bang Nucleosynthesis III

• 7 protons for each neutron
• Begin with
$$n + p \rightarrow d + \gamma$$
but the deuteron binding energy is small, so photon breakup is significant until the deuterium bottleneck is resolved
• All neutrons end up in helium-4:
$$Y_{\mathrm{helium-4}} = \frac{2(n/p)}{1+(n/p)} \approx 0.25$$
• An accurate fit is
$$\eta_{10} = 273.3036 \Omega_B h^2 \left(1 + 7.16958\times10^{-3} Y_p \right) \left(\frac{2.7255~\mathrm{K}}{T_{\gamma}^0}\right)^3$$
where $T_{\gamma}^0$ is the current photon temperature

## Cold Dark Matter

• Cold if (i) non-relativistic (at the time of radiation-matter equality), (ii) dissipationless, and (iii) collisionless
• Cold dark matter leads to bottom-up structure formation
• Bottom-up: small objects clump first, then merge to form larger objects
• What is the opposite of "bottom-up"?
• WIMPs and axions

## WIMP Miracle

• The weak interaction magically provides good dark matter candidates
• Weak scale, $m_W \sim 100~\mathrm{GeV}$
• Weak coupling, $G_F \sim 10^{-5}~\mathrm{GeV}^{-2}$
• Density of dark matter particles:
$$\frac{d (na^3)}{dt} = -n^2 a^3 \left<\sigma v\right>$$
or
$$n a^3 = \frac{n(t_1) a^3(t_1)}{1+ n(t_1) a^3(t_1) \int_{t_1}^{\infty} \left<\sigma v\right> a^{-3}(t^{\prime}) dt^{\prime}}$$
• It turns out that
$$\left<\sigma v\right> \sim G_F^2 m_W^2$$
gives almost exactly the right result

## Group Work

Group 1: Spokesperson: Erich Bermel; Tuhin Das; Ibrahim Mirza
Group 2: Spokesperson: Jason Forson; Satyajit Roy
Group 3: Spokesperson: Leonard Mostella; Andrew Tarrence

• In BBN, at freezeout, the neutron-to-proton ratio is determined by the Saha equation. Presume that the electron contribution is negligible, thus $\mu_n + m_n = \mu_p + m_p$, and that $n_i \propto \exp (\mu_i/T)$. What is the neutron-to-proton ratio at freezeout ($T=0.8~\mathrm{MeV}$)?
• BBN is delayed until fusion can overcome the "deuterium bottleneck". Cyburt et al. states that this temperature is approximately when
$$\eta^{-1} \exp(-E_B/T) = 1$$
where $E_B$ is the binding energy of the deuteron and $\eta$ is the baryon-to-photon ratio. What is $T$?
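Both group-work questions can be sketched in a few lines. The numerical inputs ($\Delta m = m_n - m_p = 1.293~\mathrm{MeV}$, $E_B = 2.224~\mathrm{MeV}$, and $\eta \approx 6\times 10^{-10}$) are standard values assumed here, not given on the slides:

```python
import math

# Question 1: with the electron term neglected and n_i ∝ exp(mu_i/T),
# the Saha relation gives n/p = exp(-(m_n - m_p)/T) at freeze-out.
dm = 1.293          # m_n - m_p in MeV (assumed)
T_freeze = 0.8      # MeV
n_over_p = math.exp(-dm / T_freeze)
print(f"n/p at freeze-out = {n_over_p:.3f}")    # about 0.20, roughly 1/5

# Subsequent neutron decay drives this toward the 1/7 quoted above, giving
# the familiar helium mass fraction:
Yp = 2 * (1 / 7) / (1 + 1 / 7)
print(f"Y_He4 = {Yp:.2f}")                      # 0.25

# Question 2: solve eta^{-1} exp(-E_B/T) = 1 for T, i.e. T = E_B / ln(1/eta).
E_B = 2.224         # deuteron binding energy in MeV (assumed)
eta = 6.1e-10       # baryon-to-photon ratio (assumed)
T_bottleneck = E_B / math.log(1 / eta)
print(f"T = {T_bottleneck:.3f} MeV")            # about 0.1 MeV, well below E_B
```

The bottleneck temperature comes out far below the deuteron binding energy because there are roughly $10^9$ photons per baryon, so the exponential tail of the photon distribution keeps breaking deuterons apart.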
http://math.stackexchange.com/questions/28352/ky-fan-norm-question
# Ky Fan Norm Question

How can one simply see that the Ky Fan $k$-norm satisfies the triangle inequality? (The Ky Fan $k$-norm of a matrix is the sum of the $k$ largest singular values of the matrix.)

Thanks.

• Could you please define the Ky Fan $k$-norm and let us know what you already tried? Also, is this homework? – Glen Wheeler Mar 21 '11 at 19:17
• The Ky Fan $k$-norm of a matrix is the sum of the $k$ largest singular values of the matrix – user4727 Mar 21 '11 at 19:28
• @user4727 Thanks for the edit. Are you aware of the original papers by Ky Fan (~1950)? The reason why it is called the "Ky Fan norm" is because he proved it is a norm first. This might be a good opportunity for some "research"! :) – Glen Wheeler Mar 21 '11 at 19:47

For any $k$-plane $U$ in $\mathbb{C}^n$, let $i_U$ be the inclusion of $U$ into $\mathbb{C}^n$ and let $p_U$ be the orthogonal projection of $\mathbb{C}^n$ onto $U$.

Lemma: Let the singular values of $A$ be $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n$. Then $\max_{U, V} \ \mathrm{Tr} \left( p_V \circ A \circ i_U \right)= \sigma_1+ \cdots +\sigma_k$, where the max is over all pairs of $k$-planes in $\mathbb{C}^n$.

Then $\max_{U, V} \ \mathrm{Tr} \left(p_V \circ (A+B) \circ i_U\right) \leq \max_{U, V} \ \mathrm{Tr} \left( p_V \circ A \circ i_U \right) + \max_{U, V} \ \mathrm{Tr} \left( p_V \circ B \circ i_U \right)$. By the lemma, the left side is the Ky Fan $k$-norm of $A+B$ and the right side is the sum of the Ky Fan $k$-norms of $A$ and $B$, which is exactly the triangle inequality.
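A quick numerical sanity check of the triangle inequality (an illustration, not a proof), using the fact that `numpy.linalg.svd` returns singular values in decreasing order:

```python
import numpy as np

def ky_fan(A, k):
    """Ky Fan k-norm: the sum of the k largest singular values of A."""
    s = np.linalg.svd(A, compute_uv=False)   # sorted in decreasing order
    return s[:k].sum()

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

# Triangle inequality holds for every k (tolerance guards rounding):
for k in range(1, 6):
    assert ky_fan(A + B, k) <= ky_fan(A, k) + ky_fan(B, k) + 1e-10
```

For k = 1 this reduces to the usual operator-norm triangle inequality, and for k = n to the trace norm.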
https://jmservera.com/solve-for-a-a1-82-2-56-5-242/
# Solve for A: A = ((1/8)^2 * (-2/5)^6 * (-5/2)^4)^2

Apply the product rule (a/b)^n = a^n/b^n to each factor:

(1/8)^2 = 1^2/8^2 = 1/64

(-2/5)^6 = (-1)^6 * 2^6/5^6 = 64/15625

(-5/2)^4 = (-1)^4 * 5^4/2^4 = 625/16

Multiply and cancel common factors:

(1/64) * (64/15625) = 1/15625

(1/15625) * (625/16) = 1/(25 * 16) = 1/400

Raise to the second power:

A = (1/400)^2 = 1^2/400^2 = 1/160000

The result can be shown in multiple forms.

Exact Form: A = 1/160000

Decimal Form: A = 6.25 * 10^-6
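The arithmetic above can be verified with exact rational arithmetic (a cross-check added here, not part of the original solution):

```python
from fractions import Fraction

# Evaluate A = ((1/8)^2 * (-2/5)^6 * (-5/2)^4)^2 exactly.
A = (Fraction(1, 8) ** 2 * Fraction(-2, 5) ** 6 * Fraction(-5, 2) ** 4) ** 2

print(A)         # 1/160000
print(float(A))  # 6.25e-06

assert A == Fraction(1, 160000)
```

`Fraction` keeps every intermediate value as a reduced ratio of integers, so no rounding enters at any step.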
https://library.kiwix.org/wikipedia_en_top_maxi/A/Trace_(linear_algebra)
# Trace (linear algebra) In linear algebra, the trace of a square matrix A, denoted tr(A),[1][2] is defined to be the sum of elements on the main diagonal (from the upper left to the lower right) of A. The trace of a matrix is the sum of its (complex) eigenvalues (counted with multiplicities), and it is invariant with respect to a change of basis. This characterization can be used to define the trace of a linear operator in general. The trace is only defined for a square matrix (n × n). The trace is related to the derivative of the determinant (see Jacobi's formula). ## Definition The trace of an n × n square matrix A is defined as[2][3][4]:34 ${\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i=1}^{n}a_{ii}=a_{11}+a_{22}+\dots +a_{nn}}$ where aii denotes the entry on the ith row and ith column of A. ## Example Let A be a matrix, with ${\displaystyle \mathbf {A} ={\begin{pmatrix}a_{11}&a_{12}&a_{13}\\a_{21}&a_{22}&a_{23}\\a_{31}&a_{32}&a_{33}\end{pmatrix}}={\begin{pmatrix}1&0&3\\11&5&2\\6&12&-5\end{pmatrix}}}$ Then ${\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i=1}^{3}a_{ii}=a_{11}+a_{22}+a_{33}=1+5+(-5)=1}$ ## Properties ### Basic properties The trace is a linear mapping. That is,[2][3] {\displaystyle {\begin{aligned}\operatorname {tr} (\mathbf {A} +\mathbf {B} )&=\operatorname {tr} (\mathbf {A} )+\operatorname {tr} (\mathbf {B} )\\\operatorname {tr} (c\mathbf {A} )&=c\operatorname {tr} (\mathbf {A} )\end{aligned}}} for all square matrices A and B, and all scalars c.[4]:34 A matrix and its transpose have the same trace:[2][3][4]:34 ${\displaystyle \operatorname {tr} (\mathbf {A} )=\operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\right).}$ This follows immediately from the fact that transposing a square matrix does not affect elements along the main diagonal. ### Trace of a product The trace of a square matrix which is the product of two matrices can be rewritten as the sum of entry-wise products of their elements. 
More precisely, if A and B are two m × n matrices, then: ${\displaystyle \operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\mathbf {B} \right)=\operatorname {tr} \left(\mathbf {A} \mathbf {B} ^{\mathsf {T}}\right)=\operatorname {tr} \left(\mathbf {B} ^{\mathsf {T}}\mathbf {A} \right)=\operatorname {tr} \left(\mathbf {B} \mathbf {A} ^{\mathsf {T}}\right)=\sum _{i,j}a_{ij}b_{ij}.}$ This means that the trace of a product of equal-sized matrices functions in a similar way to a dot product of vectors (imagine A and B as long vectors with columns stacked on each other). For this reason, generalizations of vector operations to matrices (e.g. in matrix calculus and statistics) often involve a trace of matrix products. For real matrices A and B, the trace of a product can also be written in the following forms: ${\displaystyle \operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\mathbf {B} \right)=\sum _{i,j}(\mathbf {A} \circ \mathbf {B} )_{ij}}$ (using the Hadamard product, also known as the entrywise product). ${\displaystyle \operatorname {tr} \left(\mathbf {A} ^{\mathsf {T}}\mathbf {B} \right)=\operatorname {vec} (\mathbf {B} )^{\mathsf {T}}\operatorname {vec} (\mathbf {A} )=\operatorname {vec} (\mathbf {A} )^{\mathsf {T}}\operatorname {vec} (\mathbf {B} )}$ (using the vectorization operator). 
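These identities are easy to confirm numerically (an illustrative check, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((3, 4))

lhs = np.trace(A.T @ B)
# tr(A^T B) equals the sum of entrywise (Hadamard) products...
assert np.isclose(lhs, (A * B).sum())
# ...and the four equivalent orderings all agree:
assert np.isclose(lhs, np.trace(A @ B.T))
assert np.isclose(lhs, np.trace(B.T @ A))
assert np.isclose(lhs, np.trace(B @ A.T))
# vec form: stack columns and take an ordinary dot product.
assert np.isclose(lhs, A.flatten(order="F") @ B.flatten(order="F"))
```

This is the sense in which the trace of a product behaves like a dot product of the matrices viewed as long vectors.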
The matrices in a trace of a product can be switched without changing the result: If A is an m × n matrix and B is an n × m matrix, then[2][3][4]:34[note 1] ${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} )=\operatorname {tr} (\mathbf {B} \mathbf {A} )}$ Additionally, for real column matrices ${\displaystyle \mathbf {a} \in \mathbb {R} ^{n}}$ and ${\displaystyle \mathbf {b} \in \mathbb {R} ^{n}}$, the trace of the outer product is equivalent to the inner product: ${\displaystyle \operatorname {tr} \left(\mathbf {b} \mathbf {a} ^{\textsf {T}}\right)=\mathbf {a} ^{\textsf {T}}\mathbf {b} }$ ### Cyclic property More generally, the trace is invariant under cyclic permutations, that is, ${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} \mathbf {C} \mathbf {D} )=\operatorname {tr} (\mathbf {B} \mathbf {C} \mathbf {D} \mathbf {A} )=\operatorname {tr} (\mathbf {C} \mathbf {D} \mathbf {A} \mathbf {B} )=\operatorname {tr} (\mathbf {D} \mathbf {A} \mathbf {B} \mathbf {C} ).}$ This is known as the cyclic property. Arbitrary permutations are not allowed: in general, ${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} \mathbf {C} )\neq \operatorname {tr} (\mathbf {A} \mathbf {C} \mathbf {B} ).}$ However, if products of three symmetric matrices are considered, any permutation is allowed, since: ${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} \mathbf {C} )=\operatorname {tr} \left(\left(\mathbf {A} \mathbf {B} \mathbf {C} \right)^{\mathsf {T}}\right)=\operatorname {tr} (\mathbf {C} \mathbf {B} \mathbf {A} )=\operatorname {tr} (\mathbf {A} \mathbf {C} \mathbf {B} ),}$ where the first equality is because the traces of a matrix and its transpose are equal. Note that this is not true in general for more than three factors. 
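The cyclic property, and the failure of arbitrary permutations, can be seen on a concrete example (the particular matrices below are illustrative):

```python
import numpy as np

A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
C = np.array([[1., 0.], [0., -1.]])

# Cyclic shifts leave the trace unchanged:
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))
assert np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B))

# ...but a non-cyclic swap generally does not: here tr(ABC) = -1, tr(ACB) = 1.
assert np.trace(A @ B @ C) == -1.0
assert np.trace(A @ C @ B) == 1.0
```

Note that B and C here are symmetric while A is not, which is why the three-factor symmetric-matrix exception quoted above does not apply.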
### Trace of a matrix product

Unlike the determinant, the trace of the product is not the product of traces; that is, there exist matrices A and B such that

${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} )\neq \operatorname {tr} (\mathbf {A} )\operatorname {tr} (\mathbf {B} )}$

For example, if

${\displaystyle \mathbf {A} ={\begin{pmatrix}0&1\\0&0\end{pmatrix}},\ \ \mathbf {B} ={\begin{pmatrix}0&0\\1&0\end{pmatrix}},}$

then the product is

${\displaystyle \mathbf {AB} ={\begin{pmatrix}1&0\\0&0\end{pmatrix}},}$

and the traces are

${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} )=1\neq 0\cdot 0=\operatorname {tr} (\mathbf {A} )\operatorname {tr} (\mathbf {B} ).}$

### Trace of a Kronecker product

The trace of the Kronecker product of two matrices is the product of their traces:

${\displaystyle \operatorname {tr} (\mathbf {A} \otimes \mathbf {B} )=\operatorname {tr} (\mathbf {A} )\operatorname {tr} (\mathbf {B} ).}$

### Full characterization of the trace

The following three properties:

{\displaystyle {\begin{aligned}\operatorname {tr} (\mathbf {A} +\mathbf {B} )&=\operatorname {tr} (\mathbf {A} )+\operatorname {tr} (\mathbf {B} ),\\\operatorname {tr} (c\mathbf {A} )&=c\operatorname {tr} (\mathbf {A} ),\\\operatorname {tr} (\mathbf {A} \mathbf {B} )&=\operatorname {tr} (\mathbf {B} \mathbf {A} ),\end{aligned}}}

characterize the trace completely in the sense that follows. Let f be a linear functional on the space of square matrices satisfying f(xy) = f(yx). Then f and tr are proportional.[note 2]

### Similarity invariance

The trace is similarity-invariant, which means that for any square matrix A and any invertible matrix P of the same dimensions, the matrices A and P−1AP have the same trace.
This is because

${\displaystyle \operatorname {tr} \left(\mathbf {P} ^{-1}\mathbf {A} \mathbf {P} \right)=\operatorname {tr} \left(\mathbf {P} ^{-1}(\mathbf {A} \mathbf {P} )\right)=\operatorname {tr} \left((\mathbf {A} \mathbf {P} )\mathbf {P} ^{-1}\right)=\operatorname {tr} \left(\mathbf {A} \left(\mathbf {P} \mathbf {P} ^{-1}\right)\right)=\operatorname {tr} (\mathbf {A} ).}$

### Trace of product of symmetric and skew-symmetric matrix

If A is symmetric and B is skew-symmetric, then ${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} )=0}$.

#### Trace of the identity matrix

The trace of the n × n identity matrix is the dimension of the space, namely n.[1]

${\displaystyle \operatorname {tr} \left(\mathbf {I} _{n}\right)=n}$

This leads to generalizations of dimension using trace.

#### Trace of an idempotent matrix

The trace of an idempotent matrix A (a matrix for which A^2 = A) is equal to the rank of A.

#### Trace of a nilpotent matrix

The trace of a nilpotent matrix is zero. When the characteristic of the base field is zero, the converse also holds: if tr(A^k) = 0 for all k, then A is nilpotent. When the characteristic n > 0 is positive, the identity in n dimensions is a counterexample, as ${\displaystyle \operatorname {tr} \left(\mathbf {I} _{n}^{k}\right)=\operatorname {tr} \left(\mathbf {I} _{n}\right)=n\equiv 0}$, but the identity is not nilpotent.

#### Trace equals sum of eigenvalues

More generally, if

${\displaystyle f(x)=\prod _{i=1}^{k}\left(x-\lambda _{i}\right)^{d_{i}}}$

is the characteristic polynomial of a matrix A, then

${\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i=1}^{k}d_{i}\lambda _{i}}$

that is, the trace of a square matrix equals the sum of the eigenvalues counted with multiplicities.

### Trace of commutator

When both A and B are n × n matrices, the trace of the (ring-theoretic) commutator of A and B vanishes: tr([A,B]) = 0, because tr(AB) = tr(BA) and tr is linear.
One can state this as "the trace is a map of Lie algebras ${\displaystyle {\mathfrak {gl}}_{n}\to K}$ from operators to scalars", as the commutator of scalars is trivial (it is an Abelian Lie algebra). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.

Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices.[note 3] Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros.

### Trace of Hermitian matrix

The trace of a Hermitian matrix is real, because the elements on the diagonal are real.

### Trace of permutation matrix

The trace of a permutation matrix is the number of fixed points, because the diagonal term aii is 1 if the ith point is fixed and 0 otherwise.

### Trace of projection matrix

The trace of a projection matrix is the dimension of the target space.

{\displaystyle {\begin{aligned}\mathbf {P} _{\mathbf {X} }&=\mathbf {X} \left(\mathbf {X} ^{\mathsf {T}}\mathbf {X} \right)^{-1}\mathbf {X} ^{\mathsf {T}}\\[3pt]\Longrightarrow \operatorname {tr} \left(\mathbf {P} _{\mathbf {X} }\right)&=\operatorname {rank} (\mathbf {X} ).\end{aligned}}}

The matrix PX is idempotent, and more generally, the trace of any idempotent matrix equals its own rank.

## Exponential trace

Expressions like tr(exp(A)), where A is a square matrix, occur so often in some fields (e.g. multivariate statistical theory), that a shorthand notation has become common:

${\displaystyle \operatorname {tre} (A):=\operatorname {tr} (\exp(A)).}$

tre is sometimes referred to as the exponential trace function; it is used in the Golden–Thompson inequality.
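The projection-matrix property above — that the trace of PX equals the rank of X — is easy to confirm numerically; the random 6 × 2 matrix below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 2))           # generic, so rank 2
P = X @ np.linalg.inv(X.T @ X) @ X.T      # orthogonal projection onto col(X)

assert np.allclose(P @ P, P)                              # idempotent
assert np.isclose(np.trace(P), 2)                         # trace = dim of target
assert np.isclose(np.trace(P), np.linalg.matrix_rank(X))  # = rank(X)
```

The projector's eigenvalues are six copies of 0 or 1 (two 1s here), which is another way to see that its trace counts the dimension of its image.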
## Trace of a linear operator

In general, given some linear map f : V → V (where V is a finite-dimensional vector space), we can define the trace of this map by considering the trace of a matrix representation of f, that is, choosing a basis for V and describing f as a matrix relative to this basis, and taking the trace of this square matrix. The result will not depend on the basis chosen, since different bases will give rise to similar matrices, allowing for the possibility of a basis-independent definition for the trace of a linear map.

Such a definition can be given using the canonical isomorphism between the space End(V) of linear maps on V and V ⊗ V*, where V* is the dual space of V. Let v be in V and let f be in V*. Then the trace of the indecomposable element v ⊗ f is defined to be f(v); the trace of a general element is defined by linearity. Using an explicit basis for V and the corresponding dual basis for V*, one can show that this gives the same definition of the trace as given above.

### Eigenvalue relationships

If A is a linear operator represented by a square matrix with real or complex entries and if λ1, …, λn are the eigenvalues of A (listed according to their algebraic multiplicities), then

${\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i}\lambda _{i}}$

This follows from the fact that A is always similar to its Jordan form, an upper triangular matrix having λ1, …, λn on the main diagonal. In contrast, the determinant of A is the product of its eigenvalues; that is,

${\displaystyle \det(\mathbf {A} )=\prod _{i}\lambda _{i}.}$

More generally,

${\displaystyle \operatorname {tr} \left(\mathbf {A} ^{k}\right)=\sum _{i}\lambda _{i}^{k}.}$

### Derivatives

The trace corresponds to the derivative of the determinant: it is the Lie algebra analog of the (Lie group) map of the determinant. This is made precise in Jacobi's formula for the derivative of the determinant.
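The eigenvalue identities can be checked on the example matrix from the beginning of the article:

```python
import numpy as np

A = np.array([[1., 0., 3.],
              [11., 5., 2.],
              [6., 12., -5.]])   # the example above, with tr(A) = 1

lam = np.linalg.eigvals(A)
assert np.isclose(np.trace(A), lam.sum().real)             # tr = sum of eigenvalues
assert np.isclose(np.linalg.det(A), np.prod(lam).real)     # det = product
assert np.isclose(np.trace(A @ A), (lam ** 2).sum().real)  # tr(A^2) = sum of squares
```

Taking the real part is harmless here: any complex eigenvalues of a real matrix come in conjugate pairs, so the imaginary parts cancel in each sum and product.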
As a particular case, at the identity, the derivative of the determinant actually amounts to the trace: tr = det′I. From this (or from the connection between the trace and the eigenvalues), one can derive a connection between the trace function, the exponential map between a Lie algebra and its Lie group (or concretely, the matrix exponential function), and the determinant:

${\displaystyle \det(\exp(\mathbf {A} ))=\exp(\operatorname {tr} (\mathbf {A} )).}$

For example, consider the one-parameter family of linear transformations given by rotation through angle θ,

${\displaystyle \mathbf {R} _{\theta }={\begin{pmatrix}\cos \theta &-\sin \theta \\\sin \theta &\cos \theta \end{pmatrix}}.}$

These transformations all have determinant 1, so they preserve area. The derivative of this family at θ = 0, the identity rotation, is the antisymmetric matrix

${\displaystyle A={\begin{pmatrix}0&-1\\1&0\end{pmatrix}},}$

which clearly has trace zero, indicating that this matrix represents an infinitesimal transformation which preserves area.

A related characterization of the trace applies to linear vector fields. Given a matrix A, define a vector field F on Rn by F(x) = Ax. The components of this vector field are linear functions (given by the rows of A). Its divergence div F is a constant function, whose value is equal to tr(A). By the divergence theorem, one can interpret this in terms of flows: if F(x) represents the velocity of a fluid at location x and U is a region in Rn, the net flow of the fluid out of U is given by tr(A) · vol(U), where vol(U) is the volume of U.

The trace is a linear operator, hence it commutes with the derivative:

${\displaystyle \operatorname {d} \operatorname {tr} (\mathbf {X} )=\operatorname {tr} (\operatorname {d} \mathbf {X} ).}$

## Applications

The trace of a 2 × 2 complex matrix is used to classify Möbius transformations. First, the matrix is normalized to make its determinant equal to one.
Then, if the square of the trace is exactly 4, the corresponding transformation is parabolic. If the square is in the interval [0, 4), it is elliptic. Finally, if the square is greater than 4, the transformation is loxodromic. See classification of Möbius transformations.

The trace is used to define characters of group representations. Two representations A, B : GGL(V) of a group G are equivalent (up to change of basis on V) if tr(A(g)) = tr(B(g)) for all gG.

The trace also plays a central role in the distribution of quadratic forms.

## Lie algebra

The trace is a map of Lie algebras ${\displaystyle \operatorname {tr} :{\mathfrak {gl}}_{n}\to K}$ from the Lie algebra ${\displaystyle {\mathfrak {gl}}_{n}}$ of linear operators on an n-dimensional space (n × n matrices with entries in ${\displaystyle K}$) to the Lie algebra K of scalars; as K is Abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes:

${\displaystyle \operatorname {tr} ([\mathbf {A} ,\mathbf {B} ])=0{\text{ for each }}\mathbf {A} ,\mathbf {B} \in {\mathfrak {gl}}_{n}.}$

The kernel of this map, a matrix whose trace is zero, is often said to be traceless or trace free, and these matrices form the simple Lie algebra ${\displaystyle {\mathfrak {sl}}_{n}}$, which is the Lie algebra of the special linear group of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear Lie algebra consists of the matrices which do not alter the volume of infinitesimal sets.

In fact, there is an internal direct sum decomposition

${\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus K}$

of operators/matrices into traceless operators/matrices and scalar operators/matrices.
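The defining identity of this section, that the trace of any bracket vanishes, can be spot-checked numerically. A NumPy sketch with arbitrary random matrices:

```python
import numpy as np

# Two arbitrary square matrices (hypothetical random values).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

commutator = A @ B - B @ A

# tr([A, B]) = 0, so tr: gl_n -> K is a homomorphism of Lie algebras
# into the Abelian Lie algebra of scalars.
print(np.isclose(np.trace(commutator), 0.0))  # True
```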
The projection map onto scalar operators can be expressed in terms of the trace, concretely as:

${\displaystyle \mathbf {A} \mapsto {\frac {1}{n}}\operatorname {tr} (\mathbf {A} )\mathbf {I} .}$

Formally, one can compose the trace (the counit map) with the unit map ${\displaystyle K\to {\mathfrak {gl}}_{n}}$ of "inclusion of scalars" to obtain a map ${\displaystyle {\mathfrak {gl}}_{n}\to {\mathfrak {gl}}_{n}}$ mapping onto scalars, and multiplying by n. Dividing by n makes this a projection, yielding the formula above.

In terms of short exact sequences, one has

${\displaystyle 0\to {\mathfrak {sl}}_{n}\to {\mathfrak {gl}}_{n}{\overset {\operatorname {tr} }{\to }}K\to 0}$

which is analogous to

${\displaystyle 1\to \operatorname {SL} _{n}\to \operatorname {GL} _{n}{\overset {\det }{\to }}K^{*}\to 1}$

(where ${\displaystyle K^{*}=K\setminus \{0\}}$) for Lie groups. However, the trace splits naturally (via ${\displaystyle 1/n}$ times scalars) so ${\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus K}$, but the splitting of the determinant would be as the nth root times scalars, and this does not in general define a function, so the determinant does not split and the general linear group does not decompose:

${\displaystyle \operatorname {GL} _{n}\neq \operatorname {SL} _{n}\times K^{*}.}$

### Bilinear forms

The bilinear form (where X, Y are square matrices)

${\displaystyle B(\mathbf {X} ,\mathbf {Y} )=\operatorname {tr} (\operatorname {ad} (\mathbf {X} )\operatorname {ad} (\mathbf {Y} ))\quad {\text{where }}\operatorname {ad} (\mathbf {X} )\mathbf {Y} =[\mathbf {X} ,\mathbf {Y} ]=\mathbf {X} \mathbf {Y} -\mathbf {Y} \mathbf {X} }$

is called the Killing form, which is used for the classification of Lie algebras.
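The projection onto scalar operators and the resulting direct sum decomposition into traceless plus scalar parts can be illustrated concretely. A NumPy sketch; the matrix is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n))

# Projection onto scalar operators: A -> (tr A / n) I.
scalar_part = (np.trace(A) / n) * np.eye(n)
traceless_part = A - scalar_part

# The two pieces recover A, and the second summand really is traceless,
# witnessing gl_n = sl_n (+) K.
print(np.allclose(scalar_part + traceless_part, A))  # True
print(np.isclose(np.trace(traceless_part), 0.0))     # True
```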
The trace defines a bilinear form:

${\displaystyle (\mathbf {X} ,\mathbf {Y} )\mapsto \operatorname {tr} (\mathbf {X} \mathbf {Y} ).}$

The form is symmetric, non-degenerate[note 4] and associative in the sense that:

${\displaystyle \operatorname {tr} (\mathbf {X} [\mathbf {Y} ,\mathbf {Z} ])=\operatorname {tr} ([\mathbf {X} ,\mathbf {Y} ]\mathbf {Z} ).}$

For a complex simple Lie algebra (such as ${\displaystyle {\mathfrak {sl}}_{n}}$), any two such bilinear forms are proportional to each other; in particular, to the Killing form.

Two matrices X and Y are said to be trace orthogonal if ${\displaystyle \operatorname {tr} (\mathbf {X} \mathbf {Y} )=0}$.

## Inner product

For an m × n matrix A with complex (or real) entries, with H denoting the conjugate transpose, we have

${\displaystyle \operatorname {tr} \left(\mathbf {A} ^{\mathsf {H}}\mathbf {A} \right)\geq 0}$

with equality if and only if A = 0.[5]:7 The assignment

${\displaystyle \langle \mathbf {A} ,\mathbf {B} \rangle =\operatorname {tr} \left(\mathbf {A} ^{\mathsf {H}}\mathbf {B} \right)}$

yields an inner product on the space of all complex (or real) m × n matrices.

The norm derived from the above inner product is called the Frobenius norm, which satisfies the submultiplicative property of a matrix norm. Indeed, it is simply the Euclidean norm if the matrix is considered as a vector of length mn.

It follows that if A and B are real positive semi-definite matrices of the same size then

${\displaystyle 0\leq \left[\operatorname {tr} (\mathbf {A} \mathbf {B} )\right]^{2}\leq \operatorname {tr} \left(\mathbf {A} ^{2}\right)\operatorname {tr} \left(\mathbf {B} ^{2}\right)\leq \left[\operatorname {tr} (\mathbf {A} )\right]^{2}\left[\operatorname {tr} (\mathbf {B} )\right]^{2}.}$[note 5]

## Generalizations

The concept of trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces, and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.
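Before passing to operators on Hilbert spaces, the finite-dimensional inner product defined above can be illustrated concretely. A NumPy sketch with arbitrary complex matrices; note that `np.vdot` flattens and conjugates its first argument, which is exactly what makes the comparison with the Euclidean inner product of length-mn vectors immediate:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3)) + 1j * rng.standard_normal((2, 3))

# <A, B> = tr(A^H B), the Frobenius inner product.
inner = np.trace(A.conj().T @ B)

# Identical to the Euclidean inner product of the flattened matrices.
print(np.isclose(inner, np.vdot(A, B)))  # True

# The induced norm is the Frobenius norm.
print(np.isclose(np.sqrt(np.trace(A.conj().T @ A)).real,
                 np.linalg.norm(A, 'fro')))  # True
```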
If K is trace-class, then for any orthonormal basis ${\displaystyle (\varphi _{n})_{n}}$, the trace is given by

${\displaystyle \operatorname {tr} (K)=\sum _{n}\left\langle \varphi _{n},K\varphi _{n}\right\rangle ,}$

and is finite and independent of the orthonormal basis.[6]

The partial trace is another generalization of the trace that is operator-valued. The trace of a linear operator Z which lives on a product space AB is equal to the partial traces over A and B:

${\displaystyle \operatorname {tr} (Z)=\operatorname {tr} _{A}\left(\operatorname {tr} _{B}(Z)\right)=\operatorname {tr} _{B}\left(\operatorname {tr} _{A}(Z)\right).}$

For more properties and a generalization of the partial trace, see traced monoidal categories.

If A is a general associative algebra over a field k, then a trace on A is often defined to be any map tr : Ak which vanishes on commutators: tr([a, b]) = 0 for all a, bA. Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.

A supertrace is the generalization of a trace to the setting of superalgebras.

The operation of tensor contraction generalizes the trace to arbitrary tensors.

## Coordinate-free definition

The trace can also be approached in a coordinate-free manner, i.e., without referring to a choice of basis, as follows: the space of linear operators on a finite-dimensional vector space V (defined over the field F) is isomorphic to the space VV* via the linear map

${\displaystyle V\otimes V^{*}\to \operatorname {Hom} (V,V),\quad v\otimes h\mapsto (w\mapsto h(w)v).}$

There is also a canonical bilinear function t : V × V* → F that consists of applying an element w* of V* to an element v of V to get an element of F:

${\displaystyle t\left(v,w^{*}\right):=w^{*}(v)\in F.}$

This induces a linear function on the tensor product (by its universal property) t : VV* → F, which, as it turns out, when that tensor product is viewed as the space of operators, is equal to the trace.
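The partial trace described above can, in finite dimensions, be computed by reshaping the matrix into a four-index tensor and contracting one pair of indices. A NumPy sketch; the dimensions and entries are arbitrary:

```python
import numpy as np

# An operator Z on a product space A (x) B with dim A = 2, dim B = 3,
# stored as a (2*3) x (2*3) matrix of hypothetical random entries.
dA, dB = 2, 3
rng = np.random.default_rng(4)
Z = rng.standard_normal((dA * dB, dA * dB))

# Reshape into a 4-index tensor T[a, b, a', b'] and contract one pair.
T = Z.reshape(dA, dB, dA, dB)
tr_B = np.einsum('abcb->ac', T)   # partial trace over B: a dA x dA matrix
tr_A = np.einsum('abad->bd', T)   # partial trace over A: a dB x dB matrix

# Tracing out the remaining factor recovers the full trace.
print(np.isclose(np.trace(tr_B), np.trace(Z)))  # True
print(np.isclose(np.trace(tr_A), np.trace(Z)))  # True
```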
In particular, given a rank one operator A (equivalently, a simple tensor ${\displaystyle v\otimes w^{*}}$), the square is ${\displaystyle A^{2}=\lambda A,}$ because on its one-dimensional image, A is just scalar multiplication. In terms of the tensor expression, ${\displaystyle \lambda =w^{*}(v),}$ and it is the trace (and only non-zero eigenvalue) of A; this gives a coordinate-free interpretation of the diagonal entry. Every operator on an n-dimensional space can be expressed as a sum of n rank one operators; this gives a coordinate-free version of the sum of diagonal entries.

This also clarifies why tr(AB) = tr(BA) and why tr(AB) ≠ tr(A)tr(B), as composition of operators (multiplication of matrices) and trace can be interpreted as the same pairing. Viewing

${\displaystyle \operatorname {End} (V)\cong V\otimes V^{*},}$

one may interpret the composition map

${\displaystyle \operatorname {End} (V)\times \operatorname {End} (V)\to \operatorname {End} (V)}$

as

${\displaystyle (V\otimes V^{*})\times (V\otimes V^{*})\to (V\otimes V^{*})}$

coming from the pairing V* × VF on the middle terms. Taking the trace of the product then comes from pairing on the outer terms, while taking the product in the opposite order and then taking the trace just switches which pairing is applied first. On the other hand, taking the trace of A and the trace of B corresponds to applying the pairing on the left terms and on the right terms (rather than on inner and outer), and is thus different.

In coordinates, this corresponds to indexes: multiplication is given by

${\displaystyle (\mathbf {A} \mathbf {B} )_{ik}=\sum _{j}a_{ij}b_{jk},}$

so

${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} )=\sum _{ij}a_{ij}b_{ji}\quad {\text{and}}\quad \operatorname {tr} (\mathbf {B} \mathbf {A} )=\sum _{ij}b_{ij}a_{ji},}$

which is the same, while

${\displaystyle \operatorname {tr} (\mathbf {A} )\cdot \operatorname {tr} (\mathbf {B} )=\sum _{i}a_{ii}\cdot \sum _{j}b_{jj},}$

which is different.
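The index computation above translates directly into code. A NumPy sketch with arbitrary matrices:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# Pairing the inner indices: sum_ij a_ij b_ji, i.e. tr(AB) = tr(BA).
lhs = np.einsum('ij,ji->', A, B)
print(np.isclose(lhs, np.trace(A @ B)))  # True
print(np.isclose(lhs, np.trace(B @ A)))  # True

# Pairing left with left and right with right instead gives tr(A) tr(B),
# a genuinely different number for generic matrices.
print(np.trace(A) * np.trace(B))  # almost surely differs from lhs
```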
For finite-dimensional V, with basis {e_i} and dual basis {e^i}, e_ie^j is the ij-entry of the matrix of the operator with respect to that basis. Any operator A is therefore a sum of the form

${\displaystyle \mathbf {A} =a_{ij}e_{i}\otimes e^{j}.}$

With t defined as above,

${\displaystyle \operatorname {tr} (\mathbf {A} )=a_{ij}\operatorname {tr} \left(e_{i}\otimes e^{j}\right).}$

The latter, however, is just the Kronecker delta, being 1 if i = j and 0 otherwise. This shows that tr(A) is simply the sum of the coefficients along the diagonal. This method, however, makes coordinate invariance an immediate consequence of the definition.

### Dual

Further, one may dualize this map, obtaining a map

${\displaystyle F^{*}=F\to V\otimes V^{*}\cong \operatorname {End} (V).}$

This map is precisely the inclusion of scalars, sending 1 ∈ F to the identity matrix: "trace is dual to scalars". In the language of bialgebras, scalars are the unit, while trace is the counit. One can then compose these,

${\displaystyle F~{\overset {I}{\to }}~\operatorname {End} (V)~{\overset {\operatorname {tr} }{\to }}~F,}$

which yields multiplication by n, as the trace of the identity is the dimension of the vector space.

### Generalizations

Using the notion of dualizable objects and categorical traces, this approach to traces can be fruitfully axiomatized and applied to other mathematical areas.

## See also

• Trace of a tensor with respect to a metric tensor
• Characteristic function
• Field trace
• Golden–Thompson inequality
• Singular trace
• Specht's theorem
• Trace class
• Trace identity
• Trace inequalities
• von Neumann's trace inequality

## Notes

1.
This is immediate from the definition of the matrix product:

${\displaystyle \operatorname {tr} (\mathbf {A} \mathbf {B} )=\sum _{i=1}^{m}\left(\mathbf {A} \mathbf {B} \right)_{ii}=\sum _{i=1}^{m}\sum _{j=1}^{n}a_{ij}b_{ji}=\sum _{j=1}^{n}\sum _{i=1}^{m}b_{ji}a_{ij}=\sum _{j=1}^{n}\left(\mathbf {B} \mathbf {A} \right)_{jj}=\operatorname {tr} (\mathbf {B} \mathbf {A} ).}$

2. Proof: f(eij) = 0 if and only if ij, and f(ejj) = f(e11) (with the standard basis eij), and thus

${\displaystyle f(\mathbf {A} )=\sum _{i,j}[\mathbf {A} ]_{ij}f\left(e_{ij}\right)=\sum _{i}[\mathbf {A} ]_{ii}f\left(e_{11}\right)=f\left(e_{11}\right)\operatorname {tr} (\mathbf {A} ).}$

More abstractly, this corresponds to the decomposition ${\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus k,}$ as tr(AB) = tr(BA) (equivalently, tr([A, B]) = 0) defines the trace on sln, whose complement is the scalar matrices; this leaves one degree of freedom: any such map is determined by its value on scalars, which is a single scalar parameter, and hence all such maps are multiples of the trace, which is a nonzero such map.

3. Proof: ${\displaystyle {\mathfrak {sl}}_{n}}$ is a semisimple Lie algebra, and thus every element in it is a linear combination of commutators of some pairs of elements; otherwise the derived algebra would be a proper ideal.

4. This follows from the fact that tr(A*A) = 0 if and only if A = 0.

5. This can be proven with the Cauchy–Schwarz inequality.

## References

1. "Comprehensive List of Algebra Symbols". Math Vault. 2020-03-25. Retrieved 2020-09-09.
2. "Rank, trace, determinant, transpose, and inverse of matrices". fourier.eng.hmc.edu. Retrieved 2020-09-09.
3. Weisstein, Eric W. "Matrix Trace". mathworld.wolfram.com. Retrieved 2020-09-09.
4. Lipschutz, Seymour; Lipson, Marc (September 2005). Schaum's Outline of Theory and Problems of Linear Algebra. McGraw-Hill. ISBN 9780070605022.
5. Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis (2nd ed.). Cambridge University Press. ISBN 9780521839402.
6.
Teschl, G. (30 October 2014). Mathematical Methods in Quantum Mechanics. Graduate Studies in Mathematics. 157 (2nd ed.). American Mathematical Society. ISBN 978-1470417048.
https://math.stackexchange.com/questions/850887/derivation-of-null-quantification-in-logic/850901#850901
# Derivation of Null Quantification in Logic?

I was reading page 10-8 of this: https://faculty.washington.edu/smcohen/120/Chapter10.pdf and I was wondering if the distributive qualities could be derived, e.g. $\forall x (P \lor Q(x)) \Leftrightarrow P \lor \forall x Q(x)$, and the equivalent one for $\exists, \land$.

• In the Lecture Notes of the website there are (in the chapters following the one you are referencing) the rules for quantifiers: $\forall$-intro, $\forall$-elim, ... They must be used to prove the theorem above. Jun 29, 2014 at 8:00

The following proof uses the rules of Chapter 12: Methods of Proof for Quantifiers. I'll consider only the case $∀x(P∨Q(x)) \Rightarrow P∨∀xQ(x)$; the other one is similar.

(1) $∀x(P∨Q(x))$ --- assumed

(2) $P∨Q(a)$ --- from (1) by universal instantiation, or $\forall$-elim

Now we need some "propositional" transformation: we use the equivalence between $A \lor B$ and $\lnot A \Rightarrow B$.

(3) $\lnot P \Rightarrow Q(a)$ --- from (2) by tautological equivalence

(4) $\lnot P$ --- assumed

(5) $Q(a)$ --- by $\Rightarrow$-elim (or modus ponens)

(6) $\forall xQ(x)$ --- by universal introduction (or $\forall$-intro): the constant $a$ is not free in the assumptions

(7) $\lnot P \Rightarrow \forall xQ(x)$ --- from (4) and (6) by $\Rightarrow$-intro

(8) $P \lor \forall xQ(x)$ --- from (7) by tautological equivalence

(9) $∀x(P∨Q(x)) \Rightarrow P \lor \forall xQ(x)$ --- from (1) by $\Rightarrow$-intro.

Note

If we don't want to use the tautological equivalence in steps (3) and (8), we can use a "proof by cases".
From step:

(2) $P \lor Q(a)$

(3) $P$ --- assumed for $\lor$-elim

(4) $P \lor \forall x Q(x)$ --- from (3) by $\lor$-intro

(5) $Q(a)$ --- assumed for $\lor$-elim

(6) $\forall xQ(x)$ --- by universal introduction (or $\forall$-intro): the constant $a$ is not free in the assumptions

(7) $P \lor \forall x Q(x)$ --- from (6) by $\lor$-intro

(8) $P \lor \forall x Q(x)$ --- from (2), (4) and (7) by $\lor$-elim, "discharging" the temporary assumptions (3) and (5).

The theorem follows as above, by $\Rightarrow$-intro.

The other "direction" is easier. From the assumption $P \lor \forall x Q(x)$, we derive from both $P$ and $\forall xQ(x)$ separately: $P \lor Q(a)$ [in one case by $\lor$-intro; in the other case by $\forall$-elim followed by $\lor$-intro]. Then we apply again proof by cases ($\lor$-elim) to conclude from $P \lor \forall x Q(x)$ to $P \lor Q(a)$. Finally we apply $\forall$-intro to derive $\forall x(P \lor Q(x))$, followed by a final $\Rightarrow$-intro.

Comment

Please note that, in general, $\forall xP(x) \lor \forall xQ(x) \vdash \forall x(P(x) \lor Q(x))$, but not vice versa, i.e. $\forall x(P(x) \lor Q(x)) \nvdash \forall xP(x) \lor \forall xQ(x)$. In other words, the above proof does not work if $x$ is free in $P$.

Why so? Because (consider the first proof above) in step (6) we have to apply $\forall$-intro to $Q(a)$ to get $\forall xQ(x)$. If $x$ is free in $P$ [and this is so if we try to apply the proof to the assumption $\forall x(P(x) \lor Q(x))$, because in step (2) we have instantiated it to $P(a) \lor Q(a)$], then applying $\forall$-intro would violate the proviso that the constant $a$ must not be free in the assumptions, since at that step of the proof we still have an "undischarged" assumption containing $a$: namely $P(a)$.

• Why can you assume $\neg P$ in step 4?
– user82004 Jun 29, 2014 at 9:26 • @Anthony - We can assume whatever we need, provided that we "discharge" it before the end of the proof, if - as in this case - we are proving a logical theorem, i.e. a valid formula which does not depend on assumptions. Jun 29, 2014 at 10:06 • But don't you need to consider the situation where P is true as well? Or is it because the only thing you're really interested in with an implication is what happens when the antecedent is true? – user82004 Jun 29, 2014 at 20:38 It doesn't seem like the notes you're reading define any deductive system for FOL at all, and you can't derive anything before you have a deductive system to do it in. However, once you have a deductive system for FOL (of which there are several to choose among), you will be able to derive these laws in it. Otherwise the system you have is not FOL! • Oof. I'm not sure what deductive system I normally use, I just was trying to figure out why Null Quantification makes sense. – user82004 Jun 29, 2014 at 2:33 • @Anthony: If you're just trying to gain an intuitive understanding of why the rules hold, I think that's easier by thinking about the semantics rather than about derivations of them. To wit, in each possible world, $P$ is either true or false. So, depending on which of those is the case, the content of the rule is either $$\forall x({\sf True}\lor \cdots)\Leftrightarrow {\sf True}\lor \cdots$$ or $$\forall x(Q(x))\Leftrightarrow \forall x\,Q(x)$$ each of which is clearly always true. Jun 29, 2014 at 2:38 • I suppose I did want more than intuition though- derivations are always nice. There are a few other cases to consider, no? – user82004 Jun 29, 2014 at 2:49 • @Anthony: The semantics of FOL can (and should) be formalized; it is not just intuitive handwaving. And I don't see which other cases there would be to consider. 
(The rules are only valid if $P$ does not contain $x$ free, and therefore in any given interpretation $P$ has the same truth value no matter which value $x$ is considered to have). Jun 29, 2014 at 2:56
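The semantic reading suggested in the comments can also be checked mechanically: over a finite domain one can enumerate every truth value of P and every predicate Q and test both distributive laws. A Python sketch (the domain size of 3 is an arbitrary choice; this is a finite-model check, not a derivation):

```python
from itertools import product

# Check  forall x (P or Q(x))  <->  P or forall x Q(x)
# and    exists x (P and Q(x)) <->  P and exists x Q(x)
# on a small finite domain.  P is a constant truth value (x is not free
# in P); Q ranges over every predicate on the domain, encoded as a
# tuple of booleans.
domain = range(3)

for P in (False, True):
    for Q_values in product((False, True), repeat=len(domain)):
        Q = lambda x: Q_values[x]
        # Universal / disjunction law.
        assert all(P or Q(x) for x in domain) == (P or all(Q(x) for x in domain))
        # Existential / conjunction law.
        assert any(P and Q(x) for x in domain) == (P and any(Q(x) for x in domain))

print("both equivalences hold in every interpretation")
```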
https://brilliant.org/problems/circle-problem-3/
# Circle problem

Geometry Level pending

Let AD be a diameter of a circle of radius r, and let B, C be points on the circle such that AB = BC = r/2 and A is not equal to C. Find CD/r.
https://arxiv.org/abs/1403.6185
# Title: Physics of Fully Depleted CCDs

Abstract: In this work we present simple, physics-based models for two effects that have been noted in the fully depleted CCDs that are presently used in the Dark Energy Survey Camera. The first effect is the observation that the point-spread function increases slightly with the signal level. This is explained by considering the effect on charge-carrier diffusion due to the reduction in the magnitude of the channel potential as collected signal charge acts to partially neutralize the fixed charge in the depleted channel. The resulting reduced voltage drop across the carrier drift region decreases the vertical electric field and increases the carrier transit time. The second effect is the observation of low-level, concentric ring patterns seen in uniformly illuminated images. This effect is shown to be most likely due to lateral deflection of charge during the transit of the photogenerated carriers to the potential wells as a result of lateral electric fields. The lateral fields are a result of space charge in the fully depleted substrates arising from resistivity variations inherent to the growth of the high-resistivity silicon used to fabricate the CCDs.

Subjects: Instrumentation and Methods for Astrophysics (astro-ph.IM); Instrumentation and Detectors (physics.ins-det)
DOI: 10.1088/1748-0221/9/03/C03057
Cite as: arXiv:1403.6185 [astro-ph.IM] (or arXiv:1403.6185v1 [astro-ph.IM] for this version)

## Submission history

From: Chris Bebek
[v1] Mon, 24 Mar 2014 23:07:56 UTC (832 KB)
http://mathhelpforum.com/advanced-algebra/224041-ad-bc-proof-linear-independence.html
# Math Help - ad-bc proof of linear independence

1. ## ad-bc proof of linear independence

Given 2 vectors x1 = (a, b) and x2 = (c, d), prove that they are linearly independent if and only if ad - bc does not equal zero. I'm familiar with ad - bc as the determinant of a matrix, and if it is equal to zero then the matrix is singular; ad - bc is also the (signed) area of a parallelogram.

2. ## Re: ad-bc proof of linear independence

Use the definition of "linearly dependent"! Two vectors, x1 and x2, are linearly dependent if and only if there exist numbers A and B, at least one non-zero, such that Ax1 + Bx2 = 0. Here, that means A(a, b) + B(c, d) = (Aa + Bc, Ab + Bd) = (0, 0), which is equivalent to Aa + Bc = 0, Ab + Bd = 0. Obviously, one solution is A = B = 0. What must be true so that is NOT the only solution?

One way to answer that is to try to solve the equations! If we multiply the first equation by b, Aab + Bbc = 0, multiply the second equation by a, Aab + Bad = 0, and subtract it from the previous equation, we eliminate A: B(ad - bc) = 0. IF ad - bc is not 0, we can divide both sides by it, getting B = 0. Putting B = 0 into Aa + Bc = 0, we have Aa = 0. If a is not 0, A = 0. If a = 0, we can put B = 0 into the other equation Ab + Bd = 0 to get Ab = 0. If b is not 0, A = 0 again. (So if ad - bc is not 0, the only solution is A = B = 0.) In any case, the vectors are independent. If ad - bc is 0, then B(ad - bc) = 0 for any value of B (including non-zero values), so the vectors are dependent.

In the case of just two vectors we can also say that if ad - bc = 0 and a is not 0, then d = bc/a so that <c, d> = <c, bc/a> = (c/a)<a, b>, showing that <c, d> is just a multiple of <a, b>. If a = 0, ad - bc = 0 becomes bc = 0, so either b = 0 or c = 0. If b = 0, <a, b> = <0, 0>, which is linearly dependent with any other vector. If c = 0, then we have <0, b> and <0, d> so that <0, d> = (d/b)<0, b> and, again, one vector is a multiple of the other.

3. ## Re: ad-bc proof of linear independence

Thanks very much! I guess it's stronger to prove it by algebra.
So you have to prove 2 things:

1. Given linear independence (A = B = 0 is the only solution), ad - bc does not equal 0. Is this because then B(ad - bc) = 0 with ad - bc arbitrary, which does not necessarily equal 0 (but then both B and ad - bc can equal 0 and the proof fails)? If A = B = 0, it doesn't necessarily follow from Aa + Bc = 0, Ab + Bd = 0 that ad - bc is nonzero...?

2. Given ad - bc does not equal 0, the vectors are linearly independent: B = 0 leads to Aa = 0, Ab = 0; if A does not equal 0 then a = b = 0, and (0, 0) is linearly dependent on (c, d). If a and b don't equal 0, then A = 0 and you can prove linear independence, but then a and b can equal 0 in the problem since the vectors range over all of V2?

The problem seems to allude to this proof: n vectors in Rn are linearly dependent if and only if the determinant of the matrix taking the vectors as its columns is 0. Or also Steinitz's theorem; and linear independence is equivalent to the non-existence of non-trivial solutions, proportionality to non-trivial solutions (x, y), (-y, x). From (a, c) = h(b, d) you can derive ad - bc = 0 with h = 1.

I know we are all busy, so can someone please let me know if I've made any mistakes? Is there another way to prove this using matrix theory?

4. ## Re: ad-bc proof of linear independence

Apparently there is a Leibniz formula in which the identity permutation is the only one that gives a nonzero contribution, and from this or the Laplace expansion you can deduce that this n-linear function is an alternating form. This means that whenever two columns of a matrix are identical, or more generally some column can be expressed as a linear combination of the other columns (i.e. the columns of the matrix form a linearly dependent set), its determinant is 0. So maybe you can prove in this way the converse: if the columns of a matrix form a linearly independent set, then its determinant is nonzero. But how do you get the 2x2 matrix?
V1 + V2 gives you a parallelogram whose (signed) area is the determinant ad - bc; if det = 0 then the vectors are linearly dependent and the area of the parallelogram is 0, while if the vectors are linearly independent then they can be added to form a parallelogram whose area is nonzero, which is kind of a geometric proof.

5. ## Re: ad-bc proof of linear independence

I wonder why he gives this problem without the matrix theory (I started at chapter 12). Can you intuitively prove this just from chapter 12 of Apostol? It seems impossible to prove without some linear algebra.

6. ## Re: ad-bc proof of linear independence

Originally Posted by mathnerd15: I wonder why he gives this problem without the matrix theory (I started at chapter 12). Can you intuitively prove this just from chapter 12 Apostol?

You don't need matrix theory. In $\mathcal{R}^2$ two vectors are linearly independent if and only if they are not parallel. Two vectors are parallel if and only if they are scalar multiples of each other.
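For what it's worth, the criterion in the thread is easy to test numerically. A small Python sketch; the sample vectors are arbitrary:

```python
import numpy as np

def independent(x1, x2):
    """(a, b) and (c, d) in R^2 are linearly independent iff ad - bc != 0."""
    a, b = x1
    c, d = x2
    return a * d - b * c != 0

# (1, 2) and (2, 4) are proportional, hence dependent (zero determinant).
print(independent((1, 2), (2, 4)))     # False
# (1, 0) and (0, 1) span the plane, hence independent.
print(independent((1, 0), (0, 1)))     # True

# Cross-check: the matrix with the vectors as columns has full rank
# exactly when the determinant ad - bc is nonzero.
M = np.array([[1, 2],
              [0, 4]])                 # columns (1, 0) and (2, 4)
print(np.linalg.matrix_rank(M) == 2)   # True
print(independent((1, 0), (2, 4)))     # True
```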
http://www.physicsforums.com/showthread.php?t=113896
# Series with Factorial

by dekoi
Tags: factorial, series

I don't understand this conversion!
$$\sum_{n=1}^\infty \frac{\sin(n\pi /2)}{n!} = \sum_{n=0}^\infty \frac{(-1)^n}{(2n+1)!}$$
I know that the numerator of the left side is 0 when $n$ is an even number. When $n$ is odd, the numerator is either $+1$ or $-1$. But how do I continue?

 Quote by dekoi I don't understand this conversion! I know that the numerator of the left side is 0 when n is an even number. When n is odd, the numerator is either +1 or -1. But how do I continue?
As you say, if $n$ is even then $\sin(n\pi/2) = 0$. Let $n = 2m+1$, so that for every $m = 0, 1, 2, \dots$ the index $n$ is odd. Notice that $\sin\left(\frac{\pi}{2}\right) = 1$, $\sin\left(\frac{3\pi}{2}\right) = -1$, $\sin\left(\frac{5\pi}{2}\right) = 1$, etc. In other words, $\sin\left(\frac{(2m+1)\pi}{2}\right)$ is $1$ if $m$ is even (0, 2, etc.) and $-1$ if $m$ is odd. Hence
$$\sin\left(\frac{n\pi}{2}\right) = \sin\left(\frac{(2m+1)\pi}{2}\right) = (-1)^m,$$
so we have
$$\sum_{n=1}^\infty \frac{\sin\left(\frac{n\pi}{2}\right)}{n!} = \sum_{m=0}^\infty\frac{(-1)^m}{(2m+1)!}.$$
Since $m$ is a "dummy variable" (it just denotes a place in the series and doesn't appear in the final sum), just replace $m$ by $n$ to get the result you have; the two "$n$"s on either side of the equation have different meanings.

How come the lower limit was replaced by 0 from 1? Thanks.

Because the substitution is $n = 2m+1$: as the new index runs over $0, 1, 2, \dots$, the quantity $2m+1$ already runs over $1, 3, 5, \dots$, so nothing is skipped. (With $2n-1$ instead, the sum could keep its lower limit at 1.)

 Quote by dekoi How come the lower limit was replaced by 0 from 1? Thanks.
Hmmm, look at the equality again:
$$\sum_{n = 1} ^ {\infty} \left( \frac{\sin \left( \frac{n \pi}{2} \right)}{n!} \right) = \sum_{n = 0} ^ {\infty} \left( \frac{(-1) ^ n}{(2n + 1)!} \right)$$
It's not only that $n = 1$ has been replaced by $n = 0$; the $n!$ in the denominator has also been replaced by $(2n + 1)!$. Do you notice this?
As HallsofIvy has already pointed out: if $n$ is even then $$\frac{\sin \left( \frac{n \pi}{2} \right)}{n!} = 0\,,$$ right? So you'll be left with the terms with odd $n$ only. Now let $n = 2m + 1$; this means $n$ is odd, right? And since $n \geq 1$ (the series starts from $n = 1$, and 1 is an odd number), we have $2m + 1 \geq 1$, so $m \geq 0$, which means the new series will start from $m = 0$. So, changing $n$ to $m$, we have: $$\sum_{n = 1} ^ {\infty} \left( \frac{\sin \left( \frac{n \pi}{2} \right)}{n!} \right) = \sum_{m = 0} ^ {\infty} \left( \frac{\sin \left( \frac{(2m + 1) \pi}{2} \right)}{(2m + 1)!} \right) = \sum_{m = 0} ^ {\infty} \left( \frac{(-1) ^ m}{(2m + 1)!} \right)$$ Can you get it now? :)
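As a quick numerical sanity check (my own addition, not part of the thread): since $\sum_{n=0}^\infty \frac{(-1)^n x^{2n+1}}{(2n+1)!} = \sin x$, both sides of the conversion equal $\sin(1)$, and partial sums confirm this.

```python
import math

def lhs(N):
    # partial sum of sin(n*pi/2)/n! for n = 1 .. N-1
    return sum(math.sin(n * math.pi / 2) / math.factorial(n)
               for n in range(1, N))

def rhs(N):
    # partial sum of (-1)^n / (2n+1)! for n = 0 .. N-1
    return sum((-1) ** n / math.factorial(2 * n + 1) for n in range(N))

# Both series agree with each other and with sin(1) = 0.84147...,
# since the right side is the Taylor series of sin(x) at x = 1.
assert abs(lhs(30) - rhs(15)) < 1e-12
assert abs(lhs(30) - math.sin(1)) < 1e-12
```

(The even-$n$ terms on the left are not exactly zero in floating point, since `math.pi` is rounded, but they are far below the tolerance used.)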
http://tex.stackexchange.com/questions/45927/exporting-definitions
# Exporting Definitions

Below is part of the code for my notes. I would like to "easily" create a new document displaying only the definitions, theorems, and lemmas. I can copy and paste this and just delete what I do not want, but if there is an easier way, that would be great.

\documentclass[12pt,letterpaper]{article}
\usepackage{amsmath}            % just math
\usepackage{amssymb}            % allow blackboard bold (aka N,R,Q sets)
\usepackage{amsthm}             % allows thm environment
\usepackage{mathrsfs}
\usepackage{graphicx, framed}   % allows graphics
\usepackage[none]{hyphenat}     % disables hyphenation
\usepackage[usenames,dvipsnames]{color}
\usepackage{wrapfig}
\definecolor{Def}{rgb}{0.85,0.65,0.13}
\definecolor{Ex}{rgb}{0,0.39,0}
\definecolor{Nt}{rgb}{1,0.08,0.58}
\definecolor{Tm}{rgb}{0,0,0.80}
\definecolor{Pf}{rgb}{1,0.55,0}
\definecolor{white}{rgb}{1,1,1}
\textwidth 6.5truein   % These 4 commands define more efficient margins
\textheight 9.4truein
\oddsidemargin 0.0in
\topmargin -0.6in
\parskip 5pt           % Also, a bit of space between paragraphs
\newtheorem{theorem}{\color{Tm}{\textbf{\underline{Theorem}}}}
\newtheorem{cor}{\color{Tm}{\textbf{\underline{Corollary}}}}
\newtheorem{prop}{\color{Tm}{\textbf{\underline{Proposition}}}}
\newtheorem{claim}{\color{Tm}{\textbf{\underline{Claim}}}}
\theoremstyle{definition}\newtheorem{definition}{\color{Def}{\textbf{\underline{Definition}}}}
\newtheorem{lemma}{\color{Tm}{\textbf{\underline{Lemma}}}}
\newtheorem{exer}{\color{Plum}{\textbf{\underline{Exercise}}}}
\theoremstyle{definition}\newtheorem{note}{\color{Nt}{\textbf{\underline{Note}}}}
\theoremstyle{definition}\newtheorem{example}{\color{Ex}{\textbf{\underline{Example}}}}
\newtheorem{sidebar}{\color{Aquamarine}{\textbf{\underline{Sidebar}}}}
\newtheorem{pf}{\color{BurntOrange}{\textbf{\underline{Proof}}}}
\newcommand{\QEDend}{\linebreak \begin{flushright}\textbf{QED}\end{flushright}}
\newcommand{\QEDish}{\linebreak \begin{flushright}\textbf{$\approx$QED}\end{flushright}}
\newcommand{\bb}[1]{\mathbb{#1}}
\newcommand{\mcal}[1]{\mathcal{#1}}
\newcommand{\scr}[1]{\mathscr{#1}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\lp}[1]{\left({#1}\right)}
\newcommand{\la}[1]{\left|{#1}\right|}
\newcommand{\as}{\color{white}{*}\color{black}{}}
\newcommand{\ind}{\hspace*{30pt}}
\newcommand{\done}{\hfill \mbox{\raggedright \rule{0.1in}{0.1in}}}
\newcommand{\bigdotcup}{\ensuremath{\mathop{\makebox[-0.6pt]{\hspace{16pt}{$\cdot$}}\bigcup}}}
\newcommand{\dotcup}{\ensuremath{\mathop{\makebox[-0.6pt]{\hspace{13pt}{$\cdot$}}\bigcup}}}
\raggedright
\parindent 30pt
\newcommand{\ftheorem}[1]{\begin{framed}\begin{theorem}#1\end{theorem}\end{framed}}
\newcommand{\fdef}[1]{\begin{framed}\begin{definition}#1\end{definition}\end{framed}}
\newcommand{\fnote}[1]{\begin{framed}\begin{note}#1\end{note}\end{framed}}
\newcommand{\flemma}[1]{\begin{framed}\begin{lemma}#1\end{lemma}\end{framed}}
\newcommand{\fex}[1]{\begin{framed}\begin{example}#1\end{example}\end{framed}}
\newcommand{\fcor}[1]{\begin{framed}\begin{cor}#1\end{cor}\end{framed}}

\begin{document}
\section{Course Notes}  % January 23, 2012
\subsection{Metric Spaces}

\begin{definition}The metric space $M$ is a \underline{complete metric space} if every Cauchy sequence is convergent in $M$.
\end{definition}

\begin{example} $\bb{R}$ is a complete metric space\done \end{example}

\begin{example} $C[0,1]$ is a complete metric space with $d(f,g) = \sup |f(t) - g(t)|$, $t \in [0,1]$.\done \end{example}

\begin{theorem} Any closed subset $A$ of a complete metric space $M$ is complete.
\end{theorem}

\begin{proof} Let $A \subset M$ be closed and $\langle x_n \rangle$ be any Cauchy sequence in $A$. Since $M$ is a complete metric space, $\langle x_n \rangle$ is convergent in $M$, so $x_n \rightarrow x \in M$. Since $A$ is closed, $x \in A$. Thus, $A$ is complete.
\end{proof}
\end{document}

-
Please consider adding a minimum working example (MWE) of your code, with more information on what you're trying to achieve. The first and second sentences of your current posting, I'm afraid to say, sound a bit contradictory. Plus, are you using a package such as amsthm or ntheorem? –  Mico Feb 27 '12 at 2:01
Mico, I have added some sample code showing the packages I am using and rephrased my question. Sorry about the confusion. –  Tyler Clark Feb 28 '12 at 23:29

Well, there is still some missing info in your MWE, rendering it a MNWE ;). But try this:

\documentclass{amsart}
\usepackage[active, generate=short, extract-cmd={section},
            extract-env={definition,theorem}]{extract}
\begin{extract*}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}
\theoremstyle{definition}
\newtheorem{definition}{Definition}
\newtheorem{example}{Example}
\end{extract*}

as the preamble, compile your file, and look into the file short.tex. See the documentation of the extract package for more info (this is important: this package messes with internals of LaTeX and will not work for arbitrary commands, for example!). Short info: the extract* environment puts its contents into the generated file (and processes them as usual as well); generate defines the name of the generated file, and extract-cmd and extract-env specify what to extract.

Also, consider avoiding \underline and using \emph instead.

-
This works wonderfully. Thank you! –  Tyler Clark Feb 29 '12 at 1:51
http://zbmath.org/?q=an:1045.33009
# zbMATH — the first resource for mathematics

Estimates for the error term in a uniform asymptotic expansion of the Jacobi polynomials. (English) Zbl 1045.33009

There are now several ways to derive an asymptotic expansion for the Jacobi polynomials $P_n^{(\alpha,\beta)}(\cos\theta)$, as $n\to\infty$, which holds uniformly for $\theta \in [0,\frac{1}{2}\pi]$.
One of these starts with a contour integral, involves a transformation which takes this integral into a canonical form, and makes repeated use of an integration-by-parts technique. There are two advantages to this approach: (i) it provides a recursive formula for calculating the coefficients in the expansion, and (ii) it leads to an explicit expression for the error term. In this paper, we point out that the estimate for the error term given previously is not sufficient for the expansion to be regarded as genuinely uniform for $\theta$ near the origin, once one takes into account the behavior of the coefficients near $\theta = 0$. The aim is to use an alternative method to estimate the remainder. First it is shown that the coefficients in the expansion are bounded for $\theta \in [0,\frac{1}{2}\pi]$. Next, an estimate is given for the error term which is of the same order as the first neglected term.

##### MSC:
33C45 Orthogonal polynomials and functions of hypergeometric type
41A30 Approximation by other special function classes
http://physics.stackexchange.com/questions/20797/differentiating-propagator-greens-function-correlation-function-etc/20812
# Differentiating Propagator, Greens function, Correlation function, etc

For the following quantities respectively, could someone write down the common definitions, their meaning, the field of study in which one would typically find these under their actual name, and, most of all, the associated abuse of language, as well as the differences and correlations between them (no pun intended)?

Maybe including side notes regarding the distinction between covariance, covariance function, and cross-covariance; the pair correlation function for different observables; relations to the autocorrelation function; the $n$-point function; the Schwinger function; the relation to transition amplitudes; retardation and related adjectives for Green functions and/or propagators; the heat kernel and its seemingly privileged position; the spectral density, spectra, and the resolvent.

Edit: I'd still like to hear about the "correlation function interpretation" of the quantum field theoretical framework. Can transition amplitudes be seen as a sort of auto-correlation? Like... such that the QFT dynamics at hand just determine the structure of the temporal and spatial overlaps?

-
The propagator, the two-point correlation function, and the two-point Green's function are all synonymous. They are used primarily in quantum mechanics and quantum field theory. They represent the amplitude for preparing a one-particle state at $\vec{x}$ and then finding the particle at $\vec{y}$. –  kηives Feb 10 '12 at 14:08

The main distinction you want to make is between the Green function and the kernel. (I prefer the terminology "Green function" without the 's. Imagine a different name, say, Feynman. People would definitely say the Feynman function, not the Feynman's function. But I digress...)

Start with a differential operator, call it $L$. E.g., in the case of Laplace's equation, $L$ is the Laplacian $L = \nabla^2$.
Then, the Green function of $L$ is the solution of the inhomogeneous differential equation $$L_x G(x, x^\prime) = \delta(x - x^\prime)\,.$$ We'll talk about its boundary conditions later on. The kernel is a solution of the homogeneous equation $$L_x K(x, x^\prime) = 0\,,$$ subject to a Dirichlet boundary condition $\lim_{x \rightarrow x^\prime}K(x,x^\prime) = \delta (x-x^\prime)$, or Neumann boundary condition $\lim_{x \rightarrow x^\prime} \partial K(x,x^\prime) = \delta(x-x^\prime)$.

So, how do we use them? The Green function solves linear differential equations with driving terms. $L_x u(x) = \rho(x)$ is solved by $$u(x) = \int G(x,x^\prime)\rho(x^\prime)dx^\prime\,.$$ Whichever boundary conditions we want to impose on the solution $u$ specify the boundary conditions we impose on $G$. For example, a retarded Green function propagates influence strictly forward in time, so that $G(x,x^\prime) = 0$ whenever $x^0 < x^{\prime\,0}$. (The 0 here denotes the time coordinate.) One would use this if the boundary condition on $u$ was that $u(x) = 0$ far in the past, before the source term $\rho$ "turns on."

The kernel solves boundary value problems. Say we're solving the equation $L_x u(x) = 0$ on a manifold $M$, and specify $u$ on the boundary $\partial M$ to be $v$. Then, $$u(x) = \int_{\partial M} K(x,x^\prime)v(x^\prime)dx^\prime\,.$$ In this case, we're using the kernel with Dirichlet boundary conditions.

For example, the heat kernel is the kernel of the heat equation, in which $$L = \frac{\partial}{\partial t} - \nabla_{R^d}^2\,.$$ We can see that $$K(x,t; x^\prime, t^\prime) = \frac{1}{[4\pi (t-t^\prime)]^{d/2}}\,e^{-|x-x^\prime|^2/4(t-t^\prime)}$$ solves $L_{x,t} K(x,t;x^\prime,t^\prime) = 0$ and moreover satisfies $$\lim_{t \rightarrow t^\prime} \, K(x,t;x^\prime,t^\prime) = \delta^{(d)}(x-x^\prime)\,.$$ (We must be careful to consider only $t > t^\prime$ and hence also take a directional limit.)
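A quick aside before returning to the heat kernel: the claim that $u(x) = \int G(x,x^\prime)\rho(x^\prime)dx^\prime$ solves the driven problem can be checked numerically. The sketch below is my own illustration (not from any textbook passage quoted here): it uses $L = -d^2/dx^2$ on $(0,1)$ with $u(0)=u(1)=0$, whose Green function is the standard tent-shaped $G(x,s) = x(1-s)$ for $x \le s$ and $s(1-x)$ for $x \ge s$.

```python
import math

# Interior grid on (0, 1); Dirichlet conditions u(0) = u(1) = 0.
n = 200
h = 1.0 / (n + 1)
x = [(i + 1) * h for i in range(n)]

def G(xi, s):
    # Green function of L = -d^2/dx^2 on (0, 1) with u(0) = u(1) = 0:
    # G(x, s) = x (1 - s) for x <= s, and s (1 - x) for x >= s.
    return xi * (1.0 - s) if xi <= s else s * (1.0 - xi)

rho = [math.sin(math.pi * xi) for xi in x]   # driving term

# u(x) = integral of G(x, s) rho(s) ds, here as a Riemann sum.
u = [sum(G(xi, s) * r for s, r in zip(x, rho)) * h for xi in x]

# Exact solution of -u'' = sin(pi x) with these boundary conditions:
# u(x) = sin(pi x) / pi^2.
err = max(abs(ui - math.sin(math.pi * xi) / math.pi ** 2)
          for ui, xi in zip(u, x))
assert err < 1e-3
```

The sign convention ($-d^2/dx^2$ rather than $\nabla^2$) is chosen so that $G$ is positive; it only flips the sign of the Green function relative to the Laplacian case.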
Say you're given some shape $v(x)$ at time $0$ and want to "melt" it according to the heat equation. Then later on, this shape has become $$u(x,t) = \int_{R^d} K(x,t;x^\prime,0)v(x^\prime)d^dx^\prime\,.$$ So in this case, the boundary was the time-slice at $t^\prime = 0$.

Now for the rest of them. Propagator is sometimes used to mean Green function, sometimes used to mean kernel. The Klein-Gordon propagator is a Green function, because it satisfies $L_x D(x,x^\prime) = \delta(x-x^\prime)$ for $L_x = \partial_x^2 + m^2$. The boundary conditions specify the difference between the retarded, advanced and Feynman propagators. (See? Not Feynman's propagator.) In the case of a Klein-Gordon field, the retarded propagator is defined as $$D_R(x,x^\prime) = \Theta(x^0 - x^{\prime\,0})\,\langle0| [\varphi(x), \varphi(x^\prime)] |0\rangle\,,$$ where $\Theta(x) = 1$ for $x > 0$ and $= 0$ otherwise (up to a conventional factor of $i$, it is the commutator, not the plain product, that appears here). The Wightman function is defined as $$W(x,x^\prime) = \langle0| \varphi(x) \varphi(x^\prime) |0\rangle\,,$$ i.e. without the time ordering constraint. But guess what? It solves $L_x W(x,x^\prime) = 0$. It's a kernel. The difference is the $\Theta$ out front, which becomes a Dirac $\delta$ upon taking one time derivative; acting on the commutator, the equal-time relation $[\varphi, \partial_t\varphi] = i\delta$ then produces the inhomogeneous $\delta$ term. If one uses the kernel with Neumann boundary conditions on a time-slice boundary, the relationship $$G_R(x,x^\prime) = \Theta(x^0 - x^{\prime\,0}) K(x,x^\prime)$$ is general.

In quantum mechanics, the evolution operator $$U(x,t; x^\prime, t^\prime) = \langle x | e^{-i (t-t^\prime) \hat{H}} | x^\prime \rangle$$ is a kernel. It solves the Schrödinger equation and equals $\delta(x - x^\prime)$ for $t = t^\prime$. People sometimes call it the propagator. It can also be written in path integral form. Linear response and impulse response functions are Green functions.

These are all two-point correlation functions. "Two-point" because they're all functions of two points in space(time). In quantum field theory, statistical field theory, etc.
one can also consider correlation functions with more field insertions/random variables. That's where the real work begins!

-
Very nice answer. I wonder why, when you introduce the kernel, the $\lim$ is taken with respect to the same arguments $x$ and $x'$ as the delta function, but later you only use times. Also, in statistical mechanics, is the correlation function (which depends on the correlation length and specifies how macroscopic the effects are) a Green(s) function? I don't see any differential equations there. That's generally my problem, I think: I read the name Green function where there are no differential equations and deltas around. Lastly, what about the functions characterizing susceptibilities? –  Nikolaj K. Feb 11 '12 at 23:46
In how many dimensions you take the limit (i.e. just time, or time and space) is sort of a matter of terminology, due to the fact that the $\delta$ function is zero everywhere except one point. For the limit of the heat kernel, for example, all I'm getting at is that if the two time coordinates approach one another and the spatial points are not equal, the result vanishes. But if they are equal and then the time coordinates are made to approach, you get a quantity that behaves like a $d$-dimensional $\delta$ function. –  josh Feb 12 '12 at 17:46
To see how quantities like $W(x,x^\prime)=\langle0|\varphi(x)\varphi(x^\prime)|0\rangle$ satisfy the right differential equations and boundary conditions, read about Schwinger-Dyson equations in QFT. And don't forget that when you canonically quantize a Klein-Gordon field, the canonical momentum is $\pi = \partial_t\varphi$, and so $[\varphi(x,t),\partial_t\varphi(x^\prime,t)] = i\hbar\delta(x-x^\prime)$. This will matter in getting the right boundary conditions on the time-slice boundary. –  josh Feb 12 '12 at 17:50
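As a numerical footnote to the heat-kernel discussion above (my own sketch; the grid spacing, evolution time, and Gaussian initial profile are arbitrary choices, not from the answer): convolving with the $d=1$ heat kernel conserves the total "heat" and maps a Gaussian to the known broadened Gaussian.

```python
import math

def heat_kernel(x, t):
    # d = 1 heat kernel: K(x, t) = exp(-x^2 / 4t) / sqrt(4 pi t)
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

dx = 0.05
xs = [-10.0 + dx * i for i in range(401)]
t = 0.5

def v(s):
    # initial shape to be "melted"
    return math.exp(-s * s)

# u(x, t) = sum_s K(x - s, t) v(s) dx, a Riemann-sum version of the
# boundary-value formula with the time slice at t' = 0.
u = [sum(heat_kernel(xi - s, t) * v(s) for s in xs) * dx for xi in xs]

# 1) The kernel integrates to 1, so the total "heat" is conserved.
total0 = sum(v(s) for s in xs) * dx          # ~ sqrt(pi)
total1 = sum(u) * dx
assert abs(total1 - total0) < 1e-9

# 2) Gaussian in, Gaussian out: exp(-x^2) evolves to
#    exp(-x^2 / (1 + 4t)) / sqrt(1 + 4t).
exact = [math.exp(-xi * xi / (1 + 4 * t)) / math.sqrt(1 + 4 * t)
         for xi in xs]
assert max(abs(a - b) for a, b in zip(u, exact)) < 1e-9
```

The closed form in step 2 is just the convolution of two Gaussians; the trapezoid-like sum on a fine grid of a rapidly decaying integrand is accurate far below the tolerances used.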
https://www.aanda.org/articles/aa/full_html/2010/09/aa13593-09/aa13593-09.html
A&A 517, A77 (2010), 18 pages
Section: Extragalactic astronomy
https://doi.org/10.1051/0004-6361/200913593
Published online 11 August 2010

## Relating dust, gas, and the rate of star formation in M 31

F. S. Tabatabaei - E. M. Berkhuijsen

Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany

Received 3 November 2009 / Accepted 26 April 2010

Abstract

Aims. We investigate the relationships between dust and gas, and study the rate of star formation in M 31.

Methods. We have derived distributions of dust temperature and dust opacity across M 31 at 45″ resolution using the Spitzer data. With the opacity map and a standard dust model we de-reddened the Hα emission, yielding the first Hα map of M 31 corrected for extinction. We compared the emissions from dust, Hα, HI, and H2 by means of radial distributions, pixel-to-pixel correlations, and wavelet cross-correlations. We calculated the star formation rate and star formation efficiency from the de-reddened Hα emission.

Results. The dust temperature steeply decreases from 30 K near the center to 15 K at large radii. The mean dust optical depth at the Hα wavelength along the line of sight is about 0.7. The radial decrease in the dust-to-gas ratio is similar to that of the oxygen abundance. Extinction is nearly linearly correlated with the total gas surface density within limited radial intervals. On scales <2 kpc, cold dust emission is best correlated with that of neutral gas, and warm dust emission with that of ionized gas. The Hα emission is slightly better correlated with emission at 70 μm than at 24 μm. The star formation rate in M 31 is low. In the area 6 kpc < R < 17 kpc, the total SFR is about 0.3 M⊙ yr⁻¹. A linear relationship exists between the surface densities of SFR and H2. The Kennicutt-Schmidt law between SFR and total gas has a power-law index of 1.30 ± 0.05 in the radial range R = 7-11 kpc, increasing by about 0.3 for R = 11-13 kpc.

Conclusions.
The better 70 μm-Hα than 24 μm-Hα correlation, plus an excess in the 24 μm/70 μm intensity ratio, indicates that sources other than dust grains, e.g. those of stellar origin, contribute to the 24 μm emission. The lack of H2 in the central region could be related to the lack of HI and the low opacity/high temperature of the dust. Since neither the SFR nor the SFE is well correlated with the surface density of H2 or total gas, factors other than gas density must play an important role in the formation of massive stars in M 31. The molecular depletion time scale of 1.1 Gyr indicates that M 31 is about three times less efficient in forming young massive stars than M 33.

Key words: galaxies: individual: M 31 - galaxies: ISM - dust, extinction - ISM: general - stars: formation

## 1 Introduction

Dust, neutral gas, and ionized gas are the major components of the interstellar medium (ISM) in galaxies. Observations of their properties and inter-relationships can give strong clues to the physics governing star formation. Relationships between components in the ISM are to be expected. Observations have shown that in the Galaxy dust and neutral gas are well mixed. Most stars form in dense clouds of molecular gas mixed with cold dust; they subsequently heat the dust and gas in their surroundings and ionize the atomic gas. As the major coolants of the ISM are continuum emission and line emission at various frequencies, a close comparison of these emissions could shed light on spatial and physical connections between the emitting components.

Present-day IR and radio telescopes have produced sensitive high-resolution maps of several nearby galaxies, which are ideal laboratories for studying the interplay between the ISM and star formation (e.g. Bigiel et al. 2008; Kennicutt et al. 2007; Verley et al. 2009). The spiral galaxy nearest to us, the Andromeda Nebula (NGC 224), is a highly inclined Sb galaxy of low surface brightness. Table 1 lists the positional data on M 31.
Its proximity and large extent on the sky enable detailed studies of the ISM over a wide radial range. Surveys of M 31 at high angular resolution are available at many wavelengths. In the HI line the galaxy was mapped by Brinks & Shane (1984) at 24″ resolution, the northeastern half by Braun (1990) at 10″ resolution, and, most recently, the entire galaxy with high sensitivity by Braun et al. (2009) at a resolution of 15″. Nieten et al. (2006) made a survey in the 12CO(1-0) line at a resolution of 23″. Devereux et al. (1994) observed M 31 in the Hα line to obtain the distribution of the ionized gas. The dust emission from M 31 has recently been observed by the multiband imaging photometer Spitzer (MIPS, Rieke et al. 2004) with high sensitivity at 24 μm, 70 μm, and 160 μm at resolutions of 6″, 18″, and 40″, respectively.

Table 1: Positional data adopted for M 31.

Table 2: M 31 data used in this study.

The relationships between gas and dust, as well as between gas and star formation, in M 31 have been studied in the past at resolutions of several arcminutes. Walterbos & Schwering (1987) derived a nearly constant dust temperature across M 31 using the IRAS 100 μm and 60 μm maps. They also found a strong increase in the atomic gas-to-dust surface density ratio with increasing radius. This increase was confirmed by Walterbos & Kennicutt (1988), who used optical extinction as a dust tracer, and by Nieten et al. (2006), using the ISO map at 175 μm (Haas et al. 1998). Interestingly, the latter authors did not find a radial increase in the molecular gas-to-dust ratio.

The dependence of star formation on HI surface density in M 31 has been studied by a number of authors (Berkhuijsen 1977; Emerson 1974; Tenjes & Haud 1991; Nakai & Sofue 1984; Unwin 1980; Nakai & Sofue 1982) using the number density of HII regions or of OB stars as star formation tracers. They obtained power-law indices between 0.5 and 2, possibly depending on the region in M 31, the star formation tracer, and the angular resolution. Braun et al.
(2009) plotted the star formation density derived from the brightnesses at 8 μm, 24 μm, and UV against the surface densities of molecular gas, HI, and total gas, but did not fit power laws to their data.

The high-resolution data available for M 31 show the morphologies of the emission from dust and gas components in detail. We apply a 2-D wavelet analysis technique (Frick et al. 2001) to the MIPS IR data (Gordon et al. 2006) and the gas (HI, H2, and Hα) maps to study the scale distribution of emission power and to separate the diffuse emission components from compact sources. We then compare the wavelet-decomposed maps at various spatial scales. We also use pixel-to-pixel (Pearson) correlations to derive quantitative relations, not only between different ISM components but also between them and the present-day star formation rate. Following Walterbos & Schwering (1987) and Haas et al. (1998), we derive the dust temperature assuming a λ⁻² emissivity law for the MIPS bands, at which the emission from the big grains, and hence the LTE condition, is relevant, and present a map of the dust color temperature. We also obtain the distribution of the optical depth and analyze the gas-to-dust surface-density ratio at a resolution of 45″ (170 pc × 660 pc along the major and minor axis, respectively, in the galaxy plane), 9 times higher than before (Walterbos & Schwering 1987). We use the optical depth map to de-redden the Hα emission observed by Devereux et al. (1994), yielding the distribution of the absorption-free emission from the ionized gas, and use this as an indicator of massive star formation. We compare it with the distributions of neutral gas to obtain the dependence of the star formation rate on gas surface density.

Figure 1: Dust temperature in M 31 obtained from the ratio of the 70 μm and 160 μm intensities, based on the Spitzer MIPS data. Only pixels with intensity above the 3σ noise level were used. The angular resolution of 45″ is shown in the lower right-hand corner of the map.
The cross indicates the location of the center. The bar at the top gives the dust temperature in Kelvin.

The paper is organized as follows: The relevant data sets are described in Sect. 2. In Sect. 3 we derive maps of the dust color temperature and optical depth, and correct the Hα emission for absorption by dust. Radial profiles of the dust and gas emission and of the various gas-to-dust ratios are obtained in Sect. 4. Section 5 is devoted to wavelet decompositions and wavelet spectra of the dust and gas distributions, and their cross-correlations. Complementarily, we discuss in Sect. 6 classical correlations between gas and dust. In Sect. 7 the dependence of the star formation rate on the gas surface density is presented. Finally, in Sect. 8 we summarize our results.

## 2 Data

Table 2 summarizes the data used in this work. M 31 was mapped in IR (at 24 μm, 70 μm, and 160 μm) by MIPS in August 2004, covering a region of about 1° × 3° (Gordon et al. 2006). The basic data reduction and mosaicing was performed using the MIPS instrument team Data Analysis Tool, version 2.90 (Gordon et al. 2005). M 31 was observed in the 12CO(1-0) line with the IRAM telescope by Nieten et al. (2006) at a resolution of 23″. They derived the distribution of the molecular gas using a constant conversion factor X (in mol cm⁻² (K km s⁻¹)⁻¹). The galaxy was observed in the 21-cm HI line with the Westerbork interferometer by Brinks & Shane (1984) at a resolution of 24″. The HI survey has been corrected for missing spacings. The Hα observations of Devereux et al. (1994) were carried out on the Case Western Burrell Schmidt telescope at the Kitt Peak National Observatory, providing a 2° field of view. Although the resolution of 40″ of the 160 μm image is the lowest of the data listed in Table 2, we smoothed all maps to a Gaussian beam with a half-power width of 45″ for a comparison with radio continuum data at 20 cm (Hoernes et al. 1998) in a forthcoming study (Tabatabaei et al., in prep.).
As the point spread function (PSF) of the MIPS data is not Gaussian, we convolved the MIPS images using custom kernels created with fast Fourier transforms to account for the detailed structure of the PSFs. Details of the kernel creation can be found in Gordon et al. (2007). After convolution, the maps were transformed to the same grid of 15″ width with the reference coordinates and position angle of the major axis given in Table 1. Finally, they were cut to a common extent of 110′, for which most data sets are complete. The field is not centred on the nucleus of M 31, but extends to 56.25′ along the northern major axis (corresponding to a radius of R = 12.8 kpc) and to 53.75′ along the southern major axis (R = 12.2 kpc). The H2 map of Nieten et al. (2006) extends to 48.5′ along the southern major axis (R = 11.0 kpc). With an extent of 19.25′ along the minor axis in both directions, the field covers radii of R < 16.9 kpc in the plane of M 31. Hence, radial profiles derived by averaging the data in circular rings in the plane of the galaxy (equivalent to elliptical rings in the plane of the sky) are incomplete at R > 12 kpc because of missing data near the major axis. ## 3 Dust temperature and opacity Walterbos & Schwering (1987) extensively studied the distributions of the dust temperature and opacity in M 31 using the IRAS data at 60 μm and 100 μm, smoothed to a resolution about nine times lower than ours. Assuming a λ⁻² emissivity law, they found a remarkably constant dust temperature (21-22 K) across the disk between 2 kpc and 15 kpc radius. Using this temperature, they obtained the opacity distribution at 100 μm. Below we apply a similar method to the 70 μm and 160 μm MIPS maps to derive the distributions of the dust temperature and of the optical depth at the Hα wavelength at higher resolution and sensitivity.
### 3.1 Dust temperature We derived the color temperature of the dust, T_dust, between 70 μm and 160 μm assuming a λ⁻² emissivity law, which should be appropriate for interstellar grains emitting at these wavelengths (Draine & Lee 1984; Andriesse 1974). The resulting dust temperature map (Fig. 1) and a histogram of the temperatures (Fig. 2a) show that T_dust varies between 15.6 ± 0.8 K and 30.8 ± 0.3 K. The mean value of 18.7 ± 1.4 K (standard deviation) is lower than that obtained by Walterbos & Schwering (1987) and close to the ISO measurement (16 ± 2 K) of Haas et al. (1998). Figure 1 shows that dust of about 18 K exists all over M 31. Warmer dust dominates in star-forming regions and in an extended area around the center of the galaxy, while cooler dust dominates in interarm regions. Figure 2b shows the dust temperature averaged in rings of 0.2 kpc width in the plane of M 31 against radius R. On both sides of the center the dust temperature falls very fast from about 30 K near the nucleus to 19 K at R ≈ 4 kpc. Towards the outer parts of the galaxy, it then stays within a narrow range of about 17-19 K in the north and 16-19 K in the south. This indicates different radiation characteristics between the inner 4 kpc and beyond. In the ring of bright emission, the so-called "10 kpc ring", the temperature is clearly enhanced, especially in the northern half. Thus, in contrast to the finding of Walterbos & Schwering (1987), T_dust is not constant in the range R = 2-15 kpc but varies between 22.5 ± 0.5 K and 17.2 ± 0.7 K. It is interesting that the mean dust temperature obtained between 70 μm and 160 μm is about 3 K lower in M 31 than in M 33 (Tabatabaei et al. 2007b). The emission from cold dust in M 31 is stronger than in M 33, which can also be inferred from the total emission spectra based on IRAS and ISO observations (see Hippelein et al. 2003; Haas et al. 1998). Figure 2: a) Histogram of the dust temperature shown in Fig. 1.
b) Distribution of the dust temperature in rings of 0.2 kpc width in the galactic plane in the northern and southern halves of M 31. Figure 3: Distribution of the dust optical depth at the Hα wavelength in M 31. The bar at the top shows τ_Hα. The angular resolution of 45″ is shown in the lower right-hand corner of the map. The cross indicates the location of the center. Overlaid are contours of the molecular gas column density N(H2) with levels of 250 and 800 mol cm⁻². Note that maxima in τ_Hα do not always coincide with maxima in N(H2). ### 3.2 Dust opacity distribution The total dust optical depth along the line of sight was obtained from the dust intensity at 160 μm and the derived temperature. Following Tabatabaei et al. (2007b), it was converted into the dust optical depth at the wavelength of the Hα line, τ_Hα, by multiplying it by the ratio of the dust extinction coefficients per unit mass at the corresponding wavelengths (see e.g. Fig. 12.8 of Krügel 2003). Figure 3 shows the distribution of τ_Hα across the disk of M 31 at an angular resolution of 45″. Regions with considerable dust opacity follow the spiral arms, even the inner arms, which are either weak or not detected in Hα emission. The high-opacity clumps, however, only occur in the arms at 5 kpc and in the "10 kpc ring". On average, the optical depth is largest in the "10 kpc ring". Hence, in this ring, dust has the highest density, like the atomic gas (Brinks & Shane 1984). Figure 4a shows the histogram of τ_Hα, indicating a most probable value of 0.5 and a mean value of 0.7 ± 0.4 across the galaxy. This agrees with the value of τ_Hα = 0.5 ± 0.4 that follows from the mean total extinction obtained by Barmby et al. (2000) towards 314 globular clusters. The variation of the mean dust optical depth with galactocentric radius is shown in Fig. 4b. In the north, τ_Hα peaks not only in the "10 kpc ring" (with two maxima at R = 9.9 and 10.9 kpc) but also near 5 kpc (with two maxima at R = 4.3 and 5.9 kpc).
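The two steps above, the color temperature from the 70 μm/160 μm ratio with a λ⁻² (β = 2) emissivity law and the optical depth from the 160 μm intensity, can be sketched as follows. This is a schematic re-implementation, not the authors' code; the final scaling from τ(160 μm) to τ_Hα via the extinction-coefficient ratio (Krügel 2003) is left out because its value is not quoted in the text:

```python
import math

H = 6.62607e-34    # Planck constant [J s]
K_B = 1.38065e-23  # Boltzmann constant [J/K]
C = 2.99792e8      # speed of light [m/s]

def planck_nu(t_k, lam_m):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1 at wavelength lam_m."""
    nu = C / lam_m
    return 2.0 * H * nu ** 3 / C ** 2 / (math.exp(H * nu / (K_B * t_k)) - 1.0)

def ratio_70_160(t_k, beta=2.0):
    """Modeled I(70um)/I(160um) for a modified blackbody with emissivity
    index beta (beta = 2 corresponds to the lambda^-2 law)."""
    nu70, nu160 = C / 70e-6, C / 160e-6
    return (nu70 / nu160) ** beta * planck_nu(t_k, 70e-6) / planck_nu(t_k, 160e-6)

def color_temperature(obs_ratio, beta=2.0, lo=5.0, hi=100.0):
    """Invert the monotonically increasing ratio(T) relation by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if ratio_70_160(mid, beta) < obs_ratio:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def optical_depth_160(i160_si, t_k):
    """tau(160um) = I_160 / B_nu(T) for optically thin emission (SI units)."""
    return i160_si / planck_nu(t_k, 160e-6)

# Round trip at the mean temperature found for M 31:
t_mean = color_temperature(ratio_70_160(18.7))  # recovers ~18.7 K
```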
Beyond 11 kpc, τ_Hα drops with an exponential scale length of 2.48 ± 0.07 kpc in the north and 5.06 ± 0.22 kpc in the south (Sect. 4, Eq. (1)). In Fig. 5 we compare the radial variation of τ_Hα for the total area with earlier determinations. The various estimates agree well given the large uncertainties. Xu & Helou (1996) derived the opacity from high-resolution IRAS data using a dust heating/cooling model and a sandwich configuration of dust and stars. Although they left out the discrete sources, their values may be too high because they did not include inter-arm regions in their study. Montalto et al. (2009) calculated the extinction from the total-infrared (TIR)-to-FUV intensity ratio and a sandwich model for stars and dust. They note that at R < 8 kpc the geometry of M 31 may differ from the sandwich model due to the stars in the bulge, making the inner points less reliable. This would also affect the results of Xu & Helou (1996) at R < 8 kpc. The TIR-to-FUV ratio is applicable if the dust is mainly heated by young stars, but in M 31 about 70% of the cold dust is heated by the ISRF (Xu & Helou 1996). Therefore, the extinction is overestimated, as was also found for M 33 (Verley et al. 2009). The curve of Tempel et al. (2010) closely agrees with our data. They derived the opacity from MIPS data and a star-dust model. Their smooth curve underestimates it in the brightest regions by about 0.1 and may overestimate it in regions of low brightness. Figure 4: a) Histogram of the dust optical depth shown in Fig. 3; b) radial distribution of the mean optical depth at the Hα wavelength in rings of 0.2 kpc width in the galactic plane in the north and south of M 31. The errors are smaller than the size of the symbols. The opacity map in Fig. 3 can be used to correct the Hα emission for the extinction by dust. In general, extinction depends on the relative distribution of emitting regions and dust along the line of sight and changes with the geometry (e.g.
a well-mixed diffuse medium or shell-like HII regions; Witt & Gordon 2000). In this study, individual HII regions are rarely resolved and the geometry is close to that of a mixed diffuse medium. Furthermore, there is no information about the relative position of emitters and absorbers along the line of sight. For the Milky Way, Dickinson et al. (2003) found indications of non-uniform mixing by comparing the z-distributions of atomic gas and dust. As a first-order approach, they adopted one third of the total dust optical depth as the effective extinction. This is also in agreement with Krügel (2009) when taking scattering into account. Moreover, Magnier et al. (1997) found that on average the extinction comes from dust associated with only one third of the average N(HI) in their study of OB associations along the eastern spiral arm regions of M 31. Therefore, we use an effective optical depth of one third of τ_Hα in this paper. The attenuation factor for the Hα intensity then is e^(-τ_Hα/3), and we derive the intrinsic Hα intensity I0 from the observed Hα intensity. Integration of the Hα map out to a radius of 16 kpc yields a ratio of corrected-to-observed total Hα flux density of 1.29; thus about 30% of the total Hα emission is obscured by dust within M 31. The corrected Hα map is shown in Fig. 10a. Figure 5: Radial variation of the (total) mean optical depth in Hα along the line of sight for the full area (north+south). Plusses: our data averaged in 0.2 kpc-wide rings in the plane of M 31 using i = 75°; stars: same but with i = 77.5° for comparison with other work; triangles: Xu & Helou (1996), averages in 2 kpc-wide rings with i = 77° (scaled to D = 780 kpc); circles: Montalto et al. (2009), averages in 2 kpc-wide rings with i = 77.6°; solid line: Tempel et al. (2010), semi-major axis cut through their model with i = 77.5°. The errors in our data and in the curve of Tempel et al. (2010) are about 10% of the mean values.
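Once τ_Hα is known, the de-reddening step reduces to a one-line correction with the effective optical depth τ_eff = τ_Hα/3 adopted above; a minimal sketch (function name ours):

```python
import math

def deredden_halpha(i_obs, tau_halpha, mixing_factor=1.0 / 3.0):
    """Correct an observed H-alpha intensity for dust attenuation using an
    effective optical depth tau_eff = tau_halpha / 3 (Dickinson et al. 2003):
    I_obs = I_0 * exp(-tau_eff)  =>  I_0 = I_obs * exp(+tau_eff)."""
    return i_obs * math.exp(mixing_factor * tau_halpha)

# For the mean optical depth of ~0.7 found for M 31, the correction
# factor is about 1.26, consistent with ~30% of the emission being obscured.
print(round(deredden_halpha(1.0, 0.7), 3))
```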
Near the center (R < 1 kpc), τ_Hα varies between 0.03 and 0.13, corresponding to an extinction of up to 0.14 mag. At larger radii, the mean extinction increases, particularly in dense clouds and star-forming regions, reaching a maximum of about 1.2 mag at the densest dust cloud in the south-east of the "10 kpc ring" (RA = 00h41m05.1s, Dec = +40°38′17.7″). The range of extinction values agrees with that derived from the optical study of dust lanes by Walterbos & Kennicutt (1988) and the photometric study of Williams (2003). ## 4 Radial distributions of dust and gas emission In this section, we present the mean surface brightness along the line of sight of the dust and gas components as a function of galactocentric radius R. The surface brightnesses are averaged in 200 pc-wide circular rings about the nucleus in the plane of M 31. This is equivalent to averaging in elliptical rings of 53″ width in the plane of the sky. For simplicity we used a constant inclination angle of 75° at all radii, appropriate for the emission beyond 6.8 kpc, although in Hα and HI the inner regions are seen more face-on (Chemin et al. 2009; Ciardullo et al. 1988; Braun 1991). However, using a smaller inclination for the inner region (the area-weighted mean of the inclinations for the interval R = 1.9-6.8 kpc given by Chemin et al. 2009) does not change our results. The smaller inclination shifts the radial positions of the inner arms about 0.5 kpc inwards, but the general shape of the profiles remains the same, and as all profiles change in a similar way their inter-comparison is not affected. Furthermore, the results of the classical correlations presented in Sect. 6 are the same within the errors for both inclinations. Figure 6 shows the mean IR intensities and the gas surface densities versus the galactocentric radius R for the northern and southern halves of M 31. The radial profiles of the IR emission at 24 μm and 70 μm are similar.
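Averaging in circular rings in the galaxy plane amounts to deprojecting each pixel with the inclination before binning in radius. A minimal sketch under the simplifying assumption of a constant inclination, as in the text (function and variable names ours):

```python
import math

def ring_average(xs, ys, values, incl_deg=75.0, ring_width=0.2):
    """Average map values in circular rings in the galaxy plane.
    xs, ys: pixel offsets from the centre along the major/minor axes [kpc],
    already rotated to the position angle of the major axis.  The
    minor-axis offset is stretched by 1/cos(i) to deproject the disk.
    Returns {ring_index: mean value}, with ring_index = int(R/ring_width)."""
    cos_i = math.cos(math.radians(incl_deg))
    sums, counts = {}, {}
    for x, y, v in zip(xs, ys, values):
        r = math.hypot(x, y / cos_i)   # galactocentric radius [kpc]
        k = int(r / ring_width)        # which 0.2 kpc-wide ring
        sums[k] = sums.get(k, 0.0) + v
        counts[k] = counts.get(k, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}
```

A point on the minor axis lands in the same ring as a point cos(i) times closer on the major axis, which is exactly the elliptical-ring geometry described above.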
The 160 μm emission, representing the colder dust, however, shows a generally flatter radial distribution than the 24 μm and 70 μm emission. In particular, the fast decrease in the 24 μm and 70 μm profiles away from the center does not occur at 160 μm. This is in agreement with Haas et al. (1998), who concluded from their ISO 175 μm map and IRAS data that the dust near the center is relatively warm. The fast central decrease in the warmer dust emission may be attributed to a decrease in the UV radiation field outside the nucleus, as a similar trend is seen in the GALEX UV profiles presented by Thilker et al. (2005). At all three IR wavelengths the arms are visible, even the weak inner arms. The bright arms forming the "10 kpc ring" are pronounced in the north and followed by an exponential decrease toward larger radii. Figure 6: Top: radial profiles of the Spitzer IR emission from the northern (left) and the southern (right) halves of M 31. Bottom: radial profiles of the surface densities of the atomic, molecular, and total neutral gas together with that of the ionized gas (de-reddened Hα) for the northern (left) and southern (right) halves of M 31. The units are 10^18 atoms cm⁻² for HI and HI+2H2, 10^18 molecules cm⁻² for H2, and 10^-10 erg s⁻¹ cm⁻² sr⁻¹ for the Hα profiles. The profiles show intensities along the line of sight averaged in circular rings of 0.2 kpc width in the plane of M 31 against the galactocentric radius. The errors are smaller than 5% for all profiles; only for H2 do they increase from 10% to 25% at R > 12.3 kpc and in the inner arms at R < 4.5 kpc. Although the general trend of the warm dust surface brightness (at 24 μm and 70 μm) resembles that of Hα more than those of the neutral gas profiles (Fig. 6, lower panels), small variations (e.g. in the inner 5 kpc and at R = 11-12 kpc in the south) follow those in the total gas distribution due to variations in the molecular gas.
Beyond about 5 kpc, the radial profile of the cold dust (160 μm) is similar to that of the molecular gas, but with smoother variations. The minimum between 5 kpc and 10 kpc radius at 24 μm and 70 μm is less deep at 160 μm and is missing in the HI profile. We obtained radial scale lengths between the maximum in the "10 kpc ring" and R = 14.9 kpc for the northern and southern halves of M 31 separately, as well as for the total area (l). We fit an exponential function of the form I(R) = I0 exp[-(R - R0)/l], (1) where I0 is the intensity at the ring maximum R0, with R0 = 10.9 kpc for the total area and in the north, and R0 = 8.9 kpc in the south. The resulting scale lengths are listed in Table 3. In each half of M 31, the scale lengths of the warm dust emission are smaller than that of the cold dust. This confirms that the warm dust is mainly heated by the UV photons from the star-forming regions in the "10 kpc ring" and the cold dust mainly by the interstellar radiation field (ISRF) from old stars (Xu & Helou 1996). Table 3: Exponential scale lengths of dust and gas emission from M 31. The scale lengths of the 24 μm and 70 μm emission are nearly the same, and the 24 μm-to-70 μm intensity ratio (Fig. 7) hardly varies over most of the disk. This indicates a similar distribution of their origins. Assuming that the main source of the 24 μm emission is very small dust grains, and of the 70 μm and 160 μm emission big grains, as argued by Walterbos & Schwering (1987), a constant intensity ratio of the 24 μm-to-70 μm and 24 μm-to-160 μm emission suggests that the very small and big grains are well mixed in the interstellar medium. Other possible origins of the 24 μm emission are stars with dust shells, like evolved AGB stars or carbon stars. Using the IRAS data, Soifer et al. (1986) attributed the 25 μm emission from the bulge of M 31 to circumstellar dust emission from late-type stars.
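Fitting Eq. (1) reduces to linear regression on the logarithm of the profile; a sketch of the procedure (no error weighting, unlike a full fit, and with function names of our own choosing):

```python
import math

def scale_length(radii, intensities, r0):
    """Least-squares fit of ln I = ln I0 - (R - r0)/l, i.e. Eq. (1),
    returning (I0, l).  Plain linear regression on the logarithm."""
    x = [r - r0 for r in radii]
    y = [math.log(i) for i in intensities]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return math.exp(my - slope * mx), -1.0 / slope

# Synthetic profile with I0 = 3.0 and l = 2.5 kpc beyond the ring maximum
# at R0 = 10.9 kpc, sampled every 0.2 kpc out to R = 14.9 kpc as in the text:
radii = [10.9 + 0.2 * k for k in range(21)]
ivals = [3.0 * math.exp(-(r - 10.9) / 2.5) for r in radii]
i0, l = scale_length(radii, ivals, 10.9)
```

Note that regressing on ln I implicitly down-weights the bright inner points; a weighted fit would be needed to reproduce published error bars exactly.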
In the disk of M 31, Walterbos & Schwering (1987) found no direct evidence of a contribution from stars with dust shells (contrary to the situation in the Milky Way; Cox et al. 1986). The higher resolution and sensitivity of the MIPS IR intensity ratios (Fig. 7), however, provide more information. Although at R > 3 kpc the variations in the IR intensity ratios are not large, their radial behavior is not the same. For instance, the 24-to-70 μm intensity ratio peaks between 5 kpc and 10 kpc radius, whereas the 70-to-160 μm intensity ratio peaks in the "10 kpc ring". The latter can be explained by the higher temperature of the dust heated by OB associations in the "10 kpc ring". The fact that the 24-to-70 μm intensity ratio is not enhanced in the "10 kpc ring" (and in the central region) shows that this ratio is invalid for temperature determination, owing to the important contribution from the very small grains. On the other hand, the enhancement of the 24-to-70 μm ratio in regions without a strong radiation field (between the arms) points to possibly different origins of the 24 μm and 70 μm emission. A stellar origin, e.g. the photospheres of cool stars or the dust shells of evolved stars, may provide the enhancement of the 24-to-70 μm intensity ratio in the inter-arm regions. In M 33, Verley et al. (2009) attributed a similar enhancement of the diffuse 24 μm emission to dusty circumstellar shells of unresolved, evolved AGB stars. For M 31, this needs to be quantified through a more detailed study and modeling of the spectral energy distribution, which is beyond the scope of this paper. Figure 7: Ratio of the MIPS IR intensities against galactocentric radius in M 31. Figure 8: Radial profiles of the gas-to-dust ratios in M 31, the northern half and the southern half. Top: N(HI)/τ_Hα; middle: N(2H2)/τ_Hα; bottom: N(HI + 2H2)/τ_Hα. In the middle panel, the northern profile is shifted by 300 units for clarity.
The errors are smaller than 5% everywhere; only for N(2H2)/τ_Hα do they increase from 10% to 25% at R > 12.3 kpc and in the inner arms at R < 4.5 kpc. ### 4.2 Gas-to-dust ratio The gas-to-dust mass ratio and its variation across the galaxy can provide information about the metallicity distribution (e.g. Viallefond et al. 1982) and hence about the evolutionary history of the galaxy. The relative amount of dust and gas is expected to be correlated with the abundance of the heavy elements (Draine et al. 2007). A number of authors have studied the gas-to-dust ratio in M 31 by comparing HI column densities and optical or UV extinction (Walterbos & Kennicutt 1988; Xu & Helou 1996; Savcheva & Tassev 2002; Bajaja & Gergely 1977; Nedialkov et al. 2000). All authors found an increase in the atomic gas-to-dust ratio with radius. Walterbos & Schwering (1987) derived the HI gas-to-dust ratio using the dust optical depth from IRAS 60 μm and 100 μm data. They found a radial gradient that is 4-5 times larger than the abundance gradient of Blair et al. (1982). After adding the molecular and atomic gas column densities, Nieten et al. (2006) obtained a strong radial increase in the total gas-to-175 μm intensity ratio resulting from the increase in the atomic gas-to-175 μm intensity ratio. As the dust optical depth is a better measure of the dust column density than the temperature-dependent dust emission, we re-investigated the gas-to-dust ratio in M 31, taking advantage of the high resolution of the Spitzer MIPS data. We calculated the radial profiles of the three gas-to-dust ratios from the mean column densities of N(HI), N(2H2), and N(HI + 2H2) and from τ_Hα in circular rings of 0.2 kpc width in the plane of the galaxy. Figure 8 (upper panel) shows that the atomic gas-to-dust ratio increases exponentially with radius by more than a factor of 10 from the center to R = 15 kpc.
The increase is surprisingly smooth and, at least up to R = 13 kpc, nearly the same for the northern and southern halves, indicating little variation between arm and inter-arm regions and within the arms. In contrast, the molecular gas-to-dust ratio (Fig. 8, middle panel) does not increase systematically with radius but shows clear enhancements of a factor of 2-3 in the spiral arms and the "10 kpc ring". The minima in the inter-arm regions are due to a stronger decrease in N(2H2) than in τ_Hα. Figure 3 shows that along the arms N(2H2)/τ_Hα also varies significantly, because maxima in the H2 emission and in τ_Hα are often not coincident. The variations in N(2H2)/τ_Hα are visible in the profile of the total gas-to-dust ratio (Fig. 8, bottom panel) as weak enhancements at the positions of the arms near R = 6 kpc and R = 8-12 kpc. As the atomic gas is the dominant gas phase in M 31, dust mixed with the HI gas largely determines the optical depth. Inspection of the distribution of the total gas-to-dust ratio across M 31 (not shown) reveals small-scale variations along the arms of typically a factor of 2. Table 4: Exponential scale lengths L and radial gradients of the dust-to-gas ratios and the abundance [O/H]. We conclude that the radial increase in the total gas-to-dust ratio of more than a factor of 10 between the center and R = 15 kpc is entirely due to that of the atomic gas-to-dust ratio, whereas the molecular gas-to-dust ratio is only increased in the arms. This confirms the conclusion of Nieten et al. (2006) based on the same gas data and the 175 μm intensity. At which radius in M 31 would the gas-to-dust ratio observed in the solar neighborhood occur? Bohlin et al. (1978) and Diplas & Savage (1994) derived values of N(HI)/E(B-V), using the extinction towards large samples of stars to determine the color excess E(B-V). Since E(B-V) = A_V/R_V, where the visual extinction A_V = 1.234 τ_Hα mag (e.g.
Krügel 2003) and the total-to-selective extinction R_V = 2.8 ± 0.3 in M 31 (Walterbos & Kennicutt 1988), we have E(B-V) = 0.44 τ_Hα mag. Hence, the solar-neighborhood value of N(HI)/E(B-V) of Bohlin et al. (1978) corresponds to a value of N(HI)/τ_Hα that occurs in M 31 near R = 8.5 kpc (Fig. 8, top panel), just in the bright emission ring. The total gas-to-dust ratio near the sun (Bohlin et al. 1978) occurs at nearly the same radius (Fig. 8, bottom panel). Thus the gas-to-dust ratio near the sun is similar to that in the "10 kpc ring" in M 31, in agreement with earlier studies (van Genderen 1973; Walterbos & Schwering 1987). Complementary to Fig. 8, we present in Fig. 9 the radial profiles of the dust-to-gas ratios, here for the total area of M 31. The two lower curves closely follow exponentials with scale lengths of 6.1 ± 0.2 kpc and 7.4 ± 0.2 kpc for τ_Hα/N(HI) and τ_Hα/N(gas), respectively, between R = 5 kpc and R = 15 kpc (see Table 4). For nearly the same radial range (R = 3-15 kpc), Walterbos & Schwering (1987) derived a scale length for the dust-to-HI ratio of about 4 kpc from data near the major axis. Walterbos & Kennicutt (1988) obtained a scale length of about 9 kpc for the inner and outer dust lanes, and the dust-to-HI ratio of Xu & Helou (1996) for diffuse spiral arm regions also indicates a scale length of about 4 kpc (all scale lengths scaled to D = 780 kpc). Since our scale lengths are not restricted to specific areas, our results are more representative of the mean dust-to-gas ratios in the disk of M 31. As dust consists of heavy elements and both dust and heavy elements are found in star formation regions, the radial variations in the dust-to-gas ratio and the metal abundance are expected to be similar (e.g. Hirashita et al. 2002; Hirashita 1999). This has indeed been observed in several nearby galaxies (Issa et al. 1990). In M 31 the variation in the metallicity with radius is not well established.
Measurements of the element abundance strongly depend on the empirical method and calibration applied (Trundle et al. 2002). Pagel et al. (1979) showed that the ([OII] + [OIII])/Hβ ratio (the so-called "R23") is a good probe of the oxygen abundance, and radial trends of this ratio have been studied in many nearby galaxies (e.g. Evans 1986; Pagel & Edmunds 1981; Henry & Howard 1995; Garnett et al. 1997). Blair et al. (1982) and Dennefeld & Kunth (1981) derived R23 for HII regions in M 31. We combined their results and derived a scale length of log [O/H] of 9.7 ± 2.6 kpc, corresponding to a gradient of 0.045 dex/kpc (Table 4). Comparing four different calibrations, Trundle et al. (2002) derived gradients of 0.027-0.013 dex/kpc using the 11 HII regions of Blair et al. (1982), with Pagel's calibration giving 0.017 ± 0.001 dex/kpc. Since our value of the [O/H] gradient is based on 19 HII regions, we expect it to be more reliable than that of Trundle et al. (2002). Table 4 shows that the radial gradient in τ_Hα/N(gas) best matches the metallicity gradient. In view of the large uncertainties, the gradients in the dust-to-gas surface-density ratio and the oxygen abundance in M 31 may indeed be comparable. A much larger sample of abundance measurements of HII regions is needed to verify this similarity. Our result agrees with the approximately linear trend between the gradients in dust-to-gas ratios and [O/H] in nearby galaxies noted by Issa et al. (1990). Figure 9: Dust-to-gas ratios as a function of galactocentric radius in M 31, calculated from the radial profiles of τ_Hα, N(HI), N(2H2), and N(HI + 2H2) = N(gas). ## 5 Wavelet analysis of dust and gas emission To investigate the physical properties of the different phases of the interstellar medium as a function of the size of the emitting regions, wavelet transformation is an ideal tool. We use the Pet Hat wavelet (see Frick et al. 2001; Tabatabaei et al.
2007a) to decompose the emission in the IR, HI, H2, HI + 2H2, and de-reddened Hα maps into 10 spatial scales, starting at 0.4 kpc (about twice the resolution). The central 2 kpc was subtracted from all images before the wavelet transformation to prevent a strong influence of the nucleus on the results. As an example, we show the extinction-corrected Hα map and the Hα emission on 3 different scales in Fig. 10. On the scale of 0.4 kpc, the distribution of HII complexes and large HII regions is borne out. The scale of 1.6 kpc (the typical width of spiral arms) shows connected HII complexes along the arms, and on the scale of 4 kpc we see the extended emission from the "10 kpc ring". ### 5.1 Wavelet spectra The wavelet spectrum, M(a), represents the distribution of the emitting power as a function of the scale a. The wavelet spectrum will smoothly increase towards larger scales if most of the emission comes from diffuse structures forming the largest scales, here up to 25 kpc. On the other hand, the spectrum will decrease with increasing scale if compact structures are the dominant source of emission. The spectra of the IR and gas emission are shown in Fig. 11. All IR and gas spectra are intermediate between the two cases described above. Only the spectra of the HI gas and the 160 μm emission generally increase with scale, indicating the importance of diffuse HI and cold dust emission. In addition, the HI spectrum exhibits a dominant scale corresponding to the width of the "10 kpc ring", where strong diffuse emission occurs in interarm regions. The large width of the HI "ring" is also visible in the radial profiles in Fig. 6. The dominant scale of the emission from warm dust, molecular gas, and Hα is near 1 kpc, where complexes of giant molecular clouds and star-forming regions show up. The IR spectra at 24 μm and 70 μm look most similar on scales a < 6 kpc, indicating that the star-forming regions are the main heating sources at both wavelengths.
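A minimal sketch of the decomposition and of the spectrum M(a), using the Fourier-space form of the Pet Hat wavelet (Frick et al. 2001); the normalization and the treatment of map edges are simplified compared with the published method, and the function names are ours:

```python
import numpy as np

def pethat_hat(k, a):
    """'Pet Hat' wavelet of Frick et al. (2001) in Fourier space, scaled to
    scale a: supported on pi/a < |k| < 4*pi/a, peaking at |k| = 2*pi/a."""
    out = np.zeros_like(k)
    m = (k > np.pi / a) & (k < 4.0 * np.pi / a)
    out[m] = np.cos(0.5 * np.pi * np.log2(a * k[m] / (2.0 * np.pi))) ** 2
    return out

def wavelet_map(image, a):
    """Band-pass the image at scale a (in pixels) via the FFT."""
    ky, kx = np.meshgrid(np.fft.fftfreq(image.shape[0]),
                         np.fft.fftfreq(image.shape[1]), indexing="ij")
    k = 2.0 * np.pi * np.hypot(kx, ky)
    return np.fft.ifft2(np.fft.fft2(image) * pethat_hat(k, a)).real

def wavelet_spectrum(image, scales):
    """M(a): total emitting power per scale, summed over the scale map."""
    return [float(np.sum(wavelet_map(image, a) ** 2)) for a in scales]
```

Because the filter excludes k = 0, each scale map has zero mean, and a noise-dominated map puts most of its power at the smallest scales, the "compact sources" limit described above.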
On the other hand, the effect of the ISRF heating the cold dust is well reflected in the 160 μm spectrum, where a general increase towards larger scales is found. All spectra, apart from that of HI, show a minimum near a = 6 kpc, corresponding to the large, weak interarm region inside the "10 kpc ring". The spectrum of Hα is most similar to that of the 70 μm emission, which may explain why the Hα emission correlates better with the 70 μm emission than with that at 24 μm (see Sect. 6.2 and Table 6). The spectrum of the Hα emission is flat on small scales up to 1.6 kpc, the width of the spiral arms in the Hα map. This is understandable, as the emission from very compact HII regions is unresolved at our resolution and not many large HII complexes exist, especially in the south (see the decomposed map for a = 0.4 kpc in Fig. 10b). ### 5.2 Wavelet cross-correlations We derive the cross-correlation coefficients, r_w(a), for different scales following Tabatabaei et al. (2007a). The correlation coefficients are plotted against scale in Fig. 12. They show that the IR emission correlates with the emission from the different gas phases on most scales. In all cases, emission from structures on scales larger than 10 kpc is best correlated. This corresponds to scales of the diameter of the "10 kpc ring" and the over-all structure of the galaxy. On medium scales, the weakest correlation occurs between HI and dust emission at a = 6 kpc. This scale includes areas of significant diffuse HI emission where the dust emission is weak, interior to the "10 kpc ring" (compare also Fig. 11). On the smallest scale of 0.4 kpc, the cold dust emission is best correlated with that of the total neutral gas, while the warm dust emission at 70 μm is best correlated with the ionized gas emission. Note that on this scale the 24 μm and 70 μm (warm dust) emission hardly correlates with HI (r_w(a) < 0.5), because only a small fraction of the HI emission occurs on this scale (see Fig. 11).
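For two maps already decomposed at a common scale a, the coefficient r_w(a) is a normalized inner product of the zero-mean scale maps, i.e. a Pearson-like coefficient without the mean subtraction; a sketch with the maps flattened to plain sequences (function name ours):

```python
import math

def wavelet_cross_correlation(w1, w2):
    """r_w(a) of Frick et al. (2001) for two maps decomposed at the same
    scale a.  Band-passed maps have zero mean, so no mean subtraction is
    needed; the result lies in [-1, 1]."""
    num = sum(p * q for p, q in zip(w1, w2))
    den = math.sqrt(sum(p * p for p in w1) * sum(q * q for q in w2))
    return num / den

w = [0.3, -1.2, 0.9, -0.4, 0.4]
print(wavelet_cross_correlation(w, w))                # identical maps: ~ +1
print(wavelet_cross_correlation(w, [-p for p in w]))  # sign-flipped: ~ -1
```

Repeating this per scale gives the curves of Fig. 12, one coefficient for each of the 10 scales.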
Furthermore, the coefficients of the 70 μm-Hα correlation are higher than those of the 70 μm-neutral gas correlation on scales a < 6.3 kpc. Figure 10: Distribution of the de-reddened Hα emission a) and its wavelet decomposition on the scales 0.4, 1.6, and 4.0 kpc b) to d). The central 2 kpc was subtracted from the Hα map before the decomposition. The cross in the Hα map indicates the location of the center. Figure 11: Wavelet spectra of the MIPS IR (left) and gas (right) emission in M 31, shown in arbitrary units. The data points correspond to the scales 0.4, 0.6, 1.0, 1.6, 2.5, 4.0, 6.3, 10.0, 15.9, and 25.1 kpc. Figure 12: Wavelet cross-correlations of atomic gas (top-left), molecular gas (top-right), and total neutral gas (bottom-left) with the IR emission in M 31. The IR correlation with the ionized gas (bottom-right) is also shown. The data points correspond to the scales 0.4, 0.6, 1.0, 1.6, 2.5, 4.0, 6.3, 10.0, 15.9, and 25.1 kpc. ## 6 Classical correlations between dust and gas The wavelet cross-correlations for different scales in Fig. 12 show on which scales the distributions of the various types of emission are significantly correlated. However, because the scale maps are normalized and information about absolute intensities is lost, they cannot be used to find quantitative relations between components of the ISM. Hence, to obtain numerical equations relating two distributions, we need classical correlations. Classical cross-correlations contain all scales that exist in a distribution. For example, the high-intensity points of the Hα-70 μm correlation in Fig. 14 represent high-emission peaks on small scales in the spiral arms (compare Fig. 10b), whereas low-intensity points represent weak emission around and between the arms on larger scales (compare Fig. 10d). The correlation coefficient of 79% is a mean over all scales, consistent with Fig. 12.
We made pixel-to-pixel correlations between the distributions of τ_Hα and H2, HI, and total gas, as well as between de-reddened Hα and the 24 μm, 70 μm, and 160 μm emission. We restricted the comparisons to radii where all data sets are complete (R ≤ 11.4 kpc), and to intensities above 2 × the rms noise. To reduce the influence of the gradient in the gas-to-dust ratio (see Sect. 4.2), we calculated correlations for two radial ranges, inside and outside R = 6.8 kpc. We obtained sets of independent data points, i.e. a small beam-area overlap, by choosing pixels spaced by more than 1.67 times the beamwidth. Since the correlated variables do not directly depend on each other, we fitted a power law to the bisector in each case (Isobe et al. 1990). We also calculated the correlation coefficient, r, to show how well two components are correlated, and the student-t test to indicate the statistical significance of the fit. For a number of independent points n > 100, the fit is significant at the 3σ level if t > 3. Errors in the intercept and slope b of the bisector are standard deviations (1σ). We first discuss the correlations between the neutral gas and the dust extinction, scaled from τ_Hα. Then we investigate the relationships between the emission from dust and ionized gas. The results are given in Tables 5 and 6, and examples of correlation plots are shown in Figs. 13 to 15. Figure 13: Classical cross-correlations between gas column densities and dust extinction A_Hα in the two radial ranges, inside and outside 6.8 kpc. ### 6.1 Correlation between neutral gas and dust extinction In search of a general relationship between neutral gas and dust extinction, a number of authors employed scatter plots between gas column densities and extinction, optical depth, or FIR surface brightness (e.g. Savage et al. 1978; Walterbos & Kennicutt 1988; Nieten et al. 2006; Xu & Helou 1996; Boulanger et al. 1996; Neininger et al. 1998). They obtained nearly linear relationships between these quantities.
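The OLS bisector of Isobe et al. (1990), used for the fits above, treats the two variables symmetrically by bisecting the two ordinary least-squares lines; a sketch (for a power law, apply it to the logarithms of the two quantities; function name ours):

```python
import math

def ols_bisector(x, y):
    """Slope and intercept of the OLS bisector line (Isobe et al. 1990),
    the line bisecting the OLS(Y|X) and OLS(X|Y) regressions; appropriate
    when neither variable directly depends on the other."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b1 = sxy / sxx   # OLS(Y|X) slope
    b2 = syy / sxy   # OLS(X|Y) slope, expressed in y-versus-x form
    b3 = (b1 * b2 - 1.0
          + math.sqrt((1.0 + b1 ** 2) * (1.0 + b2 ** 2))) / (b1 + b2)
    return b3, my - b3 * mx

# For a power law A = c * N**b, fit the bisector in log-log space:
#   log A = log c + b * log N.
```

Unlike plain OLS, the bisector slope is (up to its reciprocal) unchanged when x and y are swapped, which is why it suits the scatter plots of Fig. 13.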
As the studies on M 31 have various shortcomings (lower limits for extinction, H2 data not included and/or low angular resolution), we calculated classical correlations between the distribution of dust extinction A_Hα and those of N(HI), N(2H2), and N(gas) at our resolution of 45″. As the correlations are restricted to gas column densities above 2× the rms noise, the lowest values are not included (see upper panel of Fig. 13). The bisector fits given in Table 5 are plotted in the bottom panel of Fig. 13.

The relationships between A_Hα and N(2H2) for the two radial ranges are the same within errors, so the two areas can be combined. With a correlation coefficient of about 0.6, the correlation is not very good, indicating that only a small part of the extinction is caused by dust in molecular clouds. This is not surprising in view of the low molecular gas fraction in M 31 (see lower panels of Fig. 6) and the small area filling factor of the molecular gas compared to that of the atomic gas.

Table 5: Power-law relations and correlation coefficients between dust extinction and gas components.

The correlations between A_Hα and N(HI) are indeed better than those between A_Hα and N(2H2), but the relationships for the two radial intervals are not the same. Although both are nearly linear (power-law exponent close to 1), their power laws are shifted (see Fig. 13), in the sense that the values of A_Hα in the outer interval are about a factor of 2 lower than those inside R = 30′. This difference is caused by the radial decrease in A_Hα/N(HI) discussed in Sect. 4.2. The variation of this ratio within each of the radial intervals contributes to the spread in the scatter plots and reduces the correlation coefficients.

The correlations between total gas N(gas) and A_Hα are best, as A_Hα represents dust mixed with both HI and H2. They are close to linear and differ by nearly a factor of 2 in A_Hα. The scatter plots for the two intervals are shown in the upper panel of Fig. 13.
In linear plots both power-law fits pass through zero, suggesting that the dust causing the extinction and the neutral gas are mixed down to very low densities. Interestingly, the extinction (or dust opacity) is proportional to the square root of N(2H2), while it is about linearly related to the atomic gas density. This is due to the quadratic dependence of N(2H2) on N(HI) in M 31 observed by Nieten et al. (2006). This dependence is expected if in cool, dense, and dusty HI clouds the formation and destruction rates of H2 are balanced (Reach & Boulanger 1998).

### 6.2 Correlation between ionized gas and dust

Because the emission from ionized gas is a good tracer of the present-day star formation rate, and massive stars both heat the dust and ionize the gas, a correlation between the emissions from warm dust and ionized gas is expected. Relationships between the emission at 24 μm and Paα or Hα emission from HII regions in nearby galaxies, as well as relationships between global luminosities of galaxies, have been reported (see Kennicutt et al. 2009, and references therein). For M 31, the correlation between the emission from dust and ionized gas was first tested by Hoernes et al. (1998), who found a good, nearly linear correlation between warm dust emission and free-free radio emission, using HIRAS data and multi-wavelength radio data.

Here we correlate the extinction-corrected Hα emission presented in Fig. 10a with the dust emission in the MIPS maps. The wavelet correlations in Fig. 12 (bottom-right) show that in M 31 Hα emission is best correlated with dust emission at 70 μm. This suggests that of the MIPS bands, the 70 μm emission could best be used as the tracer of present-day star formation, making a numerical relation between the emissions at 70 μm and Hα of interest. Table 6 gives the bisector fits for the two radial ranges, which are very similar. Therefore, we present this correlation in Fig. 14 for the entire radial range R ≤ 50′.
The power-law fit for this radial interval, with I70 in MJy/sr and I_Hα in 10^-7 erg s^-1 cm^-2 sr^-1, is given in Table 6. The correlation is quite good and nearly linear. In a linear plot the power-law fit goes through zero, suggesting that the correlation is also valid for the lowest intensities. The good correlation indicates that the heating sources that power the dust emission at 70 μm and ionize the gas must indeed be largely the same.

Naturally, Hα emission is less well correlated with the emission from cold dust at 160 μm than with the emission from warm dust seen at the shorter wavelengths. This is especially so in the radial range where the radial profiles differ most (see Fig. 6). Moreover, the relation between the emission from cold dust and Hα is non-linear (see the bisector slope b in Table 6).

Table 6 shows that the correlations with 24 μm are slightly worse than those with 70 μm. In contrast, in M 33 the 24 μm–Hα correlation is better than the 70 μm–Hα correlation (Tabatabaei et al. 2007a). This may suggest that in early-type galaxies like M 31 the contribution from evolved AGB stars to the 24 μm emission is larger than in late-type galaxies like M 33. A significant stellar contribution to the 24 μm emission from M 31 is also indicated by the enhancement of the 24 μm-to-70 μm intensity ratio in inter-arm regions where the radiation field is weak (Fig. 7).

Across M 31, the 24 μm emission is linearly proportional to the extinction-corrected Hα emission. A linear relationship was also found between the luminosities at 24 μm and extinction-corrected Paα of HII regions in M 51 (Calzetti et al. 2005) and between the luminosities at 24 μm and extinction-corrected Hα of HII regions in M 81 (Pérez-González et al. 2006). Comparing the 24 μm luminosities and corrected Hα luminosities of HII regions in 6 nearby galaxies (including M 51 and M 81), Relaño et al. (2007) obtained a somewhat steeper power law, in agreement with the index for global luminosities of galaxies (see also Calzetti et al. 2007).
Thus, while the L24–L_Hα relationship is linear within a single galaxy, the relationships for HII regions in a sample of galaxies and for global luminosities are non-linear. According to Kennicutt et al. (2009), the steepening is due to variations between galaxies in the contribution from evolved, non-ionizing stars to the heating of the dust that emits at 24 μm.

Figure 14: Scatter plot between the surface brightnesses of ionized gas and dust emission at 70 μm for the radial range 0′–50′. Only independent data points (separated by 1.67× the beamwidth) with values above 2× the rms noise are included. The line shows the power-law fit given in Table 6, which has an exponent close to 1. In a linear frame, this fit goes through the zero point of the plot.

Table 6: Power-law relations and correlation coefficients between the emission from dust and ionized gas.

## 7 Star formation rate and efficiency

Over the last 40 years many authors have studied the relationship between the rate of star formation and the gas density in M 31 by comparing the number surface density of massive young stars or of HII regions with that of HI (Berkhuijsen 1977; Tenjes & Haud 1991; Nakai & Sofue 1984; Unwin 1980). They found power-law exponents near 2, as was also obtained for the solar neighborhood by Schmidt (1959), who first proposed this relationship with HI volume density. Kennicutt (1998a) showed that a similar relationship is expected between SFR and gas column densities. The early studies suffered from the effects of dust absorption and could not consider molecular gas (apart from Tenjes & Haud 1991). As the necessary data are now available, we again address this issue. We compared the distribution of the Hα emission corrected for dust attenuation (see Fig. 10a) with those of HI, H2, and total gas.
The corrected Hα emission is a good measure of the present-day star formation rate (SFR), which we first estimate for the total area observed using the relation of Kennicutt (1998b):

SFR (M⊙ yr^-1) = 7.9 × 10^-42 L_Hα (erg s^-1),     (2)

where L_Hα is the Hα luminosity. In the area R < 17 kpc, the luminosity of the de-reddened emission, for the distance to M 31 of 780 kpc (see Table 1), gives SFR = 0.38 M⊙ yr^-1. However, this value is rather uncertain for two reasons. First, the contribution from the inner disk (R < 6 kpc) is overestimated, because in this area the number of ionizing stars is low and the gas must be mainly heated by other sources (see Sect. 7.1). Second, our Hα map is limited to about 55′ (12.5 kpc) along the major axis, so some of the emission between R = 12.5 kpc and R = 17 kpc is missing. Subtracting the luminosity from the area R < 6 kpc gives a lower limit to the SFR of 0.27 M⊙ yr^-1 for the radial range 6 kpc < R < 17 kpc. Earlier estimates of the recent SFR for a larger part of the disk indicated 0.35–1 M⊙ yr^-1 (Barmby et al. 2006; Williams 2003; Walterbos & Braun 1994). Recently, Kang et al. (2009) derived a SFR of 0.43 M⊙ yr^-1 (for metallicity 2.5× solar) from UV observations of young starforming regions (< 10 Myr) within 120′ from the center (R < 27 kpc). Their Fig. 13 suggests that about 20% of this SFR is coming from R > 17 kpc and a negligible amount from R < 6 kpc. So for the range 6 kpc < R < 17 kpc they find a SFR of about 0.34 M⊙ yr^-1, which is consistent with our lower limit of 0.27 M⊙ yr^-1.

Figure 15: Scatter plots between the surface density of the star formation rate and neutral gas surface densities for the radial interval 30′–50′ (6.8–11.4 kpc).

A SFR of 0.3 M⊙ yr^-1 yields a mean face-on surface density of about 0.4 M⊙ Gyr^-1 pc^-2 between R = 6 kpc and R = 17 kpc. This is about 6 times lower than the value of Σ_SFR = 2.3 M⊙ Gyr^-1 pc^-2 that Verley et al. (2009) obtained for the disk of M 33 (R < 7 kpc), also using de-reddened Hα data. We can also calculate the star formation efficiency between R = 6 kpc and R = 17 kpc in M 31.
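The arithmetic behind Eq. (2) and the mean surface density can be checked in a few lines. The Hα luminosity below is a placeholder, not the measured value, and the annulus is treated as a simple face-on ring between 6 and 17 kpc:

```python
import math

def sfr_from_halpha(l_halpha_erg_s):
    """Kennicutt (1998b): SFR [M_sun/yr] = 7.9e-42 * L(Halpha) [erg/s]."""
    return 7.9e-42 * l_halpha_erg_s

# Hypothetical luminosity for illustration only:
print(sfr_from_halpha(1.0e41))           # ~0.79 M_sun/yr

# Mean face-on SFR surface density for a ring 6 kpc < R < 17 kpc:
sfr = 0.3                                 # M_sun/yr (Sect. 7)
area_pc2 = math.pi * (17e3**2 - 6e3**2)   # ring area in pc^2
sigma_sfr = sfr / area_pc2 * 1e9          # M_sun Gyr^-1 pc^-2
print(round(sigma_sfr, 2))                # ~0.38, i.e. "about 0.4"
```

This reproduces the quoted factor-of-6 difference with the M 33 value of 2.3 M⊙ Gyr^-1 pc^-2.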
The total molecular gas mass in the entire area R < 17 kpc of M 31 is M(H2) = 3.6 × 10^8 M⊙ (Nieten et al. 2006), and that in the area 6 kpc < R < 17 kpc is M(H2) = 2.9 × 10^8 M⊙. Hence the star formation efficiency SFE = SFR/M(H2) between 6 kpc and 17 kpc radius is SFE = 0.9 Gyr^-1. It is equivalent to a molecular depletion time scale of 1.1 Gyr. Hence, the disk of M 31 is about three times less efficient in forming young massive stars than the northern part of the disk of M 33 (Gardan et al. 2007).

### 7.1 Star formation rate in the "10 kpc ring"

The radial distributions of Hα emission in the bottom panels of Fig. 6 show a steep decrease from the center outwards, followed by a shallower decrease to a minimum near R = 6 kpc. In the radial profile of Σ_SFR, shown in the upper panel of Fig. 17 for the total area, the inner arms at R = 2.5 kpc and R = 5.5 kpc are only visible as little wiggles superimposed on a high background. Clearly, the starforming regions in these arms hardly contribute to the ionization of the gas at R < 6 kpc. Devereux et al. (1994) noted that at these radii the Hα emission is filamentary and unlike that in starforming regions, and since not many young, massive, ionizing stars are found interior to the "10 kpc ring" (Kang et al. 2009; Berkhuijsen & Humphreys 1989; Tenjes & Haud 1991), the gas must be ionized by other sources. Naturally, the same holds for the heating of the warm dust, the emission of which also strongly increases towards the center. In an extensive discussion, Devereux et al. (1994) concluded that a collision with another galaxy in the past may explain the ionization of the gas and the heating of the dust, as well as several other peculiarities (e.g. the double nucleus) in the inner disk of M 31 (see also Block et al. 2006). We note that the UV emission may also be influenced by this event, because it shows a similarly steep increase towards the center as the Hα emission.
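The efficiency and depletion time quoted above follow directly from the SFR lower limit and the molecular gas mass of the 6–17 kpc annulus; a quick check:

```python
sfr = 0.27               # M_sun/yr, lower limit for 6 kpc < R < 17 kpc
m_h2 = 2.9e8             # M_sun, molecular gas mass in the same annulus
sfe = sfr / m_h2 * 1e9   # star formation efficiency in Gyr^-1
tau_dep = 1.0 / sfe      # molecular depletion time in Gyr
print(round(sfe, 1), round(tau_dep, 1))  # prints 0.9 1.1
```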
Furthermore, all radial profiles that increase towards the center are anti-correlated with the radial profiles of HI and total gas (see Fig. 6, bottom panels). This leads to apparent deviations from the Kennicutt-Schmidt law in the inner disk if Σ_SFR is calculated from the usual star formation tracers (see Yin et al. 2009; Boissier et al. 2007). If massive stars are not responsible for the ionization of the gas and the heating of the dust, we can use neither the Hα emission nor the infrared emission as tracers of present-day star formation at R < 6 kpc in M 31, as was also pointed out by Devereux et al. (1994). Therefore, we investigated the relationship between SFR and neutral gas only for the interval 30′–50′ (R = 6.8–11.4 kpc) containing the "10 kpc ring".

Table 7: Kennicutt-Schmidt law in M 31 for 30′ ≤ R ≤ 50′.

The correlation plots in Fig. 15 and the results in Table 7 show that Σ_SFR is not well correlated with the surface densities of either H2, HI or total gas (correlation coefficients up to 0.59). In spite of this, the fitted bisectors are statistically significant (t > 3). Interestingly, we find a linear relationship between Σ_SFR and Σ_H2 (exponent 0.96 ± 0.03), which closely agrees with the average relationship for 7 nearby galaxies, much brighter than M 31 (see Fig. 15a), analyzed by Bigiel et al. (2008). While in these galaxies molecular hydrogen is the dominant gas phase, most of the neutral gas in M 31 is atomic (compare Figs. 15a,b). Hence, the surface density of SFR is linearly related to that of molecular gas, irrespective of the fraction of molecular gas or the absolute value of the total gas surface density in a galaxy. Bigiel et al. (2008) arrived at the same conclusion after comparing the galaxies in their sample.

The correlation between Σ_SFR and total gas surface density is slightly better than that between Σ_SFR and molecular gas surface density. The bisector fit in Table 7 yields the Kennicutt-Schmidt law

Σ_SFR ∝ Σ_gas^(1.30 ± 0.05),     (3)

where Σ_gas and Σ_SFR are in M⊙ pc^-2 and M⊙ Gyr^-1 pc^-2, respectively. The exponent of 1.30 ± 0.05 is well within the range of 1.1–2.7 derived by Bigiel et al.
(2008). As M 31 is a galaxy of low surface brightness, its SFRs are correspondingly low. Our Σ_SFR–Σ_gas relationship fits nicely on the low-brightness extension of the compilation of available galaxy data in Fig. 15 of Bigiel et al. (2008), formed by the outer parts of their 7 galaxies and the global values for 20 galaxies of low surface brightness.

Very recently, Braun et al. (2009) also studied the dependence of SFR on gas density in M 31, using the new Westerbork HI survey and the CO survey of Nieten et al. (2006). They estimated the SFR from the surface brightnesses at IRAC 8 μm, MIPS 24 μm and GALEX FUV, following the procedure of Thilker et al. (2005). Our Fig. 15a is comparable to the radial range 8–16 kpc in their Fig. 20D, which shows the same range in Σ_SFR as we find. Note that the molecular gas densities of Braun et al. (2009) are a factor of 1.6 larger (+0.21 dex) and have a wider dynamic range than our values, due to differences in scaling of the CO data, inclination, angular resolution and radial range. Scaling our relationship to the assumptions of Braun et al. (2009) gives a relation in good agreement with their Fig. 20D.

The dependencies of SFR surface density on total gas surface density in Fig. 15c and in Fig. 20E of Braun et al. (2009) have the same pear-like shape, characterized by a broadening towards lower surface densities and a rather sharp cut-off near Σ_gas = 10 M⊙ pc^-2. The cut-off comes from the Σ_SFR–Σ_HI relation (see Fig. 15b) and occurs at the same value as in the bright galaxies analyzed by Bigiel et al. (2008), who interpreted the lack of higher surface mass densities as a saturation effect. Braun et al. (2009) show that in M 31 this truncation indeed vanishes after correcting the HI data for opacity, which could lead to somewhat steeper slopes in Figs. 15b and c.

### 7.2 Radial variations in the Kennicutt-Schmidt law

In Fig. 16a, we plot the mean values, in 0.5 kpc-wide rings in the plane of M 31, of Σ_SFR against those of Σ_gas from R = 6 kpc to R = 16 kpc.
The points form a big loop with a horizontal branch for R = 6–8.5 kpc and a maximum in the ring R = 10.5–11.0 kpc (see also Fig. 17). This behavior was already noted by Berkhuijsen (1977) and Tenjes & Haud (1991), who used the number density of HII regions as tracer of SFR and HI gas, and was recently confirmed by Boissier et al. (2007) from GALEX UV data and total gas. Both Berkhuijsen (1977) and Tenjes & Haud (1991) showed that the difference between the slopes inside and outside the maximum of the starforming ring is greatly reduced when the increase in the scale height of the gas with increasing radius is taken into account. We calculated the scale height h from the scale height of the HI gas given by Eq. (13) of Braun (1991), scaled to D = 780 kpc, assumed half this value for that of the H2 gas, and a constant scale height for the ionizing stars. Figure 16b shows Σ_SFR as a function of the gas volume density ρ_gas = N(HI)/2h + N(2H2)/h. The points have moved towards each other, but the horizontal branch remained and the behavior on the starforming ring has become more complicated.

The variations in slope in Fig. 16 are clear evidence of radial variations in the index of the Kennicutt-Schmidt law. Such variations are not specific to M 31, as they are also seen in some of the galaxies analyzed by Bigiel et al. (2008). In order to quantify the variations, we determined the bisectors in scatter plots for three circular rings: R = 7–9 kpc, R = 9–11 kpc and R = 11–13 kpc, covering the horizontal branch, the increasing part inside the maximum and the decreasing part outside the maximum, respectively. The results are given in Table 8. The index of the star formation law for surface densities is unity for the 7–9 kpc ring and about 1.6 for the other two rings. Hence, the slope of b = 1.30 ± 0.05 obtained for the "10 kpc ring" (R = 6.8–11.4 kpc) in Sect. 7.1 represents the mean value of the first two rings considered here.
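The conversion from column to volume density used for Fig. 16b can be sketched as follows. The numbers are illustrative only, and the HI scale height would in practice come from the flaring law of Braun (1991, Eq. 13), which is not reproduced here:

```python
def gas_volume_density(n_hi, n_2h2, h_hi_pc):
    """Mid-plane gas volume density from column densities.

    The HI layer has scale height h_hi_pc; the H2 layer is assumed to
    have half that value, so rho = N(HI)/(2h) + N(2H2)/(2 * h/2)
                                 = N(HI)/(2h) + N(2H2)/h.
    Columns in cm^-2, scale height in pc; returns cm^-3."""
    pc_cm = 3.0857e18                 # 1 parsec in cm
    h_cm = h_hi_pc * pc_cm
    return n_hi / (2.0 * h_cm) + n_2h2 / h_cm

# Illustrative column densities and scale height (not measured values):
rho = gas_volume_density(n_hi=4e20, n_2h2=1e20, h_hi_pc=300.0)
```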
The scatter plots between Σ_SFR and gas volume density yield bisector slopes that are about 0.2 smaller than those for surface density. The correlation coefficients are all close to those for the "10 kpc ring" (see Table 7), indicating that even in 2 kpc-wide rings the intrinsic scatter is considerable. This implies that on scales of a few hundred parsec significant variations in the index of the Kennicutt-Schmidt law and in the star formation efficiency occur.

Figure 16: Mean face-on values of Σ_SFR, averaged in 0.5 kpc-wide circular rings in the plane of M 31, plotted against the corresponding mean values of a) the gas surface density Σ_gas, and b) the gas volume density ρ_gas. The upper left point is for the ring R = 6.0–6.5 kpc, the maximum in Σ_SFR is in ring 10.5–11.0 kpc and the minimum in ring R = 14.5–15.0 kpc. Typical errors are 0.01 M⊙ Gyr^-1 pc^-2 in Σ_SFR, 0.02 M⊙ pc^-2 in Σ_gas, and 3 × 10^-5 M⊙ pc^-3 in ρ_gas. Points for R > 12 kpc suffer from missing data points near the major axis, the number of which increases with radius.

Table 8: Kennicutt-Schmidt law in three radial intervals in M 31.

### 7.3 Radial variations in SFR and SFE

In Fig. 17 (upper panel) we present the radial profile of the SFR surface density between 6 kpc and 17 kpc, averaged in 0.5 kpc-wide circular rings in the plane of M 31. The face-on values vary between about 0.1 and 1 M⊙ Gyr^-1 pc^-2. Boissier et al. (2007) and Braun et al. (2009) obtained similar values for Σ_SFR in this radial range from GALEX UV data. They are about 10 times smaller than the surface densities of SFR between R = 1.5 kpc and R = 7 kpc in the northern part of M 33 observed by Gardan et al. (2007).

In the lower panel of Fig. 17 we show the radial profiles of the surface density of the molecular gas and of the star formation efficiencies SFE = Σ_SFR/Σ_H2 and Σ_SFR/Σ_gas. Although the maximum of Σ_SFR occurs on a relative maximum in the molecular gas density (in ring 10.5–11.0 kpc), Σ_SFR is only about 70% of its maximum value where the molecular gas density is highest (in ring 9.0–9.5 kpc).
Consequently, SFE varies significantly with radius. Between R = 6 kpc and R = 15 kpc SFE fluctuates around a value of 0.9 Gyr^-1, with a minimum of 0.46 ± 0.01 Gyr^-1 near R = 9 kpc. Thus SFE is smallest where Σ_H2 is highest! Up to R = 12 kpc the efficiency Σ_SFR/Σ_gas shows the same trend as SFE. The increase in SFE by a factor 1.5 between 12 kpc and 15 kpc radius results from the difference in the radial scale lengths of Σ_SFR (or Hα emission) and the molecular gas density (see Fig. 6 and Table 3). Interestingly, in M 33 Gardan et al. (2007) found a radial increase in SFE of a factor 2 between 2 kpc and 6 kpc radius, with similar fluctuations around the mean as we observe in M 31, but the mean value in M 31 is about three times lower than in M 33. Furthermore, Leroy et al. (2008) found significant variations in the efficiency on a linear scale of 800 pc in the sample of 12 spiral galaxies analyzed by them.

That large, small-scale variations in SFE exist in galaxies is also clear from the large spread in the scatter plots of Σ_SFR against Σ_H2 visible in Fig. 15a and in several figures of Bigiel et al. (2008). The same value of Σ_H2 can occur over a range of Σ_SFR spanning more than a factor of 10. We may conclude that neither the present-day star formation rate nor the star formation efficiency SFE is well correlated with the molecular gas surface density. Hence, other factors than molecular gas density must play an important role in the star formation process. Bigiel et al. (2008) argue that local environmental circumstances largely determine the SFE in spiral galaxies. These factors are extensively discussed by e.g. Leroy et al. (2008).

Figure 17: Radial variation of the face-on surface density of the star formation rate Σ_SFR and the star formation efficiency SFE in M 31, averaged in 0.5 kpc-wide rings in the plane of the galaxy. Beyond R = 12 kpc the data are not complete, because the observed area is limited along the major axis (see Fig. 10a). Top: radial profile of Σ_SFR.
Bottom: full line: SFE = Σ_SFR/Σ_H2; long-dashed line: Σ_SFR/Σ_gas; dots: radial profile of the molecular gas surface density seen face-on. Note that Σ_SFR and Σ_H2 do not peak at the same radius. Statistical errors in Σ_SFR and Σ_H2 are smaller than the symbols, and those in SFE and Σ_SFR/Σ_gas are smaller than the thickness of the lines. Only beyond R = 12 kpc does the error in SFE slowly increase towards R = 15 kpc.

## 8 Summary

In this paper, we studied the emission from dust, neutral gas, and ionized gas in the disk of M 31, and the relationships between these components on various linear scales. We compared the Spitzer MIPS maps at 24 μm, 70 μm and 160 μm (Gordon et al. 2006) to the distributions of atomic gas seen in the HI line (Brinks & Shane 1984), molecular gas as traced by the 12CO(1-0) line (Nieten et al. 2006), and ionized gas observed in Hα (Devereux et al. 1994). All data were smoothed to an angular resolution of 45″, corresponding to 170 pc × 660 pc in the plane of the galaxy. For each of the dust and gas maps, we calculated the mean intensity distribution as a function of radius (Fig. 6), separately for the northern and the southern half of M 31. Using wavelet analysis, we decomposed the dust and gas distributions into spatial scales and calculated cross-correlations as a function of scale. We also used classical correlations to derive quantitative relations between the various dust and gas components. Using the MIPS 70 μm and 160 μm maps, we derived the distributions of the dust temperature and optical depth. The dust optical depth at the Hα wavelength was used to a) investigate the dust-to-gas ratio, b) derive scaling relations between extinction and neutral gas emission, and c) de-redden the Hα emission in order to estimate the recent star formation rate. We also presented the Kennicutt-Schmidt law indices obtained for the bright emission ring near R = 10 kpc in M 31. We summarize the main results and conclusions as follows.

1.
Dust temperature and opacity: The dust temperature drops steeply from about 30 K in the center to about 19 K near R = 4.5 kpc, and stays between about 17 K and 20 K beyond this radius (Fig. 2). The mean dust temperature in the area studied is about 18.5 K. This is 3 K less than the temperature obtained by Walterbos & Schwering (1987) between the IRAS maps at 60 μm and 100 μm, which both trace warmer dust than the MIPS maps at 70 μm and 160 μm used here. The dust optical depth at Hα along the line of sight varies between about 0.2 near the center and about 1 in the "10 kpc ring" (Fig. 4), with a mean value of 0.7 ± 0.4 (the error is the standard deviation) and a most probable value of about 0.5, indicating that M 31 is mostly optically thin to the Hα emission. The total flux density of the Hα emission increases by 30% after correction for extinction.

2. Radial distributions: The radial scale lengths of the warm dust between the maximum in the "10 kpc ring" and R = 15 kpc are smaller than that of the cold dust, as is expected if the warm dust is mainly heated by UV photons from starforming regions and the cold dust by the ISRF. With the largest scale length, atomic gas has the largest radial extent of the dust and gas components considered here. The radial gradient of the total gas-to-dust ratio is consistent with that of the oxygen abundance in M 31. The gas-to-dust ratios observed in the solar neighborhood (Bohlin et al. 1978) occur near R = 8.5 kpc in the disk of M 31.

3. Properties as a function of scale: Spatial scales larger than about 8 kpc contain most of the emitted power from the cold dust and the atomic gas, whereas the emissions from warm dust, molecular gas and ionized gas are dominated by scales near 1 kpc, typical for complexes of starforming regions and molecular clouds in spiral arms (Fig. 11). Dust emission is correlated with both neutral and ionized gas on scales > 1 kpc.
On scales < 1 kpc, ionized gas is best correlated with warm dust, and neutral gas (both HI and H2) with cold dust. On the smallest scale of 0.4 kpc, an HI–warm dust correlation hardly exists, because not much HI occurs on the scale of starforming regions (see Fig. 11).

4. Relationships between gas and dust: Hα emission is slightly better correlated with the emission at 70 μm than at 24 μm (Fig. 13, Table 6), especially on scales < 2 kpc (Fig. 12). As in M 33 the 24 μm–Hα correlation is best, this suggests that in early-type galaxies like M 31 the contribution from evolved AGB stars to the 24 μm emission is larger than in late-type galaxies like M 33. Dust extinction is not well correlated with N(2H2), indicating that dust mixed with molecular clouds does not contribute much to the total extinction. Although the correlation with N(HI) is better, the extinction is best correlated with N(HI + 2H2). The dust opacity is proportional to the square root of N(2H2) but about linearly related to N(HI), as was also found by Nieten et al. (2006) at 90″ resolution. This is an indirect indication of a balance between the formation and destruction rates of H2 in cool, dusty HI clouds. In the central 2 kpc both the dust opacity and the HI column density are very low and the dust temperature is high. This combination may explain the lack of H2 in this region.

5. SFR and SFE: The SFR in M 31 is low. The total SFR in the observed field between R = 6 kpc and R = 17 kpc is about 0.3 M⊙ yr^-1, and the star formation efficiency is 0.9 Gyr^-1, yielding a molecular depletion time scale of 1.1 Gyr. This is about three times longer than observed in the northern part of M 33 (Gardan et al. 2007). The radial distribution of Σ_SFR in 0.5 kpc-wide rings in the plane of the galaxy (Fig. 17) varies between about 0.1 and 1 M⊙ Gyr^-1 pc^-2, values that are about 10 times smaller than in the northern part of M 33 (Gardan et al. 2007).
Between R = 6 kpc and R = 15 kpc, SFE varies between about 0.5 Gyr^-1 and 1.5 Gyr^-1, whereas the efficiency with respect to the total gas surface density slowly decreases from about 0.18 Gyr^-1 to about 0.03 Gyr^-1. The SFR is not well correlated with neutral gas, and worst of all with molecular gas, in the radial range containing the "10 kpc ring" (Fig. 15, Table 7). In spite of this, the power-law fits are statistically significant. We find a linear relationship between the surface densities of SFR and molecular gas (power-law exponent 0.96 ± 0.03), and a power law with index 1.30 ± 0.05 between the surface densities of SFR and total gas. These results agree with the average relationship for 7 nearby galaxies much brighter than M 31 (Bigiel et al. 2008). While in these galaxies molecular hydrogen is the dominant gas phase, most of the neutral gas in M 31 is atomic. Thus, the surface density of SFR depends linearly on that of molecular gas, irrespective of the fraction of molecular gas or the absolute value of the total gas surface density in a galaxy.

Some important implications of this study are:

- Precaution is required in using the total IR luminosity (TIR) as an indicator of the recent SFR, or to derive the dust opacity, for an early-type galaxy like M 31, because the cold dust is mainly heated by the ISRF and the warm dust emission at 24 μm is partly due to evolved stars (especially in the bulge of the galaxy).

- Neither the present-day SFR nor the SFE is well correlated with the surface density of molecular gas or total gas. Therefore, other factors than gas density must play an important role in the process of star formation in M 31.

Acknowledgements. We are grateful to E. Krügel for valuable and stimulating comments. We thank K. M. Menten and R. Beck for comments and careful reading of the manuscript. The Spitzer MIPS data were kindly provided by Karl D. Gordon. E. Tempel kindly sent us a table of extinction values that we used for Fig. 5.
We thank an anonymous referee for extensive comments leading to improvements in the manuscript. F.T. was supported through a stipend from the Max Planck Institute for Radio Astronomy (MPIfR).

## References

1. Andriesse, C. D. 1974, A&A, 37, 257
2. Bajaja, E., & Gergely, T. E. 1977, A&A, 61, 229
3. Barmby, P., Ashby, M. L. N., Bianchi, L., et al. 2006, ApJ, 650, L45
4. Berkhuijsen, E. M. 1977, A&A, 57, 9
5. Berkhuijsen, E. M., & Humphreys, R. M. 1989, A&A, 214, 68
6. Bigiel, F., Leroy, A., Walter, F., et al. 2008, AJ, 136, 2846
7. Blair, W. P., Kirshner, R. P., & Chevalier, R. A. 1982, ApJ, 254, 50
8. Block, D. L., Bournaud, F., Combes, F., et al. 2006, Nature, 443, 832
9. Bohlin, R. C., Savage, B. D., & Drake, J. F. 1978, ApJ, 224, 132
10. Boissier, S., Gil de Paz, A., Boselli, A., et al. 2007, ApJS, 173, 524
11. Boulanger, F., Abergel, A., Bernard, J.-P., et al. 1996, A&A, 312, 256
12. Braun, R. 1990, ApJS, 72, 755
13. Braun, R. 1991, ApJ, 372, 54
14. Braun, R., Thilker, D. A., Walterbos, R. A. M., & Corbelli, E. 2009, ApJ, 695, 937
15. Brinks, E., & Shane, W. W. 1984, A&AS, 55, 179
16. Calzetti, D., Kennicutt, Jr., R. C., Bianchi, L., et al. 2005, ApJ, 633, 871
17. Calzetti, D., Kennicutt, R. C., Engelbracht, C. W., et al. 2007, ApJ, 666, 870
18. Chemin, L., Carignan, C., & Foster, T. 2009, ApJ, 705, 1395
19. Ciardullo, R., Rubin, V. C., Ford, Jr., W. K., Jacoby, G. H., & Ford, H. C. 1988, AJ, 95, 438
20. Cox, P., Kruegel, E., & Mezger, P. G. 1986, A&A, 155, 380
21. Dennefeld, M., & Kunth, D. 1981, AJ, 86, 989
22. Devereux, N. A., Price, R., Wells, L. A., & Duric, N. 1994, AJ, 108, 1667
23. Dickinson, C., Davies, R. D., & Davis, R. J. 2003, MNRAS, 341, 369
24. Diplas, A., & Savage, B. D. 1994, ApJ, 427, 274
25. Draine, B. T., & Lee, H. M. 1984, ApJ, 285, 89
26. Draine, B. T., Dale, D. A., Bendo, G., et al. 2007, ApJ, 663, 866
27. Emerson, D. T. 1974, MNRAS, 169, 607
28. Evans, I. N. 1986, ApJ, 309, 544
29. Frick, P., Beck, R., Berkhuijsen, E. M., & Patrickeyev, I. 2001, MNRAS, 327, 1145
30. Gardan, E., Braine, J., Schuster, K. F., Brouillet, N., & Sievers, A. 2007, A&A, 473, 91
31. Garnett, D. R., Shields, G. A., Skillman, E. D., Sagan, S. P., & Dufour, R. J. 1997, ApJ, 489, 63
32. Gordon, K. D., Rieke, G. H., Engelbracht, C. W., et al. 2005, PASP, 117, 503
33. Gordon, K. D., Bailin, J., Engelbracht, C. W., et al. 2006, ApJ, 638, L87
34. Gordon, K. D., Engelbracht, C. W., Fadda, D., et al. 2007, PASP, 119, 1019
35. Haas, M., Lemke, D., Stickel, M., et al. 1998, A&A, 338, L33
36. Henry, R. B. C., & Howard, J. W. 1995, ApJ, 438, 170
37. Hippelein, H., Haas, M., Tuffs, R. J., et al. 2003, A&A, 407, 137
38. Hirashita, H. 1999, ApJ, 510, L99
39. Hirashita, H., Tajiri, Y. Y., & Kamaya, H. 2002, A&A, 388, 439
40. Hoernes, P., Berkhuijsen, E. M., & Xu, C. 1998, A&A, 334, 57
41. Isobe, T., Feigelson, E. D., Akritas, M. G., & Babu, G. J. 1990, ApJ, 364, 104
42. Issa, M. R., MacLaren, I., & Wolfendale, A. W. 1990, A&A, 236, 237
43. Kang, Y., Bianchi, L., & Rey, S. 2009, ApJ, 703, 614
44. Kennicutt, Jr., R. C. 1998a, ARA&A, 36, 189
45. Kennicutt, Jr., R. C. 1998b, ApJ, 498, 541
46. Kennicutt, Jr., R. C., Calzetti, D., Walter, F., et al. 2007, ApJ, 671, 333
47. Kennicutt, R. C., Hao, C., Calzetti, D., et al. 2009, ApJ, 703, 1672
48. Krügel, E. 2003, The Physics of Interstellar Dust, IoP Series in Astronomy and Astrophysics (Bristol, UK: The Institute of Physics)
49. Krügel, E. 2009, A&A, 493, 385
50. Leroy, A. K., Walter, F., Brinks, E., et al. 2008, AJ, 136, 2782
51. Magnier, E. A., Hodge, P., Battinelli, P., Lewin, W. H. G., & van Paradijs, J. 1997, MNRAS, 292, 490
52. Montalto, M., Seitz, S., Riffeser, A., et al. 2009, A&A, 507, 283
53. Nakai, N., & Sofue, Y. 1982, PASJ, 34, 199
54. Nakai, N., & Sofue, Y. 1984, PASJ, 36, 313
55. Nedialkov, P., Berkhuijsen, E. M., Nieten, C., & Haas, M. 2000, in Proceedings, WE-Heraeus Seminar, ed. E. M. Berkhuijsen, R. Beck, & R. A. M. Walterbos, 85, 232
56. Neininger, N., Guélin, M., Ungerechts, H., Lucas, R., & Wielebinski, R. 1998, Nature, 395, 871
57. Nieten, C., Neininger, N., Guélin, M., et al. 2006, A&A, 453, 459
58. Pagel, B. E. J., & Edmunds, M. G. 1981, ARA&A, 19, 77
59. Pagel, B. E. J., Edmunds, M. G., Blackwell, D. E., Chun, M. S., & Smith, G. 1979, MNRAS, 189, 95
60. Pérez-González, P. G., Kennicutt, Jr., R. C., Gordon, K. D., et al. 2006, ApJ, 648, 987
61. Reach, W. T., & Boulanger, F. 1998, in The Local Bubble and Beyond, Proc. IAU Colloq. 166, ed. D. Breitschwerdt, M. J. Freyberg, & J. Truemper (Berlin: Springer Verlag), Lect. Notes Phys., 506, 353
62. Relaño, M., Lisenfeld, U., Pérez-González, P. G., Vílchez, J. M., & Battaner, E. 2007, ApJ, 667, L141
63. Rieke, G. H., Young, E. T., Engelbracht, C. W., et al. 2004, ApJS, 154, 25
64. Savage, B. D., Wesselius, P. R., Swings, J. P., & The, P. S. 1978, ApJ, 224, 149
65. Savcheva, A. S., & Tassev, S. V. 2002, Publications de l'Observatoire Astronomique de Beograd, 73, 219
66. Schmidt, M. 1959, ApJ, 129, 243
67. Soifer, B. T., Rice, W. L., Mould, J. R., et al. 1986, ApJ, 304, 651
68. Stanek, K. Z., & Garnavich, P. M. 1998, ApJ, 503, L131
69. Tabatabaei, F. S., Beck, R., Krause, M., et al. 2007a, A&A, 466, 509
70. Tabatabaei, F. S., Beck, R., Krügel, E., et al.
2007b, A&A, 475, 133 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] 71. Tempel, E., Tamm, A., & Tenjes, P. 2010, A&A, 509, 91 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] 72. Tenjes, P., & Haud, U. 1991, A&A, 251, 11 [NASA ADS] [Google Scholar] 73. Thilker, D. A., Hoopes, C. G., Bianchi, L., et al. 2005, ApJ, 619, L67 [NASA ADS] [CrossRef] [Google Scholar] 74. Trundle, C., Dufton, P. L., Lennon, D. J., Smartt, S. J., & Urbaneja, M. A. 2002, A&A, 395, 519 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] 75. Unwin, S. C. 1980, MNRAS, 192, 243 [NASA ADS] [Google Scholar] 76. van Genderen, A. M. 1973, A&A, 24, 47 [NASA ADS] [Google Scholar] 77. Verley, S., Corbelli, E., Giovanardi, C., & Hunt, L. K. 2009, A&A, 493, 453 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] 78. Viallefond, F., Goss, W. M., & Allen, R. J. 1982, A&A, 115, 373 [NASA ADS] [Google Scholar] 79. Walterbos, R. A. M., & Braun, R. 1994, ApJ, 431, 156 [NASA ADS] [CrossRef] [Google Scholar] 80. Walterbos, R. A. M., & Kennicutt, Jr., R. C. 1988, A&A, 198, 61 [NASA ADS] [Google Scholar] 81. Walterbos, R. A. M., & Schwering, P. B. W. 1987, A&A, 180, 27 [NASA ADS] [Google Scholar] 82. Williams, B. F. 2003, AJ, 126, 1312 [NASA ADS] [CrossRef] [Google Scholar] 83. Witt, A. N., & Gordon, K. D. 2000, ApJ, 528, 799 [NASA ADS] [CrossRef] [Google Scholar] 84. Xu, C., & Helou, G. 1996, ApJ, 456, 163 [NASA ADS] [CrossRef] [Google Scholar] 85. Yin, J., Hou, J. L., Prantzos, N., et al. 2009, A&A, 505, 497 [NASA ADS] [CrossRef] [EDP Sciences] [Google Scholar] ## Footnotes ... R Because the spiral structure is different in the northern and southern half (northeast and southwest of the minor axis, i.e. left and right of the minor axis, respectively), we present all radial profiles for each half separately. ## All Tables Table 1:   Positional data adopted for M 31. Table 2:   M 31 data used in this study. Table 3:   Exponential scale lengths of dust and gas emissions from M 31. 
Table 4: Exponential scale lengths L and radial gradients of dust-to-gas ratios and the abundance [O/H].
Table 5: Power-law relations and correlation coefficients between dust extinction and gas components.
Table 6: Power-law relations and correlation coefficients between the emission from dust and ionized gas.
Table 7: Kennicutt-Schmidt law in M 31.
Table 8: Kennicutt-Schmidt law in three radial intervals in M 31.

## All Figures

Figure 1: Dust temperature in M 31 obtained from the I(70 μm)/I(160 μm) ratio based on the Spitzer MIPS data. Only pixels with intensity above the 3σ noise level were used. The angular resolution of 45″ is shown in the lower right-hand corner of the map. The cross indicates the location of the center. The bar at the top gives the dust temperature in Kelvin.

Figure 2: a) Histogram of the dust temperature shown in Fig. 1. b) Distribution of the dust temperature in rings of 0.2 kpc in the galactic plane in the northern and southern halves of M 31.

Figure 3: Distribution of the dust optical depth at the Hα wavelength in M 31. The bar at the top shows τ. The angular resolution of 45″ is shown in the lower right-hand corner of the map. The cross indicates the location of the center. Overlayed are contours of molecular gas column density N(H2) with levels of 250 and 800 mol cm^-2. Note that maxima in τ do not always coincide with maxima in N(H2).

Figure 4: a) Histogram of the dust optical depth shown in Fig. 3. b) Radial distribution of the mean optical depth at the Hα wavelength in rings of width 0.2 kpc in the galactic plane in the north and south of M 31. The errors are smaller than the size of the symbols.

Figure 5: Radial variation of the (total) mean optical depth in Hα along the line of sight for the full area (north+south).
Plusses: our data averaged in 0.2 kpc-wide rings in the plane of M 31 using i = 75°; stars: same but with i = 77.5° for comparison with other work; triangles: Xu & Helou (1996), averages in 2 kpc-wide rings with i = 77° (scaled to D = 780 kpc); circles: Montalto et al. (2009), averages in 2 kpc-wide rings with i = 77.6°; solid line: Tempel et al. (2010), semi-major axis cut through model with i = 77.5°. The errors in our data and in the curve of Tempel et al. (2010) are about 10% of the mean values.

Figure 6: Top: radial profiles of the Spitzer IR emission from the northern (left) and the southern (right) halves of M 31. Bottom: radial profiles of the surface densities of the atomic, molecular and total neutral gas together with that of the ionized gas (de-reddened Hα) for the northern (left) and southern (right) halves of M 31. The units are 10^18 atoms cm^-2 for HI and HI+2H2, 10^18 molecules cm^-2 for H2, and 10^10 erg s^-1 cm^-2 sr^-1 for the Hα profiles. The profiles show intensities along the line of sight averaged in circular rings of 0.2 kpc width in the plane of M 31 against the galactocentric radius. The errors are smaller than 5% for all profiles; only for H2 do they increase from 10% to 25% at R > 12.3 kpc and in the inner arms for R < 4.5 kpc.

Figure 7: Ratio of the MIPS IR intensities against galactocentric radius in M 31.

Figure 8: Radial profiles of the gas-to-dust ratios in M 31, the northern half and the southern half. Top: N(HI)/τ, middle: N(2H2)/τ, bottom: N(HI + 2H2)/τ. In the middle panel, the northern profile is shifted by 300 units for clarity. The errors are smaller than 5% everywhere; only for N(2H2)/τ do they increase from 10% to 25% at R > 12.3 kpc and in the inner arms for R < 4.5 kpc.

Figure 9: Dust-to-gas ratios as a function of galactocentric radius for M 31, calculated from the radial profiles of τ, N(HI), N(2H2) and N(HI + 2H2) = N(gas).
Figure 10: Distribution of the de-reddened Hα emission a) and the wavelet decomposition for scales 0.4, 1.6, 4.0 kpc b) to d). The central 2 kpc was subtracted from the Hα map before the decomposition. The cross in the Hα map indicates the location of the center.

Figure 11: Wavelet spectra of MIPS IR (left) and gas (right) emission in M 31, shown in arbitrary units. The data points correspond to the scales 0.4, 0.6, 1.0, 1.6, 2.5, 4.0, 6.3, 10.0, 15.9, 25.1 kpc.

Figure 12: Wavelet cross-correlations of atomic gas (top-left), molecular gas (top-right), and total neutral gas (bottom-left) with IR emission in M 31. The IR correlation with the ionized gas (bottom-right) is also shown. The data points correspond to the scales 0.4, 0.6, 1.0, 1.6, 2.5, 4.0, 6.3, 10.0, 15.9, and 25.1 kpc.

Figure 13: Classical cross-correlations between gas column densities and dust extinction A in two radial ranges (inside and outside 6.8 kpc).

Figure 14: Scatter plot between the surface brightnesses of ionized gas and dust emission at 70 μm for the radial range 0′-50′. Only independent data points (separated by 1.67 beamwidths) with values above 2× the rms noise are included. The line shows the power-law fit given in Table 6, which has an exponent close to 1. In a linear frame, this fit goes through the zero point of the plot.

Figure 15: Scatter plots between the surface density of the star formation rate and neutral gas surface densities for the radial interval beyond 6.8 kpc.

Figure 16: Mean face-on values of Σ_SFR, averaged in 0.5 kpc-wide circular rings in the plane of M 31, plotted against the corresponding mean values of a) gas surface density Σ_gas, and b) gas volume density ρ_gas. The upper left point is for the ring R = 6.0-6.5 kpc, the maximum in Σ_SFR is in ring 10.5-11.0 kpc and the minimum in ring R = 14.5-15.0 kpc.
Typical errors are 0.01 M_⊙ Gyr^-1 pc^-2 in Σ_SFR, 0.02 M_⊙ pc^-2 in Σ_gas, and 3 × 10^-5 M_⊙ pc^-3 in ρ_gas. Points for R > 12 kpc suffer from missing data points near the major axis, the number of which increases with radius.

Figure 17: Radial variation of the face-on surface density of the star formation rate and star formation efficiency SFE in M 31, averaged in 0.5 kpc-wide rings in the plane of the galaxy. Beyond R = 12 kpc the data are not complete, because the observed area is limited along the major axis (see Fig. 10a). Top: radial profile of Σ_SFR. Bottom: full line: SFE = Σ_SFR/Σ_gas; long dashed line: Σ_SFR/Σ_H2; dots: radial profile of the molecular gas surface density Σ_H2 seen face-on. Note that Σ_SFR and Σ_H2 do not peak at the same radius. Statistical errors in Σ_SFR and Σ_gas are smaller than the symbols and those in SFE and Σ_SFR/Σ_H2 are smaller than the thickness of the lines. Only beyond R = 12 kpc does the error in SFE slowly increase out to R = 15 kpc.
https://math.stackexchange.com/tags/asymptotics/hot
# Tag Info

6 (This is more like a comment with images.) Here are some simulations of the values $c = c(p)$ using a grid of size $1000\times1000$ and $500$ steps, together with some fitting curves. The data clearly deviate from the polynomial $2p^2$, and although the above plot may seem to suggest that $c(p)$ assumes a nice closed form, I believe that ...

3 Seems fine, but it is easier to note that $$n+1\leq 2n$$ for $n\geq 1$, whence $$(n+1)^3\leq (2n)^3=8n^3$$ for $n\geq 1$, as desired.

2 By the most recent bound on Linnik's Theorem, there is an absolute constant $c$ such that for every prime $q < cp_n^{1/5}$, there is a prime $p < p_n$ such that $p \equiv 1 \pmod{q}$. Your least common multiple is therefore divisible by all primes below $cp_n^{1/5}$. The prime number theorem implies that the product of all primes below $cp_n^{1/5}$ is ...

2 Your differential equation has closed-form solutions $$f(t)=c_{1}\,t^{(1-a)/2}\,J_{-\sqrt{a^{2}-2a+4b+1}}\left(\frac{2\sqrt{b\epsilon}}{\sqrt{t}}\right)+c_{2}\,t^{(1-a)/2}\,Y_{-\sqrt{a^{2}-2a+4b+1}}\left(\frac{2\sqrt{b\epsilon}}{\sqrt{t}}\right)$$ where $J$ and $Y$ are Bessel ...

1 Here is a key lemma (which you can and should prove for yourself) that helps to resolve so many questions of this type. I'll phrase it in terms of functions of $x$ where $x\to\infty$, although it holds for other domains as well. Lemma: Let $f,g\colon [1,\infty)\to[2,\infty)$ be functions. If $\log f(x) = o(\log g(x))$ as $x\to\infty$, then $f(x) = o(g(x))$ ...

1 Consider $\log((1/n) + n^2)$ and focus on the term inside the logarithm, $x = 1/n + n^2$. As $n$ grows, the $1/n$ term effectively becomes zero and the dominant term is $n^2$. We can therefore say that $\log((1/n) + n^2)$ has growth $\log(n^2)$ when $n$ gets large. Since $\log(a^b) = b\log(a)$, $\log(n^2)$ can be written as $2\log(n)$.

1 Yes it does. Let $t_N$ denote the algorithm time. Let $f_N = \max\{t_M/M^2 : M\le N\}$.
Suppose$f_N$diverges. Then$t_N$fails to be$O(\sqrt{f_N} N^2)$. Therefore$f_N$does not diverge. Since$f_N$is a non-decreasing sequence, it must be bounded. Set$C= \sup f_N$. Then$t_N \le C N^2\$. Only top voted, non community-wiki answers of a minimum length are eligible
http://mathhelpforum.com/pre-calculus/56082-continuity-question.html
$g(x)=\left\{\begin{array}{cc}x^2+1,&\mbox{ if }x\leq 0\\x^2+bx\sin(\frac{1}{x})+c,&\mbox{ if }x>0\end{array}\right.$ Find $b\in\mathbb{R}$ and $c\in\mathbb{R}$ such that $g$ is continuous on $\mathbb{R}$.

2. The only possible point of discontinuity is at $x=0$; hence, you must put $\lim_{x\to0^-}g(x)=\lim_{x\to0^+}g(x).$ On the left, $\lim_{x\to0^-}(x^2+1)=1$. On the right, $x^2\to0$ and $|bx\sin(\frac{1}{x})|\leq|b||x|\to0$, so $\lim_{x\to0^+}g(x)=c$. Therefore $g$ is continuous exactly when $c=1$, for any value of $b$.
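As a numerical sanity check (a sketch): equating the one-sided limits at $x=0$ forces $c=1$ with $b$ arbitrary, because $|x\sin(1/x)|\le|x|$ kills the oscillating term. The value $b=5$ below is just an illustrative choice:

```python
import math

# Right-hand branch of g; it should approach c as x -> 0+, since x**2 -> 0
# and |b*x*sin(1/x)| <= |b|*|x| -> 0.  The left-hand branch x**2 + 1 -> 1,
# so continuity at 0 requires c = 1 (b arbitrary; b = 5 is illustrative).
def g_right(x, b, c):
    return x ** 2 + b * x * math.sin(1.0 / x) + c

b, c = 5.0, 1.0
for x in [1e-3, 1e-6, 1e-9]:
    print(x, g_right(x, b, c))  # values approach 1.0, the left-hand limit
```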
https://dbfin.com/logic/enderton/chapter-3/section-3-3-a-subtheory-of-number-theory/problem-6-solution/
# Section 3.3: Problem 6 Solution

Working problems is a crucial part of learning mathematics. No one can learn... merely by poring over the definitions, theorems, and examples that are worked out in the text. One must work part of it out for oneself. To provide that opportunity is the purpose of the exercises. James R. Munkres

Is $3$ a sequence number? What is $\mbox{lh}\,3$? Find $(1\ast3)\ast6$ and $1\ast(3\ast6)$.

$3=2^{0}\cdot3^{1}$ is not a sequence number, as $p_{1}=3$ divides $3$ but $p_{0}=2$ does not divide $3$. However, formally, we can still calculate the formulas in the question. First, formally, $\mbox{lh}\,b$ is defined as the least $n\in\mathbb{N}$ such that either $b=0$ or $\langle p_{n},b\rangle\notin\mathcal{D}$, where $\mathcal{D}$ is the divisibility relation. In our case, $\langle p_{0},3\rangle\notin\mathcal{D}$, so that $\mbox{lh}\,3=0$. Second, formally, $(b)_{c}$ is defined as the least $n\in\mathbb{N}$ such that either $b=0$ or $\langle p_{c}^{n+2},b\rangle\notin\mathcal{D}$. In our case, $(3)_{c}=0$ for all $c\in\mathbb{N}$ (including $c=1$, for which $p_{1}=3$ but $p_{1}^{2}=9$ does not divide $3$). Finally, $a\ast b=a\cdot\prod_{i<\mbox{lh}\,b}p_{i+\mbox{lh}\,a}^{(b)_{i}+1}$ is defined for all natural numbers as well. In particular, since $\mbox{lh}\,1=0$, $1\ast b=\prod_{i<\mbox{lh}\,b}p_{i}^{(b)_{i}+1}$, which is simply the maximum sequence number that divides $b$. Therefore, $1\ast3=1$ and $(1\ast3)\ast6=6$ ($6=\langle0,0\rangle$ is a sequence number). Further, $3\ast6=3\cdot\prod_{i<2}p_{i}^{(6)_{i}+1}=3\cdot6=18=2^{1}\cdot3^{2}=\langle0,1\rangle$. Hence, $1\ast(3\ast6)=18$.

Note. The exercise shows that the concatenation operation, as defined, is not associative on the set of all natural numbers; however, as mentioned in the text, it is associative on the set of sequence numbers.
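The computations above can be checked mechanically. The sketch below implements $\mbox{lh}$, $(b)_{c}$ and $\ast$ directly from the definitions used in the solution; the helper names (`nth_prime`, `lh`, `entry`, `cat`) are our own:

```python
def nth_prime(n):
    # p_0 = 2, p_1 = 3, p_2 = 5, ... (trial division; fine for tiny inputs)
    count, cand = -1, 1
    while count < n:
        cand += 1
        if all(cand % d for d in range(2, int(cand ** 0.5) + 1)):
            count += 1
    return cand

def lh(b):
    # least n such that b == 0 or p_n does not divide b
    n = 0
    while b != 0 and b % nth_prime(n) == 0:
        n += 1
    return n

def entry(b, c):
    # (b)_c: least n such that b == 0 or p_c**(n+2) does not divide b
    n = 0
    while b != 0 and b % nth_prime(c) ** (n + 2) == 0:
        n += 1
    return n

def cat(a, b):
    # a * b = a * prod_{i < lh b} p_{i + lh a} ** ((b)_i + 1)
    result = a
    for i in range(lh(b)):
        result *= nth_prime(i + lh(a)) ** (entry(b, i) + 1)
    return result

print(lh(3))              # 0
print(cat(cat(1, 3), 6))  # (1*3)*6 = 6
print(cat(3, 6))          # 3*6 = 18
print(cat(1, cat(3, 6)))  # 1*(3*6) = 18
```

The last two lines reproduce the failure of associativity on arbitrary naturals: $(1\ast3)\ast6=6$ while $1\ast(3\ast6)=18$.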
http://www.science.gov/topicpages/a/ac+conductance+measurements.html
Sample records for ac conductance measurements

1. Accelerated life ac conductivity measurements of CRT oxide cathodes
Hashim, A. A.; Barratt, D. S.; Hassan, A. K.; Nabok, A.
2006-07-01
The ac conductivity measurements have been carried out for the activated Ba/SrO cathode with an additional 5% Ni powder for every 100 h of accelerated life time at a temperature around 1125 K. The ac conductivity was studied as a function of temperature in the range 300-1200 K after conversion and activation of the cathode at 1200 K for 1 h in a two-cathodes face-to-face closed configuration. The experimental results prove that hopping conductivity dominates in the temperature range 625-770 K through the traps of the WO3 associated with activation energy Ea = 0.87 eV, whereas from 500-625 K it is most likely to be through the traps of the Al2O3 with activation energy Ea = 1.05 eV. The hopping conductivity in the low temperature range 300-500 K is based on a Ni powder link with some Ba contaminants in the oxide layer structure, which indicates a very low activation energy Ea = 0.06 eV.

2. AC-Conductivity Measure from Heat Production of Free Fermions in Disordered Media
Bru, J.-B.; de Siqueira Pedra, W.; Hertling, C.
2016-05-01
We extend (Bru et al. in J Math Phys 56:051901-1-51, 2015) in order to study the linear response of free fermions on the lattice within a (independently and identically distributed) random potential to a macroscopic electric field that is time- and space-dependent. We obtain the notion of a macroscopic AC-conductivity measure which only results from the second principle of thermodynamics. The latter corresponds here to the positivity of the heat production for cyclic processes on equilibrium states. Its Fourier transform is a continuous bounded function which is naturally called (macroscopic) conductivity. We additionally derive Green-Kubo relations involving time-correlations of bosonic fields coming from current fluctuations in the system.
This is reminiscent of non-commutative central limit theorems.

3. AC conductivity and dielectric measurements of metal-free phthalocyanine thin films dispersed in polycarbonate
Riad, A. S.; Korayem, M. T.; Abdel-Malik, T. G.
1999-10-01
The dielectric constant and the dielectric loss of thin films of metal-free phthalocyanine dispersed in polycarbonate using ohmic gold electrodes are investigated in the frequency range 20-10^5 Hz and within the temperature range 300-388 K. The frequency dependence of the impedance spectra plotted in the complex plane shows semicircles. The Cole-Cole diagrams have been used to determine the molecular relaxation time, τ. The temperature dependence of τ is expressed by a thermally activated process. The AC conductivity σAC(ω) is found to vary as ω^s with the index s ⩽ 1, indicating a dominant hopping process at low temperatures. From the temperature dependence of the AC conductivity, free-carrier conduction with a mean activation energy of 0.33 eV is observed at higher temperatures. Capacitance and loss tangent are found to decrease with increasing frequency and increase with increasing temperature. Such characteristics are found to be in good qualitative agreement with an existing equivalent circuit model assuming ohmic contacts.

4. Studies on the activation energy from the ac conductivity measurements of rubber ferrite composites containing manganese zinc ferrite
Hashim, Mohd.; Alimuddin; Kumar, Shalendra; Shirsath, Sagar E.; Mohammed, E. M.; Chung, Hanshik; Kumar, Ravi
2012-11-01
Manganese zinc ferrites (MZF) have resistivities between 0.01 and 10 Ω m. Making composite materials of ferrites with either natural rubber or plastics will modify the electrical properties of ferrites. The moldability and flexibility of these composites find wide use in industrial and other scientific applications.
Mixed ferrites belonging to the series Mn(1-x)ZnxFe2O4 were synthesized for different 'x' values in steps of 0.2, and incorporated in a natural rubber matrix (RFC). From the dielectric measurements of the ceramic manganese zinc ferrite and rubber ferrite composites, the ac conductivity and activation energy were evaluated. A program was developed with the aid of the LabVIEW package to automate the measurements. The ac conductivity of RFC was then correlated with that of the magnetic filler and matrix by a mixture equation, which helps to tailor the properties of these composites.

5. AC resistance measuring instrument
DOEpatents
Hof, P.J.
1983-10-04
An auto-ranging AC resistance measuring instrument for remote measurement of the resistance of an electrical device or circuit connected to the instrument includes a signal generator which generates an AC excitation signal for application to a load, including the device and the transmission line, a monitoring circuit which provides a digitally encoded signal representing the voltage across the load, and a microprocessor which operates under program control to provide an auto-ranging function by which range resistance is connected in circuit with the load to limit the load voltage to an acceptable range for the instrument, and an auto-compensating function by which compensating capacitance is connected in shunt with the range resistance to compensate for the effects of line capacitance. After the auto-ranging and auto-compensation functions are complete, the microprocessor calculates the resistance of the load from the selected range resistance, the excitation signal, and the load voltage signal, and displays the measured resistance on a digital display of the instrument. 8 figs.

6. AC resistance measuring instrument
DOEpatents
Hof, Peter J.
1983-01-01
An auto-ranging AC resistance measuring instrument for remote measurement of the resistance of an electrical device or circuit connected to the instrument includes a signal generator which generates an AC excitation signal for application to a load, including the device and the transmission line, a monitoring circuit which provides a digitally encoded signal representing the voltage across the load, and a microprocessor which operates under program control to provide an auto-ranging function by which range resistance is connected in circuit with the load to limit the load voltage to an acceptable range for the instrument, and an auto-compensating function by which compensating capacitance is connected in shunt with the range resistance to compensate for the effects of line capacitance. After the auto-ranging and auto-compensation functions are complete, the microprocessor calculates the resistance of the load from the selected range resistance, the excitation signal, and the load voltage signal, and displays the measured resistance on a digital display of the instrument.

7. Automated ac galvanomagnetic measurement system
NASA Technical Reports Server (NTRS)
Szofran, F. R.; Espy, P. N.
1985-01-01
An automated, ac galvanomagnetic measurement system is described. Hall or van der Pauw measurements in the temperature range 10-300 K can be made at a preselected magnetic field without operator attendance. Procedures to validate sample installation and correct operation of other system functions, such as magnetic field and thermometry, are included. Advantages of ac measurements are discussed.

8. Impedance, AC conductivity and dielectric behavior of Adeninium Trichloromercurate (II)
Fersi, M. Amine; Chaabane, I.; Gargouri, M.
2016-09-01
In this work, we report impedance spectroscopy measurements of the organic-inorganic hybrid compound (C5H6N5)HgCl3·1.5H2O in the 209 Hz-5 MHz frequency range from 378 to 428 K.
Besides, the Cole-Cole (Z″ versus Z′) plots were well fitted to an equivalent circuit built up by a parallel combination of resistance (R), fractal capacitance (CPE) and capacitance (C). Furthermore, the AC conductivity was investigated as a function of temperature and frequency in the same range. The experimental results indicated that the AC conductivity (σac) was proportional to σdc + Aω^s. The obtained results are discussed in terms of the correlated barrier hopping (CBH) model. An agreement between the experimental and theoretical results suggests that the AC conductivity behavior of Adeninium Trichloromercurate (II) can be successfully explained by the CBH model. The contribution of single polaron hopping to the AC conductivity in the present alloy was also studied.

9. AC Conductivity and Dielectric Properties of Borotellurite Glass
Taha, T. A.; Azab, A. A.
2016-06-01
Borotellurite glasses with formula 60B2O3-10ZnO-(30 - x)NaF-xTeO2 (x = 0 mol.%, 5 mol.%, 10 mol.%, and 15 mol.%) have been synthesized by thermal melting. X-ray diffraction (XRD) analysis confirmed that the glasses were amorphous. The glass density (ρ) was determined by the Archimedes method at room temperature. The density (ρ) and molar volume (Vm) were found to increase with increasing TeO2 content. The direct-current (DC) conductivity was measured in the temperature range from 473 K to 623 K, in which the electrical activation energy of ionic conduction increased from 0.27 eV to 0.48 eV with increasing TeO2 content from 0 mol.% to 15 mol.%. The dielectric parameters and alternating-current (AC) conductivity (σac) were investigated in the frequency range from 1 kHz to 1 MHz and the temperature range from 300 K to 633 K. The AC conductivity and dielectric constant decreased with increasing TeO2 content from 0 mol.% to 15 mol.%.

10. ac-resistance-measuring instrument
SciTech Connect
Hof, P.J.
1981-04-22
An auto-ranging ac resistance measuring instrument for remote measurement of the resistance of an electrical device or circuit connected to the instrument includes a signal generator which generates an ac excitation signal for application to a load, including the device and the transmission line, a monitoring circuit which provides a digitally encoded signal representing the voltage across the load, and a microprocessor which operates under program control to provide an auto-ranging function by which range resistance is connected in circuit with the load to limit the load voltage to an acceptable range for the instrument, and an auto-compensating function by which compensating capacitance is connected in shunt with the range resistance to compensate for the effects of line capacitance.

11. Study of AC electrical conduction mechanisms in an epoxy polymer
Jilani, Wissal; Mzabi, Nissaf; Gallot-Lavallée, Olivier; Fourati, Najla; Zerrouki, Chouki; Zerrouki, Rachida; Guermazi, Hajer
2015-11-01
The AC conductivity of an epoxy resin was investigated in the frequency range 10^-1-10^6 Hz at temperatures ranging from -100 to 120 °C. The frequency dependence of σac was described by the law σac(ω) = ωε0ε″HN + Aω^s. The study of the temperature variation of the exponent s reveals two conduction models: the AC conduction dependence upon temperature is governed by the small polaron tunneling mechanism (SPTM) at low temperature (-100 to -60 °C) and the correlated barrier hopping (CBH) model at high temperature (80-120 °C).

12. Ac conduction in conducting polypyrrole-poly(vinyl methyl ether) polymer composite materials
SciTech Connect
Saha, S.K.; Mandal, T.K.; Mandal, B.M.; Chakravorty, D.
1997-03-01
Composite materials containing conducting polypyrrole and insulating poly(vinyl methyl ether) (PVME) have been synthesized by oxidative polymerization of pyrrole in ethanol using FeCl3 oxidant in the presence of PVME.
The ac conductivity measurements have been carried out in the frequency range of 100 Hz to 10 MHz and in the temperature range of 110 to 350 K. The frequency dependent conductivity has been explained on the basis of a small polaron tunnelling mechanism. © 1997 American Institute of Physics. 13. ac conductance of surface layer in lithium tetraborate single crystals Kim, Chung-Sik; Park, Jong-Ho; Moon, Byung Kee; Seo, Hyo-Jin; Choi, Byung-Chun; Hwang, Yoon-Hwae; Kim, Hyung Kook; Kim, Jung Nam 2003-12-01 ac conductance for the electrode effect in Li2B4O7 single crystal was investigated by use of a coplanar electrode applied on the surface of a (001) plate. A coplanar electrode in this material more clearly shows conduction of the electrode effect than a conventional parallel planar electrode. The electrode effect in ac conductance is likely to be controlled by the surface layer, which is a poorly conductive depletion layer possibly filled with vacancies of lithium ions. We found that the surface layer is not locally distributed near the electrodes, but, rather, on the broad area of the surface (001) plane of the material. So we conclude that the electrode effect in ac conduction of Li2B4O7 single crystal is mainly due to the poorly conductive surface layer distributed over the whole surface of the (001) plane and is not a secondary phase formed by reaction with the electrode material. 14. AC Conductivity Studies of Lithium Based Phospho Vanadate Glasses Nagendra, K.; Babu, G. Satish; Reddy, C. Narayana; Gowda, Veeranna 2011-07-01 Glasses in the system xLi2SO4-20Li2O-(80-x) [80P2O5-20V2O5] (5 ≤ x ≤ 20 mol%) have been prepared by the melt quenching method. Dc and ac conductivities have been studied over a wide range of frequency (10 Hz to 10 MHz) and temperature (298 K-523 K). The dc conductivity was found to increase with increasing Li2SO4 concentration.
The ac conductivities have been fitted to the Almond-West type single power law equation σ(ω) = σ(0) + Aωs, where 's' is the power law exponent. The ac conductivity was found to increase with increasing Li2SO4 concentration. An attempt is made to elucidate the enhancement of lithium ion conduction in phospho-vanadate glasses by considering the expansion of the network structure. 15. dc piezoresistance and ac conductance of niobium dioxide SciTech Connect Guerra Vela, C. 1984-01-01 The resistance, R, of monocrystalline n-type NbO2 in the semiconducting, distorted rutile-structured phase was measured at temperatures from 196 to 410 K and hydrostatic pressures, P, from one to 6000 atm. R/T increases exponentially with 1/T, and ΔR/R increases linearly with P/T at different rates along the a- and c-axes. Conduction is apparently due to adiabatic hopping of small polarons; values were obtained for phonon, electron transfer, and polaron binding energies, the pressure dependences of these energies, and of the small polaron activation energy. An electronic phase diagram is presented also. The complex ac conductivity was also measured using frequencies from 5 to 92 kHz between 1.5 and 300 K along the a- and c-axes of NbO2. Above 200 K the real parts of the conductivity, σa and σc, were independent of frequency, f, and strongly activated like the dc conductivity. Below 200 K, σa decreased ever less rapidly until 120 K, where a weakly activated regime began in which σa varied about like f^0.5, implying transitions of polarons between centers with a characteristic energy difference. 16. Dynamic conductivity of ac-dc-driven graphene superlattice Kukhar', E. I.; Kryuchkov, S. V.; Ionkina, E. S. 2016-06-01 The dynamic conductivity of graphene superlattice in the presence of ac electric field and dc electric field with longitudinal and transversal components with respect to superlattice axis was calculated.
In the case of a strong transversal component of the dc field, the conductivity of the graphene superlattice was shown to behave as if the electrons had acquired an effective mass. In the case of a weak transversal component of the dc field, the conductivity was shown to change sign if the frequency of the ac field was an integer multiple of half the Bloch frequency. 17. Tevatron optics measurements using an AC dipole SciTech Connect Miyamoto, R.; Kopp, S.E.; Jansson, A.; Syphers, M.J.; /Fermilab 2007-06-01 The AC dipole is a device to study beam optics of hadron synchrotrons. It can produce sustained large amplitude oscillations with virtually no emittance growth. A vertical AC dipole for the Tevatron was recently implemented, and a maximum oscillation amplitude of 2{sigma} (4{sigma}) at 980 GeV (150 GeV) was achieved [1]. When such large oscillations are measured with the BPM system of the Tevatron (20 {micro}m resolution), not only linear but even nonlinear optics can be directly measured. This paper shows how to measure the {beta} function using an AC dipole, and the result is compared to the other measurement. The paper also shows a test to detect optics changes when small changes are made in the Tevatron. Since an AC dipole is nondestructive, it allows frequent measurements of the optics, which is necessary for such a test. 18. Structural and AC conductivity study of CdTe nanomaterials Das, Sayantani; Banerjee, Sourish; Sinha, T. P. 2016-04-01 Cadmium telluride (CdTe) nanomaterials have been synthesized by a soft chemical route using mercaptoethanol as a capping agent. The crystallization temperature of the sample is investigated using a differential scanning calorimeter. X-ray diffraction and transmission electron microscope measurements show that the prepared sample has a cubic structure with an average particle size of 20 nm.
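Several of the records in this listing fit impedance spectra with equivalent circuits containing a constant phase element (CPE) in parallel with a resistance (e.g. the adeninium and CdTe entries). As a minimal numerical sketch of one such R-CPE unit, with purely illustrative parameter values (R, Q, n below are assumptions, not values taken from any abstract):

```python
import numpy as np

def z_cpe(omega, q, n):
    """Impedance of a constant phase element: Z_CPE = 1 / (Q * (j*omega)**n)."""
    return 1.0 / (q * (1j * omega) ** n)

# Illustrative (hypothetical) parameters for a single parallel R-CPE unit
R = 1.0e5   # ohm
Q = 1.0e-9  # S*s^n, CPE magnitude
n = 0.85    # CPE exponent (n = 1 recovers an ideal capacitor)

freq = np.logspace(1, 6, 200)   # 10 Hz .. 1 MHz
omega = 2.0 * np.pi * freq
Z = 1.0 / (1.0 / R + 1.0 / z_cpe(omega, Q, n))

# Plotting Z' against -Z'' traces the depressed semicircle of a Cole-Cole plot
print(f"low-f limit: Z' = {Z.real[0]:.3e} ohm (approaches R)")
```

With n < 1 the arc in the complex impedance plane is depressed below the real axis, which is why CPEs rather than ideal capacitors are used to fit real spectra.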
Impedance spectroscopy is applied to investigate the dielectric relaxation of the sample in a temperature range from 313 to 593 K and in a frequency range from 42 Hz to 1.1 MHz. The complex impedance plane plot has been analyzed by an equivalent circuit consisting of two serially connected R-CPE units, each containing a resistance (R) and a constant phase element (CPE). Dielectric relaxation peaks are observed in the imaginary parts of the spectra. The frequency dependence of the real and imaginary parts of the dielectric permittivity is analyzed using the modified Cole-Cole equation. The temperature dependence of the relaxation time is found to obey the Arrhenius law, with an activation energy of ~0.704 eV. The frequency-dependent ac conductivity spectra are found to follow the power law. 19. AC Conductivity Studies of Lithium Based Phospho Vanadate Glasses SciTech Connect Nagendra, K.; Babu, G. Satish; Gowda, Veeranna; Reddy, C. Narayana 2011-07-15 Glasses in the system xLi{sub 2}SO{sub 4}-20Li{sub 2}O-(80-x) [80P{sub 2}O{sub 5}-20V{sub 2}O{sub 5}] (5 {<=} x {<=} 20 mol%) have been prepared by the melt quenching method. Dc and ac conductivities have been studied over a wide range of frequency (10 Hz to 10 MHz) and temperature (298 K-523 K). The dc conductivity was found to increase with increasing Li{sub 2}SO{sub 4} concentration. The ac conductivities have been fitted to the Almond-West type single power law equation {sigma}({omega}) = {sigma}(0)+A{omega}{sup s} where 's' is the power law exponent. The ac conductivity was found to increase with increasing Li{sub 2}SO{sub 4} concentration. An attempt is made to elucidate the enhancement of lithium ion conduction in phospho-vanadate glasses by considering the expansion of the network structure. 20.
Broadband AC Conductivity of XUV Excited Warm Dense Gold Chen, Z.; Tsui, Y.; Toleikis, S.; Hering, P.; Brown, S.; Curry, C.; Tanikawa, T.; Hoeppner, H.; Levy, M.; Goede, S.; Ziaja-Motyka, B.; Rethfeld, B.; Recoules, Vanina; Ng, A.; Glenzer, S. 2015-11-01 The properties of ultrafast laser excited warm dense gold have been extensively studied in the past decade. In those studies, a 400 nm ultrashort laser pulse was used to excite the 5d electrons in gold to the 6s/p state. Here we will present our recent study of warm dense gold with 245 eV, 70 fs pulses to selectively excite 4f electrons using the XUV-FEL at FLASH. The AC conductivity of the warm dense gold was measured at different wavelengths (485 nm, 520 nm, 585 nm, 640 nm and 720 nm) to cover the range from 5d-6s/p interband transitions to 6s/p intraband transitions. Preliminary results suggest that the onset of the 5d-6s/p band transition shifts from 2.3 eV to ~2 eV, which is in agreement with the study of 400 nm laser pulse excited warm dense gold. More detailed analysis of our data will also be presented. 1. Structural, AC conductivity and dielectric properties of Sr-La hexaferrite Singh, A.; Narang, S. B.; Singh, K.; Sharma, P.; Pandey, O. P. 2006-03-01 A series of M-type hexaferrite samples with composition Sr1-xLaxFe12O19 (x = 0.00, 0.05, 0.15 and 0.25) were prepared by the standard ceramic technique. AC electrical conductivity measurements were carried out at different frequencies (20 Hz-1 MHz) and at different temperatures. The dielectric constant and dielectric loss tangent were measured in the same range of frequencies. The experimental results indicate that the AC electrical conductivity increases on increasing the frequency as well as the temperature, indicating magnetic semiconductor behavior of the samples.
The increase in AC electrical conductivity with frequency and temperature has been explained on the basis of the Koops model, whereas the dielectric constant and dielectric loss tangent have been explained by Maxwell-Wagner type interfacial polarization in agreement with the Koops phenomenological theory. 2. Microwave ac Conductivity Spectrum of a Coulomb Glass SciTech Connect Lee, Mark; Stutzmann, M. L. 2001-07-30 We report the first observation of the transition between interacting and noninteracting behavior in the ac conductivity spectrum {sigma}({omega}) of a doped semiconductor in its Coulomb glass state near T=0 K. The transition manifests itself as a crossover from approximately linear frequency dependence below {approx}10 GHz, to quadratic dependence above {approx}15 GHz. The sharpness of the transition and the magnitude of the crossover frequency strongly suggest that the transition is driven by photon-induced excitations across the Coulomb gap, in contrast to existing theoretical descriptions. 3. ac conductivity and dielectric constant of conductor-insulator composites Murtanto, Tan Benny; Natori, Satoshi; Nakamura, Jun; Natori, Akiko 2006-09-01 We study the complex admittance (ac conductivity and dielectric constant) of conductor-insulator composite material, based on a two-dimensional square network consisting of randomly placed conductors and capacitors. We derived some exact analytical relations between the complex admittances of high and low frequencies and of complementary conductor concentrations. We calculate the complex admittance by applying a transfer-matrix method to a square network and study the dependence on both the frequency and the conductor concentration. The numerical results are compared with an effective-medium theory, and the range of applicability and limitation of the effective-medium theory are clarified. 4.
AC Loss Measurements on a 2G YBCO Coil SciTech Connect Rey, Christopher M; Duckworth, Robert C; Schwenterly, S W 2011-01-01 The Oak Ridge National Laboratory (ORNL) is collaborating with Waukesha Electric Systems (WES) to continue development of HTS power transformers. For compatibility with the existing power grid, a commercially viable HTS transformer will have to operate at high voltages in the range of 138 kV and above, and will have to withstand 550-kV impulse voltages as well. Second-generation (2G) YBCO coated conductors will be required for an economically-competitive design. In order to adequately size the refrigeration system for these transformers, the ac loss of these HTS coils must be characterized. Electrical AC loss measurements were conducted on a prototype high voltage (HV) coil with co-wound stainless steel at 60 Hz in a liquid nitrogen bath using a lock-in amplifier technique. The prototype HV coil consisted of 26 continuous (without splice) single pancake coils concentrically centered on a stainless steel former. For ac loss measurement purposes, voltage tap pairs were soldered across each set of two single pancake coils so that a total of 13 separate voltage measurements could be made across the entire length of the coil. AC loss measurements were taken as a function of ac excitation current. Results show that the loss is primarily concentrated at the ends of the coil where the operating fraction of critical current is the highest and show a distinct difference in current scaling of the losses between low current and high current regimes. 5. Calibration-free electrical conductivity measurements for highly conductive slags SciTech Connect MACDONALD,CHRISTOPHER J.; GAO,HUANG; PAL,UDAY B.; VAN DEN AVYLE,JAMES A.; MELGAARD,DAVID K. 
2000-05-01 This research involves the measurement of the electrical conductivity (K) for the ESR (electroslag remelting) slag (60 wt.% CaF{sub 2} - 20 wt.% CaO - 20 wt.% Al{sub 2}O{sub 3}) used in the decontamination of radioactive stainless steel. The electrical conductivity is measured with an improved high-accuracy height-differential technique that requires no calibration. This method consists of making continuous AC impedance measurements over several successive depth increments of the coaxial cylindrical electrodes in the ESR slag. The electrical conductivity is then calculated from the slope of the plot of inverse impedance versus the depth of the electrodes in the slag. The improvements to the existing technique include an enlarged electrochemical cell geometry and the capability of measuring high precision depth increments and the associated impedances. These improvements allow this technique to be used for measuring the electrical conductivity of highly conductive slags such as the ESR slag. The volatilization rate and the volatile species of the ESR slag, measured through thermogravimetric (TG) and mass spectroscopy analysis, respectively, reveal that the ESR slag composition essentially remains the same throughout the electrical conductivity experiments. 6. AC Conductivity and Dielectric Relaxation Behavior of Sb2S3 Bulk Material Abd El-Rahman, K. F.; Darwish, A. A. A.; Qashou, Saleem I.; Hanafy, T. A. 2016-04-01 The Sb2S3 bulk material is used as a next-generation anode for lithium-ion batteries. The alternating current (AC) conductivity, dielectric properties and electric modulus of Sb2S3 have been investigated. The measurements were carried out in the frequency range from 40 Hz to 5 MHz and temperature range from 293 K to 453 K. The direct current (DC) conductivity, σDC, shows an activated behavior, and the calculated activation energy is 0.50 eV. The AC conductivity, σAC, was found to increase with increasing temperature and frequency.
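The Sb2S3 record above quotes an activation energy of 0.50 eV from the activated (Arrhenius) DC conductivity. A minimal sketch of how such a value is extracted from σDC(T) data, using synthetic data generated with an assumed Ea (illustrative, not the paper's measured dataset):

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

# Synthetic sigma_DC(T) obeying sigma = sigma0 * exp(-Ea / (kB * T)),
# with an assumed Ea of 0.50 eV (illustrative, not the measured data)
Ea_true, sigma0 = 0.50, 1.0e2          # eV, S/cm
T = np.linspace(293.0, 453.0, 20)      # K, the range quoted in the abstract
sigma = sigma0 * np.exp(-Ea_true / (K_B * T))

# Arrhenius analysis: ln(sigma) vs 1/T is a straight line of slope -Ea/kB
slope, intercept = np.polyfit(1.0 / T, np.log(sigma), 1)
Ea_fit = -slope * K_B
print(f"fitted Ea = {Ea_fit:.3f} eV")  # recovers 0.500 eV
```

In practice the same linear fit is applied to measured conductivities, and curvature of the Arrhenius plot signals a change of conduction mechanism.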
The conduction mechanism of σAC was controlled by the correlated barrier hopping model. The behavior of the dielectric constant, ɛ', and dielectric loss index, ɛ'', reveals that the polarization process of Sb2S3 is dipolar in nature. The behavior of both ɛ' and ɛ'' reveals that bulk Sb2S3 has no ferroelectric or piezoelectric phase transition. The dielectric modulus, M, gives a simple method for evaluating the activation energy of the dielectric relaxation. The calculated activation energy from M is 0.045 eV. 7. AC Conductivity and Dielectric Relaxation Behavior of Sb2S3 Bulk Material Abd El-Rahman, K. F.; Darwish, A. A. A.; Qashou, Saleem I.; Hanafy, T. A. 2016-07-01 The Sb2S3 bulk material is used as a next-generation anode for lithium-ion batteries. The alternating current (AC) conductivity, dielectric properties and electric modulus of Sb2S3 have been investigated. The measurements were carried out in the frequency range from 40 Hz to 5 MHz and temperature range from 293 K to 453 K. The direct current (DC) conductivity, σDC, shows an activated behavior, and the calculated activation energy is 0.50 eV. The AC conductivity, σAC, was found to increase with increasing temperature and frequency. The conduction mechanism of σAC was controlled by the correlated barrier hopping model. The behavior of the dielectric constant, ɛ', and dielectric loss index, ɛ'', reveals that the polarization process of Sb2S3 is dipolar in nature. The behavior of both ɛ' and ɛ'' reveals that bulk Sb2S3 has no ferroelectric or piezoelectric phase transition. The dielectric modulus, M, gives a simple method for evaluating the activation energy of the dielectric relaxation. The calculated activation energy from M is 0.045 eV. 8. ac and dc percolative conductivity of magnetite-cellulose acetate composites SciTech Connect Chiteme, C.; McLachlan, D. S.; Sauti, G.
2007-03-01 ac and dc conductivity results for a percolating system, which consists of a conducting powder (magnetite) combined with an 'insulating' powder (cellulose acetate), are presented. Impedance and modulus spectra are obtained in a percolation system. The temperature dependence of the resistivity of the cellulose acetate is such that at 170 deg. C, it is essentially a conductor at frequencies below 0.059{+-}0.002 Hz, and a dielectric above. The percolation parameters, from the dc conductivity measured at 25 and 170 deg. C, are determined and discussed in relation to the ac results. The experimental results scale as a function of composition, temperature, and frequency. An interesting result is the correlation observed between the scaling parameter (f{sub ce}), obtained from a scaling of the ac measurements, and the peak frequency (f{sub cp}) of the arcs, obtained from impedance spectra, above the critical volume fraction. Scaling at 170 deg. C is not as good as at 25 deg. C, probably indicating a breakdown in scaling at the higher temperature. The modulus plots show the presence of two materials: a conducting phase dominated by the cellulose acetate and the isolated conducting clusters below the critical volume fraction {phi}{sub c}, as well as the interconnected conducting clusters above {phi}{sub c}. These results are confirmed by computer simulations using the two exponent phenomenological percolation equation. These results emphasize the need to analyze ac conductivity results in terms of both impedance and modulus spectra in order to get more insight into the behavior of composite materials. 9. Measuring Salinity by Conductivity. ERIC Educational Resources Information Center Lapworth, C. J. 1981-01-01 Outlines procedures for constructing an instrument which uses an electrode and calibration methods to measure the salinity of waters in environments close to and affected by a saline estuary. (Author/DC) 10. 
RG flow of AC conductivity in soft wall model of QCD Bhatnagar, Neha; Siwach, Sanjay 2016-03-01 We study the Renormalization Group (RG) flow of AC conductivity in the soft wall model of holographic QCD. We consider the charged black hole metric, and the explicit form of the AC conductivity is obtained at the cutoff surface. We plot the numerical solution of the conductivity flow as a function of the radial coordinate. The equation of the gauge field is also considered, and the numerical solution is obtained for the AC conductivity as a function of frequency. The results for AC conductivity are also obtained for different values of the chemical potential and Gauss-Bonnet couplings. 11. Random free energy barrier hopping model for ac conduction in chalcogenide glasses Murti, Ram; Tripathi, S. K.; Goyal, Navdeep; Prakash, Satya 2016-03-01 The random free energy barrier hopping model is proposed to explain the ac conductivity (σac) of chalcogenide glasses. The Coulomb correlation is consistently accounted for in the polarizability and defect distribution functions, and the relaxation time is augmented to include the overlapping of the hopping particle wave functions. It is observed that ac and dc conduction in chalcogenides are due to the same mechanism, and the Meyer-Neldel (MN) rule is a consequence of the temperature dependence of the hopping barriers. The exponential parameter s is calculated, and it is found that s depends on sample preparation and measurement conditions and that its value can be less than or greater than one. The calculated results for a-Se, As2S3, As2Se3 and As2Te3 are found in close agreement with the experimental data. The bipolaron and single polaron hopping contributions dominate at lower and higher temperatures, respectively, and, in addition to high energy optical phonons, low energy optical and high energy acoustic phonons also contribute to the hopping process. The variation of the hopping distance with temperature is also studied.
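Several of the records above interpret the power-law exponent s through the correlated barrier hopping (CBH) picture, in which s decreases with temperature. A commonly quoted closed form is Elliott's expression s = 1 - 6kBT/(WM - kBT ln(1/ωτ0)); the sketch below evaluates it with illustrative (hypothetical) parameters, not values from any abstract here:

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant, eV/K

def cbh_exponent(T, W_M, omega, tau0):
    """Elliott's CBH form: s = 1 - 6*kB*T / (W_M - kB*T*ln(1/(omega*tau0)))."""
    return 1.0 - 6.0 * K_B * T / (W_M - K_B * T * np.log(1.0 / (omega * tau0)))

# Illustrative (hypothetical) parameters for a chalcogenide-like glass
W_M = 1.0                  # maximum barrier height, eV
omega = 2.0 * np.pi * 1e4  # angular frequency, rad/s (10 kHz)
tau0 = 1e-13               # characteristic relaxation time, s

T = np.array([100.0, 200.0, 300.0, 400.0])  # K
s = cbh_exponent(T, W_M, omega, tau0)
print(np.round(s, 3))  # s stays below 1 and decreases as T rises
```

The monotonic decrease of s(T) is the usual experimental signature used to argue for CBH over tunneling mechanisms, for which s behaves differently with temperature.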
The estimated defect number density and static barrier heights are compared with other existing calculations. 12. Phonon-Induced Electron-Hole Excitation and ac Conductance in Molecular Junction Ueda, Akiko; Utsumi, Yasuhiro; Imamura, Hiroshi; Tokura, Yasuhiro 2016-04-01 We investigate the linear ac conductance of molecular junctions under a fixed dc bias voltage in the presence of an interaction between a transporting electron and a single local phonon in a molecule with energy ω0. The electron-phonon interaction is treated by the perturbation expansion. The ac conductance as a function of the ac frequency ωac decreases or increases compared with the noninteracting case depending on the magnitude of the dc bias voltage. Furthermore, a dip emerges at ωac ≈ 2ω0. The dip originates from the modification of electron-hole excitation by the ac field, which cannot be obtained by treating the phonon in the linear regime of a classical forced oscillation. 13. Polaron conductivity mechanism in potassium acid phthalate crystal: AC-conductivity investigation 2016-08-01 The complex dielectric constant, ε*(ν, T), of a potassium acid phthalate monocrystal (KAP) was investigated over a broad frequency and temperature range. While the imaginary part of the dielectric constant, ε″(ν), increases rapidly with increasing temperature in the studied temperature range, the real part of the dielectric constant, ε′(ν), increases only at high temperatures; there is almost no change of ε′(ν) below 200 K. Both ε′ and ε″ are frequency dependent; the values increase with decreasing frequency. At temperatures below 450 K the ac electrical conductivity and dielectric constant simultaneously follow the universal dielectric response (UDR). The analysis of the temperature dependence of the UDR parameter s in terms of the theoretical model for small polarons revealed that this mechanism governs the charge transport in the KAP crystal in the studied temperature range. 14.
AC conductivity and Dielectric Study of Chalcogenide Glasses of Se-Te-Ge System Salman, Fathy 2004-01-01 The ac conductivity and dielectric properties of the glassy system SexTe79-xGe21, with x = 11, 14, 17 at.%, have been studied at temperatures from 300 to 450 K and over a wide range of frequencies (50 Hz to 500 kHz). Experimental results indicate that the ac conductivity and the dielectric constants depend on temperature, frequency and Se content. The conductivity as a function of frequency exhibited two components: a dc conductivity σdc and an ac conductivity σac, where σac ∝ ωs. The mechanism of ac conductivity can be reasonably interpreted in terms of the correlated barrier hopping (CBH) model. The activation energies are estimated and discussed. The dependence of the ac conductivity and dielectric constants on the Se content x can be interpreted as the effect of the Se fraction on the positional disorder. The impedance plot at each temperature appeared as a semicircle passing through the origin. Each semicircle is represented by an equivalent circuit of a parallel resistance Rb and capacitance Cb. 15. Thermal conductivity Measurements of Kaolite SciTech Connect Wang, H 2003-02-21 Testing was performed to determine the thermal conductivity of Kaolite 1600, which primarily consists of Portland cement and vermiculite. The material was made by Thermal Ceramics for refractory applications. Its combination of light weight, low density, low cost, and noncombustibility made it an attractive alternative to the materials currently used in the ES-2 container for radioactive materials. Mechanical properties and energy absorption tests of the Kaolite have been conducted at the Y-12 complex. Heat transfer is also an important factor for the application of the material. The Kaolite samples are porous and trap moisture after extended storage. Thermal conductivity changes as a function of moisture content below 100 C.
Thermal conductivity values for Kaolite at high temperatures (up to 700 C) are not available in the literature. There are no standard thermal conductivity values for Kaolite because each sample is somewhat different. Therefore, it is necessary to measure the thermal conductivity of each type of Kaolite. Thermal conductivity measurements will help the modeling and calculation of temperatures of the ES-2 containers. This report focuses on the thermal conductivity testing effort at ORNL. 16. Measurement of coupling resonance driving terms with the AC dipole SciTech Connect Miyamoto, R. 2010-10-01 Resonance driving terms for linear coupled betatron motion in a synchrotron ring can be determined from the corresponding spectral lines of an excited coherent beam motion. An AC dipole is one of the instruments used to excite such a motion. When a coherent motion is excited with an AC dipole, the measured Courant-Snyder parameters and betatron phase advance have apparent modulations, as if there were an additional quadrupole field at the location of the AC dipole. Hence, measurements of these parameters using the AC dipole require a proper interpretation of the observed quantities. The situation is similar in measurements of resonance driving terms using the AC dipole. In this note, we derive an expression for coupled betatron motion excited with two AC dipoles in the presence of skew quadrupole fields, discuss the impact of this quadrupole-like effect of the AC dipole on a measurement of coupling resonance driving terms, and present an analytical method to determine the coupling resonance driving terms from quantities observed using the AC dipole. 17. Temperature correction in conductivity measurements USGS Publications Warehouse Smith, Stanford H. 1962-01-01 Electrical conductivity has been widely used in freshwater research, but the usual methods employed by limnologists for converting measurements to conductance at a given temperature have not given uniformly accurate results.
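The USGS record above concerns converting a measured conductivity to a reference temperature. The common linear-compensation convention (a standard practice, not necessarily the paper's method; the coefficient value below is illustrative) can be sketched as:

```python
def conductivity_at_25c(kappa_t, temp_c, alpha=0.0191):
    """Linear temperature compensation of electrical conductivity.

    kappa_t : measured conductivity at temp_c (any units)
    temp_c  : measurement temperature, deg C
    alpha   : temperature coefficient per deg C (~0.019-0.020 for many
              natural waters; as the abstract stresses, it varies with
              the water's electrolyte content and with temperature)
    """
    return kappa_t / (1.0 + alpha * (temp_c - 25.0))

# A reading of 450 uS/cm taken at 18 degC corresponds to a higher
# specific conductance referenced to 25 degC
print(round(conductivity_at_25c(450.0, 18.0), 1))
```

Using a single fixed alpha is exactly the simplification the abstract criticizes: high precision requires determining the coefficient for each water and temperature range.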
The temperature coefficient used to adjust the conductivity of natural waters to a given temperature varies depending on the kinds and concentrations of electrolytes, the temperature at the time of measurement, and the temperature to which measurements are being adjusted. The temperature coefficient was found to differ for various lake and stream waters, and showed seasonal changes. High precision can be obtained only by determining temperature coefficients for each water studied. Mean temperature coefficients are given for various temperature ranges that may be used where less precision is required. 18. AC motor controller with 180 degree conductive switches NASA Technical Reports Server (NTRS) Oximberg, Carol A. (Inventor) 1995-01-01 An ac motor controller is operated by a modified time-switching scheme where the switches of the inverter are on for electrical-phase-and-rotation intervals of 180° as opposed to the conventional 120°. The motor is provided with three-phase drive windings, a power inverter for power supplied from a dc power source consisting of six switches, and a motor controller which controls the current controlled switches in voltage-fed mode. During full power, each switch is gated continuously for three successive intervals of 60° and modulated for only one of said intervals. Thus, during each 60° interval, the two switches with like signs are on continuously and the switch with the opposite sign is modulated. 19. Structural, dielectric and AC conductivity properties of Co2+ doped mixed alkali zinc borate glasses Madhu, B. J.; Banu, Syed Asma; Harshitha, G. A.; Shilpa, T. M.; Shruthi, B. 2013-02-01 The Co2+ doped 19.9ZnO+5Li2CO3+25Na2CO3+50B2O3 (ZLNB) mixed alkali zinc borate glasses have been prepared by a conventional melt quenching method. The structural (XRD & FT-IR), dielectric and a.c. conductivity (σac) properties have been investigated. The amorphous nature of these glasses has been confirmed from their XRD pattern.
The dielectric properties and electrical conductivityac) of these glasses have been studied from 100Hz to 5MHz at the room temperature. Based on the observed trends in the a.c. conductivities, the present glass samples are found to exhibit a non-Debye behavior. 20. Critical field measurements in superconductors using ac inductive techniques Campbell, S. A.; Ketterson, J. B.; Crabtree, G. W. 1983-09-01 The ac in-phase and out-of-phase response of type II superconductors is discussed in terms of dc magnetization curves. Hysteresis in the dc magnetization is shown to lead to a dependence of the ac response on the rate at which an external field is swept. This effect allows both Hc1 and Hc2 to be measured by ac techniques. A relatively simple mutual inductance bridge for making such measurements is described in the text, and factors affecting bridge sensitivity are discussed in the Appendix. Data for the magnetic superconductor ErRh4B4 obtained using this bridge are reported. 1. Calorimetric method of ac loss measurement in a rotating magnetic field. PubMed Ghoshal, P K; Coombs, T A; Campbell, A M 2010-07-01 A method is described for calorimetric ac-loss measurements of high-T(c) superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on ac loss measurement setup and calibration in a rotating magnetic field. This experimental setup is to demonstrate measuring loss using a temperature rise method under the influence of a rotating magnetic field. The slight temperature increase of the sample in an ac-field is used as a measure of losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted on to a cold finger extended from a liquid nitrogen heat exchanger (HEX). 
The thermal insulation between the HEX and the sample is provided by a material of low thermal conductivity and a low-eddy-current-heating sample holder in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change. The main part of the data obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuous moving field as experienced by any rotating machine. PMID:20687748 2. Calorimetric method of ac loss measurement in a rotating magnetic field Ghoshal, P. K.; Coombs, T. A.; Campbell, A. M. 2010-07-01 A method is described for calorimetric ac-loss measurements of high-Tc superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on ac loss measurement setup and calibration in a rotating magnetic field. This experimental setup is to demonstrate measuring loss using a temperature rise method under the influence of a rotating magnetic field. The slight temperature increase of the sample in an ac-field is used as a measure of losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted onto a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and the sample is provided by a material of low thermal conductivity and a low-eddy-current-heating sample holder in a vacuum vessel. A temperature sensor and a noninductive heater have been incorporated in the sample holder, allowing a rapid sample change.
The main part of the data is obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuous moving field as experienced by any rotating machines. 3. Calorimetric method of ac loss measurement in a rotating magnetic field SciTech Connect Ghoshal, P. K.; Coombs, T. A.; Campbell, A. M. 2010-07-15 A method is described for calorimetric ac-loss measurements of high-T{sub c} superconductors (HTS) at 80 K. It is based on a technique used at 4.2 K for conventional superconducting wires that allows an easy loss measurement in parallel or perpendicular external field orientation. This paper focuses on ac loss measurement setup and calibration in a rotating magnetic field. This experimental setup is to demonstrate measuring loss using a temperature rise method under the influence of a rotating magnetic field. The slight temperature increase of the sample in an ac-field is used as a measure of losses. The aim is to simulate the loss in rotating machines using HTS. This is a unique technique to measure total ac loss in HTS at power frequencies. The sample is mounted on to a cold finger extended from a liquid nitrogen heat exchanger (HEX). The thermal insulation between the HEX and sample is provided by a material of low thermal conductivity, and low eddy current heating sample holder in vacuum vessel. A temperature sensor and noninductive heater have been incorporated in the sample holder allowing a rapid sample change. The main part of the data is obtained in the calorimetric measurement is used for calibration. The focus is on the accuracy and calibrations required to predict the actual ac losses in HTS. This setup has the advantage of being able to measure the total ac loss under the influence of a continuous moving field as experienced by any rotating machines. 4. 
Ac-conductivity and dielectric response of new zinc-phosphate glass/metal composites Maaroufi, A.; Oabi, O.; Lucas, B. 2016-07-01 The ac-conductivity and dielectric response of new composites based on zinc-phosphate glass with composition 45 mol% ZnO-55 mol% P2O5, filled with metallic powder of nickel (ZP/Ni), were investigated by impedance spectroscopy in the frequency range from 100 Hz to 1 MHz at room temperature. A high percolation jump of seven orders of magnitude has been observed in the conductivity, from low filler volume fractions to higher fractions, indicating an insulator-semiconductor phase transition. The measured conductivity at higher filler volume fractions is about 10^-1 S/cm and is frequency independent, while the conductivity at low filler volume fractions is around 10^-8 S/cm and is frequency dependent. Moreover, the elaborated composites are characterized by high dielectric constants in the range of 10^5 for conductive composites at low frequencies (100 Hz). In addition, the distribution of the relaxation processes was also evaluated. The Debye, Cole-Cole, Davidson-Cole and Havriliak-Negami models in the electric modulus formalism were used to model the observed relaxation phenomena in ZP/Ni composites. The observed relaxation phenomena are fairly well simulated by the Davidson-Cole model, and an account of the interpretation of the results is given. 5. Low frequency ac conduction and dielectric relaxation in pristine poly(3-octylthiophene) films Singh, Ramadhar; Kumar, Jitendra; Singh, Rajiv K.; Rastogi, Ramesh C.; Kumar, Vikram 2007-02-01 The ac conductivity σ(ω)m, dielectric constant ɛ'(ω) and loss ɛ''(ω) of pristine poly(3-octylthiophene) (P3OT) films (thickness ~ 20 μm) have been measured in wide temperature (77-350 K) and frequency (100 Hz-10 MHz) ranges. At low temperatures, σ(ω)m can be described by the relation σ(ω)m = Aω^s, where s is ~ 0.61 at 77 K and decreases with increasing temperature.
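A power law of this kind is ordinarily extracted as the slope of a log-log plot of conductivity against frequency. A minimal fitting sketch with synthetic data (the exponent 0.61 is taken from the abstract; the prefactor and frequency grid are hypothetical):

```python
import numpy as np

# Synthetic ac conductivity obeying sigma(w) = A * w**s
s_true, A_true = 0.61, 1e-9              # hypothetical prefactor; s as at 77 K
omega = np.logspace(2, 7, 50)            # angular frequency grid
sigma = A_true * omega ** s_true

# On log-log axes the power law is a straight line:
# log10(sigma) = s * log10(omega) + log10(A)
s_fit, logA_fit = np.polyfit(np.log10(omega), np.log10(sigma), 1)

print(round(s_fit, 2))  # recovers s = 0.61
```

With real measurements the fit is restricted to the dispersive high-frequency region, since the dc plateau at low frequency would otherwise bias the exponent.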
A clear Debye-type loss peak is observed by subtracting the contribution of σdc from σ(ω)m. The frequency dependence of the conductivity indicates that there is a distribution of relaxation times. This is confirmed by measurement of the dielectric constant as a function of frequency and temperature. Reasonable estimates of various electrical parameters such as the effective dielectric constant (ɛp), phonon frequency (νph), Debye temperature (θD), polaron radius (rp), small-polaron coupling constant (Υ), effective polaron mass (mp), density of states at the Fermi level N(EF), average hopping distance (R) and average hopping energy (W) from dc conductivity measurements suggest the applicability of Mott's variable range hopping model in this system. 6. AC conductivity and structural properties of Mg-doped ZnO ceramic Othman, Zayani Jaafar; Hafef, Olfa; Matoussi, Adel; Rossi, Francesca; Salviati, Giancarlo 2015-11-01 Undoped ZnO and Zn1-xMgxO ceramic pellets were synthesized by the standard sintering method at a temperature of 1200 °C. The influence of Mg doping on the morphological, structural and electrical properties was studied. The scanning electron microscopy images revealed rough surfaces textured by grain boundaries and compacted grains having different shapes and sizes. Indeed, X-ray diffraction reveals the alloying of a hexagonal ZnMgO phase and the segregation of a cubic MgO phase. The crystallite size, strain and stress were studied using the Williamson-Hall (W-H) method. The mean particle sizes of the Zn1-xMgxO composites obtained from the W-H analysis and the Scherrer method were inter-correlated. The electrical conductivity of the films was measured from 173 to 373 K in the frequency range of 0.1 Hz-1 MHz to identify the dominant conductivity mechanism. The DC conductivity is thermally activated by electron traps having activation energies of about 0.09 to 0.8 eV.
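Activation energies of this kind are conventionally obtained from an Arrhenius fit, σdc = σ0·exp(-Ea/kBT), i.e. from the slope of ln σ versus 1/T. A minimal sketch with synthetic data (the temperature range follows the abstract; the activation energy and prefactor are hypothetical):

```python
import numpy as np

K_B = 8.617333262e-5                       # Boltzmann constant, eV/K

# Synthetic thermally activated DC conductivity (hypothetical values)
Ea_true, sigma0 = 0.30, 1.0e2              # activation energy (eV), prefactor
T = np.linspace(173.0, 373.0, 30)          # measurement temperature range, K
sigma_dc = sigma0 * np.exp(-Ea_true / (K_B * T))

# ln(sigma) versus 1/T is linear with slope -Ea/kB
slope, _ = np.polyfit(1.0 / T, np.log(sigma_dc), 1)
Ea_fit = -slope * K_B

print(round(Ea_fit, 3))  # recovers Ea = 0.3 eV
```

A range of activation energies, as reported in the abstract, would show up as curvature or distinct linear segments in such an Arrhenius plot, each segment fitted separately.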
The mechanisms of AC conductivity are controlled by the correlated barrier hopping model for the ZnO sample and the small polaron tunneling (SPT) model for the Zn0.64Mg0.36O and Zn0.60Mg0.40O composites. 7. Hydraulic Conductivity Measurements Barrow 2014 DOE Data Explorer Katie McKnight; Tim Kneafsey; Craig Ulrich; Jil Geller 2015-02-22 Six individual ice cores were collected from Barrow Environmental Observatory in Barrow, Alaska, in May of 2013 as part of the Next Generation Ecosystem Experiment (NGEE). Each core was drilled from a different location at varying depths. A few days after drilling, the cores were stored in coolers packed with dry ice and flown to Lawrence Berkeley National Laboratory (LBNL) in Berkeley, CA. 3-dimensional images of the cores were constructed using a medical X-ray computed tomography (CT) scanner at 120 kV. Hydraulic conductivity samples were extracted from these cores at LBNL Richmond Field Station in Richmond, CA, in February 2014 by cutting 5 to 8 inch segments using a chop saw. Samples were packed individually and stored at freezing temperatures to minimize any changes in structure or loss of ice content prior to analysis. Hydraulic conductivity was determined through falling head tests using a permeameter [ELE International, Model #: K-770B]. After approximately 12 hours of thaw, initial falling head tests were performed. Two to four measurements were collected on each sample, and collection stopped when the applied head load exceeded a 25% change from the original load. Analyses were performed 2 to 3 times for each sample. The final hydraulic conductivity calculations were computed using the methodology of Das et al., 1985. 8. Microfabricated Thin Film Impedance Sensor & AC Impedance Measurements PubMed Central Yu, Jinsong; Liu, Chung-Chiun 2010-01-01 A thin-film microfabrication technique was employed to fabricate a platinum-based parallel-electrode structured impedance sensor.
Electrochemical impedance spectroscopy (EIS) and equivalent circuit analysis of the small amplitude (±5 mV) AC impedance measurements (frequency range: 1 MHz to 0.1 Hz) at ambient temperature were carried out. Testing media include 0.001 M, 0.01 M, 0.1 M NaCl and KCl solutions, and alumina (∼3 μm) and sand (∼300 μm) particulate layers saturated with NaCl solutions with thicknesses ranging from 0.6 mm to 8 mm in a testing cell, and the results were used to assess the effect of the thickness of the particulate layer on the conductivity of the testing solution. The calculated resistances were approximately 20 MΩ, 4 MΩ, and 0.5 MΩ for 0.001 M, 0.01 M, and 0.1 M NaCl solutions, respectively. The presence of the sand particulates increased the impedance dramatically (6 times and 3 times for 0.001 M and 0.1 M NaCl solutions, respectively). A cell constant methodology was also developed to assess the measurement of the bulk conductivity of the electrolyte solution. The cell constant ranged from 1.2 to 0.8, and it decreased with increasing solution thickness. PMID:22219690 9. Ac-electrical conductivity of poly(propylene) before and after X-ray irradiation Gaafar, M. 2001-05-01 Studies of the ac electrical conductivity of poly(propylene), before and after X-ray irradiation, within the temperature range 300-360 K are reported. The measurements have been performed in a wide range of frequencies (from 0 to 10^5 Hz) and under the effect of different X-ray irradiation doses (from 0 to 15 Gy). Cole-Cole diagrams have been used to show the frequency dependence of the complex impedance at different temperatures. The results exhibit semicircles which are consistent with an existing equivalent circuit model. Analysis of the results reveals semiconducting features based mainly on a hopping mechanism. The study shows a pronounced effect of X-ray irradiation on the electrical conductivity at zero frequency, σDC.
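Semicircles of the kind seen in such Cole-Cole (complex-impedance) diagrams follow directly from a parallel-RC equivalent circuit, Z(ω) = R/(1 + jωRC): the arc has diameter R and its apex sits at ω = 1/(RC). A minimal numerical sketch (R and C are hypothetical values, not from either abstract):

```python
import numpy as np

# Parallel RC equivalent circuit: Z(w) = R / (1 + j*w*R*C)
R, C = 1.0e6, 1.0e-10                 # hypothetical resistance (ohm) and capacitance (F)
omega = np.logspace(0, 8, 400)        # angular frequency sweep
Z = R / (1.0 + 1j * omega * R * C)

# In the complex plane (-Im Z against Re Z) this traces a semicircle of
# diameter R; the apex (maximum of -Im Z) occurs at w = 1/(R*C)
apex = int(np.argmax(-Z.imag))
print(omega[apex] * R * C)            # ~1 at the apex
print((-Z.imag).max() / R)            # ~0.5: apex height is R/2
```

Reading R from the low-frequency intercept and the apex frequency from the top of the arc then gives C, which is how an equivalent circuit is typically matched to measured semicircles.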
At the early stage of irradiation, σDC increased as a result of free radical formation. As the irradiation progressed, it decreased as a result of crosslinking, then it increased again due to irradiation-induced degradation, which motivates the generation of mobile free radicals. The study shows that this polymer is one of several polymers whose electrical conductivity is modified by irradiation. 10. Analytic formulation for the ac electrical conductivity in two-temperature, strongly coupled, overdense plasma: FORTRAN subroutine SciTech Connect Cauble, R.; Rozmus, W. 1993-10-21 A FORTRAN subroutine for the calculation of the ac electrical conductivity in two-temperature, strongly coupled, overdense plasma is presented. The routine is the result of a model calculation based on classical transport theory with application to plasmas created by the interaction of short pulse lasers and solids. The formulation is analytic and the routine is self-contained. 11. Charging in the ac Conductance of a Double Barrier Resonant Tunneling Structure NASA Technical Reports Server (NTRS) Anantram, M. P.; Saini, Subhash (Technical Monitor) 1998-01-01 There have been many studies of the linear response ac conductance of a double barrier resonant tunneling structure (DBRTS), both at zero and finite dc biases. While these studies are important, they fail to self-consistently include the effect of the time dependent charge density in the well. In this paper, we calculate the ac conductance at both zero and finite dc biases by including the effect of the time dependent charge density in the well in a self-consistent manner. The charge density in the well contributes to both the flow of displacement currents in the contacts and the time dependent potential in the well.
We find that including these effects can make a significant difference to the ac conductance, and the total ac current is not equal to the simple average of the non-self-consistently calculated conduction currents in the two contacts. This is illustrated by comparing the results obtained with and without the effect of the time dependent charge density included correctly. Some possible experimental scenarios to observe these effects are suggested. 12. How to Measure Thermal Conductivity Ventura, Guglielmo; Perfetti, Mauro The methods to measure the thermal conductivity at low temperature are described: the steady-state techniques (Sect. 2.2); the 3ω technique (Sect. 2.3); and the thermal diffusivity measurement (Sect. 2.4). Each of these techniques has its own advantages as well as its inherent limitations, with some techniques more appropriate to specific sample geometry, such as the 3ω technique for thin films, which is discussed in detail in Sect. 2.4.2. The radial flux method is reported in Sect. 2.2.4, the laser flash diffusivity method in Sect. 2.4.1, and the "pulsed power or Maldonado technique" in Sect. 2.3.2. 13. AC Conductivity Studies in Lithium-Borate Glass Containing Gold Nanoparticles Shivaprakash, Y.; Anavekar, R. V. 2011-07-01 Gold nanoparticles have been synthesized in a base glass with composition 30Li2O-70B2O3 using gold chloride (HAuCl4.3H2O) as a dopant. The samples are characterized using XRD, ESR, SEM and optical absorption in the visible range. AC conductivity studies have been performed at RT over a frequency range of 100 Hz to 10 MHz. The dc conductivity is calculated from the complex impedance plot. The dc conductivity is found to increase with increasing dopant concentration. The AC conductivity data are fitted with the Almond-West law with power exponent s. The values of s are found to lie in the range 0.70-0.73. 14.
Gas sensing properties of magnesium doped SnO2 thin films in relation to AC conduction SciTech Connect Deepa, S.; Skariah, Benoy; Thomas, Boben; Joseph, Anisha 2014-01-28 Conducting magnesium doped (0 to 1.5 wt %) tin oxide thin films prepared by the spray pyrolysis technique achieved detection of 1000 ppm of LPG. The films deposited at 304 °C exhibit an enhanced response at an operating temperature of 350 °C. The microstructural properties are studied by means of X-ray diffraction. AC conductivity measurements are carried out using a precision LCR meter to analyze the parameters that affect the variation in sensing. The results are correlated with compositional parameters and the subsequent modification in the charge transport mechanism facilitating an enhanced LPG sensing action. 15. Measurement of the 225Ac half-life. PubMed Pommé, S; Marouli, M; Suliman, G; Dikmen, H; Van Ammel, R; Jobbágy, V; Dirican, A; Stroh, H; Paepen, J; Bruchertseifer, F; Apostolidis, C; Morgenstern, A 2012-11-01 The (225)Ac half-life was determined by measuring the activity of (225)Ac sources as a function of time, using various detection techniques: α-particle counting with a planar silicon detector at a defined small solid angle and in a nearly-2π geometry, 4πα+β counting with a windowless CsI sandwich spectrometer and with a pressurised proportional counter, and gamma-ray spectrometry with a HPGe detector and with a NaI(Tl) well detector. Depending on the technique, the decay was followed for 59-141 d, which is about 6-14 times the (225)Ac half-life. The six measurement results were in good mutual agreement, and their mean value is T(1/2)((225)Ac) = 9.920(3) d. This half-life value is more precise and better documented than the currently recommended value of 10.0 d, based on two old measurements lacking uncertainty evaluations. PMID:22940415 16.
ac driving amplitude dependent systematic error in scanning Kelvin probe microscope measurements: Detection and correction SciTech Connect Wu Yan; Shannon, Mark A. 2006-04-15 The dependence of the contact potential difference (CPD) reading on the ac driving amplitude in scanning Kelvin probe microscope (SKPM) hinders researchers from quantifying true material properties. We show theoretically and demonstrate experimentally that an ac driving amplitude dependence in the SKPM measurement can come from a systematic error, and it is common for all tip sample systems as long as there is a nonzero tracking error in the feedback control loop of the instrument. We further propose a methodology to detect and to correct the ac driving amplitude dependent systematic error in SKPM measurements. The true contact potential difference can be found by applying a linear regression to the measured CPD versus one over ac driving amplitude data. Two scenarios are studied: (a) when the surface being scanned by SKPM is not semiconducting and there is an ac driving amplitude dependent systematic error; (b) when a semiconductor surface is probed and asymmetric band bending occurs when the systematic error is present. Experiments are conducted using a commercial SKPM and CPD measurement results of two systems: platinum-iridium/gap/gold and platinum-iridium/gap/thermal oxide/silicon are discussed. 17. Measurements of AC Losses and Current Distribution in Superconducting Cables SciTech Connect Nguyen, Doan A; Ashworth, Stephen P; Duckworth, Robert C; Carter, Bill; Fleshler, Steven 2011-01-01 This paper presents our new experimental facility and techniques to measure ac loss and current distribution between the layers for High Temperature Superconducting (HTS) cables. The facility is powered with a 45 kVA three-phase power supply which can provide three-phase currents up to 5 kA per phase via high current transformers. 
The system is suitable for measurements at any frequency between 20 and 500 Hz to better understand the ac loss mechanisms in HTS cables. In this paper, we will report techniques and results for ac loss measurements carried out on several HTS cables with and without an HTS shielding layer. For cables without a shielding layer, care must be taken to control the effect of the magnetic fields from return currents on loss measurements. The waveform of the axial magnetic field was also measured by a small pick-up coil placed inside a two-layer cable. The temporal current distribution between the layers can be calculated from the waveform of the axial field. 18. Temperature and frequency dependence of AC conductivity of new quaternary Se-Te-Bi-Pb chalcogenide glasses 2016-05-01 The aim of the present work is to study the temperature and frequency dependence of the ac conductivity of new quaternary Se84-xTe15Bi1.0Pbx chalcogenide glasses. The Se84-xTe15Bi1.0Pbx (x = 2, 6) glassy alloys are prepared using the melt-quenching technique. The temperature- and frequency-dependent behavior of the ac conductivity σac(ω) has been studied in the frequency range 42 Hz to 5 MHz and in the temperature range of 298-323 K, below the glass transition temperature. The behavior of the ac conductivity is described in terms of the power law ω^s. The observed temperature dependence of the ac conductivity and of the frequency exponent (s) is explained by means of the correlated barrier hopping model proposed by Elliott. 19. Microwave a.c. conductivity of domain walls in ferroelectric thin films Tselev, Alexander; Yu, Pu; Cao, Ye; Dedon, Liv R.; Martin, Lane W.; Kalinin, Sergei V.; Maksymovych, Petro 2016-05-01 Ferroelectric domain walls are of great interest as elementary building blocks for future electronic devices due to their intrinsic few-nanometre width, multifunctional properties and field-controlled topology.
To realize the electronic functions, domain walls are required to be electrically conducting and addressable non-destructively. However, these properties have been elusive because conducting walls have to be electrically charged, which makes them unstable and uncommon in ferroelectric materials. Here we reveal that spontaneous and recorded domain walls in thin films of lead zirconate and bismuth ferrite exhibit large conductance at microwave frequencies despite being insulating at d.c. We explain this effect by morphological roughening of the walls and local charges induced by disorder with the overall charge neutrality. a.c. conduction is immune to large contact resistance enabling completely non-destructive walls read-out. This demonstrates a technological potential for harnessing a.c. conduction for oxide electronics and other materials with poor d.c. conduction, particularly at the nanoscale. 2. Temperature dependence of conductivity measurement for conducting polymer Gutierrez, Leandro; Duran, Jesus; Isah, Anne; Albers, Patrick; McDougall, Michael; Wang, Weining 2014-03-01 Conducting polymer-based solar cells are the newest generation solar cells.
While research in this area has been progressing, the efficiency is still low because certain important parameters of the solar cell are still not well understood. It is of interest to study the temperature dependence of the solar cell parameters, such as the conductivity of the polymer, open circuit voltage, and reverse saturation current, to gain a better understanding of the solar cells. In this work, we report our temperature dependence of conductivity measurement using our in-house temperature-varying apparatus. In this project, we designed and built a temperature-varying apparatus using a thermoelectric cooler module, which provides the temperature range we need and costs much less than a cryostat. The set-up of the apparatus will be discussed. Temperature dependence of conductivity measurements for PEDOT:PSS films with different room-temperature conductivity will be compared and discussed. NJSGC-NASA Fellowship grant 3. Effect of nanosilica on optical, electric modulus and AC conductivity of polyvinyl alcohol/polyaniline films El-Sayed, Somyia; Abel-Baset, Tarob; Elfadl, Azza Abou; Hassen, Arafa 2015-05-01 Nanosilica (NS) was synthesized by a sol-gel method and mixed with 0.98 polyvinyl alcohol (PVA)/0.02 polyaniline (PANI) in different amounts to produce nanocomposite films. High-resolution transmission electron microscopy (HR-TEM) revealed the average particle size of the NS to be ca. 15 nm. Scanning electron microscopy (SEM) showed that the NS was well-dispersed on the surface of the PVA/PANI films. The Fourier transform infrared (FTIR) spectra of the samples showed a significant change in the intensity of the characteristic peaks of the functional groups in the composite films with the amount of NS added. The absorbance and refractive index (n) of the composites were studied in the UV-vis range, and the optical energy band gap, Eg, and different optical parameters were calculated.
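Optical band gaps of this kind are often estimated with a Tauc plot: for a direct allowed transition one plots (αhν)² against photon energy hν and extrapolates the linear region to zero absorption. A minimal sketch with synthetic data (the method is standard, but the numbers here are hypothetical, not taken from the abstract):

```python
import numpy as np

# Synthetic direct-allowed absorption edge: (alpha*h*nu)^2 = B * (h*nu - Eg)
Eg_true, B = 4.2, 1.0                  # hypothetical band gap (eV) and slope
hv = np.linspace(4.3, 5.0, 40)         # photon energies above the edge, eV
tauc = B * (hv - Eg_true)              # (alpha*h*nu)**2 in the linear region

# Extrapolating the straight line to tauc = 0 gives the band gap
slope, intercept = np.polyfit(hv, tauc, 1)
Eg_fit = -intercept / slope

print(round(Eg_fit, 2))  # recovers Eg = 4.2 eV
```

For indirect transitions the exponent changes ((αhν)^(1/2) instead of (αhν)²), so the choice of plot encodes an assumption about the transition type.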
The dielectric loss modulus, M″, and ac conductivity, σac, of the samples were studied within 300-425 K and 0.1 kHz-5 MHz, respectively. Two relaxation peaks were observed in the frequency dependence of the dielectric loss modulus, M″. The behavior of σac(f) for the composite films indicated that the conduction mechanism was correlated barrier hopping (CBH). The results of this work are discussed and compared with those of previous studies of similar composites. 4. Instabilities across the isotropic conductivity point in a nematic phenyl benzoate under AC driving. PubMed Kumar, Pramoda; Patil, Shivaram N; Hiremath, Uma S; Krishnamurthy, K S 2007-08-01 We characterize the sequence of bifurcations generated by ac fields in a nematic layer held between unidirectionally rubbed ITO electrodes. The material, which possesses a negative dielectric anisotropy εa and an inversion temperature for the electrical conductivity anisotropy σa, exhibits a monostable tilted alignment near TIN, the isotropic-nematic point. On cooling, an anchoring transition to the homeotropic configuration occurs close to the underlying smectic phase. The field experiments are performed for (i) negative σa and homeotropic alignment, and (ii) weakly positive σa and nearly homeotropic alignment. Under ac driving, the Freedericksz transition is followed by bifurcation into various patterned states. Among them are the striped states that seem to belong to the dielectric regime and localized hybrid instabilities. Very significantly, the patterned instabilities are not excited by dc fields, indicating their possible gradient flexoelectric origin. The Carr-Helfrich mechanism-based theories that take account of flexoelectric terms can explain the observed electroconvective effects only in part. PMID:17616118 5. Conductivity (ac and dc) in III-V amorphous semiconductors and chalcogenide glasses Hauser, J. J.
1985-02-01 Variable-range hopping, as evidenced by a resistivity proportional to exp(T^(-1/4)), has been induced in many III-V amorphous semiconductors (InSb, AlSb, and GaAs) and even in chalcogenide glasses (As2Te3, As2Te3-xSex, and GeTe) by depositing films at 77 K. It is therefore remarkable that the same procedure failed to generate variable-range hopping in GaSb, which is one of the less ionic III-V semiconductors. Besides differences in the dc conductivity, there are also different behaviors in the ac conductivity of amorphous semiconductors. The low-temperature ac conductivity of all amorphous semiconductors is proportional to ω^s T^n with s ≈ 1 and n < 1, which is consistent with a model of correlated barrier hopping of electron pairs between paired and random defects. However, in the case of a-SiO2 and a-GeSe2 one finds, in addition, that the capacitance obeys the scaling relation C = A ln(Tω^(-1)), which would suggest a conduction mechanism by tunneling relaxation. Furthermore, this scaling relation cannot be fitted to the data for a-As2Te3, a-InSb, and a-GaSb, although the functional dependences of C on T and ω are similar. 6. Iterative Precise Conductivity Measurement with IDEs PubMed Central Hubálek, Jaromír 2015-01-01 The paper presents a new approach in the field of precise electrolytic conductivity measurements with planar thin- and thick-film electrodes. This novel measuring method was developed for measurement with comb-like electrodes called interdigitated electrodes (IDEs). Correction characteristics over a wide range of specific conductivities were determined from an interface impedance characterization of the thick-film IDEs. The local maximum of the capacitive part of the interface impedance is used for corrections to get linear responses. The measuring frequency was determined over a wide range of measured conductivity.
An iteration mode of measurements was suggested to precisely measure the conductivity at the right frequency in order to achieve a highly accurate response. The method takes precise conductivity measurements in concentration ranges from 10−6 to 1 M without electrode cell replacement. PMID:26007745 7. Study on AC-DC Electrical Conductivities in Warm Dense Matter Generated by Pulsed-power Discharge with Isochoric Vessel Sasaki, Toru; Ohuchi, Takumi; Takahashi, Takuya; Kawaguchi, Yoshinari; Saito, Hirotaka; Miki, Yasutoshi; Takahashi, Kazumasa; Kikuchi, Takashi; Aso, Tsukasa; Harada, Nob. 2016-03-01 To observe the AC and DC electrical conductivity in warm dense matter (WDM), we have demonstrated the application of spectroscopic ellipsometry to a pulsed-power discharge with an isochoric vessel. At 10 μs from the beginning of the discharge, the parameters generated by the pulsed-power discharge with the isochoric vessel are a density of 0.1ρs (ρs: solid density) and a temperature of 4000 K. The DC electrical conductivity for the above parameters is estimated to be 10^4 S/m. In order to measure the AC electrical conductivity, we have developed a four-detector spectroscopic ellipsometer with a multichannel spectrometer. The multichannel spectrometer, which consists of a 16-channel photodiode array, a two-stage voltage adder, and a flat diffraction grating, has a 10 MHz frequency response covering the visible spectrum. Applying the four-detector spectroscopic ellipsometer, we observe how each detector signal evolves with the polarization state via the ratio I1/I2. 8. Skin Conductance Measurement in Communication Research. ERIC Educational Resources Information Center Goodman, R. Irwin 1985-01-01 Describes skin conductance measurement as a physiological procedure to obtain information on onset, duration, intensity, and completion of private physiological responses to parts of films or media products.
The mechanics of the technique, how measurements are recorded and analyzed, and types of skin conductance research literature are discussed.… 9. AC conductivity scaling behavior in grain and grain boundary response regime of fast lithium ionic conductors Mariappan, C. R. 2014-05-01 AC conductivity spectra of Li-analogues NASICON-type Li1.5Al0.5Ge1.5P3O12 (LAGP), Li-Al-Ti-P-O (LATP) glass-ceramics and garnet-type Li7La2Ta2O13 (LLTO) ceramic are analyzed by universal power law and Summerfield scaling approaches. The activation energies and pre-exponential factors of total and grain conductivities are following the Meyer-Neldel (M-N) rule for NASICON-type materials. However, the garnet-type LLTO material deviates from the M-N rule line of NASICON-type materials. The frequency- and temperature-dependent conductivity spectra of LAGP and LLTO are superimposed by Summerfield scaling. The scaled conductivity curves of LATP are not superimposed at the grain boundary response region. The superimposed conductivity curves are observed at cross-over frequencies of the grain boundary response region for LATP by incorporating the exp(-(EAt - EAg)/kT) factor along with the Summerfield scaling factors on the frequency axis, where EAt and EAg are the activation energies of total and grain conductivities, respectively. 10. AC and DC conductivity of ionic liquid containing polyvinylidene fluoride thin films Frübing, Peter; Wang, Feipeng; Kühle, Till-Friedrich; Gerhard, Reimund 2016-01-01 Polarisation processes and charge transport in polyvinylidene fluoride (PVDF) with a small amount (0.01-10 wt%) of the ionic liquid (IL) 1-ethyl-3-methylimidazolium nitrate ([EMIM]+[NO3]-) were investigated by means of dielectric spectroscopy. The response of PVDF that contains more than 0.01 wt% IL is dominated by a low-frequency relaxation which shows typical signatures of electrode polarisation.
Furthermore, the αa relaxation, related to the glass transition, disappears for IL contents of more than 1 wt%, which indicates that the amorphous phase loses its glass-forming properties and undergoes structural changes. The DC conductivity is determined from the low-frequency limit of the AC conductivity and from the dielectric loss peak related to the electrode polarisation. DC conductivities of 10^−10 to 10^−2 S/m are obtained, increasing with IL content and temperature. The dependence of the DC conductivity on the IL content follows a power law with an exponent greater than one, indicating an increase in the ion mobility. The temperature dependence of the DC conductivity shows Vogel-Fulcher-Tammann behaviour, which implies that charge transport is coupled to polymer chain motion. Mobile ion densities and ion mobilities are calculated from the DC conductivity and the dielectric loss related to electrode polarisation, with the results that less than one per cent of the total ion concentration contributes to the conductivity and that the strong increase in conductivity with temperature is mainly caused by a strong increase in ion mobility. This leads to the conclusion that in particular the ion mobility must be reduced in order to decrease the DC conductivity. 11. Finding the asymmetric parasitic source and drain resistances from the a.c. conductances of a single MOS transistor Raychaudhuri, A.; Deen, M. J.; King, M. I. H.; Kolk, J. 1996-06-01 Layout asymmetry, processing, or hot-carrier stressing can give rise to unequal source and drain parasitic resistances in a MOSFET. In these cases, it is necessary to extract these resistances separately without the aid of other transistors. In this paper, we present a simple method to extract the source and drain parasitic resistances separately. This method, unlike earlier ones that depend on the measurements of the d.c. resistances of several MOSFETs, is based on accurate formulations and measurements of the a.c.
conductances with respect to the gate and drain terminals of a single transistor. This allows us to obtain reasonably accurate estimates of these resistances in a more straightforward manner. We also discuss the main error terms in detail. 12. AC conductivity and dielectric behavior of CoAlxFe2−xO4 Abo El Ata, A. M.; Attia, S. M.; Meaz, T. M. 2004-01-01 AC conductivity and dielectric properties have been studied for a series of polycrystalline spinel ferrites with composition CoAlxFe2−xO4, as a function of frequency and temperature. The results of AC conductivity were discussed in terms of the quantum mechanical tunneling and small polaron tunneling models. The dispersion of the dielectric constant was discussed in the light of the Koops model and the hopping conduction mechanism. The dielectric loss tangent (tan δ) curves exhibit dielectric relaxation peaks, which are attributed to the coincidence of the hopping frequency of the charge carriers with that of the external field. The AC conductivity, dielectric constant, and dielectric loss tangent were found to increase with increasing temperature due to the increase of the hopping frequency, while they decrease with increasing Al ion content due to the reduction of iron ions available for the conduction process at the octahedral sites. 13. Electric properties of carbon nano-onion/polyaniline composites: a combined electric modulus and ac conductivity study Papathanassiou, Anthony N.; Mykhailiv, Olena; Echegoyen, Luis; Sakellis, Ilias; Plonska-Brzezinska, Marta E. 2016-07-01 The complex electric modulus and the ac conductivity of carbon nano-onion/polyaniline composites were studied from 1 mHz to 1 MHz at isothermal conditions ranging from 15 K to room temperature. The temperature dependence of the electric modulus and the dc conductivity analyses indicate two hopping mechanisms.
The distinction between the thermally activated processes and the determination of the cross-over temperature were achieved by exploring the temperature dependence of the fractional exponent of the dispersive ac conductivity and the bifurcation of the scaled ac conductivity isotherms. The results are analyzed by combining the granular metal model (inter-grain charge tunneling of extended electron states located within mesoscopic highly conducting polyaniline grains) and a 3D Mott variable range hopping model (phonon-assisted tunneling within the carbon nano-onions and clusters). 14. Thunderstorm related variations in stratospheric conductivity measurements NASA Technical Reports Server (NTRS) Hu, Hua; Holzworth, Robert H.; Li, Ya Qi 1989-01-01 The vector electric field and polar conductivities were measured by zero-pressure balloon-borne payloads launched from Wallops Island, Virginia, during the summers of 1987 and 1988. Data were collected over thunderstorms (or electrified clouds) during 6-hour flights at altitudes near 30 km. The vector electric field measurements were made with the double Langmuir probe high-impedance method, and the direct conductivity measurements were obtained with the relaxation technique. Evidence is presented for conductivity variations over thunderstorms (or electrified clouds). It is found that both positive and negative polar conductivity data show variations of up to a factor of 2 from ambient values associated with the disturbed periods. Some ideas for possible physical mechanisms which may be responsible for the conductivity variations over thunderstorms are also discussed in this paper. 15. AC conductivity and dielectric relaxation of tris(N,N-dimethylanilinium) hexabromidostannate(IV) bromide: (C8H12N)3SnBr6·Br Chouaib, H.; Kamoun, S. 2015-10-01 X-ray powder analysis, thermogravimetric analysis, differential scanning calorimetry and complex impedance spectroscopy have been carried out on the (C8H12N)3SnBr6·Br compound.
The results show that this compound exhibits a phase transition at T = 365 ± 2 K, which has been characterized by differential scanning calorimetry (DSC), AC conductivity and dielectric measurements. The AC conductivity, the modulus analysis, the dielectric constants and the polarizability have been studied using impedance spectroscopy in the temperature range from 334 K to 383 K and in the frequency range between 20 Hz and 2 MHz. The temperature dependence of the DC conductivity follows the Arrhenius law. Moreover, the frequency dependence of the conductivity follows Jonscher's dynamical law, σ(ω, T) = σDC + B(T)ω^s(T). Relaxation peaks can be observed in the complex modulus analysis and after a transformation of the complex permittivity ε* to the complex polarizability α*. 16. An effective thermal conductivity measurement system Madrid, F.; Jordà, X.; Vellvehi, M.; Guraya, C.; Coleto, J.; Rebollo, J. 2004-11-01 In the technical literature, there is a lack of reliable thermal parameters and, often, it is necessary to do in situ measurements for every particular material. An effective thermal conductivity measurement system has been designed and implemented to provide reliable and accurate values for that thermal parameter. The thermal conductivity of a given material is deduced from differential thermal resistance measurements of two samples. All parts of the implemented system as well as practical and theoretical solutions are described, including a power controller circuit exclusively conceived for this application. Experimental considerations to reduce the measurement error are presented, as well as some results obtained for three different materials. 17. Transport ac losses of a second-generation HTS tape with a ferromagnetic substrate and conducting stabilizer Li, Shuo; Chen, Du-Xing; Fang, Jin 2015-12-01 The current-voltage curve and transport ac loss of a second-generation HTS tape with a ferromagnetic NiW substrate and brass stabilizer are measured.
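As a concrete illustration of the Jonscher analysis quoted above, the sketch below generates a synthetic σ(ω) = σDC + B·ω^s spectrum and recovers σDC from the low-frequency plateau and s from the high-frequency slope. All parameter values are invented for demonstration, not taken from the paper.

```python
import numpy as np

# Minimal sketch of extracting Jonscher parameters,
# sigma(omega) = sigma_dc + B * omega**s, from an ac-conductivity
# isotherm.  The spectrum below is synthetic; in practice sigma(omega)
# would come from impedance measurements.
omega = np.logspace(2, 7, 50)                    # angular frequency, rad/s
B_true, s_true, sigma_dc_true = 2e-11, 0.65, 1e-6
sigma = sigma_dc_true + B_true * omega**s_true   # "measured" spectrum

# 1) dc conductivity from the low-frequency plateau
sigma_dc = sigma[0]

# 2) exponent s from the slope of log(sigma - sigma_dc) at high frequency
hi = slice(-20, None)
s, logB = np.polyfit(np.log10(omega[hi]), np.log10(sigma[hi] - sigma_dc), 1)
```

Because the dispersive term is negligible at the lowest frequencies, the plateau estimate of σDC is accurate to a fraction of a per cent here, and the recovered exponent s is close to the nominal 0.65.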
It is found that the ac loss is up to two orders of magnitude larger than expected from the power-law E(J) determined by the current-voltage curve, and that it increases with increasing frequency. Modeling results show that the anomalously large ac loss arises from the ac loss in the HTS strip, enhanced by the NiW substrate, together with the magnetic hysteresis loss in the substrate, and that the frequency-dependent loss occurs in the brass layer covering the substrate but not, as previously assumed, in the ferromagnetic substrate itself. The ac loss in the brass layer is associated with transport currents rather than eddy currents, and it has some features similar to ordinary eddy-current loss, albeit with significant differences. 18. Thermal conductivity measurements of SUMMiT polycrystalline silicon. SciTech Connect Clemens, Rebecca; Kuppers, Jaron D.; Phinney, Leslie Mary 2006-11-01 A capability for measuring the thermal conductivity of microelectromechanical systems (MEMS) materials using a steady-state resistance technique was developed and used to measure the thermal conductivities of SUMMiT™ V layers. Thermal conductivities were measured over two temperature ranges, 100 K to 350 K and 293 K to 575 K, in order to generate two data sets. The steady-state resistance technique uses surface-micromachined bridge structures fabricated using the standard SUMMiT fabrication process. Electrical resistance and resistivity data are reported for the poly1-poly2 laminate, poly2, poly3, and poly4 polysilicon structural layers in the SUMMiT process from 83 K to 575 K. Thermal conductivity measurements for these polysilicon layers demonstrate for the first time that the thermal conductivity is a function of the particular SUMMiT layer. Also, as the temperature is decreased, the poly2 layer shows a different variation in thermal conductivity than the poly1-poly2 laminate, poly3, and poly4 layers.
As the temperature increases above room temperature, the difference in thermal conductivity between the layers decreases. 19. Conductance measurement circuit with wide dynamic range NASA Technical Reports Server (NTRS) Mount, Bruce E. (Inventor); Von Esch, Myron (Inventor) 1994-01-01 A conductance measurement circuit to measure conductance of a solution under test with an output voltage proportional to conductance over a 5-decade range, i.e., 0.01 μS to 1000 μS or from 0.1 μS to 10,000 μS. An increase in conductance indicates growth, or multiplication, of the bacteria in the test solution. Two circuits are used, each for alternate half-cycle periods of an applied square wave, in order to cause alternate and opposite currents to be applied to the test solution. The output of one of the two circuits may be scaled for a different range, with the optimum switching frequency dependent upon the solution conductance, to enable uninterrupted measurement over the complete 5-decade range. This circuitry provides two overlapping ranges of conductance which can be read simultaneously without discontinuity, thereby eliminating range switching within the basic circuitry. A VCO is used to automatically change the operating frequency according to the particular value of the conductance being measured, and comparators indicate which range is valid and also facilitate computer-controlled data acquisition. A multiplexer may be used to monitor any number of solutions under test continuously. 20. Absorption and Attenuation Coefficients Using the WET Labs ac-s in the Mid-Atlantic Bight: Field Measurements and Data Analysis NASA Technical Reports Server (NTRS) Ohi, Nobuaki; Makinen, Carla P.; Mitchell, Richard; Moisan, Tiffany A. 2008-01-01 Ocean color algorithms are based on the parameterization of apparent optical properties as a function of inherent optical properties.
WET Labs underwater absorption and attenuation meters (ac-9 and ac-s) measure both the spectral beam attenuation [c(λ)] and absorption [a(λ)] coefficients. The ac-s reports over a continuous range of 390-750 nm with a band pass of 4 nm, totaling approximately 83 distinct wavelengths, while the ac-9 reports at 9 wavelengths. We performed ac-s field measurements at nine stations in the Mid-Atlantic Bight, covering the full workflow from water calibrations to data analysis. Onboard the ship, the ac-s was calibrated daily using Milli-Q water. Corrections for the in situ temperature and salinity effects on the optical properties of water were applied. Corrections for incomplete recovery of the scattered light in the ac-s absorption tube were performed. The fine-scale spectral and vertical distributions of c(λ) and a(λ) were described from the ac-s data. Significant relationships were observed between a(674) and both the spectrophotometrically measured absorption and the chlorophyll a concentration of discrete water samples. 1. Local measurement of thermal conductivity and diffusivity SciTech Connect Hurley, David H.; Schley, Robert S.; Khafizov, Marat; Wendt, Brycen L. 2015-12-01 Simultaneous measurement of local thermal diffusivity and conductivity is demonstrated on a range of ceramic samples. This was accomplished by measuring the spatial profile of the temperature field of samples excited by an amplitude-modulated continuous-wave laser beam. A thin gold film is applied to the samples to ensure strong optical absorption and to establish a second boundary condition that introduces an expression containing the substrate thermal conductivity. The diffusivity and conductivity are obtained by comparing the measured phase profile of the temperature field to a continuum-based model. A sensitivity analysis is used to identify the optimal film thickness for extracting both the substrate conductivity and diffusivity.
Proof of principle studies were conducted on a range of samples having thermal properties that are representative of current and advanced accident tolerant nuclear fuels. It is shown that by including the Kapitza resistance as an additional fitting parameter, the measured conductivity and diffusivity of all the samples considered agree closely with literature values. Lastly, a distinguishing feature of this technique is that it does not require a priori knowledge of the optical spot size, which greatly increases measurement reliability and reproducibility. 6. Ac-loss measurement of a DyBCO-Roebel assembled coated conductor cable (RACC) Schuller, S.; Goldacker, W.; Kling, A.; Krempasky, L.; Schmidt, C. 2007-10-01 Low ac-loss HTS cables for transport currents well above 1 kA are required for application in transformers and generators and are taken into consideration for future generations of fusion reactor coils. Coated conductors (CC) are suitable candidates for high-field application at an operation temperature around 50-77 K, which is a crucial precondition for economical cooling costs. We prepared a short length of a Roebel bar cable made of industrial DyBCO coated conductor (Theva Company, Germany). Meander-shaped tapes of 4 mm width with a twist pitch of 122 mm were cut from 10 mm wide CC tapes using a specially designed tool. Eleven of these strands were assembled into a cable. The electrical and mechanical connection of the tapes was achieved using a silver-powder-filled conductive epoxy resin. Ac-losses of a short sample in an external ac field were measured as a function of frequency and field amplitude in transverse and parallel field orientations. In addition, the coupling current time constant of the sample was directly measured. 7.
AC Circuit Measurements with a Differential Hall Element Magnetometer Calkins, Matthew W.; Nicks, B. Scott; Quintero, Pedro A.; Meisel, Mark W. 2013-03-01 As the biomedical field grows, there is an increasing need to quickly and efficiently characterize more samples at room temperature. An automated magnetometer was commissioned to perform these room-temperature magnetic characterizations. This magnetometer, which is inspired by a Differential Hall Element Magnetometer,[2] uses two commercially available Hall elements wired in series. One Hall element measures the external magnetic field of a 9 T superconducting magnet and the other measures the same external field plus the field due to the magnetization of the sample that sits on top of the Hall element. The difference between these two Hall element signals is taken while a linear stepper motor sweeps through the external magnetic field. The linear motor and data acquisition are controlled by a LabVIEW program. Recently, the system was outfitted for AC circuit measurements, and these data will be compared to DC circuit data. In addition, the lowest signal-to-noise ratio will be found in order to deduce the smallest amount of sample needed to register an accurate coercive field. Supported by the NSF via NHMFL REU (DMR-0654118), a single investigator grant (DMR-1202033 to MWM) and by the UF Undergraduate Scholars Program. 8. Measurement of AC Induced Flow using Micro PIV Wang, Dazhi; Meinhart, Carl; Sigurdson, Marin 2002-11-01 The fluid motion in a wedge-shaped device subject to an AC electric field is measured using Micron-Resolution Particle Image Velocimetry (micro-PIV). Fluorescent polystyrene spherical particles are used as flow tracers. In the non-uniform electric field, the particles in the suspension experience dielectrophoretic forces, which cause a difference in velocity between the particles and the fluid.
In order to eliminate this velocity difference, particles of two different sizes are used for the micro-PIV measurements to determine the fluid velocity field. A two-color PIV technique is used to determine the fluid velocity field uniquely. The wedge-shaped channel is 100 microns wide at the apex and fabricated from a 550-micron-thick silicon wafer. A voltage of 15 Vrms at 100 kHz is applied to the electrodes. The particle volume fraction is set below 0.1% so that the effect of the particles on the fluid is negligible. Fifty successive images are recorded and analyzed to estimate the particle velocity fields. The velocity fields of the two particle sizes are then used to uniquely determine the underlying fluid velocity. The measured fluid flow is a saddle-point flow, which could be used for precision mixing and transport in microscale devices. 9. Origin of DC and AC conductivity anisotropy in iron-based superconductors: Scattering rate versus spectral weight effects Schütt, Michael; Schmalian, Jörg; Fernandes, Rafael M. 2016-08-01 To shed light on the transport properties of electronic nematic phases, we investigate the anisotropic properties of the AC and DC conductivities. Based on the analytical properties of the former, we show that the anisotropy of the effective scattering rate behaves differently from the actual scattering rate anisotropy and even changes sign as a function of temperature. Similarly, the effective spectral weight acquires an anisotropy even when the plasma frequency is isotropic. These results are illustrated by an explicit calculation of the AC conductivity due to the interaction between electrons and spin fluctuations in the nematic phase of the iron-based superconductors and shown to be in agreement with recent experiments. 10. Inductive Measurement of Plasma Jet Electrical Conductivity NASA Technical Reports Server (NTRS) Turner, Matthew W.; Hawk, Clark W.; Litchford, Ron J.
2005-01-01 An inductive probing scheme, originally developed for shock tube studies, has been adapted to measure explosive plasma jet conductivities. In this method, the perturbation of an applied magnetic field by a plasma jet induces a voltage in a search coil, which, in turn, can be used to infer electrical conductivity through the inversion of a Fredholm integral equation of the first kind. A 1-inch diameter probe was designed and constructed, and calibration was accomplished by firing an aluminum slug through the probe using a light-gas gun. Exploratory laboratory experiments were carried out using plasma jets expelled from 15-gram high explosive shaped charges. Measured conductivities were in the range of 3 kS/m for unseeded octol charges and 20 kS/m for seeded octol charges containing 2% potassium carbonate by mass. 11. Flow rate measurement in aggressive conductive fluids Dubovikova, Nataliia; Kolesnikov, Yuri; Karcher, Christian 2014-03-01 Two non-contact experimental methods of flow rate measurement for aggressive conductive liquids are described. The techniques are based on electromagnetic forces and Faraday's law: a Lorentz force is induced inside the moving conductive liquid under the influence of the variable magnetic field of permanent magnets, which are mounted along a liquid-metal channel or (in the case of the second method) inserted into rotating metal wheels. The force acts opposite to the fluid's velocity direction, and hence, according to Newton's third law, a reaction force acts on the magnetic field source, the permanent magnets, and can be measured. Knowing this force, which depends linearly on velocity, one can calculate the mean flow rate of the liquid. In addition, an experimental "dry" calibration and its results are described for one of the measurement techniques. 12. Analytic formulation for the ac electrical conductivity in two-temperature, strongly coupled, overdense plasma: FORTRAN subroutine Cauble, R.; Rozmus, W.
1993-10-01 A FORTRAN subroutine for the calculation of the ac electrical conductivity in two-temperature, strongly coupled, overdense plasma is presented. The routine is the result of a model calculation based on classical transport theory with application to plasmas created by the interaction of short pulse lasers and solids. The formulation is analytic and the routine is self-contained. 13. Martian Surface after Phoenix's Conductivity Measurements NASA Technical Reports Server (NTRS) 2008-01-01 NASA's Phoenix Mars Lander's Robotic Arm Camera took this image on Sol 71 (August 6, 2008), the 71st Martian day after landing. The shadow shows the outline of Phoenix's Thermal and Electrical Conductivity Probe, or TECP. The holes seen in the Martian surface were made by this instrument to measure the soil's conductivity. A fork-like probe inserted into the soil checks how well heat and electricity move through the soil from one prong to another. The measurements completed Wednesday ran from the afternoon of Phoenix's 70th Martian day, or sol, to the morning of Sol 71. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver. 14. Measuring Contact Thermal Conductances at Low Temperatures NASA Technical Reports Server (NTRS) Salerno, Louis J.; Kittel, Peter; Brooks, Walter; Spivak, Alan L.; Marks, William G., Jr. 1987-01-01 Instrument measures thermal conductance of pressed contacts in liquid helium. Makes measurements automatically as function of force on pairs of brass samples having various surface finishes. Developed as part of effort to determine heat-transfer characteristics of bolted joints on cryogenically cooled focal planes in infrared equipment. Cylindrical chamber hangs from cover plate in bath of liquid helium. Inside chamber rocker arm applies controlled force to samples. 
Upper sample made slightly wider than lower one so two samples remain in complete contact even under slight lateral misalignment. 15. AC hot wire measurement of thermophysical properties of nanofluids with 3ω method Turgut, A.; Sauter, C.; Chirtoc, M.; Henry, J. F.; Tavman, S.; Tavman, I.; Pelzl, J. 2008-01-01 We present a new application of a hot wire sensor for simultaneous and independent measurement of the thermal conductivity k and diffusivity α of (nano)fluids, based on a hot wire thermal probe with ac excitation and 3ω lock-in detection. Theoretical modeling of the imaginary part of the signal yields the k value, while the phase yields the α value. Due to the modulated heat flow in a cylindrical geometry with a radius comparable to the thermal diffusion length, the necessary sample quantity is kept very low, typically 25 μl. In the case of relative measurements, the resolution is 0.1% in k and 0.3% in α. Measurements of water-based Aerosil 200V nanofluids indicate that ultrasound treatment is more efficient than the high-pressure dispersion method in enhancing their thermal parameters. 16. System for absolute measurement of electrolytic conductivity in aqueous solutions based on van der Pauw's theory Zhang, Bing; Lin, Zhen; Zhang, Xiao; Yu, Xiang; Wei, Jiali; Wang, Xiaoping 2014-05-01 Based on an innovative application of van der Pauw's theory, a system was developed for the absolute measurement of electrolytic conductivity in aqueous solutions. An electrolytic conductivity meter was designed that uses a four-electrode system with an axial-radial two-dimensional adjustment structure coupled to an ac voltage excitation source and a signal-collecting circuit. The measurement accuracy, resolution and repeatability of the system were examined through a series of experiments. Moreover, the measurement system and a high-precision electrolytic conductivity meter were compared using actual water samples. 17.
Measurement of heat conduction through stacked screens NASA Technical Reports Server (NTRS) Lewis, M. A.; Kuriyama, T.; Kuriyama, F.; Radebaugh, R. 1998-01-01 This paper describes the experimental apparatus for the measurement of heat conduction through stacked screens as well as some experimental results taken with the apparatus. Screens are stacked in a fiberglass-epoxy cylinder, which is 24.4 mm in diameter and 55 mm in length. The cold end of the stacked screens is cooled by a Gifford-McMahon (GM) cryocooler at cryogenic temperature, and the hot end is maintained at room temperature. Heat conduction through the screens is determined from the temperature gradient in a calibrated heat flow sensor mounted between the cold end of the stacked screens and the GM cryocooler. The samples used for these experiments consisted of 400-mesh stainless steel screens, 400-mesh phosphor bronze screens, and two different porosities of 325-mesh stainless steel screens. The wire diameter of the 400-mesh stainless steel and phosphor bronze screens was 25.4 micrometers, and the 325-mesh stainless steel screen wire diameters were 22.9 micrometers and 27.9 micrometers. Standard porosity values were used for the experimental data, with additional porosity values used in selected experiments. The experimental results showed that the helium gas between the screens enhanced the heat conduction through the stacked screens by several orders of magnitude compared to that in vacuum. The conduction degradation factor is the ratio of the actual heat conduction to the heat conduction calculated when the regenerator material is assumed to be a solid rod of the same cross-sectional area as the metal fraction of the screen. This factor was about 0.1 for the stainless steel and 0.022 for the phosphor bronze, and almost constant over the temperature range of 40 to 80 K at the cold end. 18.
Harmonic analysis of AC magnetostriction measurements under non-sinusoidal excitation SciTech Connect Mogi, Hisashi; Yabumoto, Masao; Mizokami, Masato; Okazaki, Yasuo 1996-09-01 A new system for analyzing ac magnetostriction of electrical steel sheets has been developed. This system has the following advantages: (a) AC magnetostriction waveforms can be precisely measured up to 4 kHz, and analyzed into harmonic components; (b) non-sinusoidal flux density can be excited to simulate the distorted waveform in an actual transformer core. 19. Thermal Conductivity Measurements on consolidated Soil Analogs Seiferlin, K.; Heimberg, M.; Thomas, N. 2007-08-01 Heat transport in porous media such as soils and regolith is significantly reduced compared to the properties of compact samples of the same material. The bottleneck for solid-state heat transport is the contact area between adjacent grains. For "dry" and unconsolidated materials the contact areas and thus the thermal conductivity are extremely small. Sintering and cementation are two processes that can increase the cross section of interstitial bonds significantly. On Mars, cementation can be caused by condensation of water or carbon dioxide ice from the vapor phase, or from salts and minerals that fall out from aqueous solutions. We produced several artificially cemented samples, using small glass beads of uniform size as a soil analog. The cementation is achieved by initially molten wax that is mixed with the glass beads while liquid. The wax freezes preferentially at the contact points between grains, thus minimizing surface energy, and consolidates the samples. The thermal conductivity of these samples is then measured in vacuum. We present the results of these measurements and compare them with theoretical models. The observed range of thermal conductivity values can explain some, but not all, of the variations in thermal inertia that can be seen in TES remote sensing data. 20.
Transport properties of random and nonrandom substitutionally disordered alloys. I. Exact numerical calculation of the ac conductivity Hwang, M.; Gonis, A.; Freeman, A. J. 1987-06-01 Results of exact computer simulations for the zero-temperature ac conductivity of one-dimensional substitutionally disordered alloys are reported. These results are obtained by (i) solving for the eigenvalues and eigenvectors of a Hamiltonian associated with a specific configuration of 500 atoms on a linear chain, (ii) evaluating the ac conductivity of this configuration by using the Kubo-Greenwood formula, and (iii) averaging the resulting conductivities over 20 to 50 different configurations (the number of configurations depends on the type of disorder). In all cases convergence (i.e., a stable result) was obtained and confirmed by another independent approach (the recursive method). For not too weak disorder (defined precisely in the text), these results exhibit a great deal of fine structure that includes high peaks and gaps where the conductivity vanishes. These features are reminiscent of, and are correlated with, the similar kind of behavior of the densities of states of one-dimensional substitutionally disordered alloys. Thus we find that the fine structure in the ac-conductivity spectra of one-dimensional systems provides a rigorous testing ground for judging the validity of analytic theories for calculating the transport properties of substitutionally disordered systems. 1. Quantitative Thermal Microscopy Measurement with Thermal Probe Driven by dc+ac Current Bodzenta, Jerzy; Juszczyk, Justyna; Kaźmierczak-Bałata, Anna; Firek, Piotr; Fleming, Austin; Chirtoc, Mihai 2016-07-01 Quantitative thermal measurements with spatial resolution allowing the examination of objects of submicron dimensions are still a challenging task. The number of methods providing spatial resolution better than 100 nm is very limited. One of them is scanning thermal microscopy (SThM).
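The three-step procedure in the disordered-alloy entry above — (i) diagonalize the Hamiltonian of one configuration, (ii) evaluate the Kubo-Greenwood formula, (iii) average over configurations — can be sketched in miniature. This is an illustrative toy version, not the authors' 500-atom calculation: the chain length, site energies, broadening, and all other parameter values below are assumed, and the delta functions are Lorentzian-broadened.

```python
import numpy as np

def kubo_greenwood_sigma(omegas, n_sites=100, c_a=0.5, eps_a=-0.5, eps_b=0.5,
                         t=1.0, eta=0.05, n_configs=20, seed=0):
    """Configuration-averaged T=0 AC conductivity of a 1D binary-alloy
    tight-binding chain via the Kubo-Greenwood formula (arbitrary units,
    Lorentzian-broadened delta functions, half filling)."""
    rng = np.random.default_rng(seed)
    sigma = np.zeros_like(omegas)
    for _ in range(n_configs):
        # (i) diagonalize one random A/B configuration
        eps = np.where(rng.random(n_sites) < c_a, eps_a, eps_b)
        H = (np.diag(eps)
             - t * np.diag(np.ones(n_sites - 1), 1)
             - t * np.diag(np.ones(n_sites - 1), -1))
        E, V = np.linalg.eigh(H)
        # current operator j ~ i*t*(|n><n+1| - |n+1><n|), in the eigenbasis
        J = np.zeros((n_sites, n_sites))
        J[np.arange(n_sites - 1), np.arange(1, n_sites)] = t
        J = 1j * (J - J.T)
        Jmn = V.conj().T @ J @ V
        # (ii) Kubo-Greenwood: occupied -> empty transitions at T = 0
        occ = E < np.median(E)
        dE = E[~occ][None, :] - E[occ][:, None]      # excitation energies
        M2 = np.abs(Jmn[np.ix_(occ, ~occ)]) ** 2     # |<n|J|m>|^2
        for k, w in enumerate(omegas):
            lor = (eta / np.pi) / ((dE - w) ** 2 + eta ** 2)
            sigma[k] += (M2 * lor).sum() / (w * n_sites)
    # (iii) average over configurations
    return sigma / n_configs

omegas = np.linspace(0.1, 4.0, 50)
sigma_ac = kubo_greenwood_sigma(omegas)  # non-negative spectrum with disorder-induced structure
```

Averaging over more configurations or enlarging the chain smooths the spectrum, which is how the stability ("convergence") mentioned in the abstract would be checked.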
This method is a variant of atomic force microscopy which uses a probe equipped with a temperature sensor near the apex. Depending on the sensor current, either the temperature or the thermal conductivity distribution at the sample surface can be measured. However, like all microscopy methods, the SThM gives only qualitative information. Quantitative measuring methods using SThM equipment are still under development. In this paper, a method based on simultaneous registration of the static and the dynamic electrical resistances of the probe driven by the sum of dc and ac currents, and examples of its applications are described. Special attention is paid to the investigation of thin films deposited on thick substrates. The influence of substrate thermal properties on the measured signal and its dependence on thin film thermal conductivity and film thickness are analyzed. It is shown that in the case where layer thicknesses are comparable to or smaller than the probe-sample contact diameter, a correction procedure is required to obtain the actual thermal conductivity of the layer. Experimental results obtained for thin SiO2 and BaTiO3 layers with thicknesses in the range from 11 nm to 100 nm are correctly confirmed with this approach. 2. Signature of Topological Insulators in Conductance Measurements Hong, Seokmin; Diep, Vinh; Datta, Supriyo 2012-02-01 Following the discovery of spin-polarized states at the surface of three-dimensional topological insulators (TI) like Bi2Te3 and Bi2Se3, there is intense interest in possible electrical measurements demonstrating unique signatures of these unusual states. A recent interesting proposal suggests that a signature of TI material should be a change in the conductance measured between a normal contact and a magnetic contact when the magnetization of the latter is reversed.
However, the generalized Onsager relation suggests that no such change is expected in two-terminal setups and a multi-terminal setup is needed to observe the proposed effect. We present numerical results using a Non-Equilibrium Green Function (NEGF) based model capable of covering both ballistic and diffusive transport regimes seamlessly. Simple expressions based on a semi-classical picture describe some of the results quite well. Finally, we estimate the magnitude of the signal expected in realistic samples that have recently been studied experimentally and have shown evidence of surface conduction. 3. Reflectometer distance measurement between parallel conductive plates NASA Technical Reports Server (NTRS) Hearn, Chase P.; Neece, Robert T. 1995-01-01 This report presents an analytic and experimental investigation of the measurement problem in which a reflectometer is used to determine the distance to a target that is a highly conductive surface parallel to the reflectometer antenna ground plane. These parallel surfaces constitute a waveguide (WG) which can contribute parasitic perturbations that seriously degrade the accuracy of the measurements. Two distinct parallel-plate-waveguide (PPWG) phenomena are described, and their effects on both frequency and time-domain reflectometers are considered. The time-domain processing approach was found to be superior to a representative frequency-domain phase-measurement approach because of less susceptibility to perturbations produced by edge reflections and immunity to phase capture. Experimental results are presented which show that a simple radiating system modification can suppress parallel-plate (PP) propagation. The addition of a thin layer of lossy mu-metal 'magnetic absorber' to the antenna ground plane allowed a measurement accuracy of 0.025 cm (0.01 in.) when a vector network analyzer (VNA) is used as a time-domain reflectometer. 4.
The change in dielectric constant, AC conductivity and optical band gaps of polymer electrolyte film: Gamma irradiation SciTech Connect Raghu, S. Subramanya, K. Sharanappa, C. Mini, V. Archana, K. Sanjeev, Ganesh Devendrappa, H. 2014-04-24 The effects of gamma (γ) irradiation on the dielectric and optical properties of a polymer electrolyte film were investigated. The dielectric constant and ac conductivity increase with γ dose. Also, the optical band gap decreased from 4.23 to 3.78 eV after irradiation. A large dependence of the polymer properties on the irradiation dose was noticed. This suggests that there is a possibility of improving polymer electrolyte properties by gamma irradiation. 5. Measuring Thermal Conductivity at LH2 Temperatures NASA Technical Reports Server (NTRS) Selvidge, Shawn; Watwood, Michael C. 2004-01-01 For many years, the National Institute of Standards and Technology (NIST) produced reference materials for materials testing. One such reference material was intended for use with a guarded hot plate apparatus designed to meet the requirements of ASTM C177-97, "Standard Test Method for Steady-State Heat Flux Measurements and Thermal Transmission Properties by Means of the Guarded-Hot-Plate Apparatus." This apparatus can be used to test materials in various gaseous environments from atmospheric pressure to a vacuum. It allows the thermal transmission properties of insulating materials to be measured from just above ambient temperature down to temperatures below that of liquid hydrogen. However, NIST did not generate data below 77 K for the reference material in question. This paper describes a test method used at NASA's Marshall Space Flight Center (MSFC) to optimize thermal conductivity measurements during the development of thermal protection systems. The test method extends the usability range of this reference material by generating data at temperatures lower than 77 K.
Information provided by this test is discussed, as are the capabilities of the MSFC Hydrogen Test Facility, where advanced methods for materials testing are routinely developed and optimized in support of aerospace applications. 6. Assessing Conduct Disorder: A New Measurement Approach PubMed Central Reavy, Racheal; Stein, L. A. R.; Quina, Kathryn; Paiva, Andrea L. 2015-01-01 The Delinquent Activities Scale (DAS) was used to develop indicators of conduct disorder (CD) in terms of symptom severity and age of onset. Incarcerated adolescents (N = 190) aged 14 to 19 were asked about their delinquent behaviors, including the age the behavior was first performed, as well as substance use and parental and peer influences. Assessments were performed for the 12 months prior to incarceration and at 3-month postrelease follow-up. Evidence supports the utility of the DAS as a measure of CD diagnosis, including concurrent incremental validity. Furthermore, CD severity (symptom count) was significantly associated with two peer factors: friend substance use and friend prior arrests, with medium to large effect sizes (ESs). Earlier age of CD onset was associated with earlier marijuana use. This study finds that the DAS is a useful instrument in that it is easy to apply and has adequate psychometrics. PMID:24241820 7. AC conductivity and electrochemical studies of PVA/PEG based polymer blend electrolyte films Polu, Anji Reddy; Kumar, Ranveer; Dehariya, Harsha 2012-06-01 Polymer blend electrolyte films based on polyvinyl alcohol (PVA)/poly(ethylene glycol) (PEG) and magnesium nitrate (Mg(NO3)2) were prepared by the solution casting technique. Conductivity in the temperature range 303-373 K and transference number measurements have been employed to investigate the charge transport in this polymer blend electrolyte system. The highest conductivity is found to be 9.63 × 10^-5 S/cm at 30°C for the sample with 30 weight percent of Mg(NO3)2 in the PVA/PEG blend matrix.
Transport number data shows that the charge transport in this polymer electrolyte system is predominantly due to ions. Using this electrolyte, an electrochemical cell with the configuration Mg/(PVA+PEG+Mg(NO3)2)/(I2+C+electrolyte) was fabricated and its discharge characteristics profile has been studied. 8. AC loss measurements in HTS coil assemblies with hybrid coil structures Jiang, Zhenan; Long, Nicholas J.; Staines, Mike; Badcock, Rodney A.; Bumby, Chris W.; Buckley, Robert G.; Amemiya, Naoyuki 2016-09-01 Both AC loss and wire cost in coil windings are critical factors for high temperature superconductor (HTS) AC machinery applications. We present AC loss measurement results in three HTS coil assemblies at 77 K and 65 K which have a hybrid coil structure comprising one central winding (CW) and two end windings (EWs) wound with ReBCO and BSCCO wires with different self-field Ic values at 77 K. All AC loss results in the coil assemblies are hysteretic and the normalized AC losses in the coil assemblies at different temperatures can be scaled with the Ic value of the coil assemblies. The normalized results show that AC loss in a coil assembly with a BSCCO CW can be reduced by using EWs wound with high-Ic ReBCO wires, whilst further AC loss reduction can be achieved by replacing the BSCCO CW with a ReBCO CW. The results imply that a flexible hybrid coil structure is possible which considers both AC loss and wire cost in coil assemblies. 9. Sensitive bridge circuit measures conductance of low-conductivity electrolyte solutions NASA Technical Reports Server (NTRS) Schmidt, K. 1967-01-01 A compact bridge circuit provides sensitive and accurate measurement of the conductance of low-conductivity electrolyte solutions. The bridge utilizes a phase-sensitive detector to obtain a linear deflection of the null indicator relative to the measured conductance. 10.
Measurements and calculations of transport AC loss in second generation high temperature superconducting pancake coils Yuan, Weijia; Coombs, T. A.; Kim, Jae-Ho; Han Kim, Chul; Kvitkovic, Jozef; Pamidi, Sastry 2011-12-01 Theoretical and experimental AC loss data on a superconducting pancake coil wound using second generation (2G) conductors are presented. An anisotropic critical state model is used to calculate the critical current and the AC losses of a superconducting pancake coil. In the coil there are two regions, the critical state region and the subcritical region. The model assumes that in the subcritical region the flux lines are parallel to the tape wide face. AC losses of the superconducting pancake coil are calculated using this model. Both calorimetric and electrical techniques were used to measure AC losses in the coil. The calorimetric method is based on measuring the boil-off rate of liquid nitrogen. The electric method used a compensation circuit to eliminate the inductive component in order to measure the loss voltage of the coil. The experimental results are consistent with the theoretical calculations, thus validating the anisotropic critical state model for loss estimations in the superconducting pancake coil. 11. Construction of Tunnel Diode Oscillator for AC Impedance Measurement Shin, J. H.; Kim, E. 2014-03-01 We construct a tunnel diode oscillator (TDO) to study the electromagnetic response of a superconducting thin film. Highly sensitive tunnel diode oscillators allow us to detect extremely small changes in electromagnetic properties such as dielectric constant, ac magnetic susceptibility and magnetoresistance. A tunnel diode oscillator is a self-resonant oscillator whose resonance frequency is primarily determined by the capacitance and inductance of the resonator. The amplitude of the signal depends on the quality factor of the resonator.
A change in the impedance of a sample electromagnetically coupled to one of the inductors in the resonator alters the impedance of that inductor, leading to a shift in the resonance frequency and a change in the amplitude. 12. Measuring thermal diffusivity of mechanical and optical grades of polycrystalline diamond using an AC laser calorimetry method SciTech Connect Rule, Toby D.; Cai, Wei; Wang, Hsin 2013-01-01 Because of its extremely high thermal conductivity, measuring the thermal conductivity or diffusivity of optical-grade diamond can be challenging. Various methods have been used to measure the thermal conductivity of thick diamond films. For the purposes of commercial quality control, the AC laser calorimetry method is appealing because it enables fairly rapid and convenient sample preparation and measurement. In this paper, the method is used to measure the thermal diffusivity of optical diamond. It is found that sample dimensions and measurement parameters are critical, and data analysis must be performed with great care. The results suggest that the method as it is applied to optical-grade diamond could be enhanced by a more powerful laser, higher frequency beam modulation, and post-processing based on 2D thermal simulation. 13. Dielectric behavior and ac conductivity study of NiO/Al2O3 nanocomposites in humid atmosphere 2006-11-01 Humidity sensing characteristics of NiO/Al2O3 nanocomposites, prepared by the sol-gel method, are studied by impedance spectroscopy. Modeling of the obtained impedance spectra with an appropriate equivalent circuit enables us to separate the electrical responses of the tightly bound chemisorbed water molecules on the grain surfaces and the loosely associated physisorbed water layers. The dependence of the dielectric properties and ac conductivity of the nanocomposites on relative humidity (RH) was studied as a function of the frequency of the applied ac signal in the frequency range of 0.1-10^5 Hz.
The electrical relaxation behavior of the investigated materials is presented in the conductivity formalism, where the conductivity spectra at different RHs are analyzed by the Almond-West formalism [D. P. Almond et al., Solid State Ionics 8, 159 (1983)]. The dc conductivity and the hopping rate of charge carriers, determined from this analysis, show similar dependences on RH, indicating that the concentration of mobile ions is independent of RH and is primarily determined by the chemisorption process of water molecules. Finally, the results are discussed in view of a percolation-type conduction mechanism, where mobile ions are provided by the chemisorbed water molecules and the percolation network is formed by the physisorbed water layers. 14. Calculation of the ac to dc resistance ratio of conductive nonmagnetic straight conductors by applying FEM simulations Riba, Jordi-Roger 2015-09-01 This paper analyzes the skin and proximity effects in different conductive nonmagnetic straight conductor configurations subjected to applied alternating currents and voltages. These effects have important consequences, including a rise of the ac resistance, which in turn increases power loss, thus limiting the rating for the conductor. Alternating current (ac) resistance is important in power conductors and bus bars for line frequency applications, as well as in smaller conductors for high frequency applications. Despite the importance of this topic, it is not usually analyzed in detail in undergraduate and even in graduate studies. To address this, this paper compares the results provided by available exact formulas for simple geometries with those obtained by means of two-dimensional finite element method (FEM) simulations and experimental results. The paper also shows that FEM results are very accurate and more general than those provided by the formulas, since FEM models can be applied in a wide range of electrical frequencies and configurations. 15. 
21 CFR 882.1550 - Nerve conduction velocity measurement device. Code of Federal Regulations, 2014 CFR 2014-04-01 ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Nerve conduction velocity measurement device. 882... conduction velocity measurement device. (a) Identification. A nerve conduction velocity measurement device is a device which measures nerve conduction time by applying a stimulus, usually to a... 16. 21 CFR 882.1550 - Nerve conduction velocity measurement device. Code of Federal Regulations, 2010 CFR 2010-04-01 ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Nerve conduction velocity measurement device. 882... conduction velocity measurement device. (a) Identification. A nerve conduction velocity measurement device is a device which measures nerve conduction time by applying a stimulus, usually to a... 17. 21 CFR 882.1550 - Nerve conduction velocity measurement device. Code of Federal Regulations, 2012 CFR 2012-04-01 ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Nerve conduction velocity measurement device. 882... conduction velocity measurement device. (a) Identification. A nerve conduction velocity measurement device is a device which measures nerve conduction time by applying a stimulus, usually to a... 18. 21 CFR 882.1550 - Nerve conduction velocity measurement device. Code of Federal Regulations, 2013 CFR 2013-04-01 ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Nerve conduction velocity measurement device. 882... conduction velocity measurement device. (a) Identification. A nerve conduction velocity measurement device is a device which measures nerve conduction time by applying a stimulus, usually to a... 19. 21 CFR 882.1550 - Nerve conduction velocity measurement device. Code of Federal Regulations, 2011 CFR 2011-04-01 ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Nerve conduction velocity measurement device. 882... conduction velocity measurement device. (a) Identification. 
A nerve conduction velocity measurement device is a device which measures nerve conduction time by applying a stimulus, usually to a... 20. Development of the Exams Data Analysis Spreadsheet as a Tool to Help Instructors Conduct Customizable Analyses of Student ACS Exam Data ERIC Educational Resources Information Center Brandriet, Alexandra; Holme, Thomas 2015-01-01 The American Chemical Society Examinations Institute (ACS-EI) has recently developed the Exams Data Analysis Spreadsheet (EDAS) as a tool to help instructors conduct customizable analyses of their student data from ACS exams. The EDAS calculations allow instructors to analyze their students' performances both at the total score and individual item… 1. AC loss measurement of superconducting dipole magnets by the calorimetric method SciTech Connect Morita, Y.; Hara, K.; Higashi, N.; Kabe, A. 1996-12-31 AC losses of superconducting dipole magnets were measured by the calorimetric method. The magnets were model dipole magnets designed for the SSC. These were fabricated at KEK with 50-mm aperture and 1.3-m overall length. The magnet was set in a helium cryostat and cooled down to 1.8 K with 130 L of pressurized superfluid helium. Heat dissipated by the magnet during ramp cycles was measured by the temperature rise of the superfluid helium. Heat leakage into the helium cryostat was 1.6 W and was subtracted from the measured heat to obtain the AC loss of the magnet. An electrical measurement was carried out for calibration. Results of the two methods agreed within the experimental accuracy. The authors present the helium cryostat and measurement system in detail, and discuss the results of the AC loss measurement. 2. Second VAMAS a.c. loss measurement intercomparison: a.c. magnetization measurement of hysteresis and coupling losses in NbTi multifilamentary strands Schmidt, C.; Itoh, K.; Wada, H. The article summarizes results of part of the second VAMAS a.c. loss measurement intercomparison.
This program was carried out at 17 participating laboratories on two sets of multifilamentary NbTi strands (Set No. 1: copper matrix, fil. diam. between 0.5 and 12 μm; Set No. 2: cupronickel matrix, fil. diam. between 0.4 and 1.2 μm). The results reported here were measured by means of a.c. magnetization methods and separated into hysteresis and coupling losses. One laboratory used a calorimetric method. The data scatter in measured hysteresis losses among the participating laboratories was reasonably small for different measuring methods adopted and experimental arrangements used. On the other hand, the data scatter in coupling losses was large, mainly because in most laboratories a.c. losses were measured only at low frequencies (below 1 Hz), where the separation of coupling losses from total losses tends to be inaccurate. The comparison of measured hysteresis losses with the critical state model showed a large disagreement, which is assumed to be due to proximity effect coupling between filaments. 1997 Elsevier Science Limited 3. The Wechsler ACS Social Perception Subtest: A Preliminary Comparison with Other Measures of Social Cognition ERIC Educational Resources Information Center Kandalaft, Michelle R.; Didehbani, Nyaz; Cullum, C. Munro; Krawczyk, Daniel C.; Allen, Tandra T.; Tamminga, Carol A.; Chapman, Sandra B. 2012-01-01 Relative to other cognitive areas, there are few clinical measures currently available to assess social perception. A new standardized measure, the Wechsler Advanced Clinical Solutions (ACS) Social Perception subtest, addresses some limitations of existing measures; however, little is known about this new test. The first goal of this investigation… 4. Dielectric properties and study of AC electrical conduction mechanisms by non-overlapping small polaron tunneling model in Bis(4-acetylanilinium) tetrachlorocuprate(II) compound Abkari, A.; Chaabane, I.; Guidara, K. 
2016-09-01 In the present work, the synthesis and characterization of the Bis(4-acetylanilinium) tetrachlorocuprate(II) compound are presented. The structure of this compound is analyzed by X-ray diffraction, which confirms the formation of a single phase and is in good agreement with the literature. Thermogravimetric analysis (TGA) shows that the decomposition of the compound occurs in the range of 420-520 K, while differential thermal analysis (DTA) indicates the presence of a phase transition at T=363 K. Furthermore, the dielectric properties and AC conductivity were studied over a temperature range (338-413 K) and frequency range (200 Hz-5 MHz) using complex impedance spectroscopy. Dielectric measurements confirmed these thermal analyses by exhibiting an anomaly in the temperature range of 358-373 K. The complex impedance plots are analyzed by an electrical equivalent circuit consisting of a resistance, a constant phase element (CPE) and a capacitance. The activation energy values of the two distinct regions are obtained from the log σT vs 1000/T plot and are found to be E=1.27 eV (T<363 K) and E=1.09 eV (363 K<T<413 K). The AC conductivity, σac, has been analyzed by Jonscher's universal power law σ(ω)=σdc+Aω^s. The value of s is found to be temperature-dependent, with a tendency to increase with temperature, and the non-overlapping small polaron tunneling (NSPT) model is the most applicable conduction mechanism in the title compound. Complex impedance spectra of [C8H10NO]2CuCl4 at different temperatures. 5. Crystal structure, NMR study, dielectric relaxation and AC conductivity of a new compound [Cd3(SCN)2Br6(C2H9N2)2]n Saidi, K.; Kamoun, S.; Ayedi, H. Ferid; Arous, M. 2013-11-01 The crystal structure, the 13C NMR spectroscopy and the complex impedance have been carried out on [Cd3(SCN)2Br6(C2H9N2)2]n. The crystal structure shows a 2D polymeric network built up of two crystallographically independent cadmium atoms with two different octahedral coordinations.
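Jonscher's universal power law σ(ω) = σdc + Aω^s, invoked in the conductivity analyses above, is linear in (σdc, A) once the exponent s is fixed, so a simple grid search over s with a linear least-squares solve at each trial value is enough to fit a conductivity isotherm. A minimal sketch on synthetic data (all numerical values below are invented for illustration, not taken from the cited measurements):

```python
import numpy as np

def fit_jonscher(omega, sigma, s_grid=np.linspace(0.1, 1.0, 91)):
    """Fit sigma(omega) = sigma_dc + A*omega**s by scanning the exponent s
    and solving linearly for (sigma_dc, A) at each trial value."""
    best = None
    for s in s_grid:
        X = np.column_stack([np.ones_like(omega), omega ** s])
        coef, *_ = np.linalg.lstsq(X, sigma, rcond=None)
        resid = np.sum((X @ coef - sigma) ** 2)
        if best is None or resid < best[0]:
            best = (resid, coef[0], coef[1], s)
    return best[1:]  # sigma_dc, A, s

# synthetic isotherm: dc plateau plus dispersive high-frequency region
omega = np.logspace(2, 7, 80)                     # angular frequency, rad/s
sigma_true = 1e-6 + 1e-11 * omega ** 0.7          # S/cm, illustrative values
rng = np.random.default_rng(1)
sigma_meas = sigma_true * (1 + 0.01 * rng.standard_normal(omega.size))
sigma_dc, A, s = fit_jonscher(omega, sigma_meas)  # s recovered near 0.7
```

Repeating the fit at each temperature yields the s(T) trend that analyses such as the NSPT test above are based on.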
This compound exhibits a phase transition at (T=355±2 K) which has been characterized by differential scanning calorimetry (DSC), X-ray powder diffraction, AC conductivity and dielectric measurements. Examination of the 13C CP/MAS line shapes shows indirect spin-spin coupling (14N and 13C) with a dipolar coupling constant of 1339 Hz. The AC conductivity of this compound has been measured in the temperature range 325-376 K and the frequency range from 10^-2 Hz to 10 MHz. The impedance data were well fitted to two equivalent electrical circuits. The results of the modulus study reveal the presence of two distinct relaxation processes. One, on the low frequency side, is thermally activated and due to the ionic conduction of the crystal; the other, on the higher frequency side, gradually disappears when the temperature reaches 355 K and is attributed to localized dipoles in the crystal. Moreover, the temperature dependence of the DC conductivity in both phases follows the Arrhenius law and the frequency dependence of σ(ω,T) follows Jonscher's universal law. The close values of the activation energies obtained from the conductivity and impedance data confirm that the transport occurs through an ion hopping mechanism. 6. Development of Low-Frequency AC Voltage Measurement System Using Single-Junction Thermal Converter Amagai, Yasutaka; Nakamura, Yasuhiro Accurate measurement of low-frequency AC voltage using a digital multimeter at frequencies of 4-200 Hz is a challenge in the mechanical engineering industry. At the National Metrology Institute of Japan, we developed a low-frequency AC voltage measurement system for calibrating digital multimeters operating at frequencies down to 1 Hz. The system uses a single-junction thermal converter and employs a theoretical model and a three-parameter sine wave fitting algorithm based on the least-squares (LS) method.
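A three-parameter sine-wave fit of the kind just mentioned becomes a linear least-squares problem when the frequency is known (this is the classical three-parameter fit of IEEE Std 1057). A minimal sketch, with the test waveform's amplitude, phase, and offset invented for illustration:

```python
import numpy as np

def sine_fit_3param(t, y, freq):
    """Least-squares fit of y ~ A*cos(2*pi*f*t) + B*sin(2*pi*f*t) + C
    at a known frequency f; returns amplitude, phase (rad) and offset."""
    w = 2.0 * np.pi * freq
    X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (A, B, C), *_ = np.linalg.lstsq(X, y, rcond=None)
    # A*cos(wt) + B*sin(wt) = amp * sin(wt + phase)
    return np.hypot(A, B), np.arctan2(A, B), C

# illustrative 1 Hz waveform sampled over two periods
t = np.linspace(0.0, 2.0, 2000, endpoint=False)
y = 1.5 * np.sin(2.0 * np.pi * 1.0 * t + 0.3) + 0.1
amp, phase, offset = sine_fit_3param(t, y, 1.0)  # -> 1.5, 0.3, 0.1
```

When the frequency is also unknown, the four-parameter variant iterates this linear solve while refining f.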
We calibrated the AC voltage down to 1 Hz using our measurement system and reduced the measurement time compared with that using thin-film thermal converters. Our measurement results are verified by comparison with those of a digital sampling method using a high-resolution analog-to-digital converter; our data are in agreement to within a few parts in 10^5. Our proposed method enables us to measure AC voltage with an uncertainty of 25 μV/V (k = 1) at frequencies down to 4 Hz and a voltage of 10 V. 7. Measurements of the Seebeck coefficient of thermoelectric materials by an ac method SciTech Connect Goto, T.; Li, J.H.; Hirai, T.; Maeda, Y.; Kato, R.; Maesono, A. 1997-03-01 An ac method for measurement of the Seebeck coefficient was developed. Specimens were heated periodically at frequencies in the range 0.2-10 Hz using a semiconductor laser. The small temperature increase and the resultant thermoelectric power were measured with a Pt-Pt 13% Rh thermocouple (25 μm in diameter) through a lock-in amplifier. The Seebeck coefficient of a Pt90Rh10 foil measured by the ac method was in agreement with that obtained from the standard table. The optimum frequency and specimen thickness for the ac method were 0.2 Hz and 0.1-0.2 mm, respectively. The Seebeck coefficients of a silicon single crystal and several thermoelectric semiconductors (Si80Ge20, PbTe, FeSi2, SiB14) measured by the ac method agreed with those measured by a conventional dc method in the temperature range between room temperature and 1200 K. The time needed for each measurement was less than a few tens of minutes, significantly shorter than that for a conventional dc method. 8.
AC impedance analysis of ionic and electronic conductivities in electrode mixture layers for an all-solid-state lithium-ion battery Siroma, Zyun; Sato, Tomohiro; Takeuchi, Tomonari; Nagai, Ryo; Ota, Akira; Ioroi, Tsutomu 2016-06-01 The ionic and electronic effective conductivities of electrode mixture layers for all-solid-state lithium-ion batteries containing Li2S-P2S5 as a solid electrolyte were investigated by AC impedance measurements and analysis using a transmission-line model (TLM). Samples containing graphite (graphite electrodes) or LiNi0.5Co0.2Mn0.3O2 (NCM electrodes) as the active material were measured under a "substrate | sample | bulk electrolyte | sample | substrate" configuration (ion-electron connection) and a "substrate | sample | substrate" configuration (electron-electron connection). Theoretically, if the electronic resistance is negligibly small, which is the case with our graphite electrodes, measurement with the ion-electron connection should be effective for evaluating ionic conductivity. However, if the electronic resistance is comparable to the ionic resistance, which is the case with our NCM electrodes, the results with the ion-electron connection may contain some inherent inaccuracy. In this report, we theoretically and practically demonstrate the advantage of analyzing the results with the electron-electron connection, which gives both the ionic and electronic conductivities. The similarity of the behavior of the ionic conductivity with the graphite and NCM electrodes confirms the reliability of this analysis. 9. AC Conduction and Time-Temperature Superposition Scaling in a Reduced Graphene Oxide-Zinc Sulfide Nanocomposite. PubMed Chakraborty, Koushik; Das, Poulomi; Chakrabarty, Sankalpita; Pal, Tanusri; Ghosh, Surajit 2016-05-18 We report, herein, the results of an in-depth study and concomitant analysis of the AC conduction [σ'(ω): f=20 Hz to 2 MHz] mechanism in a reduced graphene oxide-zinc sulfide (RGO-ZnS) composite.
The magnitude of the real part of the complex impedance decreases with increase in both frequency and temperature, whereas the imaginary part shows an asymptotic maximum that shifts to higher frequencies with increasing temperature. On the other hand, the conductivity isotherm reveals a frequency-independent conductivity at lower frequencies followed by a dispersive conductivity at higher frequencies, which follows a power law [σ'(ω) ∝ ω^s] within a temperature range of 297 to 393 K. The temperature-independent frequency exponent 's' indicates the occurrence of phonon-assisted simple quantum tunnelling of electrons between the defects present in RGO. Finally, this sample follows the "time-temperature superposition principle", as confirmed from the universal scaling of conductivity isotherms. These outcomes not only pave the way for increasing our fundamental understanding of the transport mechanism in the RGO system, but will also motivate the investigation of the transport mechanism in other order-disorder systems. PMID:26864678 10. Note: Development of a microfabricated sensor to measure thermal conductivity of picoliter scale liquid samples Park, Byoung Kyoo; Yi, Namwoo; Park, Jaesung; Kim, Dongsik 2012-10-01 This paper presents a thermal analysis device which can measure the thermal conductivity of picoliter-scale liquid samples. We employ the three-omega method with a microfabricated AC thermal sensor with a nanometer-width heater. The liquid sample is confined by a micro-well structure fabricated on the sensor surface. The performance of the instrument was verified by measuring the thermal conductivity of 27-picoliter samples of de-ionized (DI) water, ethanol, methanol, and DI water-ethanol mixtures with accuracies better than 3%. Furthermore, another analytical scheme allows real-time thermal conductivity measurement with 5% accuracy. To the best of our knowledge, this technique requires the smallest sample volume of any thermal-property measurement reported to date. 11.
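The RGO-ZnS entry above analyzes ac conduction with the Jonscher-type power law, σ'(ω) = σ_dc + Aω^s, where the exponent s discriminates between hopping models such as CBH and quantum tunnelling. A minimal stdlib-Python sketch of how such a fit can be performed is shown below; the synthetic isotherm and all parameter values are invented for illustration, and this is not the authors' analysis code.

```python
import math

def fit_jonscher(omega, sigma):
    """Fit sigma(omega) = sigma_dc + A * omega**s.
    Grid search over the exponent s; for each trial s the remaining
    linear parameters (sigma_dc, A) follow from ordinary least squares."""
    n = len(omega)
    best = None
    for k in range(91):                      # s on a 0.01 grid in [0.1, 1.0]
        s = 0.1 + 0.01 * k
        u = [w ** s for w in omega]
        su, sy = sum(u), sum(sigma)
        suu = sum(ui * ui for ui in u)
        suy = sum(ui * yi for ui, yi in zip(u, sigma))
        denom = n * suu - su * su
        b = (n * suy - su * sy) / denom      # A
        a = (sy - b * su) / n                # sigma_dc
        res = sum((a + b * ui - yi) ** 2 for ui, yi in zip(u, sigma))
        if best is None or res < best[0]:
            best = (res, a, b, s)
    _, sigma_dc, A, s = best
    return sigma_dc, A, s

# Synthetic conductivity isotherm: a dc plateau plus a dispersive tail
# with s = 0.8 (all numbers invented for illustration).
omega = [10 ** (2 + 5 * i / 199) for i in range(200)]   # 1e2..1e7 rad/s
sigma = [1e-6 + 2e-12 * w ** 0.8 for w in omega]
sigma_dc, A, s = fit_jonscher(omega, sigma)
```

In a real analysis one would repeat the fit on isotherms at several temperatures: an s that is independent of temperature points to tunnelling, while an s that falls with temperature is the signature of correlated barrier hopping.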
Fiber-Optic Devices as Temperature Sensors for Temperature Measurements in AC Magnetic Fields Rablau, Corneliu; Lafrance, Joseph; Sala, Anca 2007-10-01 We report on the investigation of several fiber-optic devices as potential sensors for temperature measurements in AC magnetic fields. Common temperature sensors, such as thermocouples, thermistors or diodes, will create random and/or systematic errors when placed in a magnetic field. A DC magnetic field tends to introduce a systematic offset into the measurement, while an AC magnetic field of variable frequency can also introduce random errors that cannot be corrected for. Fiber Bragg gratings and thin-film filters have an inherent temperature dependence. Detrimental to their primary applications, the same dependence allows one to use such devices as temperature sensors. In an AC magnetic field, they present the advantage of being immune to electromagnetic interference. Moreover, for fiber Bragg gratings, the shape factor and small mass of the bare-fiber device make it convenient for temperature measurements on small samples. We studied several thin-film filters and fiber Bragg gratings and compared their temperature measurement capabilities in AC magnetic fields of 0 to 150 Gauss and 0 to 20 kHz with the results provided by off-the-shelf thermocouple- and thermistor-based temperature measurement systems. 12. Second VAMAS a.c. loss measurement intercomparison: magnetization measurement of low-frequency (hysteretic) a.c. loss in NbTi multifilamentary strands Collings, E. W.; Sumption, M. D.; Itoh, K.; Wada, H.; Tachikawa, K. The results of the 2nd VAMAS measurement intercomparison program on low-frequency (hysteretic) a.c. loss are presented and discussed. Two sets of multifilamentary NbTi strands (Set No. 1: copper matrix, fil. diams 0.5, 1, 3, and 12 μm; Set No. 2: cupronickel matrix, fil. diams 0.4, 0.5, and 1 μm) were subjected to interlaboratory testing.
In an initial series of tests, samples in various forms (e.g. wire bundles, coils) were measured mostly by vibrating-sample- and SQUID magnetometry. Considerable scatter was noted especially in the small-filament-diameter a.c.-loss data. In a study of measurement accuracy, a supplementary series of tests compared the results of VSM measurement of a given pair of copper-matrix samples. In the light of all the results, factors contributing to a.c. loss error are discussed and recommendations are made concerning the specification of future a.c.-loss measurement intercomparisons. 13. Modelling and measurement of ac loss in BSCCO/Ag-tape windings Oomen, M. P.; Nanke, R.; Leghissa, M. 2003-03-01 High-temperature superconducting (HTS) transformers promise decreased weight and volume and higher efficiency. A 1 MVA HTS railway transformer was built and tested at Siemens AG. This paper deals with the prediction of ac loss in the BSCCO/Ag-tape windings. In a railway transformer the tape carries ac current in alternating field, the temperature differs from 77 K, tapes are stacked or cabled and overcurrents and higher harmonics occur. In ac-loss literature these issues are treated separately, if at all. We have developed a model that predicts the ac loss in sets of BSCCO/Ag-tape coils, and deals with the above-mentioned issues. The effect of higher harmonics on the loss in HTS tapes is considered for the first time. The paper gives a complete overview of the model equations and required input parameters. The model is validated over a wide range of the input parameters, using the measured critical current and ac loss of single tapes, single coils and sets of coils in the 1 MVA transformer. An accuracy of around 25% is achieved in all relevant cases. Presently the model is developed further, in order to describe other HTS materials and other types of applications. 14. New ac microammeter for leakage current measurement of biomedical equipment Branca, F. 
P.; Del Prete, Z.; Marinozzi, F. 1993-11-01 A new inexpensive current probe for on-line leakage current measurement of biomedical devices in a hospital environment is described. The prototype is designed to detect and measure leakage currents on the ground wire of the device's power cord so that its integrity can be monitored in real time. Realized with a sensing coil specially matched to a low-noise op amp, this probe adds only negligible impedance to the monitored ground lines. This preliminary study of the device's metrological performance showed a sensitivity of 10 nA rms over a current range of 1-500 μA rms, together with a mean linearity error of 0.03% and a frequency response flat to within 1% of gain from 50 to 2000 Hz. 15. Monitoring colloidal stability of polymer-coated magnetic nanoparticles using AC susceptibility measurements. PubMed Herrera, Adriana P; Barrera, Carola; Zayas, Yashira; Rinaldi, Carlos 2010-02-15 The application of the response of magnetic nanoparticles to oscillating magnetic fields to probe transitions in the colloidal state and structure of polymer-coated nanoparticles is demonstrated. Cobalt ferrite nanoparticles with narrow size distribution were prepared and shown to respond to oscillating magnetic fields through a Brownian relaxation mechanism, which is dependent on the mechanical coupling between the particle dipoles and the surrounding matrix. These nanoparticles were coated with covalently attached poly(N-isopropylacrylamide) (pNIPAM) or poly(N-isopropylmethacrylamide) (pNIPMAM) through free radical polymerization. The temperature-induced transitions of colloidal suspensions of these nanoparticles were studied through a combination of differential scanning calorimetry (DSC), dynamic light scattering (DLS), and AC susceptibility measurements.
For the pNIPAM-coated nanoparticles, excellent agreement was found for a transition temperature of approximately 30 degrees C by all three methods, although the AC susceptibility measurements indicated aggregation which was not evident from the DLS results. Small-angle neutron scattering (SANS) results obtained for pNIPAM-coated nanoparticles confirmed that aggregation indeed occurs above the lower critical transition temperature of pNIPAM. For the pNIPMAM-coated nanoparticles, DLS and AC susceptibility measurements indicated aggregation at a temperature of approximately 33-35 degrees C, much lower than the transition temperature peak at 40 degrees C observed by DSC. However, the transition observed by DSC is very broad, hence it is possible that aggregation begins to occur at temperatures lower than the peak, as indicated by the AC susceptibility and DLS results. These experiments and observations demonstrate the possibility of using AC susceptibility measurements to probe transitions in colloidal suspensions induced by external stimuli. Because magnetic measurements do not require optical transparency, these 16. Calorimetric AC loss measurement of MgB2 superconducting tape in an alternating transport current and direct magnetic field See, K. W.; Xu, X.; Horvat, J.; Cook, C. D.; Dou, S. X. 2012-11-01 Applications of MgB2 superconductors in electrical engineering have been widely reported, and various studies have been made to define their alternating current (AC) losses. However, studies on the transport losses with an applied transverse DC magnetic field have not been conducted, even though this is one of the favored conditions in applications of practical MgB2 tapes. Methods and techniques used to characterize and measure these losses have so far been grouped into ‘electrical’ and ‘calorimetric’ approaches with external conditions set to resemble the application conditions.
In this paper, we present a new approach to mounting the sample and employ the calorimetric method to accurately determine the losses in the concurrent application of AC transport current and DC magnetic fields that are likely to be experienced in practical devices such as generators and motors. This technique provides great simplification compared to the pickup coil and lock-in amplifier methods and is applied to a long (~10 cm) length of superconducting tape. The AC loss data at 20 and 30 K will be presented for an applied 50 Hz transport current under external DC magnetic fields. The results are found to be higher than the theoretical predictions because of the metallic fraction of the tape that contributes quite significantly to the total losses. The data, however, will allow minimization of losses in practical MgB2 coils and will be used in the verification of numerical coil models. 17. Non-Contact Electrical Conductivity Measurement Technique for Molten Metals NASA Technical Reports Server (NTRS) Rhim, W. K.; Ishikawa, T. 1998-01-01 A non-contact technique of measuring the electrical conductivity (or resistivity) of conducting liquids while they are levitated by the high-temperature electrostatic levitator in a high vacuum is reported. 18. Determination of the Si-conducting polymer interfacial properties using A-C impedance techniques NASA Technical Reports Server (NTRS) Nagasubramanian, G.; Di Stefano, Salvador; Moacanin, Jovan 1985-01-01 A study was made of the interfacial properties of poly(pyrrole) (PP) deposited electrochemically onto single-crystal p-Si surfaces. The interfacial properties are dependent upon the counterions. The formation of 'quasi-ohmic' and 'nonohmic' contacts, respectively, of PP(ClO4) and PP films doped with other counterions (BF4 and para-toluene sulfonate) with p-Si, are explained in terms of the conductivity of these films and the flat band potential, V(fb), of PP relative to that of p-Si.
The PP film seems to passivate or block intrinsic surface states present on the p-Si surface. The differences in the impedance behavior of para-toluene sulfonate doped and ClO4 doped PP are compared. 19. Thermal-Conductivity Measurement of Thermoelectric Materials Using 3ω Method Hahtela, O.; Ruoho, M.; Mykkänen, E.; Ojasalo, K.; Nissilä, J.; Manninen, A.; Heinonen, M. 2015-12-01 In this work, a measurement system for high-temperature thermal-conductivity measurements has been designed, constructed, and characterized. The system is based on the 3ω method, which is an ac technique suitable for both bulk and thin-film samples. The thermal-conductivity measurements were performed in a horizontal three-zone tube furnace whose sample space can be evacuated to vacuum; alternatively, a protective argon gas environment can be applied to prevent undesired oxidation and contamination of the sample material. The system was tested with several dielectric, semiconductor, and metal bulk samples from room temperature up to 725 K. The test materials were chosen so that the thermal-conductivity values covered a wide range from 0.37 W·m⁻¹·K⁻¹ to 150 W·m⁻¹·K⁻¹. An uncertainty analysis for the thermal-conductivity measurements was carried out. The measurement accuracy is mainly limited by the determination of the third harmonic of the voltage over the resistive metal heater strip that is used for heating the sample. A typical relative measurement uncertainty in the thermal-conductivity measurements was between 5% and 8% (k = 2). An extension of the 3ω method was also implemented in which the metal heater strip is first deposited on a transferable Kapton foil. Utilizing such a prefabricated sensor allows for faster measurements of the samples, as there is no need to deposit a heater strip on each new sample. 20.
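Two entries above (the picoliter sensor and the thermoelectric measurement system) rely on the 3ω technique: driving a heater at ω makes Joule heating oscillate at 2ω, which modulates the heater resistance and produces a small voltage component at 3ω whose amplitude encodes the temperature oscillation, and hence the sample's thermal conductivity. The harmonic bookkeeping can be checked numerically with a short sketch; this is a toy model with invented parameter values, not either group's analysis code.

```python
import math

# Drive a heater of resistance R0 with I(t) = I0*cos(w*t). The 2w Joule
# heating modulates temperature and resistance,
#   R(t) = R0*(1 + alpha*dT*cos(2*w*t + phi)),
# so V = I*R acquires a third harmonic of amplitude V3 = I0*R0*alpha*dT/2,
# the quantity measured in the 3-omega method. Values below are invented.
f = 11.0                             # drive frequency, Hz
w = 2 * math.pi * f
I0, R0 = 10e-3, 50.0                 # drive current (A), heater resistance (ohm)
alpha, dT, phi = 3.5e-3, 0.2, -0.6   # TCR (1/K), temperature swing (K), phase

n = 200000                           # one second of data: 11 full drive periods
dt = 1.0 / n
a = b = 0.0
for i in range(n):
    t = i * dt
    V = I0 * math.cos(w * t) * R0 * (1 + alpha * dT * math.cos(2 * w * t + phi))
    a += V * math.cos(3 * w * t)     # digital lock-in projection at 3w
    b += V * math.sin(3 * w * t)
V3_measured = 2 * math.hypot(a / n, b / n)
V3_expected = 0.5 * I0 * R0 * alpha * dT
```

Because cos(ωt)·cos(2ωt+φ) = ½[cos(3ωt+φ) + cos(ωt+φ)], the recovered third-harmonic amplitude should equal I0·R0·α·ΔT/2 exactly when the record spans an integer number of periods.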
Studies of structural, optical, dielectric relaxation and ac conductivity of different alkylbenzenesulfonic acids doped polypyrrole nanofibers Hazarika, J.; Kumar, A. 2016-01-01 Polypyrrole (PPy) nanofibers doped with alkylbenzenesulfonic acids (ABSA) have been synthesized using an interfacial polymerization method. HRTEM studies confirm the formation of PPy nanofibers with average diameters ranging from 13 nm to 25 nm. A broad X-ray diffraction peak in the 2θ range 20-23.46° reveals the amorphous structure of the PPy nanofibers. The ordering or crystallinity of the polymer chains increases, while their interplanar spacing (d) and interchain separation (R) decrease, for short alkyl chain ABSA doped PPy nanofibers. FTIR studies reveal that short alkyl chain ABSA doped PPy nanofibers show a higher value of "effective conjugation length". PPy nanofibers doped with short alkyl chain ABSA dopants exhibit a smaller optical band gap. TGA studies show enhanced thermal stability of short alkyl chain ABSA doped PPy nanofibers. The decrease in dielectric permittivity ε′(ω) with increasing frequency suggests the presence of electrode polarization effects. The linear decrease in dielectric loss ε″(ω) with increasing frequency suggests a dominant effect of the dc conductivity process. The low value of the non-exponential exponent β (<1) reveals non-Debye relaxation of the charge carriers. Scaling of the imaginary modulus (M″) reveals that the charge carriers follow the same relaxation mechanism. Moreover, the charge carriers in PPy nanofibers follow the correlated barrier hopping (CBH) transport mechanism. 1. Measurement of solar cell ac parameters using the time domain technique Deshmukh, M. P.; Kumar, R. Anil; Nagaraju, J. 2004-08-01 The instrumentation to measure solar cell ac parameters [cell capacitance (CP) and cell resistance (RP)] using the time domain technique is developed. The cell capacitance (CP) and series resistance (r) are calculated using the open circuit voltage decay (OCVD) technique.
It is calibrated with the help of an electrical network of passive components, similar to the ac equivalent circuit of a solar cell, consisting of precision resistors and capacitors. The maximum error observed in the measurement of the resistor and capacitor values is ±3.5%. The cell resistance (RP) is calculated from the I-V characteristics of the solar cell. The data obtained with the time domain technique are compared with impedance spectroscopy data measured on the same solar cell, and the deviations in cell capacitance and resistance are found to be within ±8%. 2. Effective method to measure back emfs and their harmonics of permanent magnet ac motors Jiang, Q.; Bi, C.; Lin, S. 2006-04-01 As HDD spindle motors become smaller and smaller, back electromotive force (emf) measurement faces new challenges due to their low inertias and small sizes. This article proposes a novel method to measure the back emfs and their harmonic components of PM ac motors through a freewheeling procedure alone. To eliminate the influence of the freewheeling deceleration, the phase flux linkages are employed to obtain the back emf amplitudes and phases of the fundamental and harmonic components by using finite Fourier series analysis. The proposed method makes the freewheeling measurement of the back emfs and their harmonics accurate and fast. It is especially useful for low-inertia PM ac motors, such as spindle motors for small form factor HDDs. 3. Measurement of the thermal contact conductance and thermal conductivity of anodized aluminum coatings SciTech Connect Peterson, G.P.; Fletcher, L.S. 1990-08-01 An experimental investigation was conducted to determine the thermal contact conductance and effective thermal conductivity of anodized coatings. One chemically polished Aluminum 6061-T6 test specimen and seven specimens with anodized coatings varying in thickness from 60.9 μm to 163.8 μm were tested while in contact with a single unanodized aluminum surface.
Measurements of the overall joint conductance, composed of the thermal contact conductance between the anodized coating and the bare aluminum surface and the bulk conductance of the coating material, indicated that the overall joint conductance decreased with increasing thickness of the anodized coating and increased with increasing interfacial load. Using the experimental data, a dimensionless expression was developed that related the overall joint conductance to the coating thickness, the surface roughness, the interfacial pressure, and the properties of the aluminum substrate. By subtracting the thermal contact conductance from the measured overall joint conductance, estimates of the effective thermal conductivity of the anodized coating as a function of pressure were obtained for each of the seven anodized specimens. At an extrapolated pressure of zero, the effective thermal conductivity was found to be approximately 0.02 W/m-K. In addition to this extrapolated value, a single expression for predicting the effective thermal conductivity as a function of both the interface pressure and the anodized coating thickness was developed and shown to be within ±5 percent of the experimental data over a pressure range of 0 to 14 MPa. 4. AC magnetic measurements of the ALS Booster Dipole Engineering Model Magnet SciTech Connect Green, M.I.; Keller, R.; Nelson, D.H.; Hoyer, E. 1989-03-01 10 Hz sine wave and 2 Hz sawtooth AC magnetic measurements of the curved ALS Booster Dipole Engineering Model Magnet have been accomplished. Long curved coils were utilized to measure the integral transfer function and uniformity. Point coils and a Hall probe were used to measure the magnetic induction and its uniformity. The data were logged and processed by a Tektronix 11401 digital oscilloscope. The dependence of the effective length on the field was determined from the ratio of the integral coil signals to the point coil signals.
Quadrupole and sextupole harmonics were derived from the point and integral uniformity measurements. 5 refs., 4 figs., 2 tabs. 5. Direct Measurement of Ab and Ac at the Z0 Pole Using a Lepton Tag Abe, Kenji; Abe, Koya; Abe, T.; Adam, I.; Akagi, T.; Allen, N. J.; Arodzero, A.; Ash, W. W.; Aston, D.; Baird, K. G.; Baltay, C.; Band, H. R.; Barakat, M. B.; Bardon, O.; Barklow, T. L.; Bashindzhagyan, G. L.; Bauer, J. M.; Bellodi, G.; Ben-David, R.; Benvenuti, A. C.; Bilei, G. M.; Bisello, D.; Blaylock, G.; Bogart, J. R.; Bolen, B.; Bower, G. R.; Brau, J. E.; Breidenbach, M.; Bugg, W. M.; Burke, D.; Burnett, T. H.; Burrows, P. N.; Byrne, R. M.; Calcaterra, A.; Calloway, D.; Camanzi, B.; Carpinelli, M.; Cassell, R.; Castaldi, R.; Castro, A.; Cavalli-Sforza, M.; Chou, A.; Church, E.; Cohn, H. O.; Coller, J. A.; Convery, M. R.; Cook, V.; Cotton, R.; Cowan, R. F.; Coyne, D. G.; Crawford, G.; Damerell, C. J.; Danielson, M. N.; Daoudi, M.; de Groot, N.; dell'Orso, R.; Dervan, P. J.; de Sangro, R.; Dima, M.; D'Oliveira, A.; Dong, D. N.; Doser, M.; Dubois, R.; Eisenstein, B. I.; Eschenburg, V.; Etzion, E.; Fahey, S.; Falciai, D.; Fan, C.; Fernandez, J. P.; Fero, M. J.; Flood, K.; Frey, R.; Gillman, T.; Gladding, G.; Gonzalez, S.; Goodman, E. R.; Hart, E. L.; Harton, J. L.; Hasan, A.; Hasuko, K.; Hedges, S. J.; Hertzbach, S. S.; Hildreth, M. D.; Huber, J.; Huffer, M. E.; Hughes, E. W.; Huynh, X.; Hwang, H.; Iwasaki, M.; Jackson, D. J.; Jacques, P.; Jaros, J. A.; Jiang, Z. Y.; Johnson, A. S.; Johnson, J. R.; Johnson, R. A.; Junk, T.; Kajikawa, R.; Kalelkar, M.; Kamyshkov, Y.; Kang, H. J.; Karliner, I.; Kawahara, H.; Kim, Y. D.; King, R.; King, M. E.; Kofler, R. R.; Krishna, N. M.; Kroeger, R. S.; Langston, M.; Lath, A.; Leith, D. W.; Lia, V.; Lin, C.-J. S.; Liu, X.; Liu, M. X.; Loreti, M.; Lu, A.; Lynch, H. L.; Ma, J.; Mahjouri, M.; Mancinelli, G.; Manly, S.; Mantovani, G.; Markiewicz, T. W.; Maruyama, T.; Masuda, H.; Mazzucato, E.; McKemey, A. K.; Meadows, B. 
T.; Menegatti, G.; Messner, R.; Mockett, P. M.; Moffeit, K. C.; Moore, T. B.; Morii, M.; Muller, D.; Murzin, V.; Nagamine, T.; Narita, S.; Nauenberg, U.; Neal, H.; Nussbaum, M.; Oishi, N.; Onoprienko, D.; Osborne, L. S.; Panvini, R. S.; Park, H.; Park, C. H.; Pavel, T. J.; Peruzzi, I.; Piccolo, M.; Piemontese, L.; Pieroni, E.; Pitts, K. T.; Plano, R. J.; Prepost, R.; Prescott, C. Y.; Punkar, G. D.; Quigley, J.; Ratcliff, B. N.; Reeves, T. W.; Reidy, J.; Reinertsen, P. L.; Rensing, P. E.; Rochester, L. S.; Rowson, P. C.; Russell, J. J.; Saxton, O. H.; Schalk, T.; Schindler, R. H.; Schumm, B. A.; Schwiening, J.; Sen, S.; Serbo, V. V.; Shaevitz, M. H.; Shank, J. T.; Shapiro, G.; Sherden, D. J.; Shmakov, K. D.; Simopoulos, C.; Sinev, N. B.; Smith, S. R.; Smy, M. B.; Snyder, J. A.; Staengle, H.; Stahl, A.; Stamer, P.; Steiner, R.; Steiner, H.; Strauss, M. G.; Su, D.; Suekane, F.; Sugiyama, A.; Suzuki, S.; Swartz, M.; Szumilo, A.; Takahashi, T.; Taylor, F. E.; Thom, J.; Torrence, E.; Toumbas, N. K.; Usher, T.; Vannini, C.; Va'Vra, J.; Vella, E.; Venuti, J. P.; Verdier, R.; Verdini, P. G.; Wagner, S. R.; Wagner, D. L.; Waite, A. P.; Walston, S.; Wang, J.; Ward, C.; Watts, S. J.; Weidemann, A. W.; Weiss, E. R.; Whitaker, J. S.; White, S. L.; Wickens, F. J.; Williams, B.; Williams, D. C.; Williams, S. H.; Willocq, S.; Wilson, R. J.; Wisniewski, W. J.; Wittlin, J. L.; Woods, M.; Word, G. B.; Wright, T. R.; Wyss, J.; Yamamoto, R. K.; Yamartino, J. M.; Yang, X.; Yashima, J.; Yellin, S. J.; Young, C. C.; Yuta, H.; Zapalac, G.; Zdarko, R. W.; Zhou, J. 1999-10-01 The parity violation parameters Ab and Ac of the Zbb¯ and Zcc¯ couplings have been measured directly, using the polar angle dependence of the Z0-pole polarized cross sections. Bottom and charmed hadrons were tagged via semileptonic decays. 
Both the muon and electron identification algorithms take advantage of new multivariate techniques, incorporating for the first time information from the SLD Čerenkov Ring Imaging Detector. Based on the 1993-1995 SLD sample of 150 000 Z0 decays produced with highly polarized electron beams, we measure Ab = 0.910 ± 0.068 (stat) ± 0.037 (syst), Ac = 0.642 ± 0.110 (stat) ± 0.063 (syst). 6. Ac hysteresis loop measurement of stator-tooth in induction motor SciTech Connect Son, D. 1999-09-01 The properties of the ac hysteresis loop of a stator tooth in a 5 hp induction motor were measured and analyzed. Increasing the load on the motor decreased the magnetic induction but increased the minor hysteresis loops in the high-induction region. This effect caused an increase in the core loss. Depending on the condition of the motor, the core loss of the stator tooth can be 50% greater than the core loss under a sinusoidal magnetic induction waveform. 7. Crystal structure and AC conductivity mechanism of [N(C3H7)4]2CoCl4 compound Moutia, N.; Oueslati, A.; Ben Gzaiel, M.; Khirouni, K. 2016-09-01 We found that the new organic-inorganic compound [N(C3H7)4]2CoCl4 crystallizes at room temperature in the centrosymmetric monoclinic system with P21/c space group. The atomic arrangement can be described by an alternation of organic and organic-inorganic layers parallel to the (001) plane. Indeed, the differential scanning calorimetry (DSC) studies indicate the presence of three order-disorder phase transitions located at 332, 376 and 441 K. Furthermore, the conductivity was measured in the frequency range from 200 Hz to 5 MHz and at temperatures between 318 K and 428 K using impedance spectroscopy. Analysis of the AC conductivity experimental data, together with the frequency exponent s and theoretical models, reveals that the correlated barrier hopping (CBH) model is the appropriate mechanism for conduction in the title compound.
The analysis of the dielectric constants ε′ and ε″ versus temperature, at several frequencies, shows a distribution of relaxation times. This relaxation is probably due to the reorientational dynamics of [N(C3H7)4]+ cations. 8. Measuring the local electrical conductivity of human brain tissue Akhtari, M.; Emin, D.; Ellingson, B. M.; Woodworth, D.; Frew, A.; Mathern, G. W. 2016-02-01 The electrical conductivities of freshly excised brain tissues from 24 patients were measured. The diffusion-MRI of the hydrogen nuclei of water molecules from regions that were subsequently excised was also measured. Analysis of these measurements indicates that differences between samples' conductivities are primarily due to differences of their densities of solvated sodium cations. Concomitantly, the sample-to-sample variations of their diffusion constants are relatively small. This finding suggests that non-invasive in-vivo measurements of brain tissues' local sodium-cation density can be utilized to estimate its local electrical conductivity. 9. Comparison of Measured and Estimated Unsaturated Hydraulic Conductivity Parkes, M. E.; Waters, P. A. 1980-08-01 Most studies of empirical estimates of unsaturated hydraulic conductivity functions do not account for water which may be relatively immobile under the conditions in which field measurements of conductivity are made. To investigate this, unsaturated hydraulic conductivity data were obtained for three monolith lysimeters, 80 cm in diameter by 135 cm deep, using the instantaneous profile technique. The lysimeters contained well-structured, freely draining loam soil and moisture measurements were made using a neutron probe. Conductivity estimates were also obtained from laboratory measurements of soil moisture characteristics using the modified Millington and Quirk computational method.
Ratios of the calculated to measured conductivities at a matching point near saturation were so large as to suggest that only a minor proportion of the soil pore space was contributing to flow through the whole profile. 10. Direct measurements of Ab and Ac using vertex and kaon charge tags at the SLAC detector. PubMed Abe, Koya; Abe, Kenji; Abe, T; Adam, I; Akimoto, H; Aston, D; Baird, K G; Baltay, C; Band, H R; Barklow, T L; Bauer, J M; Bellodi, G; Berger, R; Blaylock, G; Bogart, J R; Bower, G R; Brau, J E; Breidenbach, M; Bugg, W M; Burke, D; Burnett, T H; Burrows, P N; Calcaterra, A; Cassell, R; Chou, A; Cohn, H O; Coller, J A; Convery, M R; Cook, V; Cowan, R F; Crawford, G; Damerell, C J S; Daoudi, M; Dasu, S; de Groot, N; de Sangro, R; Dong, D N; Doser, M; Dubois, R; Erofeeva, I; Eschenburg, V; Etzion, E; Fahey, S; Falciai, D; Fernandez, J P; Flood, K; Frey, R; Hart, E L; Hasuko, K; Hertzbach, S S; Huffer, M E; Huynh, X; Iwasaki, M; Jackson, D J; Jacques, P; Jaros, J A; Jiang, Z Y; Johnson, A S; Johnson, J R; Kajikawa, R; Kalelkar, M; Kang, H J; Kofler, R R; Kroeger, R S; Langston, M; Leith, D W G; Lia, V; Lin, C; Mancinelli, G; Manly, S; Mantovani, G; Markiewicz, T W; Maruyama, T; McKemey, A K; Messner, R; Moffeit, K C; Moore, T B; Morii, M; Muller, D; Murzin, V; Narita, S; Nauenberg, U; Neal, H; Nesom, G; Oishi, N; Onoprienko, D; Osborne, L S; Panvini, R S; Park, C H; Peruzzi, I; Piccolo, M; Piemontese, L; Plano, R J; Prepost, R; Prescott, C Y; Ratcliff, B N; Reidy, J; Reinertsen, P L; Rochester, L S; Rowson, P C; Russell, J J; Saxton, O H; Schalk, T; Schumm, B A; Schwiening, J; Serbo, V V; Shapiro, G; Sinev, N B; Snyder, J A; Staengle, H; Stahl, A; Stamer, P; Steiner, H; Su, D; Suekane, F; Sugiyama, A; Suzuki, A; Swartz, M; Taylor, F E; Thom, J; Torrence, E; Usher, T; Va'vra, J; Verdier, R; Wagner, D L; Waite, A P; Walston, S; Weidemann, A W; Weiss, E R; Whitaker, J S; Williams, S H; Willocq, S; Wilson, R J; Wisniewski, W J; Wittlin, J L; Woods, M; Wright, 
T R; Yamamoto, R K; Yashima, J; Yellin, S J; Young, C C; Yuta, H 2005-03-11 Exploiting the manipulation of the SLAC Linear Collider electron-beam polarization, we present precise direct measurements of the parity-violation parameters Ac and Ab in the Z-boson-c-quark and Z-boson-b-quark couplings. Quark-antiquark discrimination is accomplished via a unique algorithm that takes advantage of the precise SLAC Large Detector charge coupled device vertex detector, employing the net charge of displaced vertices as well as the charge of kaons that emanate from those vertices. From the 1996-1998 sample of 400 000 Z decays, produced with an average beam polarization of 73.4%, we find Ac = 0.673 ± 0.029 (stat) ± 0.023 (syst) and Ab = 0.919 ± 0.018 (stat) ± 0.017 (syst). PMID:15783953 11. Direct Detection of Pure ac Spin Current by X-Ray Pump-Probe Measurements. PubMed Li, J; Shelford, L R; Shafer, P; Tan, A; Deng, J X; Keatley, P S; Hwang, C; Arenholz, E; van der Laan, G; Hicken, R J; Qiu, Z Q 2016-08-12 Despite recent progress in spin-current research, the detection of spin current has mostly remained indirect. By synchronizing a microwave waveform with synchrotron x-ray pulses, we use the ferromagnetic resonance of the Py (Ni81Fe19) layer in a Py/Cu/Cu75Mn25/Cu/Co multilayer to pump a pure ac spin current into the Cu75Mn25 and Co layers, and then directly probe the spin current within the Cu75Mn25 layer and the spin dynamics of the Co layer by x-ray magnetic circular dichroism. This element-resolved pump-probe measurement unambiguously identifies the ac spin current in the Cu75Mn25 layer. PMID:27563981 12. Direct Detection of Pure ac Spin Current by X-Ray Pump-Probe Measurements Li, J.; Shelford, L. R.; Shafer, P.; Tan, A.; Deng, J. X.; Keatley, P. S.; Hwang, C.; Arenholz, E.; van der Laan, G.; Hicken, R. J.; Qiu, Z. Q. 2016-08-01 Despite recent progress in spin-current research, the detection of spin current has mostly remained indirect.
By synchronizing a microwave waveform with synchrotron x-ray pulses, we use the ferromagnetic resonance of the Py (Ni81Fe19) layer in a Py/Cu/Cu75Mn25/Cu/Co multilayer to pump a pure ac spin current into the Cu75Mn25 and Co layers, and then directly probe the spin current within the Cu75Mn25 layer and the spin dynamics of the Co layer by x-ray magnetic circular dichroism. This element-resolved pump-probe measurement unambiguously identifies the ac spin current in the Cu75Mn25 layer. 13. Recent Advances in AC-DC Transfer Measurements Using Thin-Film Thermal Converters SciTech Connect WUNSCH,THOMAS F.; KINARD,JOSEPH R.; MANGINELL,RONALD P.; LIPE,THOMAS E.; SOLOMON JR.,OTIS M.; JUNGLING,KENNETH C. 2000-12-08 New standards for ac current and voltage measurements, thin-film multijunction thermal converters (MJTCs), have been fabricated using thin-film and micro-electro-mechanical systems (MEMS) technology. Improved sensitivity and accuracy over single-junction thermoelements and targeted performance will allow new measurement approaches in traditionally troublesome areas such as the low-frequency and high-current regimes. A review is presented of new microfabrication techniques and packaging methods that have resulted from a collaborative effort at Sandia National Laboratories and the National Institute of Standards and Technology (NIST). 14. A wide-frequency range AC magnetometer to measure the specific absorption rate in nanoparticles for magnetic hyperthermia Garaio, E.; Collantes, J. M.; Garcia, J. A.; Plazaola, F.; Mornet, S.; Couillaud, F.; Sandre, O. 2014-11-01 Measurement of the specific absorption rate (SAR) of magnetic nanoparticles is crucial to assert their potential for magnetic hyperthermia. To perform this task, calorimetric methods are widely used. However, those methods are not very accurate and are difficult to standardize.
In this paper, we present AC magnetometry results performed with a lab-made magnetometer that is able to obtain dynamic hysteresis loops in the AC magnetic field frequency range from 50 kHz to 1 MHz and intensities up to 24 kA m-1. In this work, SAR values of maghemite nanoparticles dispersed in water are measured by AC magnetometry. The so-obtained values are compared with the SAR measured by calorimetric methods. Both measurements, by calorimetry and magnetometry, are in good agreement. Therefore, the presented AC magnetometer is a suitable way to obtain SAR values of magnetic nanoparticles. 15. Theory of the ac spin valve effect: a new method to measure spin relaxation time Kochan, Denis; Gmitra, Martin; Fabian, Jaroslav 2012-02-01 Parallel (P) and antiparallel (AP) configurations of FNF junctions have, in a dc regime, different resistivities (RAP>RP), giving rise to the giant magnetoresistance (GMR) effect, which can be explained within the spin injection drift-diffusion model. We extend the model to include ac phenomena and predict a new spin dynamical phenomenon: the resonant amplification and depletion of spin accumulation in the P and AP configurations, respectively. As the major new effect, the spin valve magnetoimpedance of the FNF junction oscillates with the driving ac frequency, which leads to a negative GMR effect (|ZAP|<|ZP|). We show that from the spin-valve oscillation periods, measured all-electrically in the GHz regime, the spin relaxation times could be extracted without any magnetic field and sample size changes (contrary to other techniques). For thin tunnel junctions the ac signal becomes purely Lorentzian, also enabling one to obtain the spin relaxation time of the N region from the signal width. This work was published in Physical Review Letters 107, 176604 (2011). 16. In situ measurement of conductivity during nanocomposite film deposition Blattmann, Christoph O.; Pratsinis, Sotiris E.
2016-05-01 Flexible and electrically conductive nanocomposite films are essential for small, portable and even implantable electronic devices. Typically, such film synthesis and conductivity measurement are carried out sequentially. As a result, optimization of filler loading and size/morphology characteristics with respect to film conductivity is rather tedious and costly. Here, freshly-made Ag nanoparticles (nanosilver) are produced by scalable flame aerosol technology and directly deposited onto polymeric (polystyrene and poly(methyl methacrylate)) films, during which the resistance of the resulting nanocomposite is measured in situ. The formation and gas-phase growth of such flame-made nanosilver, just before incorporation onto the polymer film, is measured by thermophoretic sampling and microscopy. Monitoring the nanocomposite resistance in situ reveals the onset of conductive network formation by the deposited nanosilver growth and sinter-necking. The in situ measurement is much faster and more accurate than conventional ex situ four-point resistance measurements since an electrically percolating network is detected upon its formation by the in situ technique. Nevertheless, general resistance trends with respect to filler loading and host polymer composition are consistent for both in situ and ex situ measurements. The time lag for the onset of a conductive network (i.e., percolation) depends linearly on the glass transition temperature (Tg) of the host polymer. This is attributed to the increased nanoparticle-polymer interaction with decreasing Tg. Proper selection of the host polymer in combination with in situ resistance monitoring therefore enables the optimal preparation of conductive nanocomposite films. 17. Mutation Glu82Lys in lamin A/C gene is associated with cardiomyopathy and conduction defect SciTech Connect Wang Hu; Wang Jizheng; Zheng Weiyue; Wang Xiaojian; Wang Shuxia; Song Lei; Zou Yubao; Yao Yan; Hui Rutai.
E-mail: [email protected] 2006-05-26 Dilated cardiomyopathy is a form of heart muscle disease characterized by impaired systolic function and ventricular dilation. Mutations in the lamin A/C gene have been linked to dilated cardiomyopathy. We screened genetic mutations in a large Chinese family of 50 members including members with dilated cardiomyopathy and found a Glu82Lys substitution mutation in the rod domain of the lamin A/C protein in eight family members, three of whom had been diagnosed with dilated cardiomyopathy and one of whom presented with heart dilation. The pathogenic mechanism of the lamin A/C gene defect is poorly understood. Glu82Lys-mutated lamin A/C and wild-type protein were transfected into HEK293 cells. The mutated protein was not properly localized at the inner nuclear membrane, and the emerin protein, which interacts with lamin A/C, was also aberrantly distributed. The nuclear membrane structure was disrupted and heterochromatin was aggregated aberrantly in the nucleus of the HEK293 cells stably transfected with the mutated lamin A/C gene, as determined by transmission electron microscopy. 18. Measurement of volume resistivity/conductivity of metallic alloy in inhibited seawater by optical interferometry techniques SciTech Connect Habib, K. 2011-03-15 Optical interferometry techniques were used for the first time to measure the volume resistivity/conductivity of carbon steel samples in seawater with different concentrations of a corrosion inhibitor. In this investigation, real-time holographic interferometry was carried out to measure the thickness of the anodic dissolved layer or the total thickness, U_total, of the formed oxide layer of carbon steel samples during the alternating current (ac) impedance of the samples in blank seawater and in 5-20 ppm TROS C-70 inhibited seawater, respectively.
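As a side note on the holographic measurement above: in reflection holographic interferometry, surface displacement is commonly recovered from the interference fringe count, one fringe per half wavelength of out-of-plane motion. A hedged sketch (normal-incidence geometry and a HeNe wavelength are assumptions for illustration, not details from the paper):

```python
import math

def displacement_from_fringes(n_fringes, wavelength_m, incidence_deg=0.0):
    # Reflection geometry: one fringe per lambda/2 of out-of-plane displacement,
    # corrected for oblique illumination.
    return n_fringes * wavelength_m / (2.0 * math.cos(math.radians(incidence_deg)))

# Hypothetical: 10 fringes at 632.8 nm, normal incidence -> 3.164 micrometres.
u = displacement_from_fringes(10, 632.8e-9)
```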
In addition, a mathematical model was derived in order to correlate the ac impedance (resistance) with the (orthogonal) displacement of the surface of the samples in solution. In other words, a proportionality constant [resistivity (ρ), or conductivity (σ) = 1/ρ] between the determined ac impedance [by the electrochemical impedance spectroscopy (EIS) technique] and the orthogonal displacement (by the optical interferometry techniques) was obtained. The value of the resistivity of the carbon steel sample in the blank seawater was found to be similar to the value of the resistivity of the carbon steel sample in air, around 1 × 10^-5 Ω cm. On the contrary, the measured values of the resistivity of the carbon steel samples were 1.85 × 10^7, 3.35 × 10^7, and 1.7 × 10^7 Ω cm in 5, 10, and 20 ppm TROS C-70 inhibited seawater solutions, respectively. Furthermore, the determined range of ρ of the formed oxide layers, from 1.7 × 10^7 to 3.35 × 10^7 Ω cm, is in reasonable agreement with the value found in the literature for the Fe oxide-hydroxides goethite (α-FeOOH) and lepidocrocite (γ-FeOOH), 1 × 10^9 Ω cm. The ρ value of the Fe oxide-hydroxides, 1 × 10^9 Ω cm, is slightly higher than the ρ range of the formed oxide layer of the present study. This is because the former value was determined 19. Measurement of thermal conductivity in proton irradiated silicon SciTech Connect Marat Khafizov; Clarissa Yablinsky; Todd Allen; David Hurley 2014-04-01 We investigate the influence of proton irradiation on thermal conductivity in single-crystal silicon. We apply a laser-based modulated thermoreflectance technique to extract the change in conductivity of the thin layer damaged by proton irradiation. Unlike time-domain thermoreflectance techniques that require application of a metal film, we perform our measurement on uncoated samples.
This provides greater sensitivity to the change in conductivity of the thin damaged layer. Using sample temperature as a parameter provides a means to deduce the primary defect structures that limit thermal transport. We find that under high-temperature irradiation the degradation of thermal conductivity is caused primarily by extended defects. 20. Thermal conductivity of halide solid solutions: measurement and prediction. PubMed Gheribi, Aïmen E; Poncsák, Sándor; St-Pierre, Rémi; Kiss, László I; Chartrand, Patrice 2014-09-14 The composition dependence of the lattice thermal conductivity in NaCl-KCl solid solutions has been measured as a function of composition and temperature. Samples with systematically varied compositions were prepared, and the laser flash technique was used to determine the thermal diffusivity from 373 K to 823 K. A theoretical model based on the Debye approximation of the phonon density of states (which contains no adjustable parameters) was used to predict the thermal conductivity of both stoichiometric compounds and fully disordered solid solutions. The predictions obtained with the model agree very well with our measurements. A general method for predicting the thermal conductivity of different halide systems is discussed. PMID:25217938 1. Measurements of prompt radiation induced conductivity of Kapton. SciTech Connect Preston, Eric F.; Zarick, Thomas Andrew; Sheridan, Timothy J.; Hartman, E. Frederick; Stringer, Thomas Arthur 2010-10-01 We performed measurements of the prompt radiation induced conductivity in thin samples of Kapton (polyimide) at the Little Mountain Medusa LINAC facility in Ogden, UT. Three mil samples were irradiated with a 0.5 μs pulse of 20 MeV electrons, yielding dose rates of 1E9 to 1E10 rad/s. We applied variable potentials up to 2 kV across the samples and measured the prompt conduction current.
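The radiation-induced-conductivity numbers in abstracts like this one follow from Ohm's law applied to the biased film, σ = I·d/(V·A), with the prompt coefficient being σ divided by the dose rate. A sketch with invented geometry and current (only the 3 mil thickness echoes the abstract; the rest are assumptions):

```python
def prompt_conductivity(current_A, voltage_V, area_m2, thickness_m):
    # sigma = J / E = (I / A) / (V / d)
    return current_A * thickness_m / (voltage_V * area_m2)

def ric_coefficient(sigma_S_per_m, dose_rate_rad_per_s):
    # Prompt RIC coefficient k_p, in (S/m) per (rad/s).
    return sigma_S_per_m / dose_rate_rad_per_s

# Hypothetical: 1.3 mA through a 3 mil (76.2 um) film, 1 cm^2 electrode,
# 1 kV bias, during a 1E10 rad/s pulse.
sigma = prompt_conductivity(1.3e-3, 1000.0, 1.0e-4, 76.2e-6)
k_p = ric_coefficient(sigma, 1.0e10)
```

With these invented numbers k_p lands near 1E-16 (S/m)/(rad/s), the same order as the coefficients quoted for Kapton below.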
Analysis rendered prompt conductivity coefficients between 6E-17 and 2E-16 mhos/m per rad/s, depending on the dose rate and the pulse width. 2. Indirect measurement of thermal conductivity in silicon nanowires SciTech Connect Pennelli, Giovanni; Nannini, Andrea; Macucci, Massimo 2014-02-28 We report indirect measurements of thermal conductivity in silicon nanostructures. We have exploited a measurement technique based on the Joule self-heating of silicon nanowires. A standard model for the electron mobility has been used to determine the temperature through the accurate measurement of the nanowire resistance. We have applied this technique to devices fabricated with a top-down process that yields nanowires together with large silicon areas used both as electrical and as thermal contacts. As there is crystalline continuity between the nanowires and the large contact areas, our thermal conductivity measurements are not affected by any temperature drop due to the contact thermal resistance. Our results confirm the observed reduction of thermal conductivity in nanostructures and are comparable with those previously reported in the literature, achieved with more complex measurement techniques. 3. Non-Contact Conductivity Measurement for Automated Sample Processing Systems NASA Technical Reports Server (NTRS) Beegle, Luther W.; Kirby, James P. 2012-01-01 A new method has been developed for monitoring and control of automated sample processing and preparation, especially focusing on desalting of samples before analytical analysis (described in more detail in Automated Desalting Apparatus (NPO-45428), NASA Tech Briefs, Vol. 34, No. 8 (August 2010), page 44). The use of non-contact conductivity probes, one at the inlet and one at the outlet of the solid-phase sample preparation media, allows monitoring of the process, and acts as a trigger for the start of the next step in the sequence (see figure).
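The conductivity-probe trigger just described is, in essence, threshold logic: each processing step is considered complete when the outlet probe returns to a low-conductivity (flushed) state. A hypothetical sketch (the threshold value and step names are invented for illustration, not taken from the NASA brief):

```python
# Hypothetical trigger: advance the protocol when the outlet probe shows the
# system has returned to a low-conductivity (flushed) state.
LOW_US_PER_CM = 5.0  # assumed "flushed" threshold, microsiemens/cm

def flushed(outlet_conductivity_uS_per_cm):
    return outlet_conductivity_uS_per_cm < LOW_US_PER_CM

def run_protocol(readings, steps):
    """Walk through `steps`, marking one complete each time a reading flushes."""
    done, step_iter = [], iter(steps)
    current = next(step_iter, None)
    for r in readings:
        if current is not None and flushed(r):
            done.append(current)
            current = next(step_iter, None)
    return done

# Two invented steps; conductivity spikes during each, then drops after flushing.
executed = run_protocol([850.0, 3.2, 900.0, 2.8], ["load", "acidify"])
```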
At each step of the multi-step process, the system is flushed with low-conductivity water, which sets the system back to an overall low-conductivity state. This measurement then triggers the next stage of sample processing protocols, and greatly minimizes use of consumables. In the case of amino acid sample preparation for desalting, the conductivity measurement will define three key conditions for the sample preparation process. First, when the system is neutralized (low conductivity, by washing with excess de-ionized water); second, when the system is acidified, by washing with a strong acid (high conductivity); and third, when the system is at a basic condition of high pH (high conductivity). Taken together, this non-contact conductivity measurement for monitoring sample preparation will not only facilitate automation of the sample preparation and processing, but will also act as a way to optimize the operational time and use of consumables. 4. Estimation of Ionospheric Conductivity Based on the Measurements by SuperDARN Lee, Eun-Ah; An, Byung-Ho; Yi, Yu 2002-06-01 The ionosphere plays an important role in the electrodynamics of the space environment. In particular, information on the ionospheric conductivity distribution is indispensable in understanding the electrodynamics of magnetosphere-ionosphere coupling. To meet such a requirement, several attempts have been made to estimate the conductivity distribution over the polar ionosphere. As one such attempt, we compare the ionospheric plasma convection patterns obtained from the Super Dual Auroral Radar Network (SuperDARN), from which the electric field distribution is estimated, and the simultaneously measured ground magnetic disturbance. Specifically, the electric field measured from the Goose Bay and Stokkseyri radars and magnetic disturbance data obtained from the west coast chain of Greenland are compared.
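For orientation, the overhead infinite-sheet approximation used in such radar/magnetometer comparisons relates the ground magnetic perturbation to the height-integrated Hall conductance via ΔB = μ0·Σ_H·E/2, i.e. Σ_H = 2ΔB/(μ0·E). A sketch with illustrative auroral-zone values (not data from the study):

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, H/m

def hall_conductance_S(delta_b_T, e_field_V_per_m):
    # Overhead infinite sheet current: ground perturbation dB = mu0 * J / 2,
    # with sheet current density J = Sigma_H * E.
    return 2.0 * delta_b_T / (MU0 * e_field_V_per_m)

# Hypothetical: 200 nT perturbation under a 20 mV/m convection electric field,
# giving a Hall conductance of roughly 16 S.
sigma_h = hall_conductance_S(200e-9, 0.020)
```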
In order to estimate the ionospheric conductivity distribution from this information, the overhead infinite sheet current approximation is employed. As expected, the Hall conductance (height-integrated conductivity) shows a wide enhancement along the center of the auroral electrojet. However, the Pedersen conductance shows negative values over a wide portion of the auroral oval region, a physically unacceptable situation. To alleviate this problem, the effect of the field-aligned current is taken into account. As a result, the region with negative Pedersen conductance largely disappears, suggesting that the effect of the field-aligned current should be taken into account when one wants to estimate ionospheric conductance based on ground magnetic disturbance and electric field measurements by radars. 5. In vivo electrical conductivity measurements during and after tumor electroporation: conductivity changes reflect the treatment outcome Ivorra, Antoni; Al-Sakere, Bassim; Rubinsky, Boris; Mir, Lluis M. 2009-10-01 Electroporation is the phenomenon in which cell membrane permeability is increased by exposing the cell to short high-electric-field pulses. Reversible electroporation treatments are used in vivo for gene therapy and drug therapy, while irreversible electroporation is used for tissue ablation. Tissue conductivity changes induced by electroporation could provide real-time feedback on the treatment outcome. Here we describe the results from a study in which fibrosarcomas (n = 39) inoculated in mice were treated according to different electroporation protocols, some of them known to cause irreversible damage. Conductivity was measured before, within the pulses, in between the pulses, and for up to 30 min after treatment. Conductivity increased pulse after pulse. Depending on the applied electroporation protocol, the conductivity increase after treatment ranged from 10% to 180%.
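Percentage changes like the ones just quoted follow from converting measured resistance to conductivity with the probe's cell constant (σ = k/R) and comparing before and after treatment. A sketch with invented readings (the cell constant and resistances are assumed values, not from the paper):

```python
def conductivity_S_per_m(cell_constant_per_m, resistance_ohm):
    # Tetrapolar-probe style conversion: sigma = k / R.
    return cell_constant_per_m / resistance_ohm

def percent_increase(before, after):
    return 100.0 * (after - before) / before

# Hypothetical: cell constant 50 m^-1; tissue resistance drops from 250 to 125 ohm
# after pulsing, i.e. conductivity doubles (a 100% increase, inside the 10-180%
# range reported above).
sigma_pre = conductivity_S_per_m(50.0, 250.0)
sigma_post = conductivity_S_per_m(50.0, 125.0)
change = percent_increase(sigma_pre, sigma_post)
```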
The most significant conclusion from this study is the fact that post-treatment conductivity seems to be correlated with treatment outcome in terms of reversibility. 6. A noncontact thermal microprobe for local thermal conductivity measurement. PubMed Zhang, Yanliang; Castillo, Eduardo E; Mehta, Rutvik J; Ramanath, Ganpati; Borca-Tasciuc, Theodorian 2011-02-01 We demonstrate a noncontact thermal microprobe technique for measuring the thermal conductivity κ with ∼3 μm lateral spatial resolution by exploiting quasiballistic air conduction across a 10-100 nm air gap between a joule-heated microprobe and the sample. The thermal conductivity is extracted from the measured effective thermal resistance of the microprobe and the tip-sample thermal contact conductance and radius in the quasiballistic regime determined by calibration on reference samples using a heat transfer model. Our κ values are within 5%-10% of that measured by standard steady-state methods and theoretical predictions for nanostructured bulk and thin film assemblies of pnictogen chalcogenides. Noncontact thermal microprobing demonstrated here mitigates the strong dependence of tip-sample heat transfer on sample surface chemistry and topography inherent in contact methods, and allows the thermal characterization of a wide range of nanomaterials. PMID:21361625 7. A noncontact thermal microprobe for local thermal conductivity measurement Zhang, Yanliang; Castillo, Eduardo E.; Mehta, Rutvik J.; Ramanath, Ganpati; Borca-Tasciuc, Theodorian 2011-02-01 We demonstrate a noncontact thermal microprobe technique for measuring the thermal conductivity κ with ˜3 μm lateral spatial resolution by exploiting quasiballistic air conduction across a 10-100 nm air gap between a joule-heated microprobe and the sample. 
The thermal conductivity is extracted from the measured effective thermal resistance of the microprobe and the tip-sample thermal contact conductance and radius in the quasiballistic regime determined by calibration on reference samples using a heat transfer model. Our κ values are within 5%-10% of those measured by standard steady-state methods and of theoretical predictions for nanostructured bulk and thin film assemblies of pnictogen chalcogenides. Noncontact thermal microprobing demonstrated here mitigates the strong dependence of tip-sample heat transfer on sample surface chemistry and topography inherent in contact methods, and allows the thermal characterization of a wide range of nanomaterials. 8. Measuring the conductivity dependence of the Casimir force Xu, Jun; Schafer, Robert; Banishev, Alexandr; Mohideen, Umar 2015-03-01 The strength and distance dependence of the Casimir force can be controlled through the conductivity of the material bodies, with lower conductivity in general leading to lower Casimir forces. However, low-conductivity, large-bandgap materials, which are insulating, have drawbacks, as any surface electrostatic charges cannot be easily compensated. This restricts experiments to metallic or highly doped semiconductor materials. We will report on measurements of the Casimir force gradient using the frequency shift technique. Improvements in the measurement technique will be discussed. Measurements of the Casimir force gradient using low- and high-conductivity silicon surfaces will be reported. The authors thank G.L. Klimchitskaya and V.M. Mostepanenko for help with the theory and the US National Science Foundation for funding the research. 9. Optical sensor for heat conduction measurement in biological tissue Gutierrez-Arroyo, A.; Sanchez-Perez, C.; Aleman-Garcia, N. 2013-06-01 This paper presents the design of a heat flux sensor using an optical fiber system to measure heat conduction in biological tissues.
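In the thermal-microprobe abstracts above, κ is backed out of a measured thermal resistance through a contact model. Assuming the classic isothermal-disc spreading resistance R = 1/(4κa) as a hedged stand-in for the calibrated quasiballistic model the papers actually use, the inversion looks like:

```python
def kappa_from_spreading_resistance(r_spread_K_per_W, contact_radius_m):
    # Isothermal circular contact on a half-space: R = 1 / (4 * kappa * a).
    return 1.0 / (4.0 * r_spread_K_per_W * contact_radius_m)

# Hypothetical: 1e5 K/W sample-side resistance over a 1.5 um contact radius
# (consistent with the ~3 um lateral resolution quoted above).
kappa = kappa_from_spreading_resistance(1.0e5, 1.5e-6)
```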
This optoelectronic device is based on the photothermal deflection of a laser beam travelling in an acrylic slab; this deflection is measured with a fiber optic angle sensor. We measure heat conduction in biological samples with high repeatability and sufficient sensitivity to detect differences in tissues from three chicken organs. This technique could provide important information on vital organ function as well as detect modifications due to degenerative diseases or physical damage caused by medications or therapies. 10. pH measurement of low-conductivity waters USGS Publications Warehouse 1987-01-01 pH is an important and commonly measured parameter of precipitation and other natural waters. The various sources of errors in pH measurement were analyzed, and procedures for improving the accuracy and precision of pH measurements in natural waters with conductivities of < 100 μS/cm at 25 °C are suggested. Detailed procedures are given for the preparation of dilute sulfuric acid standards to evaluate the performance of pH electrodes in low-conductivity waters. A daily check of the pH of dilute sulfuric acid standards and deionized water saturated with a gas mixture of low carbon dioxide partial pressure (air) prior to the measurement of the pH of low-conductivity waters is suggested. (Author's abstract) 11. Estimation of charge-carrier concentration and ac conductivity scaling properties near the V-I phase transition of polycrystalline Na2SO4 2005-11-01 The conductivity spectra of polycrystalline Na2SO4 have been investigated in the frequency range 42 Hz-1 MHz at different temperatures below and above the V-I phase transition temperature. The conductivity data have been analyzed using the Almond-West formalism. The dc conductivity, the hopping frequency of the charge carriers, and their respective activation energies have been obtained from the analysis of the ac conductivity data, and the concentration of charge carriers was calculated at different temperatures.
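The Almond-West analysis just named fits the ac conductivity spectrum to σ(ω) = σ_dc[1 + (ω/ω_H)^n], so σ_dc and the hopping frequency ω_H can be read off directly; at ω = ω_H the conductivity is exactly 2σ_dc. A sketch with illustrative parameters (only the exponent echoes the entry):

```python
def almond_west(omega, sigma_dc, omega_hop, n):
    # Jonscher/Almond-West power law for the ac conductivity of ionic conductors.
    return sigma_dc * (1.0 + (omega / omega_hop) ** n)

# Illustrative parameters (n = 0.61 echoes the phase-I exponent quoted below);
# at omega = omega_hop the conductivity doubles, which is how omega_hop is read
# off a measured spectrum.
sigma_at_hop = almond_west(1.0e5, sigma_dc=1.0e-6, omega_hop=1.0e5, n=0.61)
```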
The power-law exponent n of the conductivity spectra has average values of 0.43 and 0.61 in phases V and I, respectively, which indicates different conduction properties in the two phases. Moreover, scaling of the conductivity spectra in the low- and high-temperature phases was performed in accord with Ghosh’s scaling approach. It is found that the scaling properties depend on the structure of the investigated material. 12. Noninvasive measurement of conductivity anisotropy at Larmor frequency using MRI. PubMed Lee, Joonsung; Song, Yizhuang; Choi, Narae; Cho, Sungmin; Seo, Jin Keun; Kim, Dong-Hyun 2013-01-01 Anisotropic electrical properties can be found in biological tissues such as muscles and nerves. The conductivity tensor is a simplified model to express the effective electrical anisotropic information and depends on the imaging resolution. The determination of the conductivity tensor should be based on Ohm's law. In other words, measurement of partial information of the current density and the electric fields should be made. Since direct measurements of the electric field and the current density are difficult, we use MRI to measure their partial information, such as the B1 map, which measures circulating current density and circulating electric field. In this work, the ratio of the two circulating fields, termed circulating admittivity, is proposed as a measure of the conductivity anisotropy at Larmor frequency. Given eigenvectors of the conductivity tensor, quantitative measurement of the eigenvalues can be achieved from circulating admittivity for special tissue models. Without eigenvectors, qualitative information of anisotropy still can be acquired from circulating admittivity. The limitation of the circulating admittivity is that at least two components of the magnetic fields should be measured to capture anisotropic information. PMID:23554838 13.
Analytical estimation of skeleton thermal conductivity of a geopolymer foam from thermal conductivity measurements Henon, J.; Alzina, A.; Absi, J.; Smith, D. S.; Rossignol, S. 2015-07-01 Geopolymers are alumino-silicate binders. The addition of a high pore volume fraction gives them the thermal insulation character desired in the building industry. In this work, potassium geopolymer foams were prepared at room temperature (<70 °C) by a process of in situ gas release. The porosity distribution shows a multiscale character. The thermal conductivity measurements gave values from 0.35 down to 0.12 W m-1 K-1 for pore volume fractions between 65 and 85%. With the aim of predicting the thermal properties of these foams and clarifying the thermal-conductivity/microstructure relationship, knowledge of the thermal conductivity of their solid skeleton (λs) is paramount. However, there is little work on the determination of this value as a function of the initial composition. With the formulation used, the foaming agent contributes to the final network, so it is not possible to obtain a dense material suitable for a direct measurement of λs. The objective of this work is to use inverse analytical methods to identify the value of λs. Measurements of thermal conductivity by the fluxmeter technique were performed. The value of the solid skeleton thermal conductivity obtained by the inverse numerical technique lies between 0.95 and 1.35 W m-1 K-1 and is in agreement with a value from the literature. 14. Thermal and Electrical Conductivity Measurements of CDA 510 Phosphor Bronze NASA Technical Reports Server (NTRS) Tuttle, James E.; Canavan, Edgar; DiPirro, Michael 2009-01-01 Many cryogenic systems use electrical cables containing phosphor bronze wire. While phosphor bronze's electrical and thermal conductivity values have been published, there is significant variation among different phosphor bronze formulations.
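An inverse estimate of the kind used in the geopolymer-foam abstract above can be illustrated with the Maxwell-Eucken relation for insulating pores, λ_eff = λ_s(1 − p)/(1 + p/2). The paper uses its own model; this closed form is only a hedged stand-in to show the inversion:

```python
def skeleton_conductivity(lambda_eff_W_mK, porosity):
    # Invert Maxwell-Eucken (dilute insulating spherical pores) for the skeleton.
    return lambda_eff_W_mK * (1.0 + porosity / 2.0) / (1.0 - porosity)

# Using the lowest-porosity point quoted above: 0.35 W/m/K at 65% pore fraction.
lambda_s = skeleton_conductivity(0.35, 0.65)
```

Even this crude stand-in returns 1.325 W m-1 K-1, inside the 0.95-1.35 W m-1 K-1 window the authors report for the skeleton.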
The James Webb Space Telescope (JWST) will use several phosphor bronze wire harnesses containing a specific formulation (CDA 510, annealed temper). The heat conducted into the JWST instrument stage is dominated by these harnesses, and approximately half of the harness conductance is due to the phosphor bronze wires. Since the JWST radiators are expected to just keep the instruments at their operating temperature with limited cooling margin, it is important to know the thermal conductivity of the actual alloy being used. We describe an experiment which measured the electrical and thermal conductivity of this material between 4 and 295 Kelvin. 15. AC photovoltaic module magnetic fields SciTech Connect Jennings, C.; Chang, G.J.; Reyes, A.B.; Whitaker, C.M. 1997-12-31 Implementation of alternating current (AC) photovoltaic (PV) modules, particularly for distributed applications such as PV rooftops and facades, may be slowed by public concern about electric and magnetic fields (EMF). This paper documents magnetic field measurements on an AC PV module, complementing EMF research on direct-current PV modules conducted by PG&E in 1993. Although not comprehensive, the PV EMF data indicate that 60 Hz magnetic fields (the EMF type of greatest public concern) from PV modules are comparable to, or significantly less than, those from household appliances. Given present EMF research knowledge, AC PV module EMF may not merit considerable concern. 16. An AC constant-response method for electrophysiological measurements of spectral sensitivity functions. PubMed de Souza, J M; DeVoe, R D; Schoeps, C; Ventura, D F 1996-10-01 A number of methods have been used in the past to measure spectral sensitivity (S(lambda)) functions of electric responses in the visual system. We present here a microcomputer-based, AC, constant-response method for automatic on-line measurement of S(lambda) in cells with or without a sustained tonic response.
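The constant-response idea just introduced is a servo: stimulus intensity is adjusted until the response amplitude matches a fixed criterion, and relative sensitivity is the reciprocal of the intensity required. A toy sketch with an assumed monotone response model (bisection stands in for the instrument's feedback loop; the model and numbers are invented):

```python
import math

def response(intensity, sensitivity):
    # Toy photoreceptor model: response grows with the log of the photon catch.
    return math.log10(1.0 + sensitivity * intensity)

def intensity_for_criterion(sensitivity, criterion, lo=1e-12, hi=1e3, iters=60):
    # Bisection: find the intensity that produces the criterion response.
    for _ in range(iters):
        mid = (lo + hi) / 2.0  # monotone response, so bisection converges
        if response(mid, sensitivity) < criterion:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Relative spectral sensitivity = reciprocal of the criterion intensity: a
# wavelength with half the sensitivity needs twice the intensity.
i1 = intensity_for_criterion(sensitivity=100.0, criterion=1.0)
i2 = intensity_for_criterion(sensitivity=50.0, criterion=1.0)
```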
It is based on feedback adjustment of light intensity to obtain constant peak-to-peak amplitudes of response to a flickering stimulus as the spectrum is scanned between 300 and 700 nm in 4 nm steps. It combines the advantages of: (1) on-line presentation of S(lambda) curves; (2) constant light adaptation; (3) sampling of many points; and (4) fast data collection time. The system can be applied to sensitivity or threshold (e.g., S(lambda), dark adaptation, receptive field) measurements of any electrically recorded visual response. PMID:8912193 17. Method for Measuring Thermal Conductivity of Small Samples Having Very Low Thermal Conductivity NASA Technical Reports Server (NTRS) Miller, Robert A.; Kuczmarski, Maria A. 2009-01-01 This paper describes the development of a hot plate method capable of using air as a standard reference material for the steady-state measurement of the thermal conductivity of very small test samples having thermal conductivity on the order of that of air. As with other approaches, care is taken to ensure that the heat flow through the test sample is essentially one-dimensional. However, unlike other approaches, no attempt is made to use heated guards to block the flow of heat from the hot plate to the surroundings. It is argued that since large correction factors must be applied to account for guard imperfections when sample dimensions are small, it may be preferable to simply measure and correct for the heat that flows from the heater disc in directions other than into the sample. Experimental measurements taken in a prototype apparatus, combined with extensive computational modeling of the heat transfer in the apparatus, show that sufficiently accurate measurements can be obtained to allow determination of the thermal conductivity of low thermal conductivity materials. Suggestions are made for further improvements in the method based on results from regression analyses of the generated data. 18.
In-Pile Thermal Conductivity Measurement Method for Nuclear Fuels SciTech Connect Joy L. Rempe; Brandon Fox; Heng Ban; Joshua E. Daw; Darrell L. Knudson; Keith G. Condie 2009-08-01 Thermophysical properties of advanced nuclear fuels and materials during irradiation must be known prior to their use in existing, advanced, or next generation reactors. Thermal conductivity is one of the most important properties for predicting fuel and material performance. A joint Utah State University (USU) / Idaho National Laboratory (INL) project, which is being conducted with assistance from the Institute for Energy Technology at the Norway Halden Reactor Project, is investigating in-pile fuel thermal conductivity measurement methods. This paper focuses on one of these methods – a multiple thermocouple method. This two-thermocouple method uses a surrogate fuel rod with Joule heating to simulate volumetric heat generation to gain insights about in-pile detection of thermal conductivity. Preliminary results indicated that this method can measure thermal conductivity over a specific temperature range. This paper reports the thermal conductivity values obtained by this technique and compares these values with thermal property data obtained from standard thermal property measurement techniques available at INL’s High Temperature Test Laboratory. Experimental results and material properties data are also compared to finite element analysis results. 19. In situ measurement of ceramic vacuum chamber conductive coating quality SciTech Connect Doose, C.; Harkay, K.; Kim, S.; Milton, S. 1997-08-01 A method for measuring the relative surface resistivity and quality of conductive coatings on ceramic vacuum chambers was developed. This method is unique in that it allows one to test the coating even after the ceramic chamber is installed in the accelerator and under vacuum; furthermore, the measurement provides a localized surface reading of the coating conductance.
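For the surrogate-rod two-thermocouple method described above, the steady-state solution for a cylinder with uniform volumetric heating gives T(r1) − T(r2) = q'''(r2² − r1²)/(4k) between two thermocouple radii, which inverts directly for k. A sketch with invented numbers (not the experiment's data):

```python
def conductivity_from_two_thermocouples(q_vol_W_per_m3, r_inner_m, r_outer_m,
                                        delta_T_K):
    # Radial conduction with uniform heat generation:
    # dT = q''' * (r_outer^2 - r_inner^2) / (4 * k), solved for k.
    return q_vol_W_per_m3 * (r_outer_m ** 2 - r_inner_m ** 2) / (4.0 * delta_T_K)

# Hypothetical: 5e8 W/m^3 of Joule heating, thermocouples at 1 mm and 4 mm
# radius, 375 K measured temperature difference -> k = 5 W/m/K.
k = conductivity_from_two_thermocouples(5.0e8, 1.0e-3, 4.0e-3, 375.0)
```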
The method uses a magnetic probe that is calibrated using the measured DC end-to-end resistance of the tube under test and by comparison to a high-quality test surface. The measurement method has also been verified by comparison to high-frequency impedance measurements. A detailed description, results, and sensitivity of the technique are given here. 20. Nerve conduction velocity measurements: improved accuracy using superimposed response waves. PubMed Halar, E M; Venkatesh, B 1976-10-01 A new procedure of serial motor nerve conduction velocity (NCV) measurements using the "superimposed response waves" technique (or double stimulus technique) was performed on 29 normal subjects. Six peripheral nerves were tested once a week for four to six weeks. A total of 760 NCV measurements were thus obtained to try to assess the magnitude of error in serial NCV testings. With the double stimulus technique employed, a significant reduction in variations of serial NCV measurements was found. The overall standard deviation of four to six consecutive NCV measurements in the 34 subjects was 1.3 meters per second with a coefficient of variation of 2.4%. These findings obtained with the double stimulus technique have proven to be approximately three times more accurate than results obtained by investigators who studied nerve conduction velocity measurement variation with single stimulus standard NCV testing techniques. PMID:184754 1. Measurement of thermal contact conductance of SPring-8 beamline components Mochizuki, Tetsuro; Ohashi, Haruhiko; Sano, Mutsumi; Takahashi, Sunao; Goto, Shunji 2007-09-01 Direct cooling is adopted for most high heat load components in SPring-8 beamlines. On the other hand, contact cooling is employed for some components such as a graphite filter, aluminum filter, mirror, and cryogenic monochromator silicon crystal. For the thermal design of the contact cooling components, it is important to obtain a reliable thermal contact conductance value.
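Thermal contact conductance in setups like the one above is the interface heat flux divided by the temperature jump obtained by extrapolating the temperature profile in each bar to the interface plane. A sketch with invented thermocouple readings (positions, temperatures, and flux are all illustrative):

```python
def extrapolate_to_interface(positions_m, temps_K, x_interface_m):
    # Least-squares line T = a + b*x through the thermocouples, then evaluate
    # at the interface plane.
    n = len(positions_m)
    xm = sum(positions_m) / n
    tm = sum(temps_K) / n
    b = sum((x - xm) * (t - tm) for x, t in zip(positions_m, temps_K)) / \
        sum((x - xm) ** 2 for x in positions_m)
    return tm + b * (x_interface_m - xm)

def contact_conductance(heat_flux_W_per_m2, hot_profile, cold_profile,
                        x_interface_m):
    t_hot = extrapolate_to_interface(*hot_profile, x_interface_m)
    t_cold = extrapolate_to_interface(*cold_profile, x_interface_m)
    return heat_flux_W_per_m2 / (t_hot - t_cold)  # W/(m^2 K)

# Hypothetical profiles: 500 K/m gradient in each bar, 4 K jump at x = 30 mm.
h = contact_conductance(
    2.0e4,
    ([0.00, 0.01, 0.02], [350.0, 345.0, 340.0]),
    ([0.04, 0.05, 0.06], [326.0, 321.0, 316.0]),
    0.03,
)
```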
The conductance depends on many parameters such as the surface materials, surface roughness, flatness of the surface, interstitial materials, temperature of the contact surface, and contact pressure. An experimental setup is fabricated to measure the conductance at liquid nitrogen temperature and room temperature. The thermal contact conductance of a Si-Cu interface and that of a Si-In-Cu interface are measured at cryogenic temperature at contact pressures ranging from 0.1 to 1.1 MPa. The conductance of an Al-Cu interface and that of a graphite-Cu interface are measured using gold and silver foils as interstitial materials. The measurements are performed at room temperature and at pressures ranging from 0.5 to 4 MPa. The experimental setup and the results obtained are presented. 2. Measurements of prompt radiation induced conductivity in Teflon (PTFE). SciTech Connect Hartman, E. Frederick; Zarick, Thomas Andrew; Sheridan, Timothy J.; Preston, E. 2013-05-01 We performed measurements of the prompt radiation induced conductivity (RIC) in thin samples of Teflon (PTFE) at the Little Mountain Medusa LINAC facility in Ogden, UT. Three mil (76.2 microns) samples were irradiated with a 0.5 μs pulse of 20 MeV electrons, yielding dose rates of 1E9 to 1E11 rad/s. We applied variable potentials up to 2 kV across the samples and measured the prompt conduction current. Details of the experimental apparatus and analysis are reported.
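The thermal contact conductance sought in the SPring-8 entry above is defined through a Newton-type interface law, h = q/ΔT; a minimal sketch with illustrative numbers (the abstract reports only the pressure ranges):

```python
# Thermal contact conductance from a steady-state measurement:
# h = q / dT, with q the heat flux crossing the interface [W/m^2]
# and dT the temperature drop across it [K]. Numbers are illustrative.

def contact_conductance(q_w_per_m2, dT_interface_K):
    """h = q / dT  [W m^-2 K^-1]."""
    return q_w_per_m2 / dT_interface_K

# Hypothetical Si-Cu interface: 5 kW/m^2 flux, 2.5 K interface drop.
h = contact_conductance(5.0e3, 2.5)
```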
No apparatus is currently available for obtaining the thermal conductivity of the composite fuel in a non-destructive manner due to the compact’s unique geometry and composite nature. The current system design has been adapted from ASTM E 1225. As a way to simplify the design and operation of the system, it uses a unique radiative heat sink to conduct heat away from the sample column. A finite element analysis was performed on the measurement system to analyze the associated error for various operating conditions. Optimal operational conditions have been discovered through this analysis and results are presented. Several materials have been measured by the system and results are presented for stainless steel 304, Inconel 625, and 99.95% pure iron covering a range of thermal conductivities of 10 W/m·K to 70 W/m·K. A comparison of the results has been made to data from existing literature. 4. Thermal Conductivity Measurements in Metals at High Pressures and Temperatures. Konopkova, Z.; McWilliams, R. S.; Goncharov, A. 2014-12-01 The transport properties of iron and iron alloys at high pressures and temperatures are crucial parameters in planetary evolution models, yet are difficult to determine both theoretically and experimentally. Estimates of thermal conductivity in the Earth's core range from 30 to 150 W/mK, a substantial range leaving many open questions regarding the age of the inner core, the thermal structure of the outer core, and the conditions for a working geodynamo. Most experiments have measured electrical resistivity rather than directly measuring thermal conductivity, and have used models to extrapolate from low-temperature data to the high-temperature conditions of the core. Here we present direct, in-situ high-pressure and high-temperature measurements of the thermal conductivity of metals in the diamond-anvil cell.
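The guarded-comparative-longitudinal technique adapted from ASTM E 1225 in the fuel-compact entry above equates the one-dimensional heat flux through calibrated reference bars with that through the specimen; a minimal sketch under the ideal no-loss, equal-cross-section assumption (all values illustrative):

```python
# Comparative cut-bar reduction: k_ref * dT_ref/dx_ref = k_s * dT_s/dx_s,
# assuming one-dimensional flow, no lateral losses, equal cross-sections.
# All numbers are illustrative, not data from the cited system.

def comparative_conductivity(k_ref, dT_ref_K, dx_ref_m, dT_s_K, dx_s_m):
    """Solve the flux balance for the specimen conductivity k_s [W/m/K]."""
    return k_ref * (dT_ref_K / dx_ref_m) * (dx_s_m / dT_s_K)

# Reference bar: k = 15 W/m/K, 10 K over 20 mm; specimen: 3 K over 10 mm.
k_s = comparative_conductivity(15.0, 10.0, 0.02, 3.0, 0.01)
```

The guard and the radiative heat sink exist precisely to make the no-lateral-loss assumption above defensible.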
Double-sided continuous laser heating is combined with one-sided flash heating of a metallic foil, while the time-resolved temperature is measured from both sides with spectral radiometry in an optical streak camera. Emission and temperature perturbations measured on opposite sides of the foil were modeled using finite element calculations in order to extract the thermal diffusivity and conductivity of the foils. Results on platinum and iron at high pressures and temperatures will be presented. 5. Simultaneous specific heat and thermal conductivity measurement of individual nanostructures Zheng, Jianlin; Wingert, Matthew C.; Moon, Jaeyun; Chen, Renkun 2016-08-01 Fundamental phonon transport properties in semiconductor nanostructures are important for their applications in energy conversion and storage, such as thermoelectrics and photovoltaics. Thermal conductivity measurements of semiconductor nanostructures have been extensively pursued and have enhanced our understanding of phonon transport physics. Specific heat of individual nanostructures, despite being an important thermophysical parameter that reflects the thermodynamics of solids, has remained difficult to characterize. Prior measurements were limited to ensembles of nanostructures in which coupling and sample inhomogeneity could play a role. Herein we report the first simultaneous specific heat and thermal conductivity measurements of individual rod-like nanostructures such as nanowires and nanofibers. This technique is demonstrated by measuring the specific heat and thermal conductivity of single ∼600–700 nm diameter Nylon-11 nanofibers (NFs). The results show that the thermal conductivity of the NF is increased by 50% over the bulk value, while the specific heat of the NFs exhibits bulk-like behavior.
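When conductivity and specific heat are measured on the same sample, as in the nanofiber entry above, the thermal diffusivity follows immediately from α = k/(ρ·cp); a sketch with generic polymer-like values, not the paper's data:

```python
# Thermal diffusivity from measured conductivity and specific heat:
# alpha = k / (rho * cp). The values below are generic polymer-like
# numbers chosen for illustration, not data from the cited measurement.

def thermal_diffusivity(k_w_m_K, rho_kg_m3, cp_J_kg_K):
    """alpha = k / (rho * cp)  [m^2/s]."""
    return k_w_m_K / (rho_kg_m3 * cp_J_kg_K)

alpha = thermal_diffusivity(0.36, 1100.0, 1800.0)
```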
We find that the thermal diffusivity obtained from the measurement, which is related to the phonon mean free path (MFP), decreases with temperature, indicating that the intrinsic phonon Umklapp scattering plays a role in the NFs. This platform can also be applied to one- and two-dimensional semiconductor nanostructures to probe size effects on the phonon spectra and other transport physics. 6. Complex AC impedance, transference number and vibrational spectroscopy studies of proton conducting PVAc-NH4SCN polymer electrolytes Selvasekarapandian, S.; Baskaran, R.; Hema, M. 2005-03-01 The polymer electrolytes composed of poly(vinyl acetate) (PVAc) with various stoichiometric ratios of ammonium thiocyanate (NH4SCN) salt have been prepared by the solution casting method. The polymer-salt complex formation and the polymer-proton interactions have been analysed by FT-IR spectroscopy. The conductivity and dielectric measurements are carried out on these films as a function of frequency at various temperatures. The complex impedance spectroscopy results reveal that the high-frequency semicircle is due to the bulk effect of the material. The conductivity is found to increase from 10⁻⁸ to 10⁻⁴ S cm⁻¹ at 303 K with the increase in salt concentration. The ionic transference number of mobile ions has been estimated by Wagner's polarization method and the results reveal that the conducting species are predominantly ions. The transient ionic current (TIC) measurement technique has been used to detect the type of mobile species and to evaluate their mobilities. The dielectric spectra show the low-frequency dispersion, which is due to the space charge effects arising from the electrodes. 7. Comparison of DC and AC Transport in 1.5-7.5 nm Oligophenylene Imine Molecular Wires across Two Junction Platforms: Eutectic Ga-In versus Conducting Probe Atomic Force Microscope Junctions.
PubMed Sangeeth, C S Suchand; Demissie, Abel T; Yuan, Li; Wang, Tao; Frisbie, C Daniel; Nijhuis, Christian A 2016-06-15 We have utilized DC and AC transport measurements to measure the resistance and capacitance of thin films of conjugated oligophenyleneimine (OPI) molecules ranging from 1.5 to 7.5 nm in length. These films were synthesized on Au surfaces utilizing the imine condensation chemistry between terephthalaldehyde and 1,4-benzenediamine. Near edge X-ray absorption fine structure (NEXAFS) spectroscopy yielded molecular tilt angles of 33-43°. To probe DC and AC transport, we employed Au-S-OPI//GaOx/EGaIn junctions having contact areas of 9.6 × 10² μm² (10⁹ nm²) and compared to previously reported DC results on the same OPI system obtained using Au-S-OPI//Au conducting probe atomic force microscopy (CP-AFM) junctions with 50 nm² areas. We found that intensive observables agreed very well across the two junction platforms. Specifically, the EGaIn-based junctions showed: (i) a crossover from tunneling to hopping transport at molecular lengths near 4 nm; (ii) activated transport for wires >4 nm in length with an activation energy of 0.245 ± 0.008 eV for OPI-7; (iii) exponential dependence of conductance with molecular length with a decay constant β = 2.84 ± 0.18 nm⁻¹ (DC) and 2.92 ± 0.13 nm⁻¹ (AC) in the tunneling regime, and an apparent β = 1.01 ± 0.08 nm⁻¹ (DC) and 0.99 ± 0.11 nm⁻¹ (AC) in the hopping regime; (iv) a previously unreported dielectric constant of 4.3 ± 0.2 along the OPI wires. However, the absolute resistances of Au-S-OPI//GaOx/EGaIn junctions were approximately 100 times higher than the corresponding CP-AFM junctions due to differences in metal-molecule contact resistances between the two platforms. PMID:27172452 8. Measurement of the anisotropic thermal conductivity of the porcine cornea.
PubMed Barton, Michael D; Trembly, B Stuart 2013-10-01 Accurate thermal models for the cornea of the eye support the development of thermal techniques for reshaping the cornea and other scientific purposes. Heat transfer in the cornea must be quantified accurately so that a thermal treatment does not destroy the endothelial layer, which cannot regenerate, and yet is responsible for maintaining corneal transparency. We developed a custom apparatus to measure the thermal conductivity of ex vivo porcine corneas perpendicular to the surface and applied a commercial apparatus to measure thermal conductivity parallel to the surface. We found that corneal thermal conductivity is 14% anisotropic at the normal state of corneal hydration. Small numbers of ex vivo feline and human corneas had a thermal conductivity perpendicular to the surface that was indistinguishable from the porcine corneas. Aqueous humor from ex vivo porcine, feline, and human eyes had a thermal conductivity nearly equal to that of water. Including the anisotropy of corneal thermal conductivity will improve the predictive power of thermal models of the eye. PMID:23933570 9. Aqueous solubilities of phenol derivatives by conductivity measurements SciTech Connect Achard, C.; Jaoui, M.; Schwing, M.; Rogalski, M. 1996-05-01 The aqueous solubilities of five chlorophenols and three nitrophenols were measured by conductimetry at temperatures between 15 and 48 °C. The solubilities of 2-chlorophenol, 4-chlorophenol, 2,4-dichlorophenol, 2,4,6-trichlorophenol, pentachlorophenol, 2-nitrophenol, 4-nitrophenol, and 2,4-dinitrophenol were studied. Automatic conductivity measurements allow the determination of the solute concentration and, hence, the determination of the solubility. Emulsion formation can also be followed. Results obtained are in good agreement with literature values. 10.
Measurement of soil hydraulic conductivity in relation to vegetation Chen, Xi; Cheng, Qinbo 2010-05-01 Hydraulic conductivity is a key parameter influencing the hydrological processes of infiltration and surface and subsurface runoff. Vegetation alters surface characteristics (e.g., surface roughness, litter absorption) or subsurface characteristics (e.g., hydraulic conductivity). Field infiltration experiments with a single-ring permeameter are widely used for measuring soil hydraulic conductivity. The measurement equipment is a simple single-ring falling head permeameter, which consists of a hollow cylinder that is simply inserted into the top soil. An optimization method, based on minimizing the error between the measured and simulated water depths in the single ring, is developed for determining the soil hydraulic parameters. Using the single ring permeameter, we measured saturated hydraulic conductivities (Ks) of the red loam soil with and without vegetation covers on five hillslopes at Taoyuan Agro-Ecology Experimental Station, Hunan Province of China. For the measurement plots without vegetation roots, the Ks value of the soil at 25 cm depth is much smaller than that of the surface soil (1.52×10⁻⁴ vs. 1.10×10⁻⁵ m/s). For the measurement plots with vegetation cover, plant roots significantly increase Ks of the lower layer soil but this increase is not significant for the shallow soil. Moreover, the influences of vegetation roots on Ks depend on vegetation species and ages. The Ks value of the Camellia is about three times larger than that of Camphor seedlings (2.62×10⁻⁴ vs. 9.82×10⁻⁵ m/s). The Ks value of the mature Camellia is 2.72×10⁻⁴ m/s while the Ks value of the young Camellia is only 2.17×10⁻⁴ m/s. Key words: single ring permeameter; soil hydraulic conductivity; vegetation 11. Time-resolved Measurements of Spontaneous Magnetic Deflagration of Mn12 tBuAc Chen, Yizhang; Kent, A. D.; Zhang, Qing; Sarachik, M. P.; Baker, M. L.; Garanin, D.
A.; Mhesn, Najah; Lampropoulos, Christos Magnetic deflagration in molecular magnets has been triggered by heat pulses and acoustic waves. In this work we report spontaneous magnetic deflagration (i.e. deflagration that occurs without an external trigger) in the axially symmetric single molecule magnet Mn12 tBuAc. Magnetic hysteresis measurements show steps due to resonant quantum tunneling (RQT) below 1 K, confirming the spin-Hamiltonian parameters for this material and previous results. Deflagration speeds measured with a newly constructed higher bandwidth (2 MHz) setup will be presented as a function of transverse and longitudinal fields (Hx, Hz) both on and off resonance. A large increase in front velocity near RQT steps is observed in experiments with swept transverse fields and will be discussed in light of models of deflagration. Work supported by NSF-DMR-1309202 (NYU); ARO W911NF-13-1-0125 (CCNY); DMR-1161571 (Lehman); Cottrell College Science Award (UNF). 12. Test Results of the AC Field Measurements of Fermilab Booster Corrector Magnets SciTech Connect DiMarco, E.Joseph; Harding, D.J.; Kashikhin, V.S.; Kotelnikov, S.K.; Lamm, M.J.; Makulski, A.; Nehring, R.; Orris, D.F.; Schlabach, P.; Sylvester, C.; Tartaglia, Michael Albert; /Fermilab 2008-06-25 Multi-element corrector magnets are being produced at Fermilab that enable correction of orbits and tunes through the entire cycle of the Booster, not just at injection. The corrector package includes six different corrector elements--normal and skew orientations of dipole, quadrupole, and sextupole--each independently powered. The magnets have been tested during typical AC ramping cycles at 15 Hz using a fixed coil system to measure the dynamic field strength and field quality. The fixed coil comprises an array of inductive pick-up coils around the perimeter of a cylinder which are sampled simultaneously at 100 kHz with 24-bit ADCs.
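The fixed-coil system above rests on Faraday's law: each pickup coil outputs V = −N·A·dB/dt, so the dynamic field during the 15 Hz ramp is recovered by integrating the sampled voltage. A sketch on synthetic data (coil parameters are invented, not Fermilab's):

```python
import numpy as np

# Recover B(t) from an inductive pickup coil: V = -N*A*dB/dt, so
# B(t) = B(0) - (1/(N*A)) * cumulative integral of V. Synthetic example
# with made-up coil parameters and a 15 Hz sinusoidal field.

N, A, f, B0 = 100, 1e-4, 15.0, 0.1           # turns, area [m^2], Hz, tesla
t = np.linspace(0.0, 1.0 / f, 2001)
B_true = B0 * np.sin(2 * np.pi * f * t)       # field we pretend is unknown
V = -N * A * (2 * np.pi * f) * B0 * np.cos(2 * np.pi * f * t)  # coil EMF

# Trapezoidal cumulative integral of V, then invert Faraday's law.
int_V = np.concatenate(([0.0], np.cumsum((V[1:] + V[:-1]) / 2 * np.diff(t))))
B_rec = -int_V / (N * A)                      # matches B_true (B(0) = 0)
```

Harmonic (multipole) content of the field quality is then obtained from the azimuthal pattern of the coil array, which this sketch does not attempt.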
The performance of the measurement system and a summary of the field results are presented and discussed. 13. Measurement of electrical conductivity for a biomass fire. PubMed Mphale, Kgakgamatso; Heron, Mal 2008-08-01 A controlled fire burner was constructed where various natural vegetation species could be used as fuel. The burner was equipped with thermocouples to measure fuel surface temperature and used as a cavity for microwaves with a laboratory quality 2-port vector network analyzer to determine electrical conductivity from S-parameters. Electrical conductivity for vegetation material flames is important for numerical prediction of flashover in high voltage power transmission faults research. Vegetation fires that burn under high voltage transmission lines reduce flashover voltage by increasing air electrical conductivity and temperature. Analyzer determined electrical conductivity ranged from 0.0058 - 0.0079 mho/m for a fire with a maximum temperature of 1240 K. PMID:19325812 14. Experiment of electrical conductivity at low temperature (preliminary measurement) SciTech Connect Zhao, Y.; Wang, H. 1998-07-01 A muon collider needs very large amount of RF power, how to reduce the RF power consumption is of major concern. Thus the application of liquid nitrogen cooling has been proposed. However, it is known that the electrical conductivity depends on many factors and the data from different sources vary in a wide range, especially the data of conductivity of beryllium has no demonstration in a real application. Therefore it is important to know the conductivity of materials, which are commercially available, and at a specified frequency. Here, the results of the preliminary measurement on the electrical conductivity of copper at liquid nitrogen temperature are summarized. Addressed also are the data fitting method and the linear expansion of copper.
16. Measurements of prompt radiation induced conductivity of alumina and sapphire. SciTech Connect Hartman, E. Frederick; Zarick, Thomas Andrew; Sheridan, Timothy J.; Preston, Eric F. 2011-04-01 We performed measurements of the prompt radiation induced conductivity in thin samples of Alumina and Sapphire at the Little Mountain Medusa LINAC facility in Ogden, UT. Five mil thick samples were irradiated with pulses of 20 MeV electrons, yielding dose rates of 1E7 to 1E9 rad/s. We applied variable potentials up to 1 kV across the samples and measured the prompt conduction current. Analysis rendered prompt conductivity coefficients between 1E10 and 1E9 mho/m/(rad/s), depending on the dose rate and the pulse width for Alumina and 1E7 to 6E7 mho/m/(rad/s) for Sapphire. 17. Determining aerodynamic conductance of spar chambers from energy balance measurements Technology Transfer Automated Retrieval System (TEKTRAN) The aerodynamic conductance (gA) of SPAR chambers was determined from measurements of energy balance and canopy temperature over a peanut canopy.
gA was calculated from the slope of sensible heat flux (H) versus canopy-to-air temperature difference. H and the canopy-to-air temperature were varied by... 18. Measuring Impulsivity in Adolescents with Serious Substance and Conduct Problems ERIC Educational Resources Information Center Thompson, Laetitia L.; Whitmore, Elizabeth A.; Raymond, Kristen M.; Crowley, Thomas J. 2006-01-01 Adolescents with substance use and conduct disorders have high rates of aggression and attention deficit hyperactivity disorder (ADHD), all of which have been characterized in part by impulsivity. Developing measures that capture impulsivity behaviorally and correlate with self-reported impulsivity has been difficult. One promising behavioral… 19. Apparatus measures thermal conductivity of honeycomb-core panels NASA Technical Reports Server (NTRS) 1966-01-01 Overall thermal conductivity of honeycomb-core panels at elevated temperatures is measured by an apparatus with a heater assembly and a calibrated heat-rate transducer. The apparatus has space between the heater and transducer for insertion of a test panel and insulation. 20. Spectral Measurements from the Optical Emission of the A.C. Plasma Anemometer Matlis, Eric; Marshall, Curtis; Corke, Thomas; Gogineni, Sivaram 2015-11-01 The optical emission properties of a new class of AC-driven flow sensors based on a glow discharge (plasma) are presented. These results extend the utility of the plasma sensor that has recently been developed for measurements in high-enthalpy flows. The plasma sensor utilizes a high-frequency (1 MHz) AC discharge between two electrodes as the main sensing element. The voltage drop across the discharge correlates with changes in the external flow, which can be calibrated for mass-flux (ρU) or pressure depending on the design of the electrodes and orientation relative to the free-stream flow direction.
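The slope method for aerodynamic conductance quoted in the SPAR-chamber entry above works because H = ρ·cp·gA·(Tc − Ta): regressing H against the canopy-to-air temperature difference and dividing the slope by ρ·cp returns gA. A sketch on noiseless synthetic data (all numbers invented):

```python
import numpy as np

# Aerodynamic conductance from the slope of sensible heat flux H versus
# canopy-to-air temperature difference: H = rho * cp * gA * (Tc - Ta).
# Synthetic, noiseless data; the abstract reports the method, not numbers.

rho, cp = 1.2, 1010.0                  # air density [kg/m^3], cp [J/kg/K]
gA_true = 0.05                         # conductance to recover [m/s]
dT = np.linspace(0.5, 5.0, 10)         # canopy-to-air differences [K]
H = rho * cp * gA_true * dT            # synthetic fluxes [W/m^2]

slope = np.polyfit(dT, H, 1)[0]
gA = slope / (rho * cp)                # recovers gA_true
```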
Recent experiments examine the potential for spectral analysis of the optical emission of the discharge to provide additional insight into the flow field. These experiments compare the optical emission of the plasma to emission from breakdown due to an Nd:YAG laser. The oxygen 777.3 nm band in particular is a focus of interest as a marker for the determination of gas density. 1. Measuring the hydraulic conductivity of shallow submerged sediments. PubMed Kelly, Susan E; Murdoch, Lawrence C 2003-01-01 The hydraulic conductivity of submerged sediments influences the interaction between ground water and surface water, but few techniques for measuring K have been described with the conditions of the submerged setting in mind. Two simple, physical methods for measuring the hydraulic conductivity of submerged sediments have been developed, and one of them uses a well and piezometers similar to well tests performed in terrestrial aquifers. This test is based on a theoretical analysis that uses a constant-head boundary condition for the upper surface of the aquifer to represent the effects of the overlying water body. Existing analyses of tests used to measure the hydraulic conductivity of submerged sediments may contain errors from using the same upper boundary conditions applied to simulate terrestrial aquifers. Field implementation of the technique requires detecting minute drawdowns in the vicinity of the pumping well. Low-density oil was used in an inverted U-tube manometer to amplify the head differential so that it could be resolved in the field. Another technique was developed to measure the vertical hydraulic conductivity of sediments at the interface with overlying surface water. This technique uses the pan from a seepage meter with a piezometer fixed along its axis (a piezo-seep meter). Water is pumped from the pan and the head gradient is measured using the axial piezometer.
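Both submerged-sediment devices above ultimately reduce to Darcy's law, q = K·(dh/dl); for the piezo-seep meter, the pumped flux and the axial-piezometer gradient give K directly. A minimal sketch with invented readings:

```python
# Vertical hydraulic conductivity from a piezo-seep style measurement:
# Darcy's law q = Q/A = K * dh/dl  ->  K = (Q/A) * dl/dh.
# All readings below are hypothetical, for illustration only.

def darcy_K(flow_m3_s, area_m2, head_drop_m, path_len_m):
    """K = (Q/A) * dl / dh  [m/s]."""
    return (flow_m3_s / area_m2) * path_len_m / head_drop_m

# 20 mL/min pumped through a 0.25 m^2 pan; 5 mm head drop over 0.1 m.
K = darcy_K(20e-6 / 60.0, 0.25, 0.005, 0.1)
```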
Results from a sandy streambed indicate that both methods provide consistent and reasonable estimates of K. The pumping test allows skin effects to be considered, and the field data show that omitting the skin effect (e.g., by using a single well test) can produce results that underestimate the hydraulic conductivity of streambeds. PMID:12873006 2. Comparison of different methods for measuring thermal conductivities SciTech Connect Hartung, D.; Gather, F.; Klar, P. J. 2012-06-26 Two different methods for the measurement of the thermal conductivity have been applied to a glass (borosilicate) bulk sample. The first method operated in the steady state, using an arrangement of gold wires on the sample to create a thermal gradient and to measure the temperatures locally. This allows one to calculate the in-plane thermal conductivity of the sample. The same wire arrangement was also used for a 3ω measurement of the direction-independent bulk thermal conductivity. The 3ω approach is based on periodic heating and a frequency-dependent analysis of the temperature response. The results of both methods are in good agreement with each other for this isotropic material, if thermal and radiative losses are accounted for. Our results demonstrate that especially in the case of thin-film measurements, finite element analysis has to be applied to correct for heat losses due to geometry and radiation. In this fashion, the wire positions can be optimized in order to minimize measurement errors. 3. Thermal conductance measurement of windows: An innovative radiative method SciTech Connect Arpino, F.; Buonanno, G.; Giovinco, G. 2008-09-15 Heat transfer through window surfaces is one of the most important contributions to energy losses in buildings. Therefore, great efforts are made to design new window frames and glass assemblies with low thermal conductance.
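In the idealized slope form of the 3ω analysis referred to in the comparison entry above (a line heater on a semi-infinite solid), the temperature oscillation falls linearly with ln ω with slope −P/(2π·l·k), so fitting that slope returns k. A sketch on synthetic data, with every parameter invented and the simplest geometry assumed:

```python
import numpy as np

# Idealized 3-omega slope analysis for a line heater on a semi-infinite
# substrate: dT = -P/(2*pi*l*k) * ln(omega) + const. Synthetic data with
# an assumed k, then recovered by a linear fit. Parameters are invented.

P, l, k_true = 0.030, 1.0e-3, 1.1       # power [W], heater length [m], k
omega = np.logspace(2, 4, 20)           # angular frequencies [rad/s]
dT = -P / (2 * np.pi * l * k_true) * np.log(omega) + 5.0  # + offset

slope = np.polyfit(np.log(omega), dT, 1)[0]
k = -P / (2 * np.pi * l * slope)        # recovers k_true
```

Real samples need the geometry and loss corrections the abstract describes; this sketch only shows why the method is insensitive to the constant offset.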
At the same time, it is also necessary to develop accurate measurement techniques for the thermal characterisation of the above-mentioned building components. In this paper the authors show an innovative measurement method mainly based on radiative heat transfer (instead of the traditional convective one) which allows window thermal conductance measurements with corresponding uncertainty budget evaluation. The authors used the 3D finite volume software FLUENT® to design the experimental apparatus. The numerical results have been employed for the system optimisation and metrological characterisation. 4. Thermal Conductivity Measurement of Liquid-Quenched Higher Manganese Silicides Nishino, Shunsuke; Miyata, Masanobu; Ohdaira, Keisuke; Koyano, Mikio; Takeuchi, Tsunehiro 2016-03-01 Higher manganese silicides (HMSs, MnSiγ, γ ≈ 1.75) show promise for use as low-cost and environmentally friendly thermoelectric materials. To reduce their thermal conductivity, we partially substituted the Mn site with heavy elements using liquid quenching. Fabricated samples possess a curly ribbon shape about 10 μm thick and 1 mm wide, with high surface roughness. In this study, we determined the thermal conductivity of the curly-ribbon-shaped samples using two independent methods: the 3ω method with two heat flow models, and the steady-state method using a physical property measurement system (PPMS; Quantum Design). We succeeded in estimating the thermal conductivity in the temperature range of 100-200 K using the PPMS. The estimated thermal conductivity of non-doped HMSs shows a temperature-independent value of 2.2 ± 0.8 W K⁻¹ m⁻¹ at 100-200 K. No difference between the thermal conductivities of W-doped and non-doped HMSs was resolved within the measurement error. 5. Device and method for measuring thermal conductivity of thin films NASA Technical Reports Server (NTRS) Amer, Tahani R.
(Inventor); Subramanian, Chelakara (Inventor); Upchurch, Billy T. (Inventor); Alderfer, David W. (Inventor); Sealey, Bradley S. (Inventor); Burkett, Jr., Cecil G. (Inventor) 2001-01-01 A device and method are provided for measuring the thermal conductivity of rigid or flexible, homogeneous or heterogeneous, thin films between 50 μm and 150 μm thick with relative standard deviations of less than five percent. The specimen is sandwiched between highly conductive upper and lower slabs of like material. Each slab is instrumented with six thermocouples embedded within the slab and flush with their corresponding surfaces. A heat source heats the lower slab and a heat sink cools the upper slab. The heat sink also provides sufficient contact pressure onto the specimen. Testing is performed within a vacuum environment (bell-jar) at 10⁻³ to 10⁻⁶ Torr. An anti-radiant shield on the interior surface of the bell-jar is used to avoid radiation heat losses. Insulation is placed adjacent to the heat source and adjacent to the heat sink to prevent conduction losses. A temperature-controlled water circulator circulates water from a constant temperature bath through the heat sink. Fourier's one-dimensional law of heat conduction is the governing equation. Data, including temperatures, are measured with a multi-channel data acquisition system. On-line computer processing is used for thermal conductivity calculations. 6. Error and uncertainty in Raman thermal conductivity measurements SciTech Connect Thomas Edwin Beechem; Yates, Luke; Graham, Samuel 2015-04-22 We investigated error and uncertainty in Raman thermal conductivity measurements via finite element based numerical simulation of two geometries often employed -- Joule-heating of a wire and laser-heating of a suspended wafer.
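The governing equation of the thin-film device above, Fourier's one-dimensional law, reduces the on-line data processing to a single expression once flux, thickness, and temperature drop are known; a sketch with invented run values:

```python
# Fourier's 1-D law across a thin film: q/A = k * dT / t,
# so k = (q/A) * t / dT. Run values below are hypothetical,
# not data from the cited device.

def film_conductivity(q_w, area_m2, thickness_m, dT_K):
    """k = (q/A) * t / dT  [W/m/K]."""
    return (q_w / area_m2) * thickness_m / dT_K

# 0.5 W through a 25 mm x 25 mm specimen, 100 um film, 0.4 K drop.
k = film_conductivity(0.5, 0.025**2, 100e-6, 0.4)
```

The vacuum, radiation shield, and guard insulation in the patent exist to make this one-dimensional, loss-free reduction valid.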
Using this methodology, the accuracy and precision of the Raman-derived thermal conductivity are shown to depend on (1) assumptions within the analytical model used in the deduction of thermal conductivity, (2) uncertainty in the quantification of heat flux and temperature, and (3) the evolution of thermomechanical stress during testing. Apart from the influence of stress, errors of 5% coupled with uncertainties of ±15% are achievable for most materials under conditions typical of Raman thermometry experiments. Error can increase to >20%, however, for materials having highly temperature dependent thermal conductivities or, in some materials, when thermomechanical stress develops concurrent with the heating. A dimensionless parameter -- termed the Raman stress factor -- is derived to identify when stress effects will induce large levels of error. Together, the results compare the utility of Raman based conductivity measurements relative to more established techniques while at the same time identifying situations where its use is most efficacious. 8. Frequency and voltage dependent profile of dielectric properties, electric modulus and ac electrical conductivity in the PrBaCoO nanofiber capacitors Demirezen, S.; Kaya, A.; Yerişkin, S. A.; Balbaşı, M.; Uslu, İ. In this study, praseodymium barium cobalt oxide nanofiber interfacial layer was sandwiched between Au and n-Si. Frequency and voltage dependence of ε′, ε″, tanδ, electric modulus (M′ and M″) and σac of PrBaCoO nanofiber capacitor have been investigated by using impedance spectroscopy method. The obtained experimental results show that the values of ε′, ε″, tanδ, M′, M″ and σac of the PrBaCoO nanofiber capacitor are strongly dependent on frequency of applied bias voltage. The values of ε′, ε″ and tanδ show a steep decrease with increasing frequency for each forward bias voltage, whereas the values of σac and the electric modulus increase with increasing frequency. The high dispersion in ε′ and ε″ values at low frequencies may be attributed to the Maxwell-Wagner and space charge polarization. The high values of ε′ may be due to the interfacial effects within the material, PrBaCoO nanofibers interfacial layer and electron effect. The values of M′ and M″ reach a maximum constant value corresponding to M∞ ≈ 1/ε∞ due to the relaxation process at high frequencies, but both the values of M′ and M″ approach almost to zero at low frequencies.
The changes in the dielectric and electrical properties with frequency can be also attributed to the existence of Nss and Rs of the capacitors. As a result, the change in the ε′, ε″, tanδ, M′, M″ and ac electric conductivity (σac) is a result of restructuring and reordering of charges at the PrBaCoO/n-Si interface under an external electric field or voltage and interface polarization. 9. Simple uniaxial pressure device for ac-susceptibility measurements suitable for closed cycle refrigerator system. PubMed Arumugam, S; Manivannan, N; Murugeswari, A 2007-06-01 A simple design of the uniaxial pressure device for the measurement of ac-susceptibility at low temperatures using a closed cycle refrigerator system is presented for the first time. This device consists of disc micrometer, spring holder attachment, uniaxial pressure cell, and the ac-susceptibility coil wound on stycast bobbin. It can work under pressure till 0.5 GPa and at the temperature range of 30-300 K. The performance of the system at ambient pressure is tested and calibrated with standard paramagnetic salts [Gd2O3, Er2O3, and Fe(NH4SO4)2·6H2O], Fe3O4, Gd metal, Dy metal, superconductor (YBa2Cu3O7), manganite (La1.85Ba0.15MnO3), and spin glass material (Pr0.8Sr0.2MnO3). The performance of the uniaxial pressure device is demonstrated by investigating the uniaxial pressure dependence of La1.85Ba0.15MnO3 single crystal with P||c axis. The Curie temperature (Tc) decreases as a function of pressure with P||c axis (dTc/dP||c axis = -11.65 K/GPa) up to 46 MPa. The design is simple, is user friendly, and does not require pressure calibration. Measurement can even be made on thin and small size oriented crystals. The failure of the coil is remote under uniaxial pressure. The present setup can be used as a multipurpose uniaxial pressure device for the measurement of Hall effect and thermoelectric power with a small modification in the pressure cell. PMID:17614625 10.
Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity Chae, B.; Ichikawa, Y.; Kim, Y. 2003-12-01 Fluid flow in rock mass is controlled by geometry of fractures which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture was observed by a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The wavelength of laser is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light axis (namely z) direction because of the confocal optics. The sampling is managed in a spacing 2.5 μm along x and y directions. The highest measurement resolution of z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse and fine grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify influential frequency of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understand complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under applying five stages of uniaxial normal stresses. This method can characterize the response of aperture directly using the same specimen. Results of measurements show that reduction values of aperture are different at each part due to rough geometry of fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivities related to aperture variation due to different stress levels.
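The spectral analysis step described above — transforming a sampled roughness profile to find its dominant spatial frequencies — can be sketched as follows. The naive one-sided DFT and the synthetic 500-μm waviness profile are illustrative assumptions; only the 2.5 μm sampling interval is taken from the abstract:

```python
import cmath
import math

def dominant_roughness_frequency(profile, spacing_um):
    """Locate the strongest non-DC component of a roughness profile
    via a naive one-sided DFT; returns spatial frequency in cycles/um."""
    n = len(profile)
    mean = sum(profile) / n
    prof = [p - mean for p in profile]            # remove DC offset
    best_k, best_power = 1, -1.0
    for k in range(1, n // 2 + 1):                # one-sided spectrum, skip DC
        coeff = sum(prof[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                    for j in range(n))
        power = abs(coeff) ** 2
        if power > best_power:
            best_k, best_power = k, power
    return best_k / (n * spacing_um)              # cycles per micrometre

# Synthetic profile: 500-um-period waviness sampled every 2.5 um (200 points)
x = [j * 2.5 for j in range(200)]
profile = [5.0 * math.sin(2 * math.pi * xi / 500.0) for xi in x]
f_dom = dominant_roughness_frequency(profile, 2.5)   # expect ~1/500 cycles/um
```

A production implementation would use an FFT library rather than this O(N²) loop, but the deduction is the same: a low-frequency peak in the power spectrum corresponds to long-wavelength waviness of the fracture surface.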
The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of 11. Method of simultaneous measurement of radiative and lattice thermal conductivity. NASA Technical Reports Server (NTRS) Schatz, J. F.; Simmons, G. 1972-01-01 A new technique of high-temperature thermal-conductivity measurement is described. A CO2 gas laser is used to generate a low-frequency temperature wave at one face of a small disk-shaped sample, and an infrared detector views the opposite face to detect the phase of the emerging radiation. A mathematical expression is derived which enables phase data at several frequencies to be used for the simultaneous determination of thermal diffusivity and mean extinction coefficient. Lattice and radiative thermal conductivities are then calculated. Test results for sintered aluminum oxide at temperatures from 530 to 1924 K are within the range of error of previously existing data. 12. Assembly for electrical conductivity measurements in the piston cylinder device DOEpatents Watson, Heather Christine; Roberts, Jeffrey James 2012-06-05 An assembly apparatus for measurement of electrical conductivity or other properties of a sample in a piston cylinder device wherein pressure and heat are applied to the sample by the piston cylinder device. The assembly apparatus includes a body, a first electrode in the body, the first electrode operatively connected to the sample, a first electrical conductor connected to the first electrode, a washer constructed of a hard conducting material, the washer surrounding the first electrical conductor in the body, a second electrode in the body, the second electrode operatively connected to the sample, and a second electrical conductor connected to the second electrode. 13. Electrical conductivity measurements on silicate melts using the loop technique NASA Technical Reports Server (NTRS) Waff, H. S. 
1976-01-01 A new method is described for measurement of the electrical conductivity of silicate melts under controlled oxygen partial pressure at temperatures to 1550 C. The melt samples are suspended as droplets on platinum-rhodium loops, minimizing iron loss from the melt due to alloying with platinum, and providing maximum surface exposure of the melt to the oxygen-buffering gas atmosphere. The latter provides extremely rapid equilibration of the melt with the imposed oxygen partial pressure. The loop technique involves a minimum of setup time and cost, provides reproducible results to within ±5% and is well suited to electrical conductivity studies on silicate melts containing redox cations. 14. System to Measure Thermal Conductivity and Seebeck Coefficient for Thermoelectrics NASA Technical Reports Server (NTRS) Kim, Hyun-Jung; Skuza, Jonathan R.; Park, Yeonjoon; King, Glen C.; Choi, Sang H.; Nagavalli, Anita 2012-01-01 The Seebeck coefficient, when combined with thermal and electrical conductivity, is an essential property measurement for evaluating the potential performance of novel thermoelectric materials. However, there is some question as to which measurement technique(s) provides the most accurate determination of the Seebeck coefficient at elevated temperatures. This has led to the implementation of nonstandardized practices that have further complicated the confirmation of reported high ZT materials. The major objective of the procedure described is for the simultaneous measurement of the Seebeck coefficient and thermal diffusivity within a given temperature range. These thermoelectric measurements must be precise, accurate, and reproducible to ensure meaningful interlaboratory comparison of data. The custom-built thermal characterization system described in this NASA-TM is specifically designed to measure the in-plane thermal diffusivity and the Seebeck coefficient for materials in the range from 73 K through 373 K. 15.
Measurement of the ac Stark shift with a guided matter-wave interferometer Deissler, B.; Hughes, K. J.; Burke, J. H. T.; Sackett, C. A. 2008-03-01 The dynamic polarizability of Rb87 atoms was measured using a guided-wave Bose-Einstein condensate interferometer. Taking advantage of the large arm separations obtainable in our device, a well-calibrated laser beam is applied to one atomic packet and not the other, inducing a differential phase shift. The technique requires relatively low laser intensity and works for arbitrary optical frequencies. For off-resonant light, the ac polarizability is obtained with a statistical accuracy of 3% and a calibration uncertainty of 6%. On resonance, the dispersion-shaped behavior of the Stark shift is observed, but with a broadened linewidth that is attributed to collective light scattering effects. The resulting nonlinearity may prove useful for the production and control of squeezed quantum states. 16. Application of inverse heat conduction problem on temperature measurement Zhang, X.; Zhou, G.; Dong, B.; Li, Q.; Liu, L. Q. 2013-09-01 For regenerative cooling devices, such as G-M refrigerators, pulse tube coolers or thermoacoustic coolers, the oscillating gas inevitably brings about temperature fluctuations, which is harmful in many applications requiring highly stable temperatures. To find out the oscillating mechanism of the cooling temperature and improve the temperature stability of the cooler, the inner temperature of the cold head has to be measured. However, it is difficult to measure the inner oscillating temperature of the cold head directly because invasive temperature detectors may disturb the oscillating flow. Fortunately, the outer surface temperature of the cold head can be measured accurately by invasive temperature measurement techniques.
In this paper, a mathematical model of the inverse heat conduction problem is presented to identify the inner surface oscillating temperature of the cold head according to the measured temperature of the outer surface in a GM cryocooler. The inverse heat conduction problem is solved using a control volume approach. The outer surface oscillating temperature is used as the input condition of the inverse problem, and the inner surface oscillating temperature of the cold head is obtained inversely. A simple uncertainty analysis of the oscillating temperature measurement is also provided. 17. An AC phase measuring interferometer for measuring dn/dT of fused silica and calcium fluoride at 193 nm SciTech Connect Shagam, R.N. 1998-09-01 A novel method for the measurement of the change in index of refraction vs. temperature (dn/dT) of fused silica and calcium fluoride at the 193 nm wavelength has been developed in support of thermal modeling efforts for the development of 193 nm-based photolithographic exposure tools. The method, based upon grating lateral shear interferometry, uses a transmissive linear grating to divide a 193 nm laser beam into several beam paths by diffraction which propagate through separate identical material samples. One diffracted order passing through one sample overlaps the undiffracted beam from a second sample and forms interference fringes dependent upon the optical path difference between the two samples. Optical phase delay due to an index change from heating one of the samples causes the interference fringes to change sinusoidally with phase. The interferometer also makes use of AC phase measurement techniques through lateral translation of the grating. Results for several samples of fused silica and calcium fluoride are demonstrated. 18. Thermal conductivity measurements of particulate materials under Martian conditions NASA Technical Reports Server (NTRS) Presley, M. A.; Christensen, P. R.
1993-01-01 The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurement. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems for gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas particles are then more likely to collide with the solid particles than they are with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material. The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes with little information about the possible distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on the interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes and formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs.
particle size a linear heat source apparatus, similar to that of Cremers, was assembled to 19. Thermal conductivity measurements of proton-heated warm dense matter McKelvey, A.; Fernandez-Panella, A.; Hua, R.; Kim, J.; King, J.; Sio, H.; McGuffey, C.; Kemp, G. E.; Freeman, R. R.; Beg, F. N.; Shepherd, R.; Ping, Y. 2015-06-01 Accurate knowledge of conductivity characteristics in the strongly coupled plasma regime is extremely important for ICF processes such as the onset of hydrodynamic instabilities, thermonuclear burn propagation waves, shell mixing, and efficient x-ray conversion of indirect drive schemes. Recently, an experiment was performed on the Titan laser platform at the Jupiter Laser Facility to measure the thermal conductivity of proton-heated warm dense matter. In the experiment, proton beams generated via target normal sheath acceleration were used to heat bi-layer targets with high-Z front layers and lower-Z back layers. The stopping power of a material is approximately proportional to Z², so a sharp temperature gradient is established between the two materials. The subsequent thermal conduction from the higher-Z material to the lower-Z was measured with time-resolved streaked optical pyrometry (SOP) and Fourier domain interferometry (FDI) of the rear surface. Results will be used to compare predictions from the thermal conduction equation and the Wiedemann-Franz law in the warm dense matter regime. Data from the time-resolved diagnostics for Au/Al and Au/C targets of 20-200 nm thickness will be presented. 20. Thermal Conductivity Based on Modified Laser Flash Measurement NASA Technical Reports Server (NTRS) Lin, Bochuan; Ban, Heng; Li, Chao; Scripa, Rosalia N.; Su, Ching-Hua; Lehoczky, Sandor L. 2005-01-01 The laser flash method is a standard method for thermal diffusivity measurement. It employs single-pulse heating of one side of a thin specimen and measures the temperature response of the other side.
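For context, the standard single-layer laser flash analysis (the classic Parker et al. relation, not the modified two-layer theory this paper develops) deduces the diffusivity from the time for the rear face to reach half its peak temperature rise. A minimal sketch, with illustrative alumina-like numbers rather than data from any of these abstracts:

```python
def laser_flash_diffusivity(thickness_m, t_half_s):
    """Classic Parker relation: alpha = 0.1388 * L**2 / t_half, where
    t_half is the rear-face half-rise time of the temperature signal."""
    return 0.1388 * thickness_m**2 / t_half_s

def conductivity_from_diffusivity(alpha, density, specific_heat):
    """k = alpha * rho * c_p, all SI units."""
    return alpha * density * specific_heat

# Illustrative numbers only: 2-mm sample, 60-ms half-rise time,
# alumina-like density and specific heat (assumed, not from the paper)
alpha = laser_flash_diffusivity(2e-3, 0.060)             # m^2/s
k = conductivity_from_diffusivity(alpha, 3900.0, 880.0)  # W/(m K)
```

The modified method in the abstract replaces this adiabatic single-layer assumption with heat conduction into a transparent reference layer, so both k and alpha can be extracted from one transient.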
The thermal diffusivity of the specimen can be obtained based on a one-dimensional transient heat transfer analysis. This paper reports the development of a theory that includes a transparent reference layer with known thermal properties attached to the back of the sample. With the inclusion of heat conduction from the sample to the reference layer in the theoretical analysis, the thermal conductivity and thermal diffusivity of the sample can be extracted from the temperature response data. Furthermore, a procedure is established to select two points from the data to calculate these properties. The uncertainty analysis indicates that this method can be used with acceptable levels of uncertainty. 1. Heating rate controller for thermally stimulated conductivity and thermoluminescence measurements. NASA Technical Reports Server (NTRS) Manning, E. G.; Littlejohn, M. A.; Oakley, E. M.; Hutchby, J. A. 1972-01-01 A temperature controller is described which enables the temperature of a sample mounted on a cold finger to be varied linearly with time. Heating rates between 0.5 and 10 K/min can be achieved for temperatures between 90 and 300 K. Provision for terminating the sample heating at any temperature between these extremes is available. The temperature can be held at the terminating temperature or be reduced to the starting temperature in a matter of minutes. The controller has been used for thermally stimulated conductivity measurements and should be useful for thermoluminescence measurements as well. 2. The role of probe oxide in local surface conductivity measurements SciTech Connect Barnett, C. J.; Kryvchenkova, O.; Wilson, L. S. J.; Maffeis, T. G. G.; Cobley, R. J.; Kalna, K. 2015-05-07 Local probe methods can be used to measure nanoscale surface conductivity, but some techniques including nanoscale four point probe rely on at least two of the probes forming the same low resistivity non-rectifying contact to the sample.
Here, the role of probe shank oxide has been examined by carrying out contact and non-contact I-V measurements on GaAs when the probe oxide has been controllably reduced, both experimentally and in simulation. In contact, the barrier height is pinned but the barrier shape changes with probe shank oxide dimensions. In non-contact measurements, the oxide modifies the electrostatic interaction inducing a quantum dot that alters the tunneling behavior. For both, the contact resistance change is dependent on polarity, which violates the assumption required for four point probe to remove probe contact resistance from the measured conductivity. This has implications for all nanoscale surface probe measurements and macroscopic four point probe, both in air and vacuum, where the role of probe oxide contamination is not well understood. 3. TRISO fuel compact thermal conductivity measurement instrument development Jensen, Colby Thermal conductivity is an important thermophysical property needed for effectively predicting fuel performance. As part of the Next Generation Nuclear Plant (NGNP) program, the thermal conductivity of tri-isotropic (TRISO) fuel needs to be measured over a temperature range characteristic of its usage. The composite nature of TRISO fuel requires that measurement be performed over the entire length of the compact in a non-destructive manner. No existing measurement system is capable of performing such a measurement. A measurement system has been designed based on the steady-state, guarded-comparative-longitudinal heat flow technique. The system as currently designed is capable of measuring cylindrical samples with diameters ~12.3 mm (~0.5″) with lengths ~25 mm (~1″). The system is currently operable in a temperature range of 400 K to 1100 K for materials with thermal conductivities on the order of 10 W/m/K to 70 W/m/K. The system has been designed, built, and tested.
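In the guarded-comparative-longitudinal technique named above, the sample conductivity follows from equating the steady one-dimensional Fourier heat flux through a reference bar of known conductivity with the flux through the sample in series with it. A generic sketch of that deduction; the geometry and values are illustrative assumptions, not the instrument's specifications:

```python
def comparative_conductivity(k_ref, dT_ref, L_ref, dT_sample, L_sample):
    """Steady 1-D comparative method: the same heat flux q'' passes
    through reference bar and sample in series, so
        k_ref * dT_ref / L_ref = k_sample * dT_sample / L_sample.
    Solve for the sample conductivity k_sample."""
    q_flux = k_ref * dT_ref / L_ref          # W/m^2 through the stack
    return q_flux * L_sample / dT_sample

# Illustrative: a reference bar with k ~ 15 W/m/K (stainless-steel-like)
# in series with a 25-mm sample; temperature drops are assumed values.
k_sample = comparative_conductivity(k_ref=15.0, dT_ref=4.0, L_ref=0.025,
                                    dT_sample=2.0, L_sample=0.025)
```

The "guarded" part of the real instrument (matched guard heaters and insulation) exists to make the 1-D flux assumption in this formula hold; radial losses are the dominant bias error the finite element modeling in the abstract quantifies.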
An uncertainty analysis for the determinate errors of the system has been performed, finding a result of 5.5%. Finite element modeling of the system measurement method has also been accomplished, demonstrating optimal design, operating conditions, and associated bias error. Measurements have been performed on three calibration/validation materials: SS304, 99.95% pure iron, and Inconel 625. In addition, NGNP graphite with ZrO2 particles and NGNP AGR-2 graphite matrix only, both in compact form, have been measured. Results from the SS304 sample show agreement of better than 3% for a 300–600°C temperature range. For iron between 100–600°C, the difference with published values is <8% for all temperatures. The maximum difference from published data for Inconel 625 is 5.8%, near 600°C. Both NGNP samples were measured from 100–800°C. All results are presented and discussed. Finally, a discussion of ongoing work is included as well as a brief discussion of implementation under other operating 4. Silicate bonding properties: Investigation through thermal conductivity measurements Lorenzini, M.; Cesarini, E.; Cagnoli, G.; Campagna, E.; Haughian, K.; Hough, J.; Losurdo, G.; Martelli, F.; Martin, I.; Piergiovanni, F.; Reid, S.; Rowan, S.; van Veggel, A. A.; Vetrano, F. 2010-05-01 A direct approach to reduce the thermal noise contribution to the sensitivity limit of a GW interferometric detector is the cryogenic cooling of the mirrors and mirror suspensions. Future generations of detectors are foreseen to implement this solution. Silicon has been proposed as a candidate material, thanks to its very low intrinsic loss angle at low temperatures and due to its very high thermal conductivity, allowing the heat deposited in the mirrors by high power lasers to be efficiently extracted. To accomplish such a scheme, both mirror masses and suspension elements must be made of silicon, then bonded together forming a quasi-monolithic stage.
Elements can be assembled using hydroxide-catalysis silicate bonding, as for silica monolithic joints. The effect of Si to Si bonding on suspension thermal conductance has therefore to be experimentally studied. A measurement of the effect of silicate bonding on thermal conductance carried out on 1 inch thick silicon bonded samples, from room temperature down to 77 K, is reported. In the explored temperature range, the silicate bonding does not seem to affect the sample conductance in a relevant way. 5. Evaluation of DC electric field distribution of PPLP specimen based on the measurement of electrical conductivity in LN2 Hwang, Jae-Sang; Seong, Jae-Kyu; Shin, Woo-Ju; Lee, Jong-Geon; Cho, Jeon-Wook; Ryoo, Hee-Suk; Lee, Bang-Wook 2013-11-01 High temperature superconducting (HTS) cable has been paid much attention due to its high efficiency and high current transportation capability, and it is also regarded as an eco-friendly power cable for the next generation. Especially for DC HTS cable, it has more sustainable and stable properties compared to AC HTS cable due to the absence of AC loss in DC HTS cable. Recently, DC HTS cable has been investigated competitively all over the world, and one of the key components of DC HTS cable to be developed is a cable joint box considering the HVDC environment. In order to achieve the optimum insulation design of the joint box, analysis of DC electric field distribution of the joint box is a fundamental process to develop DC HTS cable. Generally, AC electric field distribution depends on relative permittivity of dielectric materials but in the case of DC, electrical conductivity of dielectric material is a dominant factor which determines electric field distribution.
In this study, in order to evaluate DC electric field characteristics of the joint box for DC HTS cable, polypropylene laminated paper (PPLP) specimen has been prepared and its DC electric field distribution was analyzed based on the measurement of electrical conductivity of PPLP in liquid nitrogen (LN2). Electrical conductivity of PPLP in LN2 has not been reported yet but it should be measured for DC electric field analysis. The experimental works for measuring electrical conductivity of PPLP in LN2 were presented in this paper. Based on the experimental works, DC electric field distribution of PPLP specimen was fully analyzed considering the steady state and the transient state of DC. Consequently, it was possible to determine the electric field distribution characteristics considering different DC applying stages including DC switching on, DC switching off and polarity reversal conditions. 6. Analysis of measurements of the thermal conductivity of liquid urania SciTech Connect Fink, J.K.; Leibowitz, L. 1984-09-17 An analysis was performed of the three existing measurements of the thermal conductivity and thermal diffusivity of molten uranium dioxide. A transient heat transfer code (THTB) was used for this analysis. A much smaller range of values for thermal conductivity than originally reported was found: the original values ranged from 2.4 to 11 W·m⁻¹·K⁻¹, with a mean of 7.3 W·m⁻¹·K⁻¹, whereas the recalculated values ranged from 4.5 to 6.75 W·m⁻¹·K⁻¹, with a mean of 5.6 W·m⁻¹·K⁻¹. 7. Thermal conductivity measurements in a 2D Yukawa system Nosenko, V.; Ivlev, A.; Zhdanov, S.; Morfill, G.; Goree, J.; Piel, A. 2007-03-01 Thermal conductivity was measured for a 2D Yukawa system. First, we formed a monolayer suspension of microspheres in a plasma, i.e., a dusty plasma, which is like a colloidal suspension, but with an extremely low volume fraction and a partially-ionized rarefied gas instead of solvent.
In the absence of manipulation, the suspension forms a 2D triangular lattice. To melt this lattice and form a liquid, we used a laser-heating method. Two focused laser beams were moved rapidly around in the monolayer. The kinetic temperature of the particles increased with the laser power applied, and above a threshold a melting transition occurred. We used digital video microscopy for direct imaging and particle tracking. The spatial profiles of the particle kinetic temperature were calculated. Using the heat transport equation with an additional term to account for the energy dissipation due to the gas drag, we analyzed the temperature distribution to derive the thermal conductivity. 8. Thermal conductivity measurements of CH and Be by refraction-enhanced x-ray radiography Ping, Yuan; King, Jim; Landen, Otto; Whitley, Heather; London, Rich; Hamel, Sebastien; Sterne, Phil; Panella, Amalia; Freeman, Rick; Collins, Gilbert 2015-06-01 Transport properties of warm dense matter are important for modeling the growth of hydrodynamic instabilities near the fuel-ablator interface in an ICF capsule, which determines the mix level in the fuel and thus is critical for successful ignition. A novel technique, time-resolved refraction-enhanced x-ray radiography, has been developed to study thermal conductivity at an interface. Experiments using the OMEGA laser have been carried out for CH/Be targets isochorically heated by x-rays to measure the evolution of the density gradient at the interface due to thermal conduction. The sensitivity of this radiographic technique to discontinuities enabled observation of shock/rarefaction waves propagating away from the interface. The radiographs provide enough constraints on the temperatures, densities and scale lengths in CH and Be, respectively. Preliminary data analysis suggests that the thermal conductivities of CH and Be at near solid density and a few eV temperature are higher than predictions by the commonly used Lee-More model.
Detailed analysis and comparison with various models will be presented. The work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Security, LLC, Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. 9. Thermal conductivity and emissivity measurements of uranium carbides Corradetti, S.; Manzolaro, M.; Andrighetto, A.; Zanonato, P.; Tusseau-Nenez, S. 2015-10-01 Thermal conductivity and emissivity measurements on different types of uranium carbide are presented, in the context of the ActiLab Work Package in ENSAR, a project within the 7th Framework Program of the European Commission. Two specific techniques were used to carry out the measurements, both taking place in a laboratory dedicated to the research and development of materials for the SPES (Selective Production of Exotic Species) target. In the case of thermal conductivity, estimation of the dependence of this property on temperature was obtained using the inverse parameter estimation method, taking temperature and emissivity measurements as a reference. Emissivity at different temperatures was obtained for several types of uranium carbide using a dual frequency infrared pyrometer. Differences between the analyzed materials are discussed according to their compositional and microstructural properties. Obtaining this type of information can help in the careful design of materials capable of working under extreme conditions in next-generation ISOL (Isotope Separation On-Line) facilities for the generation of radioactive ion beams. 10. The unsaturated hydraulic conductivity: measurement and non-equilibrium effects Weller, U.; Vogel, H. 2010-12-01 The unsaturated hydraulic conductivity of porous media is a central item in hydraulic modeling. It is hard to measure and therefore in most applications it is represented by some kind of model based on indirect measurements. The validity is hardly ever checked.
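The kind of "model based on indirect measurements" that the unsaturated-conductivity abstract contrasts with direct measurement is commonly the van Genuchten–Mualem closed form, which predicts K(h) from water-retention parameters. A minimal sketch with assumed sand-like parameter values, not data from the paper:

```python
def vgm_conductivity(h, Ks, alpha, n):
    """van Genuchten-Mualem unsaturated hydraulic conductivity.
    h        : suction head (positive, same length units as 1/alpha)
    Ks       : saturated conductivity
    alpha, n : van Genuchten shape parameters, with m = 1 - 1/n."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation, 0..1
    # Mualem pore-connectivity model with tortuosity exponent 0.5
    return Ks * Se ** 0.5 * (1.0 - (1.0 - Se ** (1.0 / m)) ** m) ** 2

# Assumed sand-like parameters: Ks = 100 cm/d, alpha = 0.1 1/cm, n = 2.5
K_wet = vgm_conductivity(10.0, 100.0, 0.1, 2.5)   # near saturation
K_dry = vgm_conductivity(100.0, 100.0, 0.1, 2.5)  # much drier
```

The steep drop of K between the two suctions illustrates why indirect parameterizations are risky: small errors in alpha or n translate into order-of-magnitude errors in K(h), which is exactly the motivation for the direct steady-flux measurement the abstract describes.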
We have developed a fairly easy and automatic measurement procedure that allows the hydraulic conductivity of a sample to be determined directly at different water potentials. The sample is placed on a plate where the potential can be controlled. It is then irrigated from the top with a constant, predefined flow rate. Tensiometers control the water potential within the sample; the topmost one is used to steer the potential at the lower boundary. It can be seen that the sample equilibrates fairly quickly to a constant potential throughout the sample, and thus the conductivity of the material at the measured potential is equal to the applied flux while gravity is the only driving force. The change in water content is monitored by a balance. We have measured several materials, soils and sand substrates, with a protocol where we first lower the flux stepwise and then raise it again. The samples quickly reach equilibrium, as can be seen by the control tensiometer. Coming from the wet side, with a high flux, and lowering this flux, we observe a fast drop in potential, and in water content. But then the water potential rises again, while the water content remains constant or even drops slightly. When raising the flux again, we observe the opposite effect, but less pronounced: after the initial rise in water potential while the system adapts to the new flow rate, the potential lowers slightly. This behavior occurs over a certain range of water potential; it is not present in the very wet or very dry range. Also, the range in which it occurs and the magnitude of the effect depend on the studied material: pure sands express the pattern very clearly, it is much less obvious in loamy soils. Also, the relation between water 11. Experimental measurements of the thermal conductivity of ash deposits: Part 1. Measurement technique SciTech Connect A. L. Robinson; S. G. Buckley; N. Yang; L. L.
Baxter 2000-04-01 This paper describes a technique developed to make in situ, time-resolved measurements of the effective thermal conductivity of ash deposits formed under conditions that closely replicate those found in the convective pass of a commercial boiler. Since ash deposit thermal conductivity is thought to be strongly dependent on deposit microstructure, the technique is designed to minimize the disturbance of the natural deposit microstructure. Traditional techniques for measuring deposit thermal conductivity generally do not preserve the sample microstructure. Experiments are described that demonstrate the technique, quantify experimental uncertainty, and determine the thermal conductivity of highly porous, unsintered deposits. The average measured conductivity of loose, unsintered deposits is 0.14 ± 0.03 W/(m K), approximately midway between rational theoretical limits for deposit thermal conductivity. 12. Apparent thermal conductivity measurements by an unguarded technique Graves, R. S.; Yarbrough, D. W.; McElroy, D. L. An unguarded longitudinal heat flow apparatus for measuring the apparent thermal conductivity (λa) of insulations was tested. Heat flow is provided by a horizontal electrically heated Nichrome screen sandwiched between test samples that are bounded by temperature controlled copper plates and 9 cm of mineral fiber insulation. A determinate error analysis shows λa measurement uncertainty to be less than ±1.7% for insulating materials as thin as 3 cm. Three-dimensional thermal modeling indicates negligible error in λa due to edge loss for insulations up to 7.62 cm thick when the temperature difference across the sample is measured at the screen center. System repeatability and reproducibility were determined to be ±0.2%.
Differences of λ_a results from the screen tester and results from the National Bureau of Standards were 0.1% for a 10 kg/m³ Calibration Transfer Standard and 0.9% for 127 kg/m³ fibrous glass board (SRM 1450b). Measurements on fiberglass and rock wool batt insulations showed the dependence of λ_a on density, temperature, temperature difference, plate emittance, and heat flow direction. Results obtained for λ_a as a function of density at 24 °C differed by less than 2% from values obtained with a guarded hot plate. It is demonstrated that this simple technique has the accuracy and sensitivity needed for useful λ_a measurements on thermal insulating materials. 13. Measurement of klystron phase modulation due to ac-powered filaments NASA Technical Reports Server (NTRS) Finnegan, E. J. 1977-01-01 A technique for determining the intermodulation components in the RF spectrum of the S-band radar transmitter generated by having the klystron filaments heated by 400-Hz ac power is described. When the klystron is being operated with 400-Hz (ac) on the filament, the IPM is buried in the 400-Hz equipment interference noise. The modulation sidebands were separated and identified and found to be -67 dB below the main carrier. This is well below the transmitter specifications, and operating the filaments on ac would not degrade the spectrum to where it would be detrimental to the radiated RF. 14. Quantitative measurements of root water uptake and root hydraulic conductivities Zarebanadkouki, Mohsen; Javaux, Mathieu; Meunier, Felicien; Couvreur, Valentin; Carminati, Andrea 2016-04-01 How is root water uptake distributed along the root system and what root properties control this distribution? Here we present a method to: 1) measure root water uptake and 2) inversely estimate the root hydraulic conductivities. The experimental method consists in using neutron radiography to trace deuterated water (D2O) in soil and roots.
The method was applied to lupines grown in aluminium containers filled with a sandy soil. When the lupines were 4 weeks old, D2O was locally injected into selected soil regions and its transport was monitored in soil and roots using time-series neutron radiography. By image processing, we quantified the concentration of D2O in soil and roots. We simulated the transport of D2O into roots using a diffusion-convection numerical model. The diffusivity of the root tissue was inversely estimated by simulating the transport of D2O into the roots during the night. The convective fluxes (i.e. root water uptake) were inversely estimated by fitting the experiments during the day, when plants were transpiring, and assuming that root diffusivity did not change. The results showed that root water uptake was not uniform along the roots. Water uptake was higher at the proximal parts of the lateral roots and it decreased by a factor of 10 towards the distal parts. We used the data of water fluxes to inversely estimate the profile of hydraulic conductivities along the roots of transpiring plants growing in soil. The water fluxes in the lupine roots were simulated using the Hydraulic Tree Model by Doussan et al. (1998). The fitting parameters to be adjusted were the radial and axial hydraulic conductivities of the roots. The results showed that by using the root architectural model of Doussan et al. (1998) and detailed information of water fluxes into different root segments we could estimate the profile of hydraulic conductivities along the roots. We also found that: (1) in a tap-rooted plant like lupine, water is mostly taken up by lateral roots; (2) water 15. Contactless electrical conductivity measurement of metallic submicron-grain material: Application to the study of aluminum with severe plastic deformation.
PubMed Mito, M; Matsui, H; Yoshida, T; Anami, T; Tsuruta, K; Deguchi, H; Iwamoto, T; Terada, D; Miyajima, Y; Tsuji, N 2016-05-01 We measured the electrical conductivity σ of an aluminum specimen consisting of submicron grains by observing the AC magnetic susceptibility resulting from the eddy current. By using a commercial platform for magnetic measurement, contactless measurement of the relative electrical conductivity σn of a nonmagnetic metal is possible over a wide temperature (T) range. By referring to σ at room temperature, obtained by the four-terminal method, σn(T) was transformed into σ(T). This approach is useful for cylindrical specimens, for which estimating the radius and/or volume is difficult. An experiment in which aluminum underwent accumulative roll bonding, which is a severe plastic deformation process, validated this method of evaluating σ as a function of the fraction of high-angle grain boundaries. PMID:27250440 16. High-Resolution ac Measurements of the Hall Effect in Organic Field-Effect Transistors Chen, Y.; Yi, H. T.; Podzorov, V. 2016-03-01 We describe a high resolving power technique for Hall-effect measurements, efficient in determining Hall mobility and carrier density in organic field-effect transistors and other low-mobility systems. We utilize a small low-frequency ac magnetic field (Brms < 0.25 T) and a phase-sensitive (lock-in) detection of Hall voltage, with the necessary corrections for Faraday induction. This method significantly enhances the signal-to-noise ratio and eliminates the necessity of using high magnetic fields in Hall-effect studies. With the help of this method, we are able to obtain the Hall mobility and carrier density in organic transistors with a mobility as low as μ ~ 0.3 cm2 V-1 s-1 by using a compact desktop apparatus and low magnetic fields.
We find a good agreement between Hall-effect and electric-field-effect measurements, indicating that, contrary to the common belief, certain organic semiconductors with mobilities below 1 cm2 V-1 s-1 can still exhibit a fully developed, band-semiconductor-like Hall effect, with the Hall mobility and carrier density matching those obtained in longitudinal transistor measurements. This suggests that, even when μ < 1 cm2 V-1 s-1, charges in organic semiconductors can still behave as delocalized coherent carriers. This technique paves the way to ubiquitous Hall-effect studies in a wide range of low-mobility materials and devices, where it is typically very difficult to resolve the Hall effect even in very high dc magnetic fields. 17. Structural characterization, thermal, ac conductivity and dielectric properties of (C7H12N2)2[SnCl6]Cl2.1.5H2O Hajji, Rachid; Oueslati, Abderrazek; Hajlaoui, Fadhel; Bulou, Alain; Hlel, Faouzi 2016-05-01 (C7H12N2)2[SnCl6]Cl2.1.5H2O crystallizes at room temperature in the monoclinic system (space group P21/n). The isolated molecules form organic and inorganic layers parallel to the (a, b) plane and alternate along the c-axis. The inorganic layer is built up by isolated SnCl6 octahedra. The organic layer is formed by 2,4-diammonium toluene cations, between which the spaces are filled with free Cl- ions and water molecules. The crystal packing is governed by ionic N-H...Cl and Ow-H...Cl hydrogen bonds, forming a three-dimensional network. The thermal study of this compound is reported, revealing two phase transitions around 360(±3) and 412(±3) K. Electrical and dielectric measurements confirm the transition temperatures detected by differential scanning calorimetry (DSC). The frequency dependence of ac conductivity at different temperatures indicates that the correlated barrier hopping (CBH) model is the probable mechanism for the ac conduction behavior. 18.
Use of an advanced composite material in construction of a high pressure cell for magnetic ac susceptibility measurements Wang, X.; Misek, M.; Jacobsen, M. K.; Kamenev, K. V. 2014-10-01 The applicability of fibre-reinforced polymers for fabrication of high pressure cells was assessed using finite element analysis and experimental testing. Performance and failure modes for the key components of the cell working in tension and in compression were evaluated and the ways for optimising the designs were established. These models were used in construction of a miniature fully non-metallic diamond anvil cell for magnetic ac susceptibility measurements in a magnetic property measurement system. The cell is approximately 14 mm long, 8.5 mm in diameter and was demonstrated to reach a pressure of 5.6 GPa. AC susceptibility data collected on Dy2O3 demonstrate the performance of the cell in magnetic property measurements and confirm that there is no screening of the sample by the environment which typically accompanies the use of conventional metallic high pressure cells in oscillating magnetic fields. 19. Electrical conductivity measurements of bacterial nanowires from Pseudomonas aeruginosa Maruthupandy, Muthusamy; Anand, Muthusamy; Maduraiveeran, Govindhan; Sait Hameedha Beevi, Akbar; Jeeva Priya, Radhakrishnan 2015-12-01 The extracellular appendages of bacteria (flagella) that transfer electrons to electrodes are called bacterial nanowires. This study focuses on the isolation and separation of nanowires produced by a Pseudomonas aeruginosa bacterial culture. The size and roughness of separated nanowires were measured using transmission electron microscopy (TEM) and atomic force microscopy (AFM), respectively. TEM imaging showed clear bacterial nanowires measuring 16 nm in diameter.
The formation of bacterial nanowires was confirmed by microscopic studies (AFM and TEM) and the conducting nature of the bacterial nanowires was investigated by electrochemical techniques. Cyclic voltammetry (CV) and electrochemical impedance spectroscopy (EIS), which are nondestructive voltammetry techniques, suggest that bacterial nanowires could be the source of electrons, which may be used in various applications, for example, microbial fuel cells, biosensors, organic solar cells, and bioelectronic devices. Routine analysis of electron transfer between bacterial nanowires and the electrode was performed, providing insight into the extracellular electron transfer (EET) to the electrode. CV revealed the catalytic electron transferability of bacterial nanowires and electrodes and showed excellent redox activities. CV and EIS studies showed that bacterial nanowires can charge the surface by producing and storing sufficient electrons, behave as a capacitor, and have features consistent with EET. Finally, electrochemical studies confirmed the development of bacterial nanowires with EET. This study suggests that bacterial nanowires can be used to fabricate biomolecular sensors and nanoelectronic devices. 20. Scanning Ion Conductance Microscopy for living cell membrane potential measurement Panday, Namuna Recently, the existence of multiple micro-domains of extracellular potential around individual cells has been revealed by voltage reporter dyes using fluorescence microscopy. One hypothesis is that these long-lasting potential patterns play a vital role in regulating important cell activities such as embryonic patterning, regenerative repair and reduction of cancerous disorganization. We used multifunctional Scanning Ion Conductance Microscopy (SICM) to study these extracellular potential patterns of single cells with higher spatial resolution.
To validate this novel technique, we compared the extracellular potential distribution on a fixed HeLa cell surface and a polydimethylsiloxane (PDMS) surface and found a significant difference. We then measured the extracellular potential distributions of living melanocytes and melanoma cells and found that both the mean magnitude and the spatial variation of extracellular potential of the melanoma cells are larger than those of melanocytes. As compared to the voltage reporter dye based fluorescence microscope method, SICM can achieve quantitative potential measurements of non-labeled living cell membranes with higher spatial resolution. 1. Thermal contact conductance measurements on Doublet III armor tile graphite SciTech Connect Doll, D.W.; Reis, E. 1983-12-01 Several tests were performed on the Doublet III wall armor tiles to determine the cool-down rate and to evaluate improvements made by changing the conditions at the interface between the graphite tile and the stainless steel backing plate. Thermal diffusivity tests were performed in vacuum on both TiC coated and bare graphite tiles with and without 0.13 mm (0.005'') thick silver foil at the interface. The results of the armor tile cool-down tests showed improvement when a 0.13 mm (0.005'') silver foil is used at the interface. At 2.1 x 10^5 Pa (30 psi) contact pressure, the e-folding cool-down times for a TiC coated tile, bare graphite and bare graphite with a 0.06 mm (0.0035'') silver shim were 10 min., 5.0 min., and 4.1 min., respectively. Tests using high contact pressures showed that the cool-down rates converged to approx. 4.0 min. At this limit, the conduction path along the backing plate to the two cooling tubes controls the heat flow, and no further improvement could be expected.
Thermal diffusivity measurements confirmed the results of the cool-down test showing that by introducing a silver foil at the interface, the contact conductance between Poco AXF-5Q graphite and 316 stainless steel could be improved by a factor of three to eight. The tests showed an increasing improvement over a range of temperatures from 25 °C to 400 °C. The data provide a technical basis for further applications of graphite tiles to cooled backing plates. 2. Improved direct measurement of A(b) and A(c) at the Z(0) pole using a lepton tag. PubMed Abe, Kenji; Abe, Koya; Abe, T; Adam, I; Akimoto, H; Aston, D; Baird, K G; Baltay, C; Band, H R; Barklow, T L; Bauer, J M; Bellodi, G; Berger, R; Blaylock, G; Bogart, J R; Bower, G R; Brau, J E; Breidenbach, M; Bugg, W M; Burke, D; Burnett, T H; Burrows, P N; Calcaterra, A; Cassell, R; Chou, A; Cohn, H O; Coller, J A; Convery, M R; Cook, V; Cowan, R F; Crawford, G; Damerell, C J S; Daoudi, M; de Groot, N; de Sangro, R; Dong, D N; Doser, M; Dubois, R; Erofeeva, I; Eschenburg, V; Fahey, S; Falciai, D; Fernandez, J P; Flood, K; Frey, R; Hart, E L; Hasuko, K; Hertzbach, S S; Huffer, M E; Huynh, X; Iwasaki, M; Jackson, D J; Jacques, P; Jaros, J A; Jiang, Z Y; Johnson, A S; Johnson, J R; Kajikawa, R; Kalelkar, M; Kang, H J; Kofler, R R; Kroeger, R S; Langston, M; Leith, D W G; Lia, V; Lin, C; Mancinelli, G; Manly, S; Mantovani, G; Markiewicz, T W; Maruyama, T; McKemey, A K; Messner, R; Moffeit, K C; Moore, T B; Morii, M; Muller, D; Murzin, V; Narita, S; Nauenberg, U; Neal, H; Nesom, G; Oishi, N; Onoprienko, D; Osborne, L S; Panvini, R S; Park, C H; Peruzzi, I; Piccolo, M; Piemontese, L; Plano, R J; Prepost, R; Prescott, C Y; Ratcliff, B N; Reidy, J; Reinertsen, P L; Rochester, L S; Rowson, P C; Russell, J J; Saxton, O H; Schalk, T; Schumm, B A; Schwiening, J; Serbo, V V; Shapiro, G; Sinev, N B; Snyder, J A; Staengle, H; Stahl, A; Stamer, P; Steiner, H; Su, D; Suekane, F; Sugiyama, A; Suzuki, S; Swartz, M; Taylor, F E;
Thom, J; Torrence, E; Usher, T; Va'vra, J; Verdier, R; Wagner, D L; Waite, A P; Walston, S; Weidemann, A W; Weiss, E R; Whitaker, J S; Williams, S H; Willocq, S; Wilson, R J; Wisniewski, W J; Wittlin, J L; Woods, M; Wright, T R; Yamamoto, R K; Yashima, J; Yellin, S J; Young, C C; Yuta, H 2002-04-15 The parity violation parameters A(b) and A(c) of the Zb(b) and Zc(c) couplings have been measured directly, using the polar angle dependence of the polarized cross sections at the Z(0) pole. Bottom and charmed hadrons were tagged via their semileptonic decays. Both the electron and muon analyses take advantage of new multivariate techniques to increase the analyzing power. Based on the 1993-1998 SLD sample of 550,000 Z(0) decays produced with highly polarized electron beams, we measure A(b) = 0.919+/-0.030(stat)+/-0.024(syst), and A(c) = 0.583+/-0.055(stat)+/-0.055(syst). PMID:11955189 3. AC conductivity and relaxation mechanism in (Nd1/2Li1/2)(Fe1/2V1/2)O3 ceramics Nath, Susmita; Barik, Subrat Kumar; Choudhary, R. N. P. 2016-05-01 In the present study we have synthesized a polycrystalline sample of (Nd1/2Li1/2)(Fe1/2V1/2)O3 ceramic by a standard high-temperature solid-state reaction technique. Studies of the dielectric and electrical properties of the compound have been carried out over a wide range of temperature (RT - 400 °C) and frequency (1 kHz - 1 MHz) using the complex impedance spectroscopic technique. The imaginary vs. real component of the complex impedance plot (Nyquist plot) of the prepared sample exhibits grain and grain boundary contributions to the complex electrical parameters, and a semiconductor-like negative temperature coefficient of resistance (NTCR) behavior. A detailed study of the ac conductivity plot reveals that the material obeys Jonscher's universal power law. 4. Analyzing the Effects of Capacitances-to-Shield in Sample Probes on AC Quantized Hall Resistance Measurements PubMed Central Cage, M. E.; Jeffery, A.
1999-01-01 We analyze the effects of the large capacitances-to-shields existing in all sample probes on measurements of the ac quantized Hall resistance RH. The object of this analysis is to investigate how these capacitances affect the observed frequency dependence of RH. Our goal is to see if there is some way to eliminate or minimize this significant frequency dependence, and thereby realize an intrinsic ac quantized Hall resistance standard. Equivalent electrical circuits are used in this analysis, with circuit components consisting of: capacitances and leakage resistances to the sample probe shields; inductances and resistances of the sample probe leads; quantized Hall resistances, longitudinal resistances, and voltage generators within the quantum Hall effect device; and multiple connections to the device. We derive exact algebraic equations for the measured RH values expressed in terms of the circuit components. Only two circuits (with single-series “offset” and quadruple-series connections) appear to meet our desired goals of measuring both RH and the longitudinal resistance Rx in the same cool-down for both ac and dc currents with a one-standard-deviation uncertainty of 10^-8 RH or less. These two circuits will be further considered in a future paper in which the effects of wire-to-wire capacitances are also included in the analysis. 5. Incorporating residential AC load control into ancillary service markets: Measurement and settlement SciTech Connect Bode, Josh L.; Sullivan, Michael J.; Berghman, Dries; Eto, Joseph H. 2013-05-01 Many pre-existing air conditioner load control programs can provide valuable operational flexibility but have not been incorporated into electricity ancillary service markets or grid operations. Multiple demonstrations have shown that residential air conditioner (AC) response can deliver resources quickly and can provide contingency reserves.
A key policy hurdle to be overcome before AC load control can be fully incorporated into markets is how to balance the accuracy, cost, and complexity of methods available for the settlement of load curtailment. Overcoming this hurdle requires a means for assessing the accuracy of shorter-term AC load control demand reduction estimation approaches in an unbiased manner. This paper applies such a method to compare the accuracy of approaches varying in cost and complexity, including regression analysis, load matching and control group approaches, using feeder data, household data and AC end-use data. We recommend a practical approach for settlement, relying on an annually updated set of tables, with pre-calculated reduction estimates. These tables allow users to look up the demand reduction per device based on daily maximum temperature, geographic region and hour of day, simplifying settlement and providing a solution to the policy problem presented in this paper. 6. Steady heat conduction-based thermal conductivity measurement of single walled carbon nanotubes thin film using a micropipette thermal sensor Shrestha, R.; Lee, K. M.; Chang, W. S.; Kim, D. S.; Rhee, G. H.; Choi, T. Y. 2013-03-01 In this paper, we describe the thermal conductivity measurement of single-walled carbon nanotubes thin film using a laser point source-based steady state heat conduction method. A high precision micropipette thermal sensor fabricated with a sensing tip size varying from 2 μm to 5 μm and capable of measuring thermal fluctuation with resolution of ±0.01 K was used to measure the temperature gradient across the suspended carbon nanotubes (CNT) film with a thickness of 100 nm. We used a steady heat conduction model to correlate the temperature gradient to the thermal conductivity of the film. We measured the average thermal conductivity of CNT film as 74.3 ± 7.9 W m-1 K-1 at room temperature. 7.
Steady heat conduction-based thermal conductivity measurement of single walled carbon nanotubes thin film using a micropipette thermal sensor. PubMed Shrestha, R; Lee, K M; Chang, W S; Kim, D S; Rhee, G H; Choi, T Y 2013-03-01 In this paper, we describe the thermal conductivity measurement of single-walled carbon nanotubes thin film using a laser point source-based steady state heat conduction method. A high precision micropipette thermal sensor fabricated with a sensing tip size varying from 2 μm to 5 μm and capable of measuring thermal fluctuation with resolution of ±0.01 K was used to measure the temperature gradient across the suspended carbon nanotubes (CNT) film with a thickness of 100 nm. We used a steady heat conduction model to correlate the temperature gradient to the thermal conductivity of the film. We measured the average thermal conductivity of CNT film as 74.3 ± 7.9 W m-1 K-1 at room temperature. PMID:23556837 8. Synthesis and characterization of cancrinite-type zeolite, and its ionic conductivity study by AC impedance analysis Kriaa, A.; Ben Saad, K.; Hamzaoui, A. H. 2012-12-01 The synthesis of cancrinite in the system NaOH-SiO2-Al2O3-NaHCO3-H2O was performed, according to methods described in the literature, in an autoclave under hydrothermal conditions at T = 473 K. The electrical properties of cancrinite-type zeolite pellets were investigated by complex impedance spectroscopy in the temperature range 465-800°C. The effect of temperature on impedance parameters was studied using an impedance analyzer in a wide frequency range (1 Hz to 13 MHz). The real and imaginary parts of the complex impedance, plotted in the complex plane, trace semicircles. The bulk resistance of the material decreases with rising temperature. This exhibits a typical negative temperature coefficient of resistance (NTCR) behavior of the material. The results of bulk electrical conductivity and its activation energy are presented.
The modulus analysis suggests that the electrical transport processes in the material are very likely to be of an electronic nature. Relaxation frequencies follow an Arrhenius behavior with activation energy values not comparable to those found for the electrical conductivity. 9. SPHINX Measurements of Radiation Induced Conductivity of Foam SciTech Connect Ballard, W.P.; Beutler, D.E.; Burt, M.; Dudley, K.J.; Stringer, T.A. 1998-12-14 Experiments on the SPHINX accelerator studying radiation-induced conductivity (RIC) in foam indicate that a field-exclusion boundary layer model better describes foam than a Maxwell-Garnett model that treats the conducting gas bubbles in the foam as modifying the dielectric constant. In both cases, wall attachment effects could be important but were neglected. 10. Conductivity tensor of graphene through reflection of microwave measurements
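Several of the steady-state measurements in this list (e.g. the micropipette-sensor CNT film entries) reduce to Fourier's conduction law for a slab, k = Q·L/(A·ΔT). A minimal Python sketch; the geometry and heat-flow numbers below are illustrative assumptions, not values taken from the abstracts:

```python
def thermal_conductivity(heat_flow_w, length_m, area_m2, delta_t_k):
    """Steady-state 1-D Fourier conduction: k = Q * L / (A * dT)."""
    return heat_flow_w * length_m / (area_m2 * delta_t_k)

# Illustrative slab; all four numbers are assumptions for demonstration only.
Q = 7.43    # W, steady heat flow through the sample
L = 0.01    # m, conduction path length
A = 1e-4    # m^2, cross-sectional area
dT = 10.0   # K, measured temperature drop across the sample

k = thermal_conductivity(Q, L, A, dT)
print(f"k = {k:.1f} W/(m K)")  # k = 74.3 W/(m K)
```

The same relation, solved for ΔT, is how such measurements are sized: the temperature drop must be large enough to resolve against the sensor's ±0.01 K resolution.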
http://math.stackexchange.com/questions/490343/nine-digit-sequences-with-exactly-one-zero-two-ones-three-twos
# Nine digit sequences with exactly one zero, two ones, three twos I'm working on a problem where I am to find the number of nine digit sequences when there are exactly one zero, two ones and three twos. I worked up a solution, but is it correct? Here's my line of thinking: There are six fixed numbers, which leaves three free. These free digits can be 3, 4, ..., 9. They can be chosen in $7^3$ different ways. As order matters, 9 objects may be permuted in $9!$ ways. However, there are two ones and three twos, which forces us to divide by $2!3!$. All in all, this gives us $$\frac{7^3 9!}{2!3!}$$ This is under the assumption that a zero can come first. The question doesn't address this, but if it cannot then we have the same situation as above only that the zero must not come first. So from $\frac{7^3 9!}{2!3!}$ we need to subtract the disallowed situations. Fixing the first number to zero leaves us 8 numbers to permute giving a final count of: $$\frac{7^3 9!}{2!3!}-\frac{7^3 8!}{2!3!}=\frac{7^3}{12}\left(9!-8!\right)$$ Is this correct? Am I missing something? As I wrote it down here I realized I should probably divide by something more -- don't I, for example, need to account for the cases when the free digits are not unique? - The positions for the symbols in $\{0,1,2\}$ can be chosen in $\binom{9}{6}=84$ ways. Once chosen, the symbols $0,1,1,2,2,2$ can be arranged in those cells in $\binom{6}{1,2,3}=60$ ways, using the multinomial coefficient. The remaining $3$ cells must be assigned symbols from $\{3,4,\ldots,9\}$, which can be achieved in $7^3$ ways. So, in total we have $$\binom{9}{6}\binom{6}{1,2,3}7^3=1728720$$ such sequences. - Am I correct? We seem to differ –  Willemien Sep 11 '13 at 17:42 @Willemien no, you have the same answers. ${9 \choose 6}{6 \choose 1, 2, 3}7^3=\frac{9!}{6!3!}\frac{6!}{1!2!3!}7^3=\frac{9!7^3}{2!3!3!}$ which you also got. Thanks both of you!
–  hejseb Sep 13 '13 at 8:35 Sorry, you are missing something. See it as 4 groups: • one 0 • two 1s • three 2s • three others For the 3 others there are $7^3$ possibilities, and for the groups $$\frac{9!}{2!3!3!}$$ so in total $$\frac{7^3 9!}{2!3!3!}$$ (you are a factor of 6 off). If the first number cannot be 0, multiply it by 8 and divide by 9. Good luck -
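Both answers can be cross-checked numerically; a small sketch (using Python's stdlib `math.comb`, available from 3.8) computes the count via the binomial/multinomial route and via the factorial form, plus the no-leading-zero variant:

```python
from math import comb, factorial

# Choose the 6 cells for the fixed digits, arrange 0,1,1,2,2,2 in them
# (multinomial coefficient), then fill the 3 remaining cells from {3,...,9}.
via_binomials = comb(9, 6) * (factorial(6) // (factorial(1) * factorial(2) * factorial(3))) * 7**3

# Equivalent factorial form from the second answer: 7^3 * 9! / (2! * 3! * 3!)
via_factorials = 7**3 * factorial(9) // (factorial(2) * factorial(3) * factorial(3))

assert via_binomials == via_factorials == 1728720

# By symmetry, the zero lands in each of the 9 positions equally often,
# so forbidding a leading zero keeps 8/9 of the sequences:
no_leading_zero = via_binomials * 8 // 9
print(via_binomials, no_leading_zero)  # 1728720 1536640
```

This confirms the accepted count and hejseb's "multiply by 8, divide by 9" remark for the no-leading-zero case.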
http://www.aiaccess.net/English/Glossaries/GlosMod/e_gm_multinormal_distri.htm
Multivariate normal distribution We suggest that you first refer to the entry about the bivariate normal distribution. ----- The multivariate normal distribution (or "multinormal distribution") is the most important multidimensional distribution. It is common for real world data to be at least approximately multinormally distributed, and some techniques like Discriminant Analysis explicitly make this assumption. # Definition of the multivariate normal distribution Our goal is to generalize the ordinary (univariate) normal distribution to random vectors. This can be done in several different ways. ## The wrong way We first insist on what the multivariate normal distribution is not. A natural idea would be to "define" the multivariate normal distribution as a distribution whose marginals are all normal. But we know that there exist bivariate distributions whose marginals are normal, and yet that are not binormal. We therefore have to forgo this erroneous approach. ## The bivariate normal distribution We defined the bivariate normal distribution by introducing an adjustable correlation coefficient between its two normal marginals. Generalizing this approach to more than two variates is cumbersome, and will not be pursued. ## Linear combinations of the marginals It is possible to define the multivariate normal distribution as a distribution such that any linear combination of its marginals is (univariate) normal. We will not use this definition, but we'll show that this result is indeed a characteristic property of the multivariate normal distribution. ## Transformation of the standard spherical multinormal distribution The standard spherical multivariate normal distribution is defined as the joint distribution of p independent univariate standard normal variables. The (general) multivariate normal distribution may be defined as the transform of this standard spherical distribution under any regular linear transformation.
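The last construction above (a regular linear transform of the standard spherical distribution) can be sketched numerically. The matrix L and shift b below are arbitrary illustrations; the point is that x = b + Lz has mean b and covariance matrix LL' :

```python
import random

# Arbitrary regular (invertible) linear map L and shift b, p = 2.
L = [[2.0, 0.0],
     [1.0, 1.5]]
b = [1.0, -2.0]

def transform(z):
    """x = b + L z : one draw from the transformed distribution."""
    return [b[i] + sum(L[i][j] * z[j] for j in range(2)) for i in range(2)]

def cov_from_L():
    """Theoretical covariance matrix L L' of the transformed vector."""
    return [[sum(L[i][k] * L[j][k] for k in range(2)) for j in range(2)]
            for i in range(2)]

print(cov_from_L())  # [[4.0, 2.0], [2.0, 3.25]]

# Empirical check: random.gauss draws independent standard normals,
# i.e. the standard spherical distribution componentwise.
random.seed(1)
n = 20000
xs = [transform([random.gauss(0.0, 1.0), random.gauss(0.0, 1.0)]) for _ in range(n)]
mean = [sum(x[i] for x in xs) / n for i in range(2)]
print(mean)  # close to b = [1.0, -2.0], up to Monte Carlo error
```

Any symmetric positive definite covariance can be reached this way, since it factors as LL' (e.g. by Cholesky decomposition), which is why this definition is equivalent to the density-based one used below.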
## Formal generalization of the univariate normal distribution In this Glossary, we define the multivariate normal distribution by the analytical form of its probability density, which we'll choose to be a formal generalization of the univariate case. Recall that the normal distribution is : f(x) = k.exp[-1/2.a(x - b)²] where k and a are appropriate coefficients (see here). This expression is generalized to the multidimensional case as follows : * The variable x is replaced by the vector  x = {x1, x2, ..., xp} with p components. * The normalization coefficient k is replaced by the coefficient K whose role is also to make the integral of the density equal to 1. * The term a(x - b)² is replaced by a quadratic form in (x - b) : (x - b)A(x - b)' where b is a vector. In the univariate case, we have a = 1/σ², which is always positive. By analogy, we require the matrix A to be symmetric positive definite, a property that generalizes the notion of "positiveness" to matrices. ----- So, by definition, the distribution of the random vector X = {X1, X2, ..., Xp} is said to be a multivariate normal distribution if its probability density is : f(x) = K.exp[-1/2.(x - b)A(x - b)'] with A a symmetric positive definite matrix. # Basic properties of the multivariate normal distribution In the Tutorial below, we'll establish the following results : ## Normalization coefficient Recall that the normalisation coefficient of the (univariate) normal distribution is : k = 1/(σ√(2π)) We'll show that the normalisation coefficient of the multivariate normal distribution is : K = (2π)^(-p/2).[det(Σ)]^(-1/2) where "det" stands for "determinant" and Σ is the covariance matrix introduced just below. ## Mean Recall that the mean µ of the (univariate) normal distribution is equal to b. We'll establish that the mean vector of the multivariate normal distribution is equal to the vector b. E[X] = b Therefore, we'll later on replace b by µ. ## Covariance matrix In the univariate case, we have : a = 1/σ² and we'll show that in the multivariate case, we have : A = Σ⁻¹ where Σ is the covariance matrix of X.
## Final form

Bringing these results together, we'll obtain the following final result. The density of a multivariate normal vector is:

f(x) = (2π)^(-p/2).[det(Σ)]^(-1/2).exp[-1/2.(x - µ)Σ⁻¹(x - µ)']

which will be denoted N(µ, Σ) by analogy with the notation N(µ, σ²) used for the univariate normal distribution.

-----

So there's a perfect analogy with the univariate case, with the covariance matrix Σ now playing the role that σ² was playing in the univariate case. Of course, this expression reduces to the ordinary normal distribution when the "vector" X is reduced to a single component.

# Marginal distributions of the multivariate normal distribution

Let X = {X1, X2, ..., Xp} be a random vector. A marginal distribution of X is the joint distribution of any subset of the variables (X1, X2, ..., Xp). So there are as many marginal distributions as there are such subsets, that is 2^p - 2 (ignoring the empty and the complete subsets of variables).

The following illustration shows the two marginal distributions of a bivariate normal distribution.

We'll show that the marginal distributions of the multivariate normal distribution are also multinormal, a fundamental result.

Let X = {X1, X2, ..., Xp} be a p-dimensional multinormal vector, and consider the vector X1 made up of the first k components of X:

X1 = {X1, X2, ..., Xk}       k < p

The p components of X can always be indexed in such a way that any subset of k components is made to be the subset of the first k components. Then:

* The distribution of X1 is a multivariate normal distribution.
* The k components of the mean vector of this distribution are the means of the k variables Xi.
* Its covariance matrix (of order k) is made up of the pairwise covariances of the k variables Xi.

This illustration represents the covariance matrix Σ_X of X. The covariance matrix Σ_X1 of X1 is just the upper left corner square submatrix of order k of Σ_X (lower image of the illustration). This submatrix is traditionally denoted Σ11.
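This marginalization result can be illustrated numerically (our own sketch, with arbitrary parameter values): integrating the bivariate N(µ, Σ) density over x2 should reproduce the univariate normal density with mean µ1 and variance Σ11.

```python
import math

# Illustration (not from the text): the first marginal of a bivariate
# N(mu, Sigma) is the univariate N(mu1, Sigma_11).  We integrate the joint
# density over x2 numerically and compare with the 1-D normal density.

mu = [1.0, -0.5]
S = [[1.5, 0.6],
     [0.6, 1.0]]                                  # covariance matrix Sigma
det_S = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[ S[1][1] / det_S, -S[0][1] / det_S],
        [-S[1][0] / det_S,  S[0][0] / det_S]]     # Sigma^-1

def joint(x1, x2):
    d1, d2 = x1 - mu[0], x2 - mu[1]
    q = Sinv[0][0]*d1*d1 + 2*Sinv[0][1]*d1*d2 + Sinv[1][1]*d2*d2
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det_S))

def marginal_1(x1, h=0.01, R=10.0):
    n = int(2 * R / h)
    return sum(joint(x1, mu[1] - R + (j + 0.5) * h) for j in range(n)) * h

def normal_pdf(x, m, var):
    return math.exp(-0.5 * (x - m)**2 / var) / math.sqrt(2 * math.pi * var)

for x1 in (-1.0, 0.5, 1.0, 2.5):
    assert abs(marginal_1(x1) - normal_pdf(x1, mu[0], S[0][0])) < 1e-6
```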
-----

When k = 1, this result shows that the individual components Xi of X = {X1, X2, ..., Xp} are (univariate) normal variables.

# Conditional distributions of the multivariate normal distribution

Let X = {X1, X2, ..., Xp} be a random vector. A "conditional distribution" of X is the joint distribution of any subset of the variables (X1, X2, ..., Xp) when the other variables are held fixed. In other words, it is the normalized profile of a "cut" through the distribution of X made by a hyperplane defined by the (fixed) values assigned to the other variables. So there are as many conditional distributions as there are such subsets, that is 2^p - 2 (ignoring the empty and the complete subsets of variables).

This illustration shows one of the two conditional distributions of a bivariate normal distribution.

Let X = {X1, X2, ..., Xp} be a multinormal vector, which we partition into two sub-vectors:

X = (X1, X2)

We'll show that the distribution of X1 conditionally to X2 is multinormal, a fundamental property. In addition, we'll show the two following important properties:

* The mean vector of this conditional distribution depends linearly on X2.
* The covariance matrix of this conditional distribution does not depend on X2.

This last point means that if a cutting hyperplane is translated parallel to itself, all the cuts it generates have the same covariance matrix.

# Multivariate normal distribution and Regression

We now consider the multinormal distribution from the point of view of using X2 for predicting X1. Identifying the conditional distributions of the multivariate normal distribution, and in particular the expectation of X1 conditionally to the value of X2, allows considering the multivariate normal distribution as a linear model of regression.

For example, this figure illustrates the prediction of the vector X1 = (u, v) by the unique quantity X2 = (w). This illustration is reproduced and commented more thoroughly in the Tutorial below.
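These conditional-distribution facts, which underlie the regression view just introduced, can be checked numerically in the bivariate case (our own sketch, arbitrary parameter values): the ratio joint/marginal is a normal density whose mean is linear in the conditioning value x2 and whose variance does not involve x2 at all.

```python
import math

# Numerical illustration (ours, not the Tutorial's): for a bivariate
# N(mu, Sigma), X1 given X2 = x2 is normal with mean
# mu1 + (S12/S22)(x2 - mu2)  -- linear in x2 --
# and variance S11 - S12^2/S22, which is independent of x2.

mu1, mu2 = 0.5, -1.0
S11, S12, S22 = 2.0, 0.8, 1.0
det_S = S11 * S22 - S12 * S12

def joint(x1, x2):
    d1, d2 = x1 - mu1, x2 - mu2
    q = (S22*d1*d1 - 2*S12*d1*d2 + S11*d2*d2) / det_S   # (x-mu) Sigma^-1 (x-mu)'
    return math.exp(-0.5 * q) / (2 * math.pi * math.sqrt(det_S))

def normal_pdf(x, m, var):
    return math.exp(-0.5 * (x - m)**2 / var) / math.sqrt(2 * math.pi * var)

def marginal_2(x2):
    return normal_pdf(x2, mu2, S22)       # marginal of X2 (previous section)

cond_var = S11 - S12**2 / S22             # does not depend on x2
for x2 in (-2.0, 0.0, 1.5):
    cond_mean = mu1 + (S12 / S22) * (x2 - mu2)          # linear in x2
    for x1 in (-1.0, 0.5, 2.0):
        ratio = joint(x1, x2) / marginal_2(x2)          # joint / conditioning
        assert math.isclose(ratio, normal_pdf(x1, cond_mean, cond_var),
                            rel_tol=1e-9)
```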
This model exhibits strong similarities with the standard model of Multiple Linear Regression (MLR):

* The expectation of the response variable is a linear function of the predictors.
* The residuals are normal and uncorrelated,

but with also two important differences:

* In MLR, the predictors are considered as fixed, and therefore are not random variables, whereas X2 is here a random vector.
* MLR tries to predict the value of a unique response variable y, whereas the response variable is here a vector (a group of variables).

We'll show that the model based on the conditional distributions of the multivariate normal distribution is better than any other linear model X1 = f(X2) in two respects:

* It minimizes the Mean Square Error (MSE) between predictions and observations.
* It maximizes the correlation coefficient between each of the variables and any linear combination of the other variables used for predicting the value of this variable. This coefficient is called the Multiple Correlation Coefficient attached to the variable, and we'll calculate its value.

# Moment generating function of the multivariate normal distribution

Let X ~ N(µ, Σ). We'll show that its m.g.f. MX(t) is:

MX(t) = exp{t'µ + 1/2.t'Σt}

where t is a vector parameter. We'll then use this result to demonstrate again and generalize some of the previous results, and also establish a useful characteristic property of the multivariate normal distribution.

# Quadratic forms in multivariate normal variables

Statistics often faces quadratic forms in multivariate normal variables, in particular in Analysis of Variance (ANOVA). Under certain conditions that are detailed here, these quadratic forms follow (exactly) a Chi-square distribution.

-----

The squared Mahalanobis distance is a quadratic form which follows a Chi-square distribution when the variable is a multinormal vector.
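The Mahalanobis/Chi-square statement just made can be illustrated by a small simulation (our own sketch, arbitrary parameter values): in dimension p = 2, the Chi-square CDF reduces to 1 - exp(-x/2), which makes the check easy.

```python
import math, random

# Simulation sketch (not from the text): if X ~ N(mu, Sigma) in dimension p,
# the squared Mahalanobis distance (X - mu) Sigma^-1 (X - mu)' follows a
# Chi-square distribution with p degrees of freedom.  We take p = 2, where
# the Chi-square mean is 2 and its CDF is 1 - exp(-x/2).

random.seed(0)

# Correlated deviations d = X - mu are built as d = L z, with z standard
# normal and L lower triangular, so that Sigma = L L'.
L = ((1.2, 0.0),
     (0.7, 0.9))

def mahalanobis_sq():
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    d1 = L[0][0] * z1
    d2 = L[1][0] * z1 + L[1][1] * z2
    det = (L[0][0] * L[1][1]) ** 2                 # det(Sigma) = det(L)^2
    S11 = L[0][0] ** 2                             # entries of Sigma = L L'
    S12 = L[0][0] * L[1][0]
    S22 = L[1][0] ** 2 + L[1][1] ** 2
    # the long-way quadratic form d' Sigma^-1 d (analytically it equals z'z):
    return (S22*d1*d1 - 2*S12*d1*d2 + S11*d2*d2) / det

samples = [mahalanobis_sq() for _ in range(20000)]
# Mean of a Chi-square with p = 2 degrees of freedom is 2:
assert abs(sum(samples) / len(samples) - 2.0) < 0.1
# P(Chi2_2 <= 2) = 1 - exp(-1) ~ 0.632:
frac = sum(s <= 2.0 for s in samples) / len(samples)
assert abs(frac - (1 - math.exp(-1))) < 0.02
```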
# Simulation of a multinormal random vector

We explain here how the association of the Box-Muller transform and the Mahalanobis transformation can be used for simulating an arbitrary multivariate normal distribution.

___________________________________________________________________________

Tutorial 1

We first show that an appropriate linear transformation transforms the general multivariate normal distribution into the simplest possible multivariate normal distribution: the standard spherical normal distribution, which is defined as the joint distribution of p independent standard normal variables (lower image of this illustration). This transformation is useful in many circumstances, as it allows us to carry problems about the general multivariate normal distribution over to this particularly simple distribution.

-----

From this result, we'll derive:

* The value of the normalization coefficient K,
* And the mean µ of the multivariate normal distribution.

We'll then calculate the covariance matrix of the multivariate normal distribution, and show that it is equal to A⁻¹, a fundamental result.

THE MULTIVARIATE NORMAL DISTRIBUTION
Spherization of the multivariate normal distribution
Reduction of a positive definite matrix to the identity matrix
Spherization
The transformation
The Jacobian
The standard spherical normal distribution
Normalization coefficient of the multivariate normal distribution
Mean of the multivariate normal distribution
Covariance matrix of the multivariate normal distribution
Complete form of the multivariate normal distribution
TUTORIAL

__________________________________________________________________

Tutorial 2

In this Tutorial, we show that the marginal distributions of the multivariate normal distribution are also multinormal, and we calculate their parameters. There exist several ways to establish this important result, of which we give three.
1) We first use a very orthodox method, a bit lengthy but one that stays close to intuition and delivers the additional important result that uncorrelated marginals are necessarily independent.

- We first show that a regular linear transformation changes a multinormal distribution into another multinormal distribution, whose parameters we'll calculate. This lemma is useful in just about any circumstance involving multivariate normal distributions.

- We then address the special case where every component Xi (i ≤ k) of X1 = {X1, X2, ..., Xk} is uncorrelated with any component Xj (j > k) of X2 = {Xk+1, Xk+2, ..., Xp}. We then show that the distribution of the marginal X1 is multinormal (with of course a similar result for X2). In passing, we'll also show that this lack of correlation between Xi and Xj is in fact genuine independence.

- We finally address the general problem where no assumption whatever is made about the correlation between the components of X. We'll show that it is still true that the marginals X1 and X2 are multinormally distributed, by identifying an appropriate transformation that will transform the original distribution into a new distribution for which the two marginals are uncorrelated. We'll then be able to deduce the multinormality of the marginals of the original distribution.

2) We then give a second demonstration that uses the above mentioned lemma to short-circuit all algebraic developments. It delivers the result in just a few lines, and is an illustration of the power of Linear Algebra at the expense of intuition.

3) Finally, we'll use the moment generating function of the multivariate normal distribution to again establish this result in a simple and elegant manner (see below).
MARGINALS ARE MULTINORMAL
UNCORRELATED MARGINALS ARE INDEPENDENT
Linear transform of a multinormal vector
Transform of the quadratic form
The Jacobian
The distribution of the transform is multinormal
Special case : the two groups of variables are uncorrelated
Partitioning the covariance matrix
Partitioning the quadratic form
The marginals are multinormal
Lack of correlation implies independence
General case
Transformation of the original distribution
The marginals of the transformed distribution are multinormal
The marginals of the original distribution are multinormal
Second demonstration
TUTORIAL

_______________________________________________________________

Tutorial 3

We now calculate the conditional distributions of the multivariate normal distribution. The most straightforward method would be to call on the fundamental property of conditional distributions, which states that a conditional distribution is equal to:

* The joint distribution of the complete set of variables,
* Divided by the joint distribution of the conditioning variables.

In this particular case, this approach leads to fairly cumbersome calculations, and we'll find it more convenient to first transform the original distribution into a new distribution that can be conveniently factored. The inverse transformation will then allow us to write the original distribution in a form that will make calculating the conditional distributions very simple.

-----

We'll then remark that:

1) A conditional distribution of a multivariate normal distribution is multinormal.

2) Its mean vector depends linearly on the conditioning vector.

3) Its covariance matrix does not depend on the value assigned to the conditioning vector.
CONDITIONAL DISTRIBUTIONS OF THE MULTIVARIATE NORMAL DISTRIBUTION
Factorization of the joint distribution
Transformation of the joint distribution
Definition of the transformation
Mean vector
Covariance matrix
Factorization of the transformed distribution
Factorization of the original distribution
Conditional distributions of the multivariate normal distribution
The conditional distributions are multinormal
The mean vector depends linearly on the conditioning vector
The covariance matrix does not depend on the conditioning vector
TUTORIAL

_______________________________________________________________

Tutorial 4

In this Tutorial, we look at the multinormal vector X = (X1, X2) from the point of view of using X2 for predicting X1. Identifying the conditional distributions of the multivariate normal distribution, and in particular the expectation of X1 conditionally to the value of X2, allows considering the multivariate normal distribution as a linear model of regression.

-----

We will show that this model is better than any other linear model in at least two respects:

* It minimizes the Mean Square Error (MSE) between predictions and observations.
* It maximizes the correlation coefficient between each of the variables and any linear combination of the other variables used for predicting the value of this variable. This coefficient is called the Multiple Correlation Coefficient attached to the variable, and we'll calculate its value.
MINIMIZATION OF THE PREDICTION MEAN SQUARED ERRORS
MAXIMIZATION OF THE MULTIPLE CORRELATION COEFFICIENT
Residuals
Residual vector
Predictor and residual are uncorrelated
Minimization of the prediction Mean Squared Errors
Multiple correlation coefficient
Correlation between observations and predictions, multiple correlation
The conditional expectation model maximizes the multiple correlation coefficient
Value of the multiple correlation coefficient
TUTORIAL

________________________________________________________________________

Tutorial 5

We now calculate the moment generating function M(t) of the multivariate normal distribution. Because this distribution is multivariate, the parameter t is a vector, but the function itself is scalar. As is often the case, the m.g.f. turns out to be a very convenient and powerful tool for establishing all sorts of results about a distribution, sometimes in a concise and elegant manner.

We'll go over some of the results that we already obtained somewhat laboriously using Linear Algebra, and show how they can also be obtained by using the m.g.f. In particular, we'll generalize the result about linear transforms of a multinormal vector to the case where:

* The matrix of the transformation is square, but singular.
* The matrix of the transformation is rectangular.

We'll also show that a characteristic property of the multinormal distribution is that any linear combination of its components is (univariate) normal. Recall that this property is sometimes used as the definition of the multivariate normal distribution.
MOMENT GENERATING FUNCTION OF THE MULTIVARIATE NORMAL DISTRIBUTION
Moment generating function of the multivariate normal distribution
Special case : the spherical standard multinormal distribution
The general case
Some immediate consequences of the Moment Generating Function
General linear transform of a multinormal vector is multinormal
Marginals are multinormal
Uncorrelation implies independence
A vector is multinormal iff all linear combinations of its components are normal
TUTORIAL

__________________________________________________
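The simulation recipe announced earlier (Box-Muller plus the Mahalanobis transformation) can be sketched as follows; this is our own illustration with arbitrary parameter values, not the Tutorial's worked example:

```python
import math, random

# Sketch of the simulation recipe: the Box-Muller transform turns pairs of
# uniforms into independent standard normals, and the transformation
# x = mu + L z, with L L' = Sigma, maps them to an arbitrary N(mu, Sigma).

random.seed(42)

def box_muller():
    u1, u2 = random.random(), random.random()
    r = math.sqrt(-2.0 * math.log(u1 + 1e-300))    # guard against log(0)
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

def cholesky_2x2(S):
    l11 = math.sqrt(S[0][0])
    l21 = S[1][0] / l11
    l22 = math.sqrt(S[1][1] - l21 * l21)
    return ((l11, 0.0), (l21, l22))                # lower triangular L

mu = (1.0, -2.0)
Sigma = ((2.0, 0.8),
         (0.8, 1.0))
L = cholesky_2x2(Sigma)

xs = []
for _ in range(50000):
    z1, z2 = box_muller()
    xs.append((mu[0] + L[0][0] * z1,
               mu[1] + L[1][0] * z1 + L[1][1] * z2))

# Empirical moments should match the target mean vector and covariance matrix.
n = len(xs)
m1 = sum(x[0] for x in xs) / n
m2 = sum(x[1] for x in xs) / n
c11 = sum((x[0] - m1) ** 2 for x in xs) / n
c12 = sum((x[0] - m1) * (x[1] - m2) for x in xs) / n
assert abs(m1 - mu[0]) < 0.05 and abs(m2 - mu[1]) < 0.05
assert abs(c11 - Sigma[0][0]) < 0.1 and abs(c12 - Sigma[0][1]) < 0.1
```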
http://scitation.aip.org/content/aip/journal/jcp/123/23/10.1063/1.2136155
Minimizing broadband excitation under dissipative conditions

doi: 10.1063/1.2136155

Affiliations: 1 Department of Physical Chemistry and the Fritz Haber Research Center for Molecular Dynamics, The Hebrew University, Jerusalem 91904, Israel
a) Electronic mail: [email protected]

J. Chem. Phys. 123, 234506 (2005)

## Figures

FIG. 1. The pulse intensity spectrum along with absorption (solid line) and fluorescence (dashed line) spectra of LDS750 molecule in acetonitrile. Adapted from Ref. 2.

FIG. 2. Excited-state population as a function of the linear chirp for the isolated system. The population is shown at the end of the pulse for the high (solid line) and the low (dashed line) fluences. Note the different scale for the excited population for the two energy regimes.

FIG. 3. (Top panel) Time evolution of the excited-state population of the isolated system for the high-energy excitation. The population is calculated for a negatively chirped (solid line) and a positively chirped (dashed line) pulse. (Bottom panel) The imaginary part of the transition dipole moment multiplied by the field amplitude.

FIG. 4. (Color online) (Top panel) Trajectories of the transition dipole moment renormalized by its maximal amplitude for excitation by the linear negatively chirped pulse. (Bottom panel) Excited-state population at the end of the pulse as a function of the linear chirp parameter.
The calculations are performed for the system without dissipation (solid line) and for the system with vibrational relaxation with weak (dashed line), medium (dashed-dotted line), and strong (dotted line) system-bath couplings.

FIG. 5. (Color online) (Top panel) Trajectories of the transition dipole moment renormalized by its maximal amplitude for excitation by the linear negatively chirped pulse. (Bottom panel) Excited-state population at the end of the pulse as a function of the linear chirp parameter. The calculations are performed for the isolated system (solid line) and for the system with pure electronic dephasing: weak (dashed line), medium (dashed-dotted line), and strong (dotted line) couplings.

FIG. 6. Time evolution of the excited-state population of the isolated system for the high-energy excitation. The population is calculated for a linear negatively chirped pulse (solid line) and for the optimal pulse (dashed line) obtained by using a genetic algorithm with random phases generated at discrete points of the frequency spectrum. The inset figure shows the temporal profile of the optimized pulse.

FIG. 7. (Color) Calculations for the nondissipative system. Time-frequency Wigner distribution corresponding to the optimized linear (top panel) and nonlinear (bottom panel) chirped pulses. The right sides show the frequency spectra of the pulses, while their temporal profiles are shown in the upper panels. The phase is expanded in the Taylor series up to the second order (linear chirp) and in the basis of periodic functions (nonlinear chirp).

FIG. 8. (Top panel) Evolution of the excited-state population of the isolated system for the high-energy excitation. The population is calculated for a linear negatively chirped pulse (solid line) and for the optimal pulse (dashed line) obtained by using a genetic algorithm with the phase expanded in the basis of periodic functions.
(Bottom panel) Trajectories of the transition dipole moment renormalized by its maximal amplitude. Solid and dashed lines refer to the linear chirped pulse and its nonlinear analog, respectively.

FIG. 9. (Top panel) Time evolution of the excited-state population of the dissipative system for the high-energy excitation. The population is calculated for a linear negatively chirped pulse (solid line) and for the optimal pulse obtained by using a genetic algorithm (dashed line). (Bottom panel) Trajectories of the transition dipole moment renormalized by its maximal amplitude. Calculations were performed for the system with weak vibrational relaxation and medium pure electronic dephasing.

FIG. 10. (Color) Calculations for the dissipative system. Time-frequency Wigner distribution corresponding to the optimal linear (top panel) and nonlinear (bottom panel) chirped pulses. Calculations were performed for the system with medium vibrational relaxation and electronic dephasing.

FIG. 11. (Color) (Top panel) Time evolution of the excited-state population of the dissipative system for the high-energy excitation. The population is calculated for a linear negatively chirped pulse (solid line) and for the optimal pulse obtained by using a genetic algorithm (dashed line). (Bottom panel) Time-frequency Wigner distribution corresponding to the optimized nonlinear chirped pulse. The right sides show the frequency spectra of the pulses, while their temporal profiles are shown in the upper panels. The phase is expanded in the basis of periodic functions.

FIG. 12. Effect of the intensity for linearly chirped pulses. The value of the linear chirp is plotted as a function of the pulse fluence (the amplitude of the corresponding transform-limited pulse).
Calculations were performed for the isolated system (squares) as well as for a dissipative system (triangles) with medium vibrational relaxation and medium pure electronic dephasing.

FIG. 13. The excited-state population at the end of the pulse as a function of the pulse fluence (the amplitude of the corresponding transform-limited pulse). Calculations were performed (top panel) for the system without dissipation and (bottom panel) for the system with medium vibrational relaxation and medium pure electronic dephasing. The arrow points to the maximal fluence used in the experiment by Nahmias et al. (Ref. 2)
http://physics.stackexchange.com/questions/59469/why-frequency-doesnt-change-during-refraction/59488
# Why frequency doesn't change during refraction?

When light passes from one medium to another, its velocity and wavelength change. Why doesn't its frequency change in this phenomenon?

- Closely related to many other questions. May have an answer from Chris here or here and also here –  Waffle's Crazy Peanut Mar 30 '13 at 13:19

The electric and magnetic fields have to remain continuous at the refractive index boundary. If the frequency changed, the light at each side of the boundary would be continuously changing its relative phase and there would be no way to match the fields.

- I think it's the simplest explanation... –  Arafat Mar 30 '13 at 17:56
- @Kazi Then you should probably accept this answer. –  Douglas B. Staple Apr 10 '13 at 17:24
- I'm not sure I quite buy this answer. The things that have to be continuous at the boundary are $D_\perp$, $E_\parallel$, $B_\perp$, and $H_\parallel$. On the other hand, there can be discontinuities in $D_\parallel$, $E_\perp$, $B_\parallel$, and $H_\perp$. So I think there is really more that needs to be filled in to make this a valid argument. –  Ben Crowell Sep 15 '13 at 20:24

When we think of light, we can describe it as an electromagnetic wave or as a flux of particles - photons. The latter description is more fundamental: if you could have a light source with a sensitive enough intensity knob, then after just turning it on (minimum intensity), you'd be sending out photons one by one. I believe that the answers to your deep questions lie therein. Behold:

Energy of one light quantum (one photon) can be written $E = hf$, where $h$ is a universal (Planck's) constant, $E$ is energy and $f$ is frequency. We cannot divide a photon into pieces, so its energy must stay constant, and the frequency goes the same way. Devices that appear to divide photons (or change photons' frequency) actually first swallow-destroy the incoming photons and then emit other photons at a different frequency.
Frequency of light does not ever change, as long as you can be sure that the photons are the same as the photons at the beginning. Wavelength $L$, on the other hand, is tied to the energy through the speed: $E = hf = hv/L$.

Atoms of materials, even gases like air, impede the flow of photons - photons bounce off of the atoms (elastic collisions) or are swallowed and re-emitted by the atoms (inelastic collisions). Like I wrote above, a photon swallowed and re-emitted is a different photon. So, it is not part of the original light stream. Snell's laws speak only about the part of the light (photons) that experienced only elastic collisions in a material. So, in passing from one material to another, light changes wavelength proportionally to the change of speed, so that the ratio $v/L = f$ remains constant.

But does that mean that it changes color? That depends on how you define color! As color is usually defined via wavelength (i.e. visible light wavelengths in the range 300-700 nm), then indeed, color changes at the interface of two optical materials with different indexes of refraction (like air-glass, air-water, etc).

- Is there any example where photon and atoms have inelastic collision? –  Arafat Mar 30 '13 at 18:03

Think of it like this: at the boundary/interface of the medium, the number of waves you send is the number of waves you receive at the other side, almost instantly. Frequency doesn't change, because it counts the waves travelling across the interface per unit time. But speed and wavelength do change, as the material on the other side may be different: the waves there may be longer or shorter, while the number of waves per unit time stays the same.

-

This is not really a specific fact about electromagnetic waves. It's a fact about all waves. The basic reason for it is cause and effect. Think of how people "do the wave" in a stadium. The way you know it's your turn to go is that the person next to you goes.
When a wave travels from medium 1 to medium 2, the thing that's causing the vibration of the wave on the medium-2 side is the vibration of the wave on the medium-1 side.

-
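A small numerical sketch of the relations used in these answers (our own illustration, not from the thread): the frequency is fixed by the source, while the phase velocity v = c/n and the wavelength L = v/f change together inside the medium.

```python
import math

# Illustrative sketch: when light enters a medium of refractive index n,
# the frequency f is unchanged, while the phase velocity v = c/n and the
# wavelength L = v/f both shrink by the factor n.

C = 299_792_458.0  # speed of light in vacuum, m/s

def in_medium(wavelength_vacuum_nm, n):
    """Return (frequency_Hz, wavelength_nm) after light of the given vacuum
    wavelength enters a medium of refractive index n."""
    f = C / (wavelength_vacuum_nm * 1e-9)   # set by the source; never changes
    v = C / n                               # phase velocity in the medium
    return f, (v / f) * 1e9                 # wavelength shortens to L/n

# 600 nm light (in vacuum) entering glass with n ~ 1.5:
f, L_glass = in_medium(600.0, 1.5)
f_check = (C / 1.5) / (L_glass * 1e-9)      # frequency recomputed in the glass
assert math.isclose(L_glass, 400.0)         # wavelength: 600 nm -> 400 nm
assert math.isclose(f, f_check)             # frequency: same on both sides
```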
http://mathhelpforum.com/calculus/17992-functions-differenciate.html
Math Help - functions differentiate

1. functions differentiate

given 2 functions: f(x) = 2x, g(x) = -2x²+5x

im supposed to determine the following

g(f)
inverse of g
g (g inverse)
the difference quotient for f
the average rate of change for f

yeah... a lot of Q's. someone pls explain the process to me, cuz i fail to understand it...

2. Originally Posted by brokedude
given 2 functions: f(x) = 2x, g(x) = -2x²+5x
im supposed to determine the following
g(f)
inverse of g
g (g inverse)

First part:

$[g(f)](x)=g(f(x))=-2(f(x))^2+5f(x)=-2(2x)^2+5(2x)=-8x^2+10x$

Second part: There is no function $g^{-1}$, as for most $y$ there are two $x$s such that $g(x)=y$.

RonL

3. Originally Posted by brokedude
given 2 functions: f(x) = 2x, g(x) = -2x²+5x
im supposed to determine the following:
the difference quotient for f
the average rate of change for f

The difference quotient for $f$ is:

$DQ(f,h)=\frac{f(x+h)-f(x)}{h}=\frac{2(x+h)-2x}{h}=\frac{2h}{h}=2$

Thus we see that the average rate of change of $f$ over any interval $(x, x+h)$ is $2$

RonL

4. Originally Posted by brokedude
g(x) = -2x²+5x
im supposed to determine the following
g(f)
inverse of g
g (g inverse)
the difference quotient for f
the average rate of change for f

Originally Posted by CaptainBlack
Second part: There is no function $g^{-1}$, as for most $y$ there are two $x$s such that $g(x)=y$.
RonL

CaptainBlack is absolutely correct about the inverse of g. It has no inverse.
However we may informally get an inverse (that is to say, the process is correct even if the application is not) by:

$g(x) = -2x^2 + 5x$

Let $y = -2x^2 + 5x$

Now switch the roles of x and y: $x = -2y^2 + 5y$

Now solve for y:

$2y^2 - 5y + x = 0$

$y = \frac{5 \pm \sqrt{25 - 8x}}{4}$ <-- via the quadratic formula

Thus $g^{-1}(x) = \frac{5 \pm \sqrt{25 - 8x}}{4}$

The problem, as CaptainBlack mentioned, is that the graph y = g(x) takes almost every y value in its range at two different x values (the vertex point being the exception), so we need to be very careful about defining a domain on which an inverse exists and just what that inverse is. (It will either be the "+" or the "-" of the inverse formula given above.)

-Dan

5. thanks for your help !!

6. Originally Posted by topsquark
CaptainBlack is absolutely correct about the inverse of g. It has no inverse. However we may informally get an inverse (that is to say, the process is correct even if the application is not) ... (It will either be the "+" or the "-" of the inverse formula given above.)
-Dan
http://mathhelpforum.com/discrete-math/28087-second-order-recurrence-relation-print.html
# Second Order Recurrence relation

• Feb 12th 2008, 12:42 PM — frostking2

Second Order Recurrence relation

I can usually solve second order ones fine but this one has me stumped!

a(n) = 6a(n-1) - 9a(n-2), a(0) = 1, a(1) = 1

Using t^n = 6t^(n-1) - 9t^(n-2) I get t = 3 and only 3. When I try to plug in S(n) = 3^n and T(n) = 3^n and solve for U(n) = bS(n) + dT(n) with b3^n + d3^n, and use my initial values of 1 for U(0) and for U(1), I can not find a solution for the values of b and d that meet both of these!!!!!! Please give me a clue or two or three.....

• Feb 12th 2008, 01:25 PM — galactus

Running it through Maple, I get:

$a_{n}=(1-\frac{2n}{3})\cdot{3^{n}}$

That's something to shoot for.

• Feb 12th 2008, 01:59 PM — Soroban

Hello, frostking2! I think we speak the same language ... We just use different symbols.

Quote:

$a(n) \:= \:6a(n-1) - 9a(n-2),\quad a(0) = 1,\;\;a(1) = 1$

Here's the way I was taught to handle these . . .

We conjecture that: . $a(n) \:=\:X^n$ . . . that the function is exponential.

The equation becomes: . $X^n \:=\:6X^{n-1} - 9X^{n-2}\quad\Rightarrow\quad X^n - 6X^{n-1} + 9X^{n-2}\;=\;0$

Divide by $X^{n-2}\!:\;\;X^2 - 6X + 9 \:=\:0\quad\Rightarrow\quad (X-3)^2\:=\:0\quad\Rightarrow\quad X \:=\:3,\,3$

With repeated roots, the function is: . $a(n) \;=\;A\!\cdot\!3^n + B\!\cdot\!n\!\cdot\!3^n$

Plug in the first two values of the sequence . . .

. . $a(0) = 1:\;A\!\cdot\!3^0 + B(0)(3^0) \:=\:1 \quad\Rightarrow\quad A \:=\:1$

. . $a(1) = 1:\;A\!\cdot\!3^1 + B(1)(3^1) \:=\:1\quad\Rightarrow\quad B \:=\:-\frac{2}{3}$

Hence: . $a(n) \;=\;3^n - \frac{2}{3}n\cdot3^n \;=\;3^n\left(1 - \frac{2}{3}n\right) \;=\;3^n\left(\frac{3-2n}{3}\right)$

Therefore: . $\boxed{a(n) \;=\;3^{n-1}(3 - 2n)}$

• Feb 12th 2008, 02:59 PM — frostking2

Second order problem solved!

Thank you soooooo much. Stupid me, I NEVER considered 0 for one of the values at the last portion of the problem, second roots and values for a and b. Of course it does work and so the regular process yields an answer.
Thanks so much for your time and helpful attitude!!!!!
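The closed form can be checked against the recurrence directly; a quick sketch, not part of the thread:

```python
def a_closed(n):
    """Soroban's closed form a(n) = 3^(n-1) * (3 - 2n), kept in integers
    by writing it as (3 - 2n) * 3^n / 3 (always exactly divisible)."""
    return (3 - 2*n) * 3**n // 3

# Iterate a(n) = 6 a(n-1) - 9 a(n-2) with a(0) = a(1) = 1 and compare.
seq = [1, 1]
for n in range(2, 12):
    seq.append(6*seq[-1] - 9*seq[-2])

assert seq[:4] == [1, 1, -3, -27]
assert all(seq[n] == a_closed(n) for n in range(12))
```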
https://brilliant.org/discussions/thread/common-tangent-rules/
# Common tangent rules

Problem 29 on the JEE Mains inspired this note.

Things to know:
• Pythagorean theorem
• Graphical transformations (only vertical and horizontal shifts)

This note will describe a quick and easy way to find the number of common tangents to two circles. Given the following equations for the circles:

$(y-a)^2+(x-b)^2=r^2$

$y^2+x^2=s^2$

find the number of common tangents these circles will have.

You may wonder, "this isn't generalized, you assume one is at the origin." Let me explain. What I did was translate both circles $p$ units along the x-axis and $q$ units along the y-axis to put one of them at the origin. The number of common tangents won't change under any transformation other than a distortion.

Now for the tangent rules: how many common tangents will they have if...

I) $a^2+b^2=(r+s)^2$: then there are 3 common tangents.

II) $a^2+b^2>(r+s)^2$: then there are 4 common tangents.

III) $a^2+b^2<(r+s)^2$: then we have three sub-cases. NOTE: you must check case III first; don't immediately check the rules below. (Quick rule: if r = s in this case, then there are 2 tangents.)

III-1) $a^2+b^2=(r-s)^2$: then there is 1 common tangent.

III-2) $a^2+b^2>(r-s)^2$: then there are 2 common tangents.

III-3) $a^2+b^2<(r-s)^2$: then there are no common tangents.

Note by Trevor Arashiro, 4 years, 6 months ago
Can you tell me such a way for finding common tangents to a circle and an ellipse? - 4 years, 6 months ago
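As a quick sanity check, the case analysis translates directly into code; this is a sketch (not part of the note), and the function name and parameter order are mine:

```python
def common_tangents(a, b, r, s):
    """Number of common tangents to x^2 + y^2 = s^2 and
    (x - b)^2 + (y - a)^2 = r^2, following the note's cases.
    d2 is the squared distance between the centers."""
    d2 = a*a + b*b
    if d2 > (r + s)**2:
        return 4            # circles outside each other
    if d2 == (r + s)**2:
        return 3            # externally tangent
    if d2 > (r - s)**2:
        return 2            # intersecting in two points
    if d2 == (r - s)**2:
        return 1            # internally tangent
    return 0                # one circle strictly inside the other

# Radii 2 and 3; center distances 10, 5, 4, 1, 0.5 hit each case in turn.
assert common_tangents(0, 10, 2, 3) == 4
assert common_tangents(0, 5, 2, 3) == 3
assert common_tangents(0, 4, 2, 3) == 2
assert common_tangents(0, 1, 2, 3) == 1
assert common_tangents(0, 0.5, 2, 3) == 0
```

Because each comparison excludes the previous ones, checking the cases in this order is equivalent to the note's "check case III first" rule.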
https://cs.stackexchange.com/questions/22360/proving-that-max-weighted-independent-set-is-in-np
# Proving that Max Weighted Independent Set is in NP

What I'm trying to do is to show that a problem in NP can be reduced to the min weight vertex cover problem. I've chosen the max weighted independent set problem (input: a graph G with weights on each vertex; output: an independent set with the maximum total weight).

Before reducing, I've tried to show that the max weighted independent set problem is in NP (which is usually the first step in these reductions). I'm trying to construct a verification algorithm for this problem, but I'm stuck on trying to show that the verification algorithm can check if a certificate is the max independent set in polynomial time. Any guidance or comments would be greatly appreciated. Thanks

Max weighted independent set is the decision problem whose instances are pairs $(G,B)$ such that $G = (V,E,w)$ is a vertex-weighted graph that has an independent set of weight at least $B$. Nowhere is it claimed that $B$ is the maximum weight of an independent set. The problem (like many others) is defined in this way precisely so that it be in NP. Also, in order to show that your problem is NP-hard, it might be easier to reduce from max independent set.

• Thanks very much for your swift response. The reason I phrased the max indep. problem that way was because the min weight vertex cover problem I'm provided is phrased the same way (i.e. input: a graph G with weights on each vertex; output: the vertex cover with the smallest weight). I'll try and use the independent set you described above for my reduction. Thanks again! – Allan Mar 7 '14 at 3:36

• Vertex cover is defined in the same way: the problem is to decide whether there is a vertex cover of weight at most a given weight. You're confusing the decision problem (what I describe) with the optimization problem (what you quote). – Yuval Filmus Mar 7 '14 at 3:42

The issue is with the version of the problem you are using. Note that as you define it, the output is required to be the maximum weight independent set, i.e. the optimum answer.
$NP$ however is a class of decision problems, so the only valid outputs are Yes and No. So if you want to show the problem is in $NP$, you first need to convert it to a decision problem:

$k$-Weight Independent Set
Input: A vertex weighted graph $G=(V,E,w)$ and an integer $k$.
Question: Is there a set $V'\subset V$ such that $V'$ is an independent set and $\sum_{v\in V'} w(v) \geq k$?

It should be easier to see that this version is in $NP$. There is an optimization class - $NPO$ - that corresponds to $NP$, but the normal definition is that a problem is in $NPO$ if its decision variant is in $NP$ - so you're still in the position where you want to deal with the decision variant.

• Thank you! This cleared up a lot of confusion for me with regards to problems in NP – Allan Mar 7 '14 at 3:37
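For concreteness, a polynomial-time verifier for the decision version can look like this; the certificate is just the candidate set $V'$. This is a sketch: the adjacency-set graph representation and all names are assumptions for illustration, not from the answer above.

```python
def verify(adj, w, certificate, k):
    """Polynomial-time verifier for k-Weight Independent Set.
    adj maps each vertex to the set of its neighbors, w maps each
    vertex to its weight; certificate is a candidate set V'."""
    vs = list(certificate)
    # Independence check: O(|V'|^2) pair inspections.
    for i, u in enumerate(vs):
        for v in vs[i+1:]:
            if v in adj[u]:
                return False
    # Weight check: O(|V'|).
    return sum(w[v] for v in vs) >= k

# A 4-cycle 1-2-3-4-1 with weights; {1, 3} is independent, weight 5 + 2 = 7.
adj = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
w = {1: 5, 2: 1, 3: 2, 4: 1}
assert verify(adj, w, {1, 3}, 7)
assert not verify(adj, w, {1, 2}, 1)   # not independent
assert not verify(adj, w, {1, 3}, 8)   # weight too small
```

Both checks are polynomial in the input size, which is exactly what membership in $NP$ requires; nothing here needs to know whether the certificate is optimal.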
https://swyde.com/s/Uncertainty_principle
# Uncertainty principle

## Explanation

The uncertainty principle states that every particle has a wave nature associated with it, and that it is impossible to know both the position and the momentum of the particle beyond a certain level of precision at the same time. This is because the particle exists in a superposition of position and momentum states: if you were to know the position of the particle with high precision, then the momentum fundamentally cannot be precisely known. This principle is mathematically expressed as follows:

$\sigma_{x}\sigma_{p} \geq \frac{\hbar}{2}$

This says that the product of the standard deviations of the position $\sigma_{x}$ and the momentum $\sigma_{p}$ cannot be small at the same time. If the standard deviation of position is small, then the position of the particle is known with high precision, and by the fundamental wave nature of particles, the standard deviation of the momentum must then be large enough that the product of the two is greater than $\frac{\hbar}{2}$, where $\hbar$ is the reduced Planck's constant.

The same wave nature underlies the Planck–Einstein relation for radiation:

$E = hf$

where $E$ is the radiation energy and $f$ is the frequency of the emitted radiation.
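As a numeric illustration of the bound (a sketch; the angstrom-scale position spread is just an assumed example, roughly the size of an atom):

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def min_sigma_p(sigma_x):
    """Smallest momentum standard deviation (kg*m/s) compatible with
    a position standard deviation sigma_x (m), from sigma_x*sigma_p >= hbar/2."""
    return HBAR / (2 * sigma_x)

sigma_x = 1e-10                  # ~ one angstrom
sigma_p = min_sigma_p(sigma_x)   # ~ 5.3e-25 kg*m/s
assert math.isclose(sigma_x * sigma_p, HBAR / 2, rel_tol=1e-12)
```

Squeezing the position spread down tenfold forces the minimum momentum spread up tenfold, which is the trade-off the inequality expresses.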
https://www.physicsforums.com/threads/beta-minus-decay.697773/
# Beta Minus Decay

1. omiros:

Hello everybody, I am a first year physics student and I have a question about nuclear beta minus decay. I was thinking the other day about a beta decay. After the nucleus is formed, the new atom's state is a positive ion with charge +1. If we think of the electron escaping from somewhere close to the nucleus, the electron will be pulled back by the nucleus. Is that electron bound at any time at all, or not? I understand that the function that describes the acceleration of the electron is going to be very weird, but I just care about the final kinetic energy of both. Also, what happens to the rest of the electrons? How are they going to react in such a case? Do any of them emit photons? Do we usually have collisions between that electron and the 'bound' electrons?

2. Bill_K:

Atomic electron energy levels are rather small compared to the energies involved in beta decay. So the emitted electron would not be bound, although it's true it will be slowed down a bit escaping the atom, and this correction needs to be taken into account when observing the decay's energy spectrum. It's possible, although infrequent, for the emitted electron to collide with an atomic electron.

A more interesting example of the interplay between nuclear decay and the atomic electrons is an alternative decay mode to beta plus decay called electron capture or K-capture, in which the nucleus grabs an atomic electron. Since this electron is taken from a low-lying shell, the atom needs to fill the hole, by emitting an X-ray, or sometimes a second ("Auger") electron.

3. snorkack:

Most of the time. Not always. The decay energy of rhenium-187 is just 2.6 keV, whereas the binding energy of the inner electrons of heavy atoms is in the hundreds of keV. The bare rhenium-187 nucleus is over a billion times shorter lived than the neutral atom.
It follows that when a rhenium-187 nucleus undergoes beta decay, over 99.999999% of the time the electron is not emitted but goes into some bound state (ground or excited).

The neutral dysprosium-163 atom is completely stable, so the electron is always bound. The beta decay energy is randomly divided between the electron and the antineutrino. Even if atomic energy levels are small, there is a small but nonzero chance that the antineutrino happens to get almost all of the beta decay energy and the electron gets little enough to stay bound to the atom.

K-capture depends on the choice of shell from which the electron is captured. Only s electrons can ever be captured, because only they reach the nucleus - but all s electrons do, not just the 1s ones. The probability of K-capture is simply bigger than that of L-capture or higher-shell captures... except when K-capture is impossible, as is the case with holmium-163.
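A scale comparison along the lines of the thread (a sketch; the Re-187 decay energy is the figure quoted above, while the shell binding energies are illustrative order-of-magnitude values, not tabulated data):

```python
# Re-187 beta decay energy vs. shell binding energies of a heavy atom.
Q_RE187_KEV = 2.6  # decay energy quoted in the thread
BINDING_KEV = {"K": 70.0, "L": 12.0, "M": 3.0, "valence": 0.01}  # illustrative

def shells_deeper_than(q_kev, binding):
    """Shells whose binding energy exceeds the decay energy. When the
    available energy is this small compared to the empty inner shells of
    a bare ion, decay into a bound state dominates over free emission."""
    return [shell for shell, b in binding.items() if b > q_kev]

deep = shells_deeper_than(Q_RE187_KEV, BINDING_KEV)
# All the inner shells dwarf the 2.6 keV decay energy, which is why the
# bare Re-187 nucleus overwhelmingly decays with the electron ending up bound.
```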
http://slideplayer.com/slide/3431221/
Chapter 9: Vector Differential Calculus

Presentation transcript (the formulas on the original slides were images and did not survive extraction; where the slide text identifies a standard formula unambiguously, it is restated below):

9.1 Vector Functions of One Variable -- a vector, each component of which is a function of the same variable, i.e., F(t) = x(t)i + y(t)j + z(t)k, where x(t), y(t), z(t) are the component functions and t is a variable (Definition 9.1).
- F(t) is continuous at some t0 if x(t), y(t), z(t) are all continuous at t0; F(t) is differentiable if x(t), y(t), z(t) are all differentiable, and then F'(t) = x'(t)i + y'(t)j + z'(t)k.
- A curve C(x(t), y(t), z(t)) has coordinate functions x(t), y(t), z(t); x = x(t), y = y(t), z = z(t) are its parametric equations. F(t) = x(t)i + y(t)j + z(t)k is its position vector (pivoting at the origin), F'(t) is a tangent vector to C, and the length of C is the integral of ||F'(t)|| dt (Examples 9.2, 9.3).
- The distance function s(t) has inverse t(s); the unit tangent vector is T = F'(t)/||F'(t)||.
- The usual differentiation rules (1)-(5) for sums, scalar multiples, dot products and cross products hold, assuming the derivatives exist.

9.2 Velocity, Acceleration, Curvature, Torsion
- A particle moving along a path has position vector F(t). Its velocity v(t) = F'(t) is a vector tangent to the curve of motion (Definition 9.2); its speed v = ||v(t)|| = ds/dt is a scalar, the rate of change of distance with respect to time; its acceleration a(t) = v'(t) is the rate of change of velocity with respect to time (Example 9.4).
- Curvature (a magnitude) is the rate of change of the unit tangent vector with respect to arc length s: kappa = ||dT/ds|| (Definition 9.4; Example 9.7).
- The unit normal vector is N = (dT/ds)/kappa (Definition 9.5; Example 9.8).

9.2.1 Tangential and Normal Components of Acceleration
- Theorem 9.1: a = (dv/dt) T + kappa v^2 N, splitting the acceleration into tangential and normal components (Example 9.9).
- Theorem 9.2: kappa = ||v x a|| / ||v||^3 (Example 9.10).

9.2.3 Frenet Formulas
- The binormal vector is B = T x N; T, N, B form a right-handed rectangular coordinate system that twists and changes orientation along the curve.
- Frenet formulas (all derivatives with respect to s): T' = kappa N, N' = -kappa T + tau B, B' = -tau N, where the torsion tau measures how (T, N, B) twists along the curve.

9.3 Vector Fields and Streamlines
- A vector field (Definition 9.6) is, in 3-D, a vector whose components are functions of three variables (in 2-D, of two variables). A vector field is continuous if each of its component functions is continuous; a partial derivative of a vector field is the vector field obtained by taking the partial derivative of each component function.
- Streamlines (Definition 9.7): given a vector field F defined in some 3-D region Omega, a set of curves such that through each point P of Omega there passes exactly one curve, with F tangent to that curve at each point. Writing F = f i + g j + h k, the streamlines satisfy dx/f = dy/g = dz/h; integrating yields their parametric equations (Example 9.11, including the streamline through (-1, 6, 2)).

9.4 Gradient Field and Directional Derivatives
- A scalar field (Definition 9.8) is a real-valued function, e.g. temperature, moisture, pressure, height; its gradient, grad phi = (dphi/dx) i + (dphi/dy) j + (dphi/dz) k, is a vector field.
- The directional derivative of phi in the direction of a unit vector u is D_u phi = grad phi . u (Definition 9.9; Theorem 9.3; Example 9.13).
- Theorem 9.4: phi has its maximum rate of change, ||grad phi||, in the direction of grad phi, and its minimum rate of change, -||grad phi||, in the direction of -grad phi.

9.4.1 Level Surfaces, Tangent Planes, and Normal Lines
- A level surface of phi is a locus of points phi(x, y, z) = k; e.g. for phi = x^2 + y^2 + z^2 this is a sphere of radius sqrt(k) when k > 0, a point when k = 0, and empty when k < 0.
- Theorem 9.5: grad phi at a point P on a level surface is normal to the surface at P (proved by differentiating along any curve through P lying on the surface). The tangent plane at P therefore consists of the points (x, y, z) with grad phi(P) . ((x, y, z) - P) = 0 (Example 9.16), and the normal line is the line through P in the direction of grad phi(P).

9.5 Divergence and Curl
- Divergence (a scalar field): div F = nabla . F (Definition 9.10). Curl (a vector field): curl F = nabla x F (Definition 9.11). Both are written with the del operator nabla, which also gives the gradient.
- Theorems 9.6 and 9.7 (statements lost in extraction) are the standard identities relating these operators: div(curl F) = 0 and curl(grad phi) = 0.
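Theorem 9.2's curvature formula kappa = ||v x a|| / ||v||^3 appears in the slides; the companion torsion formula tau = ((r' x r'') . r''') / ||r' x r''||^2 is the standard one (an assumption here, since the slide formulas themselves were lost). A numerical check on a circular helix, whose curvature and torsion are both exactly 1/2:

```python
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def norm(u):
    return math.sqrt(sum(c * c for c in u))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def curvature_torsion(r, t, h=1e-3):
    """Central-difference r', r'', r''' of a position function r(t), then
    kappa = |r' x r''| / |r'|^3 and tau = (r' x r'') . r''' / |r' x r''|^2."""
    d1 = tuple((r(t + h)[i] - r(t - h)[i]) / (2 * h) for i in range(3))
    d2 = tuple((r(t + h)[i] - 2*r(t)[i] + r(t - h)[i]) / h**2 for i in range(3))
    d3 = tuple((r(t + 2*h)[i] - 2*r(t + h)[i] + 2*r(t - h)[i] - r(t - 2*h)[i])
               / (2 * h**3) for i in range(3))
    c = cross(d1, d2)
    return norm(c) / norm(d1)**3, dot(c, d3) / norm(c)**2

helix = lambda t: (math.cos(t), math.sin(t), t)   # x = cos t, y = sin t, z = t
kappa, tau = curvature_torsion(helix, 1.0)        # exact values: 1/2 and 1/2
```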
http://physics.stackexchange.com/questions/14880/hearing-a-sound-backwards-because-of-doppler-effect
# Hearing a sound backwards because of the Doppler effect

Consider a supersonic plane (Mach 2) approaching a stationary sound source (e.g. a fog horn on a boat). If I understand it correctly, the passengers in the plane can hear the sound twice: first at a 3 times higher frequency, and then (after they have passed the source) a second time at normal frequency but backwards.

None of the textbooks or web sites mention this backwards sound. Yet I am quite sure it must be there. Am I correct? And if so, is it actually observed (e.g. by fighter pilots), and why do textbooks never mention this?

- I doubt that fighter pilots can hear anything happening outside the plane (just because it's so loud), and I can't think of any other supersonic motion, so I wouldn't be surprised if this effect has never been observed. But it's an interesting question. – David Z Sep 19 '11 at 22:38
- FYI Gareth Loy's Musimathics book mentions it at pg. 230; one gets it from the Doppler shift eq. $f_d=f\frac{v_s}{v_s-u}$ (in 1d), where $v_s$ is the sound speed, $u$ is the emitter's relative speed and $f$ the frequency being emitted, if $u>v_s$. – eudoxos Sep 20 '11 at 9:12
- i don't think this reversal of sound would take place. – Vineet Menon Sep 21 '11 at 4:59
- @Vineet Menon: fully irrelevant what you think unless you give an argument. – eudoxos Sep 21 '11 at 18:15
- @David Zaslavsky: how about a moving surface wave source on water? (to be sure, I really mean surface waves, not acoustic waves; the Doppler effect is the same) – eudoxos Sep 21 '11 at 18:17
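The two claimed frequencies fall out of the moving-observer Doppler formula f_obs = f (v_s + v_o)/v_s, with observer speed v_o positive toward the source; a negative result means the wavefronts are encountered in reverse order, i.e. the sound is heard backwards at the original pitch. A sketch, not from the thread:

```python
def observed_frequency(f, mach, approaching=True):
    """Doppler shift for an observer moving at `mach` times the speed of
    sound past a stationary source: f_obs = f * (v_s + v_o) / v_s, with
    v_o > 0 when approaching. The speed of sound cancels out, leaving
    f * (1 + mach) approaching and f * (1 - mach) receding."""
    v_o = mach if approaching else -mach
    return f * (1 + v_o)

f = 440.0  # Hz, the fog horn's pitch (illustrative)
assert observed_frequency(f, 2.0, approaching=True) == 3 * f   # 3x higher
assert observed_frequency(f, 2.0, approaching=False) == -f     # reversed, same |f|
```

At exactly Mach 1 receding, the observed frequency is zero: the plane rides along with a single wavefront and hears nothing.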
https://zims-en.kiwix.campusafrica.gos.orange.com/wikipedia_en_all_nopic/A/Condition_number
Condition number

In the field of numerical analysis, the condition number of a function measures how much the output value of the function can change for a small change in the input argument. This is used to measure how sensitive a function is to changes or errors in the input, and how much error in the output results from an error in the input. Very frequently, one is solving the inverse problem: given ${\displaystyle f(x)=y,}$ one is solving for x, and thus the condition number of the (local) inverse must be used. In linear regression the condition number of the moment matrix can be used as a diagnostic for multicollinearity.[1][2]

The condition number is an application of the derivative, and is formally defined as the value of the asymptotic worst-case relative change in output for a relative change in input. The "function" is the solution of a problem and the "arguments" are the data in the problem. The condition number is frequently applied to questions in linear algebra, in which case the derivative is straightforward but the error could be in many different directions, and is thus computed from the geometry of the matrix. More generally, condition numbers can be defined for non-linear functions in several variables.

A problem with a low condition number is said to be well-conditioned, while a problem with a high condition number is said to be ill-conditioned. The condition number is a property of the problem. Paired with the problem are any number of algorithms that can be used to solve the problem, that is, to calculate the solution. Some algorithms have a property called backward stability. In general, a backward stable algorithm can be expected to accurately solve well-conditioned problems. Numerical analysis textbooks give formulas for the condition numbers of problems and identify known backward stable algorithms.
As a rule of thumb, if the condition number ${\displaystyle \kappa (A)=10^{k}}$, then you may lose up to ${\displaystyle k}$ digits of accuracy on top of what would be lost to the numerical method due to loss of precision from arithmetic methods.[3] However, the condition number does not give the exact value of the maximum inaccuracy that may occur in the algorithm. It generally just bounds it with an estimate (whose computed value depends on the choice of the norm to measure the inaccuracy).

General definition in the context of error analysis

Given a problem ${\displaystyle f}$ and an algorithm ${\displaystyle {\tilde {f}}}$ with an input x, the absolute error is ${\displaystyle \left\|f(x)-{\tilde {f}}(x)\right\|}$ and the relative error is ${\displaystyle \left\|f(x)-{\tilde {f}}(x)\right\|/\left\|f(x)\right\|}$.

In this context, the absolute condition number of a problem f is

${\displaystyle \lim _{\varepsilon \rightarrow 0}\sup _{\|\delta x\|\leq \varepsilon }{\frac {\|\delta f\|}{\|\delta x\|}}}$

and the relative condition number is

${\displaystyle \lim _{\varepsilon \rightarrow 0}\sup _{\|\delta x\|\leq \varepsilon }{\frac {\|\delta f(x)\|/\|f(x)\|}{\|\delta x\|/\|x\|}}}$

Matrices

For example, the condition number associated with the linear equation Ax = b gives a bound on how inaccurate the solution x will be after approximation. Note that this is before the effects of round-off error are taken into account; conditioning is a property of the matrix, not the algorithm or floating point accuracy of the computer used to solve the corresponding system. In particular, one should think of the condition number as being (very roughly) the rate at which the solution, x, will change with respect to a change in b. Thus, if the condition number is large, even a small error in b may cause a large error in x. On the other hand, if the condition number is small then the error in x will not be much bigger than the error in b.
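The amplification just described is easy to see numerically. This sketch (not from the article) solves a nearly singular 2x2 system by Cramer's rule and perturbs b slightly:

```python
def solve2(A, b):
    """Cramer's rule for a 2x2 system A x = b."""
    (a11, a12), (a21, a22) = A
    det = a11*a22 - a12*a21
    return ((b[0]*a22 - a12*b[1]) / det, (a11*b[1] - b[0]*a21) / det)

A = ((1.0, 1.0), (1.0, 1.0001))   # nearly singular: condition number is large
b = (2.0, 2.0001)
x = solve2(A, b)                  # exact solution (1, 1)

# A tiny relative perturbation of b moves the solution a lot.
b2 = (2.0, 2.0002)
x2 = solve2(A, b2)                # jumps to roughly (0, 2)
rel_in = 1e-4 / 2.0                               # roughly |db| / |b|
rel_out = max(abs(x2[0] - x[0]), abs(x2[1] - x[1]))  # change in x is O(1)
```

Here the output moves about twenty thousand times more (relatively) than the input did, consistent with the condition number of this matrix being on the order of 10^4.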
The condition number is defined more precisely to be the maximum ratio of the relative error in $x$ to the relative error in $b$.

Let $e$ be the error in $b$. Assuming that $A$ is a nonsingular matrix, the error in the solution $A^{-1}b$ is $A^{-1}e$. The ratio of the relative error in the solution to the relative error in $b$ is

$$\frac{\left\|A^{-1}e\right\| / \left\|A^{-1}b\right\|}{\|e\| / \|b\|}.$$

This is easily transformed to

$$\frac{\left\|A^{-1}e\right\|}{\|e\|} \cdot \frac{\|b\|}{\left\|A^{-1}b\right\|}.$$

The maximum value (for nonzero $b$ and $e$) is then seen to be the product of the two operator norms as follows:

$$\begin{aligned}
\max_{e,\,b \neq 0} \left\{ \frac{\left\|A^{-1}e\right\|}{\|e\|} \cdot \frac{\|b\|}{\left\|A^{-1}b\right\|} \right\}
&= \max_{e \neq 0} \left\{ \frac{\left\|A^{-1}e\right\|}{\|e\|} \right\} \, \max_{b \neq 0} \left\{ \frac{\|b\|}{\left\|A^{-1}b\right\|} \right\} \\
&= \max_{e \neq 0} \left\{ \frac{\left\|A^{-1}e\right\|}{\|e\|} \right\} \, \max_{x \neq 0} \left\{ \frac{\|Ax\|}{\|x\|} \right\} \\
&= \left\|A^{-1}\right\| \, \|A\|.
\end{aligned}$$

The same definition is used for any consistent norm, i.e. one that satisfies

$$\kappa(A) = \left\|A^{-1}\right\| \, \|A\| \geq \left\|A^{-1}A\right\| = 1.$$

When the condition number is exactly one (which can only happen if $A$ is a scalar multiple of a linear isometry), then a solution algorithm can find (in principle, meaning if the algorithm introduces no errors of its own) an approximation of the solution whose precision is no worse than that of the data. However, it does not mean that the algorithm will converge rapidly to this solution, just that it won't diverge arbitrarily because of inaccuracy in the source data (backward error), provided that the forward error introduced by the algorithm does not diverge as well because of accumulating intermediate rounding errors.
The condition number may also be infinite, but this implies that the problem is ill-posed (it does not possess a unique, well-defined solution for each choice of data; that is, the matrix is not invertible), and no algorithm can be expected to reliably find a solution.

The definition of the condition number depends on the choice of norm, as can be illustrated by two examples.

If $\|\cdot\|$ is the norm defined in the square-summable sequence space $\ell^2$ (which matches the usual distance in a standard Euclidean space and is usually denoted $\|\cdot\|_2$), then

$$\kappa(A) = \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)},$$

where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ are the maximal and minimal singular values of $A$ respectively. Hence:

- If $A$ is normal, then
  $$\kappa(A) = \frac{\left|\lambda_{\max}(A)\right|}{\left|\lambda_{\min}(A)\right|},$$
  where $\lambda_{\max}(A)$ and $\lambda_{\min}(A)$ are the maximal and minimal (by moduli) eigenvalues of $A$ respectively.
- If $A$ is unitary, then $\kappa(A) = 1$.

The condition number with respect to $L^2$ arises so often in numerical linear algebra that it is given a name, the condition number of a matrix.
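The singular-value formula above is easy to check numerically; this sketch (the example matrix is arbitrary) also confirms that a unitary matrix, here a plane rotation, has condition number 1:

```python
import numpy as np

# Verify that the 2-norm condition number equals the ratio of the largest to
# the smallest singular value, using an arbitrary example matrix.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

sigma = np.linalg.svd(A, compute_uv=False)   # singular values, descending
assert np.isclose(sigma[0] / sigma[-1], np.linalg.cond(A, 2))

# A rotation matrix is unitary, so its condition number is 1.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.linalg.cond(Q, 2))   # 1.0 up to rounding
```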
If $\|\cdot\|$ is the norm defined in the sequence space $\ell^\infty$ of all bounded sequences (which matches the maximum of distances measured on projections into the base subspaces and is usually denoted $\|\cdot\|_\infty$), and $A$ is lower triangular and non-singular (i.e., $a_{ii} \neq 0$ for all $i$), then

$$\kappa(A) \geq \frac{\max_i |a_{ii}|}{\min_i |a_{ii}|}.$$

The condition number computed with this norm is generally larger than the condition number computed with square-summable sequences, but it can be evaluated more easily (and this is often the only practicably computable condition number, when the problem to solve involves non-linear algebra, for example when approximating irrational and transcendental functions or numbers with numerical methods).

If the condition number is not too much larger than one, the matrix is well-conditioned, which means that its inverse can be computed with good accuracy. If the condition number is very large, then the matrix is said to be ill-conditioned. Practically, such a matrix is almost singular, and the computation of its inverse, or of the solution of a linear system of equations, is prone to large numerical errors. A matrix that is not invertible has condition number equal to infinity.

## Nonlinear

Condition numbers can also be defined for nonlinear functions, and can be computed using calculus. The condition number varies with the point; in some cases one can use the maximum (or supremum) condition number over the domain of the function or domain of the question as an overall condition number, while in other cases the condition number at a particular point is of more interest.

### One variable

The condition number of a differentiable function $f$ in one variable as a function is $\left|xf'/f\right|$.
Evaluated at a point $x$, this is

$$\left|\frac{xf'(x)}{f(x)}\right|.$$

Most elegantly, this can be understood as (the absolute value of) the ratio of the logarithmic derivative of $f$, which is $(\log f)' = f'/f$, to the logarithmic derivative of $x$, which is $(\log x)' = x'/x = 1/x$, yielding a ratio of $xf'/f$. This is because the logarithmic derivative is the infinitesimal rate of relative change in a function: it is the derivative $f'$ scaled by the value of $f$.

Note that if a function has a zero at a point, its condition number at the point is infinite, as infinitesimal changes in the input can change the output from zero to positive or negative, yielding a ratio with zero in the denominator, hence infinite relative change.

More directly, given a small change $\Delta x$ in $x$, the relative change in $x$ is $[(x + \Delta x) - x]/x = (\Delta x)/x$, while the relative change in $f(x)$ is $[f(x + \Delta x) - f(x)]/f(x)$. Taking the ratio yields

$$\frac{[f(x + \Delta x) - f(x)]/f(x)}{(\Delta x)/x} = \frac{x}{f(x)} \cdot \frac{f(x + \Delta x) - f(x)}{(x + \Delta x) - x} = \frac{x}{f(x)} \cdot \frac{f(x + \Delta x) - f(x)}{\Delta x}.$$

The last term is the difference quotient (the slope of the secant line), and taking the limit yields the derivative.

Condition numbers of common elementary functions are particularly important in computing significant figures, and can be computed immediately from the derivative; see significance arithmetic of transcendental functions.
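As a quick hedged sanity check of the ratio $|xf'(x)/f(x)|$ (the function and sample points are chosen arbitrarily), a central finite difference reproduces the exact condition number $|x|$ for the exponential:

```python
import math

# Finite-difference approximation of the one-variable condition number
# |x f'(x) / f(x)|.  For f(x) = exp(x) the exact value is |x|.
def cond(f, x, h=1e-6):
    fprime = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    return abs(x * fprime / f(x))

for xv in (0.5, 1.0, 3.0):
    print(round(cond(math.exp, xv), 6))   # 0.5, 1.0, 3.0
```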
A few important ones are given below:

| Name | Symbol | Condition number |
|---|---|---|
| Addition / subtraction | $x + a$ | $\lvert x/(x+a)\rvert$ |
| Scalar multiplication | $ax$ | $1$ |
| Division | $1/x$ | $1$ |
| Polynomial | $x^n$ | $\lvert n\rvert$ |
| Exponential function | $e^x$ | $\lvert x\rvert$ |
| Natural logarithm function | $\ln(x)$ | $\lvert 1/\ln(x)\rvert$ |
| Sine function | $\sin(x)$ | $\lvert x\cot(x)\rvert$ |
| Cosine function | $\cos(x)$ | $\lvert x\tan(x)\rvert$ |
| Tangent function | $\tan(x)$ | $\lvert x(\tan(x)+\cot(x))\rvert$ |
| Inverse sine function | $\arcsin(x)$ | $x/\bigl(\sqrt{1-x^2}\arcsin(x)\bigr)$ |
| Inverse cosine function | $\arccos(x)$ | $\lvert x\rvert/\bigl(\sqrt{1-x^2}\arccos(x)\bigr)$ |
| Inverse tangent function | $\arctan(x)$ | $x/\bigl((1+x^2)\arctan(x)\bigr)$ |

## Several variables

Condition numbers can be defined for any function $f$ mapping its data from some domain (e.g. an $m$-tuple of real numbers $x$) into some codomain (e.g. an $n$-tuple of real numbers $f(x)$), where both the domain and codomain are Banach spaces. They express how sensitive that function is to small changes (or small errors) in its arguments. This is crucial in assessing the sensitivity and potential accuracy difficulties of numerous computational problems, for example polynomial root finding or computing eigenvalues.
The condition number of $f$ at a point $x$ (specifically, its relative condition number[4]) is then defined to be the maximum ratio of the fractional change in $f(x)$ to any fractional change in $x$, in the limit where the change $\delta x$ in $x$ becomes infinitesimally small:[4]

$$\lim_{\varepsilon \to 0^+} \sup_{\|\delta x\| \leq \varepsilon} \left[ \left. \frac{\left\|f(x + \delta x) - f(x)\right\|}{\|f(x)\|} \right/ \frac{\|\delta x\|}{\|x\|} \right],$$

where $\|\cdot\|$ is a norm on the domain/codomain of $f$.

If $f$ is differentiable, this is equivalent to[4]

$$\frac{\|J(x)\|}{\|f(x)\| / \|x\|},$$

where $J(x)$ denotes the Jacobian matrix of partial derivatives of $f$ at $x$ and $\|J(x)\|$ is the induced norm on the matrix.

## References

1. Belsley, David A.; Kuh, Edwin; Welsch, Roy E. (1980). "The Condition Number". Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons. pp. 100–104. ISBN 0-471-05856-4.
2. Pesaran, M. Hashem (2015). "The Multicollinearity Problem". Time Series and Panel Data Econometrics. New York: Oxford University Press. pp. 67–72 [p. 70]. ISBN 978-0-19-875998-0.
3. Cheney; Kincaid (2007). Numerical Mathematics and Computing. ISBN 978-0-495-11475-8.
4. Trefethen, L. N.; Bau, D. (1997). Numerical Linear Algebra. SIAM. ISBN 978-0-89871-361-9.

This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.
https://physics.stackexchange.com/questions/431425/why-dont-we-use-sign-convention-during-the-derivation-of-a-lens-maker-formula
# Why don't we use sign convention during the derivation of a lens maker formula?

Please have a look at the lens maker's formula. In any derivation in geometrical optics, we use the sign convention twice: once while deriving the formula, and again while applying it to particular cases. But in the derivation of the lens maker's formula, we don't consider negative and positive values of the radius of curvature while solving for the two spherical surfaces. This should lead to a wrong answer, and in fact I solved one example in which analysing the two spherical surfaces individually gave a different answer than using the lens maker's formula directly. I think I am not getting this. From what I have read, it's because no matter which surface light hits first, the net refraction is the same, so the sign convention doesn't play a major role. But I am still confused.

I posted this question a while ago and have now realised what I was missing. In the derivation of any other formula — for example, the mirror formula, or magnification for mirrors or lenses — we use a specific case involving, usually but not necessarily, a convex lens. Thus we use the sign convention during the derivation as well as later, while solving problems involving different sets of lenses or mirrors.

However, the lens maker's formula is a direct, general formula in which the first and second radii can take any value. Why, you may ask, is a specific case not needed here? It is because we never use any specific configuration, such as a convex surface on both sides. We just say that an object situated in the negative direction undergoes refraction through the first surface and forms an image, which again undergoes refraction through the second surface. There is no specific assumption that the image formed lies in the positive coordinate, or that it is enlarged or diminished, etc. So no particular case is mentioned during the derivation; we directly apply the convention to the problem we are solving.
• Forgot to mention, but the answer is really basic and simple. – tiffy Oct 2 '18 at 12:03

I have a better explanation. You see, we have used the formula for refraction at a spherical surface, $$\frac{n_2}{v}-\frac{n_1}{u}=\frac{n_2-n_1}{R},$$ in the derivation of the lens maker's formula. Now, the use of the sign convention in any derivation is only to make the formula generalized. If we don't use the sign convention, we can use the derived formula only in the situation which we considered while deriving it. And since we have already used the sign convention in the derivation of the formula for refraction through a spherical surface, we don't have to use it again in the derivation of the lens maker's formula.

• Hi user220718, note that it's typical to place a single space after all punctuation marks, not just periods. We also have MathJax enabled on this site to make equations look nice; search "notation" in the help center to learn more. – Kyle Kanos Jan 23 '19 at 15:18
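A numerical sketch may make this concrete (all numbers below are my own made-up example, for a thin biconvex lens in air): applying the single-surface relation $n_2/v - n_1/u = (n_2 - n_1)/R$ twice, with signed radii and distances throughout, reproduces the lensmaker result $1/f = (n - 1)(1/R_1 - 1/R_2)$.

```python
# Thin-lens sketch: refract at each spherical surface in turn using
# n2/v - n1/u = (n2 - n1)/R with the Cartesian sign convention, then compare
# the focal length obtained from the thin-lens equation with the lensmaker
# formula.  Example values are arbitrary (radii in cm).
n_air, n_glass = 1.0, 1.5
R1, R2 = 10.0, -10.0        # biconvex lens: R1 > 0, R2 < 0 by convention
u = -30.0                   # object on the incoming-light side, so u < 0

# Surface 1: air -> glass
v1 = n_glass / ((n_glass - n_air) / R1 + n_air / u)
# Surface 2: glass -> air; the image from surface 1 is the object here
# (thin lens: the lens thickness is neglected)
v = n_air / ((n_air - n_glass) / R2 + n_glass / v1)

f_from_surfaces = 1.0 / (1.0 / v - 1.0 / u)                      # thin-lens equation
f_lensmaker = 1.0 / ((n_glass - n_air) * (1.0 / R1 - 1.0 / R2))  # lensmaker formula

print(f_from_surfaces, f_lensmaker)   # both 10.0
```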
https://www.computer.org/csdl/proceedings/quatic/2010/4241/00/4241a089-abs.html
2010 Seventh International Conference on the Quality of Information and Communications Technology (2010)
Porto, Portugal, Sept. 29, 2010 to Oct. 2, 2010
ISBN: 978-0-7695-4241-6
pp. 89-96

ABSTRACT

Traditionally, test cases are used to check whether a system conforms to its requirements. However, to achieve good quality and coverage, large amounts of test cases are needed, and thus huge efforts have to be put into test generation and maintenance. We propose a methodology, called Abstract Testing, in which test cases are replaced by verification scenarios. Such verification scenarios are more abstract than test cases, thus fewer of them are needed and they are easier to create and maintain. Checking verification scenarios against the source code is done automatically using a software model checker. In this paper we describe the general idea of Abstract Testing, and demonstrate its feasibility by a case study from the automotive systems domain.

INDEX TERMS

abstract testing, verification, requirements engineering, bounded model checking

CITATION

H. Post, T. Kropf, C. Sinz, F. Merz and T. Gorges, "Abstract Testing: Connecting Source Code Verification with Requirements," 2010 Seventh International Conference on the Quality of Information and Communications Technology (QUATIC), Porto, Portugal, 2010, pp. 89-96. doi:10.1109/QUATIC.2010.14
http://libros.duhnnae.com/2017/aug/150156921116-The-topology-of-systems-of-hyperspaces-determined-by-dimension-functions-Mathematics-General-Topology.php
# The topology of systems of hyperspaces determined by dimension functions - Mathematics > General Topology

Abstract: Given a non-degenerate Peano continuum $X$, a dimension function $D : 2^X_* \to [0,\infty)$ defined on the family $2^X_*$ of compact subsets of $X$, and a subset $\Gamma \subset [0,\infty)$, we recognize the topological structure of the system $\bigl(2^X, \{D_{\le\gamma}(X)\}_{\gamma\in\Gamma}\bigr)$, where $2^X$ is the hyperspace of non-empty compact subsets of $X$ and $D_{\le\gamma}(X)$ is the subspace of $2^X$ consisting of non-empty compact subsets $K \subset X$ with $D(K) \le \gamma$.

Author: T. Banakh, N. Mazurenko

Source: https://arxiv.org/
http://mathoverflow.net/questions/100276/can-one-prove-complex-multiplication-without-assuming-cft?sort=newest
# Can one prove complex multiplication without assuming CFT? The Kronecker-Weber Theorem, stating that any abelian extension of $\mathbb Q$ is contained in a cyclotomic extension, is a fairly easy consequence of Artin reciprocity in class field theory (one just identifies the ray class groups and shows that each corresponds to a cyclotomic extension). However, one can produce a more direct and elementary proof of this fact that avoids appealing to the full generality of class field theory (see, for example, the exercises in the fourth chapter of Number Fields by Daniel Marcus). In other words, one can prove class field theory for $\mathbb Q$ using much simpler methods than for the general case. The theory of complex multiplication is similar to the theory of cyclotomic fields (and hence the Kronecker-Weber Theorem) in that it shows that any abelian extension of a quadratic imaginary field is contained in an extension generated by the torsion points of an elliptic curve with complex multiplication by our field. To prove this, one normally assumes class field theory and then shows that the field generated by the $m$-torsion (or, more specifically, the Weber function of the $m$-torsion) is the ray class field of conductor $m$. My question is: Can one prove that any abelian extension of an imaginary quadratic field $K$ is contained in a field generated by the torsion of an elliptic curve with complex multiplication by $K$ without resorting to the general theory of class field theory? I.e. where one directly proves class field theory for $K$ by referring to the elliptic curve. Is there a proof in the style of the exercises in Marcus's book? Note: Obviously there is no formal formulation of what I'm asking. One way or another, you can prove complex multiplication. But the question is whether you can give a proof of complex multiplication in a certain style. 
(modified) Historically, complex multiplication precedes class field theory, and many of the main theorems of CM for elliptic curves were proved directly. See Algebren (3 volumes) by Weber or Cox's book for an exposition. Please also read Birch's article on the beginnings of Heegner points, where he points this out explicitly (page three, paragraph beginning "Complex multiplication ...").
https://www.physicsforums.com/threads/chemical-reaction-equation-historical-question.749178/
# Chemical reaction equation, historical question

1. Apr 17, 2014

### 7777777

I am reading a chemistry book printed in 1805. The chemical reaction equations are written using the equality symbol = instead of the arrow →, which is used in modern times. Anyway, sometimes it is still possible to see the "old-fashioned" way: http://www.jeron.je/anglia/learn/sec/science/changmat/page13.htm

Does anyone know why the equality symbol was abandoned, and when it happened in the history of chemistry? Are there reasons why this change was needed? I know only a little about chemistry; I think this is a very basic question, but I cannot seem to find the complete answer myself.

I can think that maybe the = was replaced by → because chemical reaction equations are not mathematical equations; there is no equality in the equation in the mathematical sense. If chemical equations are not mathematics, then why has the addition symbol + not been replaced by something else? Addition is a mathematical operation, so should it be understood to mean also a chemical reaction? Something is added into something else; perhaps this is a universal concept applicable not just in mathematics.

2. Apr 17, 2014

### PhysicoRaj

An arrow indicates direction, whereas an equality sign does not.

3. Apr 17, 2014

### DrDu

As long as a reaction is not in equilibrium, the reaction proceeds in one or the other direction. Hence it is more convenient to use arrows. In some situations, it is also necessary to distinguish formally between reactants and products; e.g. in calculating the potential of an electrochemical half cell, you divide by convention the product of the concentrations of the products by that of the reactants.

4. Apr 17, 2014

### 7777777

Ok, there is a direction in a chemical equation: reactants are cause and products are effect, hence there is causality. But not in a mathematical equation; there is symmetry in a mathematical equation instead of causality.
1+1→2 does not make sense because 2 is not caused by 1+1; instead there is symmetry: 1+1=2 and 2=1+1. Perhaps this is a weakness of mathematics: it does not seem to offer causality.

5. Apr 17, 2014

### PhysicoRaj

And there are instances where mathematics offers a cause and effect:

- Implication
- Mathematical induction
- Contraposition

6. Apr 17, 2014

### 256bits

I found this on chemistry and symbols: http://www.chemistryviews.org/details/ezine/2746271/History_and_Usage_of_Arrows_in_Chemistry.html

- 1789: Lavoisier uses the "=" sign for a chemical equation.
- 1884: van 't Hoff uses double arrows.
- 1901: single arrow to designate direction, products and reactants.

Other uses of arrows in chemistry, past and present, are shown there.

Last edited by a moderator: May 6, 2017

7. Apr 18, 2014

### PhysicoRaj

Nice links. This timeline was very interesting →
https://www.vedantu.com/formula/percentage-formula
# Percentage Formula

## Calculate Percentage

There are various formulas for finding a percentage, and they help in solving percentage problems. The most basic percentage formula is P% × A = B, but there are many mathematical variations of it. Let's take a look at the three basic percentage problems that can be solved using percentage formulas. 'A' and 'B' are numbers and P is the percentage:

- Find P percent of A
- Find what percent of A is B
- Find A if P percent of it is B

For example, 25% of 1000 is 250.

$$\frac{\text{is}}{\text{of}} = \frac{\%}{100} \quad\text{or}\quad \frac{\text{part}}{\text{whole}} = \frac{\%}{100}$$

### How to find what percent of A is B

Example: What percent of 75 is 15?

Follow the step-by-step procedure below and solve the percentage problem in one go:

1. First, convert the problem to an equation using the percentage formula: B/A = P%.
2. Do the math: since A is 75 and B is 15, the equation becomes 15/75 = P%.
3. Solve the equation: 15/75 = 0.20.
4. Note: the outcome is always in decimal form, not a percentage, so we multiply it by 100 to obtain the percentage.
5. Convert the decimal 0.20 to a percent by multiplying by 100.
6. We get 0.20 × 100 = 20%.

So 20% of 75 is 15.

### How to find A if P percent of it is B

Example: 50 is 10% of what number?

Follow the step-by-step procedure below and solve the percentage problem in one go:

1. First, convert the problem to an equation using the percentage formula: B/(P%) = A.
2. Given that the value of B is 50 and P% is 10%, the equation is 50/10% = A.
3. Convert the percentage to decimal form by dividing by 100.
4. Converting 10% to a decimal gives 10/100 = 0.10.
5. Substitute 0.10 for 10% in the equation: 50/0.10 = A.
Do the math: 50/0.10 = A, so A = 500.

So 50 is 10% of 500.

### Percentage Difference Formula

The percentage difference between two values is calculated by dividing the absolute value of the difference between the two values by the average of those two values. Multiplying the outcome by 100 produces the solution as a percent rather than a decimal. Take a look at the equation below:

Percentage difference = |A₁ − A₂| / [(A₁ + A₂)/2] × 100

For example, find the percentage difference between the two values 20 and 4.

Solution: the given values are 20 and 4, so

|20 − 4| / [(20 + 4)/2] × 100 = 16/12 × 100 ≈ 1.33 × 100 = 133.33%

### Percentage Change Formula

Percentage decrease and increase are calculated by finding the difference between two values and comparing that difference to the initial value. Mathematically, this involves taking the difference between the two values and dividing the result by the initial value, essentially computing how much the initial value has changed.

The percentage change calculation computes an increase or decrease of a definite percentage of the input number. It typically involves converting the percent into its decimal equivalent, and either adding the decimal equivalent to 1 (for an increase) or subtracting it from 1 (for a decrease). Multiplying the original number by this value results in either an increase or a decrease of the number by the given percent. Refer to the example below for clarification.
Example: 700 increased by 20% (0.2):

700 × (1 + 0.2) = 840

700 decreased by 10% (0.1):

700 × (1 − 0.1) = 630

### Solved Examples

Example 1: Find: ___% of 15 is 6.

Solution: Here whole = 15 and part = 6, but the % is missing. We obtain 6/15 = %/100. Replacing % by x and cross-multiplying gives 6 × 100 = 15 × x, i.e. 600 = 15 × x. Divide 600 by 15 to get x: 600/15 = 40, so x = 40. Thus, 40% of 15 is 6.

Example 2: The tax on a soap dispenser machine is Rs 25.00. The tax rate is 20%. What is the price without tax?

Solution: Let P be the price without tax. Then P × 20/100 = 25, so P = 25 × 100/20 = 125. The price without tax is Rs 125.

### Fun Facts

1. The percentage (%) sign bears a significant ancient connection. Ancient Romans often performed calculations in fractions dividing by 100, which is presently equivalent to computing percentages.
2. Calculations with a denominator of 100 became more typical since the introduction of the decimal system.
3. Percentage methods had frequently been used in medieval arithmetic texts to describe finances, such as interest rates.

Q1. How do you know a percentage calculation is accurate?

Ans. A common mistake when finding percentages is dividing instead of multiplying by the decimal conversion. Because percentages are often perceived as parts of a larger whole, there can be a tendency to divide instead of multiply when met with a problem such as "find 30% of 160". Always remember: after converting the percent to a decimal, multiply, do not divide. A proper understanding of percent lets us estimate whether an answer is reasonable. In the above example, knowing that 30% is between one-quarter and one-half means the answer should be somewhere between 40 and 80. The answer is 48 (0.30 × 160). By dividing instead (160/0.30), you would get 533.33, which is completely wrong.

Q2. What is a percentage?
Ans. Percent refers to a proportion "out of 100" or "for every 100". It is denoted by the symbol (%). A percentage is a fast way to write down a fraction with a denominator of 100. For example, instead of saying "the tutor covered 17 history lessons out of every 100," we say "she covered 17% of the history syllabus."

Q3. How do you convert a Percentage to a Decimal?

Ans. Remove the percentage symbol and divide by 100: 25.70% = 25.7/100 = 0.257

Q4. How do you convert a Decimal to a Percentage?

Ans. Multiply by 100 and add the percentage sign: 0.257 = 0.257 × 100 = 25.7%
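The conversion rules above are easy to script. Here is a small sketch in plain Python (the function names are illustrative):

```python
import math

def percent_to_decimal(p):
    """25.7 (%) -> 0.257: drop the % sign and divide by 100."""
    return p / 100.0

def decimal_to_percent(d):
    """0.257 -> 25.7 (%): multiply by 100 and add the % sign."""
    return d * 100.0

def what_percent(part, whole):
    """Solve part/whole = x/100 for x, as in Example 1."""
    return part * 100.0 / whole

assert math.isclose(percent_to_decimal(25.7), 0.257)
assert math.isclose(decimal_to_percent(0.257), 25.7)
assert what_percent(6, 15) == 40.0             # 40% of 15 is 6
assert math.isclose(700 * (1 + 0.2), 840)      # 700 increased by 20%
assert math.isclose(700 * (1 - 0.1), 630)      # 700 decreased by 10%
```

`math.isclose` is used rather than `==` because decimal fractions like 0.257 are not exactly representable in binary floating point.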
http://math.stackexchange.com/questions/29068/differential-equation
# Differential equation

Solve the differential equation: $$x \frac{dy}{dx} = y(3-y)$$ where $x=2$ when $y=2$, giving y as a function of x.

Can someone solve this and then explain what the second line about $x=2$, $y=2$ means?

-

Solutions of differential equations generally depend on a specified initial value. (Compare to the case of taking antiderivatives of a function: you have to write $+C$ because there is a free constant in the expression. By specifying the value of the antiderivative you want at a point, you fix that constant uniquely.) – Willie Wong Mar 25 '11 at 22:19

The second line just tells you that $y(2) = 2$. – Arturo Magidin Mar 26 '11 at 3:14

So you want to "solve" the initial value problem: $\begin{cases} x y^\prime (x) = y(x)\ (3-y(x)) \\ y(2)=2 \end{cases}$.

Your ODE has a singular point at $x=0$ (for the coefficient of the $y^\prime (x)$ term vanishes), hence if the IVP has a solution, that solution cannot be defined at $x=0$. Put the ODE in normal form, i.e. rewrite: $\displaystyle y^\prime (x) =\frac{y(x)\ (3-y(x))}{x}$; the function $f(x,y):= \frac{y (3-y)}{x}$ is of class $C^\infty$ in $(x,y)\in \Big(]-\infty ,0[\cup ]0,+\infty[\Big)\times \mathbb{R}$, hence the existence and uniqueness theorem applies and your IVP has a unique local solution $y(x)$ whose graph passes through the point $(2,2)$.

The solution $y(x)$ is continuous in a neighbourhood $I_1$ of $x=2$ (because it has to be differentiable to satisfy the ODE), hence the composite function $f(x,y(x))$ is continuous in $I_1$; as $y^\prime (x)=f(x,y(x))$, the derivative $y^\prime (x)$ is continuous in $I_1$, therefore $y(x)$ is a $C^1$ function in $I_1$. But then $y^\prime (x)$ is of class $C^1$ in $I_1$, for the composite function $f(x,y(x))$ is of class $C^1$ (apply the chain rule); therefore $y(x)$ is of class $C^2$... Bootstrapping, you see that $y(x)$ is of class $C^\infty$ in the neighbourhood $I_1$ of the initial point $2$.
Moreover, the solution $y(x)$ is also strictly increasing in a neighbourhood of $2$: in fact, $y(2)=2>0$, hence by continuity you can find a neighbourhood $I_2\subseteq I_1$ of $2$ in which $0<y(x)<3$, so: $\displaystyle y^\prime (x)=\frac{y(x)\ (3-y(x))}{x} >0$, thus $y(x)$ increases strictly.

Now you have all the ingredients to properly solve your problem: in fact, in $I_2$ you can divide both sides of the ODE by $y(x)\ (3-y(x))$ and rewrite: $\displaystyle \frac{y^\prime (x)}{y(x)\ (3-y(x))} =\frac{1}{x}$; now fix a point $x \in I_2$ and integrate both sides from $2$ to $x$: $\displaystyle \int_2^x \frac{y^\prime (t)}{y(t)\ (3-y(t))}\ \text{d} t =\int_2^x \frac{1}{t}\ \text{d} t$ (I've introduced a dummy variable in the integrals); now the RHside gives you $\ln x -\ln 2$, hence you have to work on the LHside.

Keeping in mind that $y(t)$ is strictly monotone, hence invertible, in $I_2$, we can make the change of variable $\theta =y(t)$: as $y(2)=2$ and $\text{d} \theta = y^\prime (t)\ \text{d} t$, you get:

$\displaystyle \begin{split}\int_2^x \frac{y^\prime (t)}{y(t)\ (3-y(t))}\ \text{d} t &= \int_2^{y(x)} \frac{1}{\theta (3-\theta)}\ \text{d} \theta \\ &=\frac{1}{3} \ln \theta - \frac{1}{3} \ln (3-\theta) \Big|_2^{y(x)} \\ &=\frac{1}{3} \left(\ln y(x) -\ln (3-y(x)) -\ln 2\right)\end{split}$

Therefore the solution to your problem is implicitly determined by the equation: $\displaystyle \ln \left( \frac{y(x)}{3-y(x)}\right) = \ln \frac{x^3}{4}$, i.e.: $\displaystyle \frac{4y(x)}{3-y(x)}=x^3$.

The latter equation is a rational algebraic equation w.r.t. $y(x)$ and can be solved with the usual tools, which yield: $\displaystyle y(x)=\frac{3x^3}{4+x^3}$.

There is more that can be said, e.g. how the local solution $y(x)$ can be extended to a maximal solution... But that's another story.

-

Another story indeed! Despite the overkill, I am grateful for a formal discussion and solution.
+1 – The Chaz 2.0 Mar 26 '11 at 13:40

@The Chaz: My two cents: I don't think it's overkill... It is just the correct way of doing the exercise. – Pacciu Mar 26 '11 at 15:53

This belief is evident from the nature of your answer! I'll continue to appreciate your rigor while "monkeying around" in my own way ;) – The Chaz 2.0 Mar 26 '11 at 16:07

@The Chaz: Thank you! And watch your steps while "monkeying around" ;D – Pacciu Mar 26 '11 at 16:25

This is easy because it has separable variables. So solve $dy/(y(3-y))=dx/x$.

-

... by integrating both sides, and then using the initial value to determine the constant of integration – Henry Mar 25 '11 at 22:45

The method you adopted for separating variables is usually called the urang-utang© method by some funny mathematicians. They mock: "People using this method without knowing its formal justification (if any!) resemble Orangutans using things to make rudimentary tools and messing with them"; in fact, the method is based on a totally informal algebraic manipulation of differentials which is hard to formalize (hence it's almost meaningless). – Pacciu Mar 25 '11 at 23:56

@Pacciu: See this question, and in particular, Mike Spivey's answer there. – Rahul Mar 26 '11 at 2:29

@pac it's not meaningless, just leave $y'$ alone: two functions (of $x$) are equal, so are their integrals (w.r.t. $x$). – yoyo Mar 26 '11 at 14:31

@Rahul: Thanks for the reference, but I already know the story ;D @yoyo: There are people believing that, say, one can pass from $\frac{\text{d} y}{\text{d} x} =f(y)$ to $\text{d} y=f(y)\ \text{d} x$ by multiplying both sides by $\text{d} x$... But what's the meaning of this? How can a differential be treated as a number when it is not a number (for, it is just a symbol or a linear map)? This is what I was referring to when I wrote "informal algebraic manipulation of differentials which is [...] almost meaningless". – Pacciu Mar 26 '11 at 15:46

This is just an Initial Value Problem.
You use the techniques you know to solve for the general solution. Here is a good resource: http://tutorial.math.lamar.edu/Classes/DE/Linear.aspx. More specifically, you should look at "Separable Equations". The general solution in this case will have one arbitrary constant; you use the I.V.P. to solve for this constant.

-
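The closed form obtained above, $y(x)=3x^3/(4+x^3)$, can also be checked numerically against a direct integration of the IVP. Here is a sketch using a classical fourth-order Runge–Kutta scheme (the step count is an arbitrary choice):

```python
def f(x, y):
    """Right-hand side of the normal form y' = y*(3 - y)/x."""
    return y * (3.0 - y) / x

def rk4(f, x0, y0, x_end, steps=10_000):
    """Classical fourth-order Runge-Kutta from (x0, y0) to x_end."""
    h = (x_end - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

def exact(x):
    """Closed-form solution derived in the answer above."""
    return 3 * x**3 / (4 + x**3)

# the initial condition y(2) = 2 is reproduced by the closed form
assert exact(2.0) == 2.0

# integrate away from the singular point x = 0 and compare
for x_end in (1.0, 3.0, 5.0):
    assert abs(rk4(f, 2.0, 2.0, x_end) - exact(x_end)) < 1e-8
```

Integrating both forward (toward 5) and backward (toward 1) stays clear of the singular point at $x=0$, consistent with the local-existence discussion in the answer.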
https://link.springer.com/article/10.1007/s00454-017-9916-5?error=cookies_not_supported&error=cookies_not_supported&code=c28068ea-1d3b-489f-aa82-691d48f33777&code=8005c167-077b-46e0-bc35-212d0dbf5bef
# On Homotopy Types of Euclidean Rips Complexes

## Abstract

The Rips complex at scale r of a set of points X in a metric space is the abstract simplicial complex whose faces are determined by finite subsets of X of diameter less than r. We prove that for X in the Euclidean 3-space $$\mathbb{R}^3$$ the natural projection map from the Rips complex of X to its shadow in $$\mathbb{R}^3$$ induces a surjection on fundamental groups. This partially answers a question of Chambers, de Silva, Erickson and Ghrist, who studied this projection for subsets of $$\mathbb{R}^2$$. We further show that Rips complexes of finite subsets of $$\mathbb{R}^n$$ are universal, in that they model all homotopy types of simplicial complexes PL-embeddable in $$\mathbb{R}^n$$. As an application we get that any finitely presented group appears as the fundamental group of a Rips complex of a finite subset of $$\mathbb{R}^4$$. We furthermore show that if the Rips complex of a finite point set in $$\mathbb{R}^2$$ is a normal pseudomanifold of dimension at least two then it must be the boundary of a crosspolytope.

## Notes

1. Chambers et al. [3] use the term k-connected to describe the situation when the induced map on $$\pi_k$$ is also a bijection, although it is more standard to call the latter a k-equivalence.

## References

1. Attali, D., Lieutier, A., Salinas, D.: Vietoris–Rips complexes also provide topologically correct reconstructions of sampled shapes. Comput. Geom. 46(4), 448–465 (2013)
2. Björner, A.: Topological methods. In: Graham, R., Grötschel, M., Lovász, L. (eds.) Handbook of Combinatorics, vol. 2, pp. 1819–1872. Elsevier, Amsterdam (1995)
3. Chambers, E.W., de Silva, V., Erickson, J., Ghrist, R.: Vietoris–Rips complexes of planar point sets. Discrete Comput. Geom. 44(1), 75–90 (2010)
4. Chazal, F., de Silva, V., Oudot, S.: Persistence stability for geometric complexes. Geom.
Dedicata 173, 193–214 (2014)
5. Deza, M., Dutour, M., Shtogrin, M.: On simplicial and cubical complexes with short links. Isr. J. Math. 144(1), 109–124 (2004)
6. Dranišnikov, A.N., Repovš, D.: Embedding up to homotopy type in Euclidean space. Bull. Aust. Math. Soc. 47(1), 145–148 (1993)
7. Hausmann, J.-C.: On the Vietoris–Rips complexes and a cohomology theory for metric spaces. In: Quinn, W. (ed.) Prospects in Topology. Annals of Mathematics Studies, vol. 138, pp. 175–188. Princeton University Press, Princeton (1995)
8. Kozlov, D.N.: Combinatorial Algebraic Topology. Algorithms and Computation in Mathematics, vol. 21. Springer, Berlin (2008)
9. Latschev, J.: Vietoris–Rips complexes of metric spaces near a closed Riemannian manifold. Arch. Math. 77(6), 522–528 (2001)
10. tom Dieck, T.: Algebraic Topology. EMS Textbooks in Mathematics. European Mathematical Society, Zürich (2008)
11. Vietoris, L.: Über den höheren Zusammenhang kompakter Räume und eine Klasse von zusammenhangstreuen Abbildungen. Math. Ann. 97(1), 454–472 (1927)

## Acknowledgements

We thank Jesper M. Møller for helpful discussions and for suggesting the collaboration of the first and third author. We also thank the referees for their suggestions. Some of this research was performed while the second author visited the University of Copenhagen. The second author is grateful for the hospitality of the Department of Mathematical Sciences there. MA was supported by VILLUM FONDEN through the network for Experimental Mathematics in Number Theory, Operator Algebras, and Topology.

Editor in Charge: Kenneth Clarkson

Adamaszek, M., Frick, F. & Vakili, A. On Homotopy Types of Euclidean Rips Complexes. Discrete Comput Geom 58, 526–542 (2017). https://doi.org/10.1007/s00454-017-9916-5

### Keywords

- Vietoris–Rips complex
https://tex.stackexchange.com/questions/476618/how-align-left-dedicatory
# How to left-align a dedication

My code is:

    \documentclass[a4paper,12pt]{article}
    \usepackage[paper=a4paper,left=30mm,right=20mm,top=25mm,bottom=30mm]{geometry}
    \newenvironment{dedication}
      {\clearpage           % we want a new page
       \thispagestyle{empty}% no header and footer
       \vspace*{\stretch{1}}% some space at the top
       \itshape             % the text is in italics
       \raggedleft          % flush to the right margin
      }
      {\par                 % end the paragraph
       \vspace{\stretch{3}} % space at bottom is three times that at the top
       \clearpage           % finish off the page
      }
    \begin{document}
    \begin{dedication}
    Dedicated to google and wikipedia
    \end{dedication}
    \end{document}

This is the result:

• You have specified \raggedleft which is the opposite of what you want. Try \raggedright. – barbara beeton Feb 25 '19 at 15:51
• @barbarabeeton I want it to be placed on the right side of the page, and I want the content to be aligned to the left – x-rw Feb 25 '19 at 15:57
• To get a uniform indentation on the left, you can specify \leftskip=<dimen> \parindent=0pt where <dimen> is the amount of space you want on the left (e.g., 2cm). This is plain TeX notation, not LaTeX, but it should work. – barbara beeton Feb 25 '19 at 16:02
• @barbarabeeton the example I put in the figure in red letters – x-rw Feb 25 '19 at 16:04
• Yes, that's what I gave the code for. It should replace the \raggedleft in your code. – barbara beeton Feb 25 '19 at 16:13

The code you post specifies \raggedleft, which is the opposite of what you want. \raggedright is what you should be using. You also want a uniform indentation on the left. Replace the instruction \raggedleft in your code by the following:

    \leftskip=2cm \raggedright \parindent=0pt

Replace the 2cm in this code by the width of the indentation that you want. This code is in "plain TeX" style, not LaTeX, but it should work with no problem, although some LaTeX users would prefer a LaTeX-specific formulation.
The reason I was trying to answer in comments is that I don't currently have the ability to test; I don't like to provide untested answers.
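Putting the accepted fix into the environment from the question gives a complete example (an untested sketch; the 2cm value is arbitrary, and a larger \leftskip pushes the whole block toward the right margin while keeping the text itself flush left):

```latex
\documentclass[a4paper,12pt]{article}
\usepackage[paper=a4paper,left=30mm,right=20mm,top=25mm,bottom=30mm]{geometry}

\newenvironment{dedication}
  {\clearpage           % we want a new page
   \thispagestyle{empty}% no header and footer
   \vspace*{\stretch{1}}% some space at the top
   \itshape             % the text is in italics
   \leftskip=2cm        % uniform indentation on the left (plain TeX)
   \raggedright         % flush-left, ragged-right text
   \parindent=0pt
  }
  {\par                 % end the paragraph
   \vspace{\stretch{3}} % space at bottom is three times that at the top
   \clearpage           % finish off the page
  }

\begin{document}
\begin{dedication}
Dedicated to google and wikipedia
\end{dedication}
\end{document}
```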
http://solidmechanics.org/Text/Chapter3_3/Chapter3_3.php
3.3 Hypoelasticity – elastic materials with a nonlinear stress-strain relation under small deformation

Hypoelasticity is used to model materials that exhibit nonlinear, but reversible, stress-strain behavior even at small strains. Its most common application is in the so-called 'deformation theory of plasticity,' which is a crude approximation of the behavior of metals loaded beyond the elastic limit.

A hypoelastic material has the following properties:

- The solid has a preferred shape.
- The specimen deforms reversibly: if you remove the loads, the solid returns to its original shape.
- The strain in the specimen depends only on the stress applied to it – it doesn't depend on the rate of loading, or the history of loading.
- The stress is a nonlinear function of strain, even when the strains are small, as shown in the picture above.

Because the strains are small, this is true whatever stress measure we adopt (Cauchy stress or nominal stress), and is true whatever strain measure we adopt (Lagrange strain or infinitesimal strain).

We will assume here that the material is isotropic (i.e. the response of a material is independent of its orientation with respect to the loading direction). In principle, it would be possible to develop anisotropic hypoelastic models, but this is rarely done.

The stress-strain law is constructed as follows:

Strains and rotations are assumed to be small. Consequently, deformation is characterized using the infinitesimal strain tensor $\epsilon_{ij}$ defined in Section 2.1.7. In addition, all stress measures are taken to be approximately equal, so we can use the Cauchy stress $\sigma_{ij}$ as the stress measure.

When we develop constitutive equations for nonlinear elastic materials, it is usually best to find an equation for the strain energy density of the material as a function of the strain, instead of trying to write down stress-strain laws directly.
This has several advantages: (i) we can work with a scalar function; and (ii) the existence of a strain energy density guarantees that deformations of the material are perfectly reversible.

If the material is isotropic, the strain energy density can only be a function of strain measures that do not depend on the direction of loading with respect to the material. One can show that this means that the strain energy can only be a function of invariants of the strain tensor – that is to say, combinations of strain components that have the same value in any basis (see Appendix B). The strain tensor always has three independent invariants: these could be the three principal strains, for example. In practice it is usually more convenient to use the three fundamental scalar invariants:

$$I_1 = \epsilon_{kk} \qquad I_2 = \frac{1}{2}\left(\epsilon_{ij}\epsilon_{ij} - \epsilon_{kk}\epsilon_{pp}/3\right) \qquad I_3 = \det(\epsilon) = \frac{1}{6}\in_{ijk}\in_{lmn}\epsilon_{li}\epsilon_{mj}\epsilon_{nk}$$

Here, $I_1$ is a measure of the volume change associated with the strain; $I_2$ is a measure of the shearing caused by the strain; and I can't think of a good physical interpretation for $I_3$. Fortunately, it doesn't often appear in constitutive equations.

Strain energy density: In principle, the strain energy density could be any sensible function $U(I_1, I_2, I_3)$.
In most practical applications, nonlinear behavior is only observed when the material is subjected to shear deformation (characterized by $I_2$), while stress varies linearly with volume changes (characterized by $I_1$). This behavior can be characterized by a strain energy density

$$U = \frac{1}{6} K I_1^2 + \frac{2 n \sigma_0 \epsilon_0}{n+1}\left(\frac{I_2}{\epsilon_0^2}\right)^{(n+1)/2n}$$

where $K, \sigma_0, \epsilon_0, n$ are material properties (see below for a physical interpretation).

Stress-strain behavior: For this strain energy density function, the stress follows as

$$\sigma_{ij} = \frac{\partial U}{\partial \epsilon_{ij}} = \frac{K}{3}\epsilon_{kk}\delta_{ij} + \sigma_0\left(\frac{I_2}{\epsilon_0^2}\right)^{(1-n)/2n}\left(\frac{\epsilon_{ij} - \epsilon_{kk}\delta_{ij}/3}{\epsilon_0}\right)$$

The strain can also be calculated in terms of stress:

$$\epsilon_{ij} = \frac{1}{3K}\sigma_{kk}\delta_{ij} + \epsilon_0\left(\frac{J_2}{\sigma_0^2}\right)^{(n-1)/2}\left(\frac{\sigma_{ij} - \sigma_{kk}\delta_{ij}/3}{\sigma_0}\right)$$

where $J_2 = \left(\sigma_{ij}\sigma_{ij} - \sigma_{kk}\sigma_{pp}/3\right)/2$ is the second invariant of the stress tensor.
To interpret these results, note that:

- If the solid is subjected to uniaxial tension (with stress $\sigma_{11}=\sigma$ and all other stress components zero), the nonzero strain components are

$$\epsilon_{11} = \frac{\sigma}{3K} + \frac{2}{\sqrt{3}}\,\epsilon_0\left(\frac{\sigma}{\sqrt{3}\,\sigma_0}\right)^n \qquad \epsilon_{22} = \epsilon_{33} = \frac{\sigma}{3K} - \frac{1}{\sqrt{3}}\,\epsilon_0\left(\frac{\sigma}{\sqrt{3}\,\sigma_0}\right)^n$$

- If the solid is subjected to hydrostatic stress (with $\sigma_{11}=\sigma_{22}=\sigma_{33}=\sigma$ and all other stress components zero), the nonzero strain components are

$$\epsilon_{11} = \epsilon_{22} = \epsilon_{33} = \frac{\sigma}{K}$$

- If the solid is subjected to pure shear stress (with $\sigma_{12}=\sigma_{21}=\tau$ and all other stress components zero), the nonzero strains are

$$\epsilon_{12} = \epsilon_{21} = \epsilon_0\left(\frac{\tau}{\sigma_0}\right)^n$$

Thus, the solid responds linearly to pressure loading, with a bulk modulus K. The relationship between shear stress and shear strain is a power law, with exponent n.

This is just an example of a hypoelastic stress-strain law – many other forms could be used.
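Both relations are explicit, so they are easy to check numerically. The sketch below (material constants are illustrative, not values from the text) evaluates strain from stress, verifies the pure-shear and hydrostatic special cases quoted above, and confirms that the two relations invert each other:

```python
import numpy as np

# Illustrative material constants (not values from the text)
K, sigma0, eps0, n = 100.0, 1.0, 0.01, 3.0

def strain_from_stress(sig):
    """eps_ij = sigma_kk delta_ij/(3K) + eps0 (J2/sigma0^2)^((n-1)/2) s_ij/sigma0."""
    tr = np.trace(sig)
    dev = sig - tr / 3.0 * np.eye(3)      # deviatoric stress s_ij
    J2 = 0.5 * np.sum(dev * dev)
    return tr / (3.0 * K) * np.eye(3) \
        + eps0 * (J2 / sigma0**2) ** ((n - 1) / 2.0) * dev / sigma0

def stress_from_strain(eps):
    """sigma_ij = (K/3) eps_kk delta_ij + sigma0 (I2/eps0^2)^((1-n)/2n) e_ij/eps0."""
    tr = np.trace(eps)
    dev = eps - tr / 3.0 * np.eye(3)      # deviatoric strain e_ij
    I2 = 0.5 * np.sum(dev * dev)
    return K / 3.0 * tr * np.eye(3) \
        + sigma0 * (I2 / eps0**2) ** ((1 - n) / (2.0 * n)) * dev / eps0

# Pure shear: eps_12 should equal eps0 * (tau/sigma0)**n
tau = 0.5
sig = np.zeros((3, 3)); sig[0, 1] = sig[1, 0] = tau
eps = strain_from_stress(sig)
assert np.isclose(eps[0, 1], eps0 * (tau / sigma0) ** n)

# The two relations are inverses of each other
assert np.allclose(stress_from_strain(eps), sig)

# Hydrostatic stress sigma*I gives eps_11 = sigma/K
sigma = 2.0
eps_h = strain_from_stress(sigma * np.eye(3))
assert np.isclose(eps_h[0, 0], sigma / K)
```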
https://brilliant.org/discussions/thread/mechanics-4/
# Mechanics

Find the law of force to the pole when the path is the cardioid $$r=a(1- \cos \theta)$$, and prove that if $$F$$ is the force at the apse and $$V$$ the velocity, then $$3V^2=4aF$$.

Note by Syed Subhan Siraj 1 year, 6 months ago

Sort by:

First we assume that the motion is under a central force. Applying logarithmic differentiation:

$r=a\left( 1-\cos { \theta } \right) \\ \Rightarrow \frac { 1 }{ r } \frac { dr }{ d\theta } =\frac { a\sin { \theta } }{ a\left( 1-\cos { \theta } \right) } =\frac { 2\sin { \frac { \theta }{ 2 } } \cos { \frac { \theta }{ 2 } } }{ 2\sin ^{ 2 }{ \frac { \theta }{ 2 } } } =\cot { \frac { \theta }{ 2 } } =\cot { \phi } \\ \Rightarrow \phi =\frac { \theta }{ 2 }$

where $$\phi$$ is the polar-tangential angle in pedal coordinates. Now we have

$p=r\sin { \phi } =r\sin { \frac { \theta }{ 2 } } =\frac { r }{ \sqrt { 2 } } \sqrt { 2\sin ^{ 2 }{ \frac { \theta }{ 2 } } } =\frac { r }{ \sqrt { 2 } } \sqrt { 1-\cos { \theta } } =\frac { r }{ \sqrt { 2 } } \sqrt { \frac { r }{ a } } =r\sqrt { \frac { r }{ 2a } } \\ \Rightarrow 2a{ p }^{ 2 }={ r }^{ 3 }$

Differentiating both sides w.r.t. $$r$$:

$4ap\frac { dp }{ dr } =3{ r }^{ 2 }\\ \Rightarrow \frac { dp }{ dr } =\frac { 3{ r }^{ 2 } }{ 4ap } \\ \Rightarrow \frac { { h }^{ 2 } }{ { p }^{ 3 } } \frac { dp }{ dr } =\frac { { h }^{ 2 }\cdot 3{ r }^{ 2 } }{ { p }^{ 3 }\cdot 4ap } =3a\frac { { h }^{ 2 } }{ { r}^{ 4 } } =F$

Thus the force is inversely proportional to the fourth power of the distance. Now, at an apse

$\frac { dr }{ d\theta } =0\\ \Rightarrow \sin { \theta } =0\\ \Rightarrow \theta =0\quad \text{or}\quad \pi$

But $$\theta =0 \Rightarrow r=0$$, which is a cusp of the cardioid. Thus

$\theta =\pi \\ \Rightarrow r=2a=p\\ \Rightarrow h=vp=2av\\ \Rightarrow F=3a{ \left( 2av \right) }^{ 2 }{ \left( \frac { 1 }{ 2a } \right) }^{ 4 }=\frac { 3{ v }^{ 2 } }{ 4a } \\ \Rightarrow 4aF=3{ v }^{ 2 }$

[Q.E.D.] · 1 year, 3 months ago

thx · 1 year, 2 months ago

You are welcome..:-) · 1 year, 2 months ago

Sir, are you a teacher?
· 1 year, 2 months ago

No no, I am just a student.. I study in college. · 1 year, 2 months ago
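The key identities used in the derivation above (the half-angle relation, the pedal equation $2ap^2=r^3$, the $1/r^4$ force law, and $4aF=3v^2$ at the apse) can be spot-checked numerically; here is a quick sketch with arbitrary positive constants:

```python
import math

a, v, h = 1.7, 0.9, 2.3        # arbitrary positive constants
r = lambda t: a * (1 - math.cos(t))

for t in (0.7, 1.3, 2.1):
    # cot(phi) = (1/r) dr/dtheta should equal cot(theta/2)
    assert math.isclose(a * math.sin(t) / r(t), 1 / math.tan(t / 2))
    # pedal equation 2 a p^2 = r^3 with p = r sin(theta/2)
    p = r(t) * math.sin(t / 2)
    assert math.isclose(2 * a * p**2, r(t)**3)

def force(rv, dr=1e-6):
    """F = (h^2/p^3) dp/dr with p = sqrt(r^3 / 2a), via a central difference."""
    pfun = lambda x: math.sqrt(x**3 / (2 * a))
    dpdr = (pfun(rv + dr) - pfun(rv - dr)) / (2 * dr)
    return h**2 / pfun(rv)**3 * dpdr

# force law F = 3 a h^2 / r^4
assert math.isclose(force(1.5), 3 * a * h**2 / 1.5**4, rel_tol=1e-8)

# at the apse r = 2a and h = 2 a v, hence F = 3 v^2 / (4a), i.e. 4 a F = 3 v^2
F_apse = 3 * a * (2 * a * v)**2 / (2 * a)**4
assert math.isclose(4 * a * F_apse, 3 * v**2)
```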
https://www.americanantigravity.com/mike-windells-plasma-beam-experiments
In 1990 Mike Windell and Warren York began experimenting with modulating resonant plasma beams, and soon discovered that by placing a target in or near the beam they could cause structural lattice changes in various crystalline materials. They were forced to temporarily suspend their investigation into this new form of crystal energy due to lack of funding, but were recently able to resume their research and found that they could harden or soften materials much like the Hutchison Effect does by modulating the phase angle and duty cycle of the resonant beam.

“We also found that we could increase the performance of semiconductors by about 30%. We believe that many of the anomalous effects were due to time dilation. We have observed the apparent conversion of electrical energy in excess of what can be explained by conventional theory. It was also found that many other strange and interesting effects could be produced. We are now working on making sure that all of the observed phenomena are 100% reproducible.” — Mike Windell
https://dsp.stackexchange.com/questions/67210/minimum-value-of-g-amplitude-that-guarantees-an-error-probability-of-at-least
# Minimum value of G (Amplitude) that guarantees an error probability of at least $10^{-2}$ in a 32-PAM transmission system

Pretty much new here. This question comes from an online course quiz which I have already completed but can't seem to get a good sleep over, just because I can't figure it out. Below is the question in the image.

Given the range of the sample distribution and the error probability, from what I know, $$P_{err} = \text{erfc}(G/\sigma)$$

IMO, $$\sigma$$ is assumed to be the error energy, which we were told is given by $$\sigma = \Delta^2 / 12$$ where $$\Delta = (B - A) / 2^R$$, $$B - A = (100 - (-100)) = 200$$, and $$R$$ is the range of the various intervals, which I at one point chose to be 32 and at another point chose to be 5.

From a programmed online calculator, I got the inverse error function of $$P_{err}$$, given by $$erfc^{-1}(0.01)$$ (which corresponds to $$G/\sigma$$), to be 1.821, but this is where it all goes bad, as I keep getting wrong values for $$G$$, which I presume is caused by the wrong results from the computation of $$\sigma$$.

I know I might be doing it all wrong, and that's why I am here.

• One would hope that the question would ask for the minimum spacing that guarantees an error probability of at most $10^{-2}$ instead of at least $10^{-2}$ !! – Dilip Sarwate May 7 '20 at 2:37

I am not sure you can use $$P_{err}=\text{erfc}(G/\sigma)$$ because the noise is not Gaussian distributed. Here is my take on it.

Assuming a uniform probability of transmission for all 32 symbols $$x_i$$, the received signal is $$y=x_i+n$$, so given that $$x_i$$ was transmitted, $$y$$ is uniformly distributed in the interval $$[-100+x_i,100+x_i]$$.

Suppose the transmitted symbol was $$3G$$. The range of $$y$$ is $$[-100+3G,100+3G]$$. If $$G \ge 100$$, there would be no issue even if noise occurs: you would always detect the correct symbol if you use appropriate boundaries ($$|y-x_i| \le 100$$). Suppose $$G \lt 100$$, so these boundaries overlap.
I am not sure if you can use $$P_{err}=\text{erfc}(G/\sigma)$$, because the noise is not Gaussian distributed. Here is my take on it.

Assuming a uniform probability of transmission for all 32 symbols $$x_i$$, the received signal is $$y=x_i+n$$, so given that $$x_i$$ was transmitted, $$y$$ is also uniformly distributed in the interval $$[-100+x_i,100+x_i]$$.

Suppose the transmitted symbol was $$3G$$. The range of $$y$$ is then $$[-100+3G,100+3G]$$. If $$G \ge 100$$, there would be no issue even if noise occurs: you would always detect the correct symbol if you use appropriate boundaries ($$|y-x_i| \le 100$$). Suppose instead $$G \lt 100$$, so these boundaries overlap.

If $$G=75$$, what would happen if we receive the value $$y=150$$? The transmitted symbol could have been either $$G$$ or $$3G$$, so we can choose either $$G$$ or $$3G$$ with probability $$0.5$$. Similarly, on the other side, if $$y \gt 275$$, you can choose $$5G$$ as the transmitted symbol with probability $$0.5$$. So the correct decision will be taken when $$175 \le y \le 275$$, so $$P_{err,x_i=3G}=0.5$$.

So if you want $$P_e=0.01$$, for the symbols having 2 neighbors you can distribute the error probability evenly on both sides ($$0.01$$ with each of probability 0.5). If your transmitted symbol was $$G$$, the overlap of regions will be at $$G+100-0.005=99.995$$, which will be your $$2G$$. So $$G \ge 49.9975$$.

• Oh good point Jithin! (that it's not a Gaussian tail probability) - I see now in the fine print of the question that the distribution and B and A are all specified, I missed that... deleting my incorrect answer. – Dan Boschen May 5 '20 at 18:09
• Wow, stuff like this was never mentioned in the lecture. I will sit with this tomorrow and absorb it all and then proceed to check if this is right. Thanks a lot for the help, guys. – Dhavids May 6 '20 at 19:14
• @Dhavids I am curious to know which online course this is, because I have hardly come across pure digital communication courses online with quizzes and exams. – jithin May 8 '20 at 17:27
• @jithin This is a Coursera 8-week DSP course; you can access it here. It covers mostly the basics, and there are weekly quizzes as well as Jupyter notebook assignments. It was a fun ride for someone like me who just wanted a taste of what DSP is all about. – Dhavids May 10 '20 at 21:14
• @jithin I just plugged in both answers (49.99 and 50) and both were deemed incorrect. Although I can get a good sleep over it now, I still want to understand how it is done. I will probably mail one of the instructors. Thanks a lot. – Dhavids May 10 '20 at 21:30

You need to distinguish between:

1. The error rate for the inner symbols - the error is half of the overlapped segments between the symbol and its closest neighbors (by symmetry we consider one side and multiply by 2). With decision thresholds at the midpoints between adjacent levels, an error occurs when the noise magnitude exceeds $$G$$:
$$P_{e1} = \Pr(|n|>G) = 2\Pr(n>G) = 2\, \frac{100 - G}{200}$$

2. The error rate for the points $$31G$$ and $$-31G$$ - same as above, but with only one overlapping segment:
$$P_{e2} = \Pr(n>G) = \frac{100 - G}{200}$$

It remains to solve
$$\frac{30}{32} P_{e1} +\frac{2}{32} P_{e2} = 0.01$$
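Neither answer carries out the final arithmetic, so here is a small self-contained sketch (my own, not from either answerer) that solves the error-budget equation above for $G$ exactly, and then sanity-checks the result with a Monte Carlo simulation of 32-PAM under uniform noise, assuming a nearest-level (midpoint-threshold) decision rule:

```python
import random
from fractions import Fraction

# Solve (30/32) * 2*(100-G)/200 + (2/32) * (100-G)/200 = 1/100 for G.
# Collecting terms: (100 - G) * (62/32) / 200 = 1/100.
G = 100 - Fraction(1, 100) * 200 * Fraction(32, 62)
print(G, float(G))  # 3068/31 ≈ 98.9677

def simulate_ser(G, trials=200_000, seed=1):
    """Empirical symbol error rate: 32 levels at odd multiples of G,
    additive uniform noise on [-100, 100], nearest-level decision."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(trials):
        i = rng.randrange(32)            # index of the sent level (2i - 31)G
        y = (2 * i - 31) * G + rng.uniform(-100.0, 100.0)
        j = round((y / G + 31) / 2)      # nearest level index ...
        j = min(31, max(0, j))           # ... clipped to the valid range
        errors += j != i
    return errors / trials

print(simulate_ser(float(G)))  # ≈ 0.01
```

The simulation agrees with the second answer's budget (the spacing $2G$ between levels must be almost the full 200 noise width, so $G \approx 98.97$); whether this is the value the quiz expected is unknown, since the comments only report that 49.99 and 50 were rejected.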
http://math.stackexchange.com/questions/177575/show-pnn-px-nn-x-1n-alpha
# show $P(N>n)=P(X_{(n:n)}-X_{(1:n)}<\alpha)$ [duplicate]

Possible Duplicate: Finding $E(N)$ in this question

Suppose $X_1,X_2,\ldots$ is a sequence of independent random variables, each distributed $U(0,1)$. Let

$$N=\min\{n>0 : X_{(n:n)}-X_{(1:n)}>\alpha\}, \qquad 0<\alpha<1,$$

where $X_{(1:n)}$ is the smallest order statistic and $X_{(n:n)}$ is the largest order statistic of the first $n$ variables. How can I show that $P(N>n)=P(X_{(n:n)}-X_{(1:n)}<\alpha)$?

## marked as duplicate by Dilip Sarwate, Did, t.b., Sasha, Guess who it is. Aug 2 '12 at 5:14

If $(Z_n)_{n\geqslant0}$ is a sequence of random variables such that $Z_n\leqslant Z_{n+1}$ for every $n\geqslant0$, then $N_a=\inf\{n\geqslant0\,;\,Z_n\gt a\}$ is such that, for every $n\geqslant0$, $$[N_a\gt n]=[Z_1\leqslant a,\ldots,Z_n\leqslant a]=[Z_n\leqslant a].$$

Note: No probability here; this is an almost sure result (as probabilists like to say), that is, a deterministic result (as everybody else says).
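Since the answer's identity $[N_a>n]=[Z_n\leqslant a]$ is a pointwise statement about each sample path (here $Z_n$ is the range of the first $n$ uniforms, which is nondecreasing), it can be checked sample by sample, not merely in distribution. A short sketch of my own, not from the thread:

```python
import random

def first_exceed(xs, alpha):
    """Smallest k with max(xs[:k]) - min(xs[:k]) > alpha, or None if none."""
    lo = hi = xs[0]
    for k, x in enumerate(xs, start=1):
        lo, hi = min(lo, x), max(hi, x)
        if hi - lo > alpha:
            return k
    return None

rng = random.Random(0)
n, alpha = 10, 0.6
for _ in range(10_000):
    xs = [rng.random() for _ in range(n)]
    N_gt_n = first_exceed(xs, alpha) is None        # event [N > n]
    range_small = max(xs) - min(xs) <= alpha        # event [Z_n <= alpha]
    assert N_gt_n == range_small                    # identical on every path
print("ok")
```

Because the range is nondecreasing in $n$, the two indicator events coincide on every single draw, which is exactly the "deterministic result" the answer describes.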
https://planetmath.org/munntree
# Munn tree

Let $X$ be a finite set, and $\left(X\amalg X^{-1}\right)^{\ast}$ the free monoid with involution on $X$. It is well known that the elements of $\left(X\amalg X^{-1}\right)^{\ast}$ can be viewed as words on the alphabet $\left(X\amalg X^{-1}\right)$, i.e. as elements of the free monoid on $\left(X\amalg X^{-1}\right)$.

The Munn tree of the word $w\in\left(X\amalg X^{-1}\right)^{\ast}$ is the $X$-inverse word graph $\mathrm{MT}(w)$ (or $\mathrm{MT}_{X}(w)$ if $X$ needs to be specified) with vertex and edge set respectively

$\mathrm{V}(\mathrm{MT}(w))=\mathrm{red}(\mathrm{pref}(w))=\left\{\mathrm{red}(v)\,|\,v\in\mathrm{pref}(w)\right\},$

$\mathrm{E}(\mathrm{MT}(w))=\left\{(v,x,\mathrm{red}(vx))\in\mathrm{V}(\mathrm{MT}(w))\times\left(X\amalg X^{-1}\right)\times\mathrm{V}(\mathrm{MT}(w))\right\}.$

The concept of the Munn tree was created to investigate the structure of the free inverse monoid. The main result about it says that it "recognizes" whether or not two different words in $\left(X\amalg X^{-1}\right)^{\ast}$ belong to the same $\rho_{X}$-class, where $\rho_{X}$ is the Wagner congruence on $X$. We recall that if $w\in\left(X\amalg X^{-1}\right)^{\ast}$ [resp. $w\in\left(X\amalg X^{-1}\right)^{+}$], then $[w]_{\rho_{X}}\in\mathrm{FIM}(X)$ [resp. $[w]_{\rho_{X}}\in\mathrm{FIS}(X)$].

###### Theorem 1 (Munn)

Let $v,w\in\left(X\amalg X^{-1}\right)^{\ast}$ (or $v,w\in\left(X\amalg X^{-1}\right)^{+}$). Then $[v]_{\rho_{X}}=[w]_{\rho_{X}}$ if and only if $\mathrm{MT}(v)=\mathrm{MT}(w)$.

As an immediate corollary of this result we obtain that the word problem in the free inverse monoid (and in the free inverse semigroup) is decidable. In fact, we can effectively build the Munn tree of an arbitrary word in $\left(X\amalg X^{-1}\right)^{\ast}$, and this suffices to prove whether or not two words belong to the same $\rho_{X}$-class.
The Munn tree also reveals some properties of the $\mathcal{R}$-classes of elements of the free inverse monoid, where $\mathcal{R}$ is the right Green relation. In fact, the following result says that "essentially" the Munn tree of $w\in\left(X\amalg X^{-1}\right)^{\ast}$ is the Schützenberger graph of the $\mathcal{R}$-class of $[w]_{\rho_{X}}$.

###### Theorem 2

Let $w\in\left(X\amalg X^{-1}\right)^{\ast}$. There exists an isomorphism (in the category of $X$-inverse word graphs) $\Phi:\mathrm{MT}(w)\rightarrow\mathcal{S}\Gamma(X;\varnothing;[w]_{\rho_{X}})$ between the Munn tree $\mathrm{MT}(w)$ and the Schützenberger graph $\mathcal{S}\Gamma(X;\varnothing;[w]_{\rho_{X}})$, given by

$\Phi_{\mathrm{V}}(v)=[v]_{\rho_{X}},\ \ \forall v\in\mathrm{V}(\mathrm{MT}(w))=\mathrm{red}(\mathrm{pref}(w)),$

$\Phi_{\mathrm{E}}((v,x,\mathrm{red}(vx)))=([v]_{\rho_{X}},x,[vx]_{\rho_{X}}),\ \ \forall(v,x,\mathrm{red}(vx))\in\mathrm{E}(\mathrm{MT}(w)).$

## References

• 1 W.D. Munn, Free inverse semigroups, Proc. London Math. Soc. 30 (1974) 385-404.
• 2 M. Petrich, Inverse Semigroups, Wiley, New York, 1984.
• 3 J.B. Stephen, Presentations of inverse monoids, J. Pure Appl. Algebra 63 (1990) 81-112.

Title: Munn tree. Canonical name: MunnTree. Date of creation: 2013-03-22 16:11:59. Owner: Mazzu (14365). Type: Definition. Classification: msc 20M05, msc 20M18. Related: SchutzenbergerGraph.
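As a concrete, executable illustration (my own addition, not part of the PlanetMath entry), the Munn tree can be computed straight from the definition: reduce every prefix of $w$, take the reduced words as vertices, and keep the labelled edges whose endpoints both lie in the vertex set. In the sketch below, lowercase letters stand for generators and uppercase letters for their formal inverses; following the usual convention under which Munn's theorem is stated, the tree is compared as a birooted graph, with start root $\varepsilon$ and end root $\mathrm{red}(w)$.

```python
def inv(a):
    # formal inverse of a letter: x <-> X
    return a.lower() if a.isupper() else a.upper()

def red(w):
    """Free reduction: repeatedly cancel adjacent pairs a, inv(a)."""
    stack = []
    for a in w:
        if stack and stack[-1] == inv(a):
            stack.pop()
        else:
            stack.append(a)
    return "".join(stack)

def munn_tree(w):
    """Vertices, labelled edges, and end root of the Munn tree MT(w)."""
    vertices = {red(w[:i]) for i in range(len(w) + 1)}
    alphabet = set(w) | {inv(c) for c in w}
    edges = {(v, a, red(v + a))
             for v in vertices for a in alphabet
             if red(v + a) in vertices}
    return frozenset(vertices), frozenset(edges), red(w)

# x x^-1 x = x holds in every inverse semigroup, so the trees agree ...
assert munn_tree("xXx") == munn_tree("x")
# ... but x x^-1 is a nontrivial idempotent, distinguished by the end root:
assert munn_tree("xX") != munn_tree("x")
```

The two assertions mirror the word-problem corollary above: equal birooted Munn trees mean the same $\rho_{X}$-class, and conversely.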
http://wiki.contextgarden.net/PaperSetup
PaperSetup

TODO: Merge with Paper sizes and Layout (See: To-Do List)

Paper setup is one of the most basic requirements for creating your own style. In this article, the basics of paper setup are explained; the more advanced setups are described in the Page Design chapter of the new ConTeXt manual.

Basic setup

Setting paper size (\setuppapersize)

Plain TeX and LaTeX were primarily developed in the US, so they default to letter paper, which is the standard paper size in the US. ConTeXt was developed in the Netherlands, so it defaults to A4 paper, which is the standard paper size in Europe (and almost everywhere else in the world). Changing paper size is easy. For letter paper:[1]

`\setuppapersize[letter]`

Similarly, to get A4 paper, use:

`\setuppapersize[A4]`

Pre-defined paper sizes

Both A4 and letter are predefined paper sizes. ConTeXt predefines many other commonly used paper sizes. These include:

• letter, ledger, tabloid, legal, folio, and executive sizes from the North American paper standard;
• sizes A0–A10, B0–B10, and C0–C10 from the A, B, and C series of the ISO-216 standard;
• sizes RA0–RA4 and SRA0–SRA4 from the RA and SRA series of the ISO-217 paper standard;
• sizes C6/C5, DL, and E4 from ISO-269 standard envelope sizes;
• envelope 9–envelope 14 sizes from the American postal standard;
• sizes G5 and E5 from the Swedish SIS-014711 standard. These are used for Swedish theses;
• size CD for CD covers;
• sizes S3–S6, S8, SM, and SW for screen sizes. These sizes are useful for presentations. S3–S6 and S8 have an aspect ratio of 4:3. S3 is 300pt wide, S4 is 400pt wide, and so on. S6 is almost as wide as an A4 paper. SM and SW are for medium and wide screens; they have the same height as S6;
• a few more paper sizes, which I will not mention here. See page-lay.mki(i|v) for details.

Defining new paper sizes (\definepapersize)

The predefined paper sizes in ConTeXt cannot fit all needs.
To define a new paper size, use

```
\definepapersize[exotic]
  [width=50mm, height=100mm]
```

which defines a paper that is 50mm wide and 100mm high; the name of this paper is exotic (we could have used any other word). All predefined paper sizes are defined using \definepapersize. For example, A4 paper is defined as:

`\definepapersize [A4] [width=210mm,height=297mm]`

Use this new paper size like any of the predefined paper sizes. For example, to set the paper size to 50mm x 100mm paper, use

`\setuppapersize[exotic]`

Orientation

Most of the popular paper sizes default to a portrait orientation. To get landscape orientation, use

`\setuppapersize[letter,landscape]`

Changing paper setup mid-document

Normally, the paper size is set up once, in the environment file, and doesn't need to be changed later. But occasionally, changing paper size mid-document is needed; for example, to insert a table or a figure in landscape mode. There are two ways to change the paper size mid-document. To illustrate those, let us first define two paper sizes for convenience:

```
\definepapersize[main] [A4]
\definepapersize[extra][A4,landscape]
```

One way to change document size is to permanently change the paper size using \setuppapersize and then revert back using \setuppapersize:

```
% Set the default paper size
\setuppapersize[main]

\starttext
% ...
% text with main paper size
% ...

\page
\setuppapersize[extra]
% ...
% pages in landscape mode
% ...

\page
\setuppapersize[main]
% ...
% back to main paper size
% ...
\stoptext
```

The \page before \setuppapersize is necessary, as \setuppapersize changes the size of the current page.

Often, a different paper size is needed only for one page. Rather than manually switching the paper size back and forth using \setuppapersize, a convenient alternative is to use \adaptpapersize, which automatically reverts back to the existing paper size after one page. This is illustrated by the following example (the \adaptpapersize[extra] line was missing from the extracted copy and has been restored from context):

```
\setuppapersize[main]

\starttext
Page 1. Portrait
\page
Page 2. Portrait
\page
\adaptpapersize[extra]
Page 3. Landscape
\page
Page 4. Portrait
\page
\stoptext
```

As with \setuppapersize, always use an explicit \page before \adaptpapersize.

Setting print size

Occasionally you may want to print on a larger paper than the actual page size. This could be because you want to print to the edge of the page (so you print on a large paper and crop later), or because the page size that you are using is not standard. For example, suppose you want to print an A5 page on A4 paper (and crop later). For that, you need to specify that the paper size is A5 but the print paper size is A4. This information is specified using the two-argument version of \setuppapersize:

`\setuppapersize[A5][A4]`

Changing page location

By default, this places the A5 page on the top left corner of the A4 paper. To place the A5 page in the middle of the A4 paper, use:

```
\setuppapersize[A5][A4]
\setuplayout[location={middle,middle}]
```

Other possible values for location are: {top,left}, {top,middle}, {top,right}, {middle,right}, {middle,left}, {bottom,left}, {bottom,middle}, and {bottom,right}. Since {middle,middle} is the most commonly used value, it has a shortcut: location=middle.

If you use {*,left} or {*,right} and print double-sided, then also add duplex as an option; for example, location={duplex,top,left}. This ensures that the page paper is moved appropriately on even pages.

Crop marks

To get crop marks (also called cut marks), use

`\setuplayout[marking=on]`

By default, the page numbers are also included with the crop marks. To get additional information like job name, current date, and time along with the crop marks, use

`\setuplayout[marking=text]`

If you want just the crop marks, and no other text, use

`\setuplayout[marking=empty]`

Defining page and print size combinations

It is convenient to define paper-size/print-paper-size combinations for later reuse. These are also defined using \definepapersize.
For example, suppose you want to define two paper-size/print-paper-size combinations: A4 paper on A4 print paper for the normal workflow, and A4 paper on A3 print paper for the final proofs. For that, use the following:

```
\definepapersize[regular][A4][A4]
\definepapersize[proof] [A4][A3]
```

You can then combine these paper sizes with Modes:

```
\setuppapersize[regular]
\doifmode{proof}{\setuppapersize[proof]}
```

Then, when you compile the document in the normal manner, you will get A4 paper on A4 print paper; if you compile the document with --mode=proof, then you will get A4 paper on A3 print paper.

Notes

1. The syntax used here only works with ConTeXt versions newer than February 2011. Before that, you had to use `\setuppapersize[letter][letter]` to get letter-sized paper. You may wonder why we need to repeat the paper size twice. In most cases, these are the same. You only need to use different arguments if you want to print on a bigger paper and trim it later (see the section on print size for details).
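Putting the commands from this article together, a complete skeleton might look like the sketch below. This is my own combination of the examples above, not an official template; the mode name proof and the layout values are arbitrary choices:

```
% Page size A4; print on A4 normally, centered on A3 with crop
% marks when compiled with --mode=proof
\definepapersize[regular][A4][A4]
\definepapersize[proof]  [A4][A3]

\setuppapersize[regular]
\doifmode{proof}{%
  \setuppapersize[proof]
  \setuplayout[location=middle,marking=on]}

\starttext
Body text.
\stoptext
```

Compiled normally, this produces A4 pages on A4 print paper; compiled with --mode=proof (as described above), the same source produces a centered A4 page with crop marks on A3.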
http://blog.darkbuzz.com/2012/04/evidence-against-entraining-aether.html
## Friday, April 13, 2012

### Evidence against entraining the aether

Joseph Levy of France has just posted Is the aether entrained by the motion of celestial bodies? What do the experiments tell us?. He revisits the issue of whether the Michelson–Morley experiment rules out luminiferous aether drift from the motion of the Earth.

Since the publication of Einstein's basic article "On the electrodynamics of moving bodies" in 1905, the aether has been excluded from the area of physics, being regarded as inexistent or at least inactive. Such an attitude signified that the laws of physics could be formulated in the same way, that the aether exists or not,... This approach appeared quite revolutionary in 1905, since it called into question the ideas developed by a number of classical physicists such as Hooke, Lavoisier, Young, Huygens, Laplace, Fresnel, and Lorentz among others.

I do not agree with this. I say that Einstein did not refute Lorentz, and what Lorentz meant by the aether was what he said in 1895:

It is not my intention to ... express assumptions about the nature of the aether.

Levy does explain how the aether concept (if not the name) is universally accepted today:

In fact, despite its properties that seem so different from ordinary matter, a number of arguments speak in favour of a substratum [9], and these arguments have multiplied in the early twentieth century with the development of quantum mechanics. It is difficult, indeed, to accept that a "vacuum", endowed with physical properties such as permittivity and permeability, may be empty. The ability of such an empty vacuum to transmit electromagnetic waves is also doubtful. Quantum mechanics, on its part, regards the vacuum as an ocean of pairs of fluctuating virtual particles-antiparticles of very small life-time, appearing and disappearing spontaneously, which can be interpreted as a gushing of the aether, although the aether is not officially recognized by quantum mechanics.
The interaction of the electrons and the vacuum, in particular, is regarded as the cause of the shifting of the alpha ray of the hydrogen atom spectrum, referred to as the Lamb shift [10]. The fluctuations of the vacuum are also assumed to explain the Casimir effect [11], and the Davies-Fulling-Unruh effect [12].

Einstein himself around 1916 changed his mind as regards the hypothesis of the aether. ... A proof of the undeniable existence of the aether was given in ref [14]. Thus, the question to be answered today is not to verify its existence, but rather to specify its nature and its properties, and, in the first place, to determine if it is entrained (or not) by the translational motion of celestial bodies due to gravitation.

You will be reassured to learn that the conclusions of Lorentz and Poincare about relativity in 1900 are still good today, and the evidence is against the aether drag hypothesis.
http://davicr.wordpress.com/2006/09/02/miktex-25-and-hyperref/
## MikTex 2.5 and Hyperref

Just now I upgraded my old 2.4 version of MikTeX to the 2.5 version. The details of the new version can be found in the last link. Those who already have version 2.4 installed may use the MikTex Update Wizard.

There was a change in the default option of hyperref: it was changed from hypertex to dvips. The line \usepackage{hyperref} is no longer compatible with DVI building (but PDF documents are generated normally). To correct this problem one may either replace \usepackage{hyperref} with \usepackage[hypertex]{hyperref} in each tex file (*) or, which I prefer, return to the original default option. To do the latter, first locate the file hyperref.cfg (its standard folder is C:\texmf\tex\latex0miktex), then open it with an ASCII editor (e.g., Notepad, "Bloco de notas"). This file has only one line. Where it reads "dvips", replace it with "hypertex". Save the file "hyperref.cfg" and now everything will work fine. For both DVI and PDF outputs one may use \usepackage{hyperref}.

(*) A PDF document can be built with that option, but the hyperlinks won't work. For some reason, the modification in the hyperref.cfg file doesn't have this drawback. Explanations?

Explore posts in the same categories: Uncategorized

### 7 Comments on "MikTex 2.5 and Hyperref"

1. [...] Atualização 2. Miktex 2.5 and Hyperref. [...]

2. Marcel Zemp Says:

Thanks! This solved my problem! I had the problem that lines were not broken anymore in the table of contents, figures, etc. First, I had no idea why that happened. But then I also realised that the hyperlinks did not work anymore. I made the changes in the hyperref.cfg file and now everything works fine: hyperlinks are back and the line-breaking in the table of contents etc. works fine! I don't know why it doesn't work in the "new" version. Is that a known bug?

3. Davi Says:

Hi Marcel, thanks for your message!
I don’t know if this modification in the hyperref standard option was done on purpose, but I really didn’t like it (and lost some time to solve it). It appers that many people have complains on this. 4. Guillaume Says: Hi, I tried to change the hyperref.cfg file like mentioned above but it still do not work. In this file I open, the only place where it says “dvips” goe like this : {hdvips}. I tried to replace the “dvips” with “hypertex” and it do not work. Anyone have the same problem ? This is a major bug. Thanks 5. Davi Says: Hi Guillaume, now and in the next days I don’t have access to a computer with this miktex version installed, but I don’t remember of seeing “{hdvips}”; nevertheless in the end you should have “{hypertex}”. In the 2.4 version the complete hyperref.cfg file reads: \ProvidesFile{hyperref.cfg}% [2003/03/08 v1.0 MiKTeX 'hyperref' configuration] \providecommand*{\Hy@defaultdriver}{hypertex}% \endinput Best regards, Davi. 6. Manuel Luque Says: Thank you very much! You have been very helpful for me! Replacing the line \usepackage{hyperref} with \usepackage[hypertex]{hyperref} solved completely my problems with dvipdfm. 7. lami Says: Same here! Thanks a lot!
https://jmlr.csail.mit.edu/papers/v21/16-252.html
## Learning Causal Networks via Additive Faithfulness

Kuang-Yao Lee, Tianqi Liu, Bing Li, Hongyu Zhao; 21(51):1−38, 2020.

### Abstract

In this paper we introduce a statistical model, called additively faithful directed acyclic graph (AFDAG), for causal learning from observational data. Our approach is based on additive conditional independence (ACI), a recently proposed three-way statistical relation that shares many similarities with conditional independence but without resorting to multi-dimensional kernels. This distinct feature strikes a balance between a parametric model and a fully nonparametric model, which makes the proposed model attractive for handling large networks. We develop an estimator for AFDAG based on a linear operator that characterizes ACI, and establish the consistency and convergence rates of this estimator, as well as the uniform consistency of the estimated DAG. Moreover, we introduce a modified PC-algorithm to implement the estimating procedure efficiently, so that its complexity is determined by the level of sparseness rather than the dimension of the network. Through simulation studies we show that our method outperforms existing methods when commonly assumed conditions such as Gaussian or Gaussian copula distributions do not hold. Finally, the usefulness of AFDAG formulation is demonstrated through an application to a proteomics data set.
https://homework.cpm.org/category/CCI_CT/textbook/int2/chapter/1/lesson/1.2.2/problem/1-36
### Home > INT2 > Chapter 1 > Lesson 1.2.2 > Problem 1-36

1-36. If the perimeter of the rectangle at right is $112$ cm, which equation below represents this fact? Once you have selected the appropriate equation, solve for $x$.

a. $(2x−7)+(4x+3)=112$

b. $4(2x−7)=112$

c. $2(2x−7)+2(4x+3)=112$

d. $(2x−7)(4x+3)=112$

Perimeter is the sum of the sides.
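Assuming (from the answer choices) that the rectangle's side lengths are $2x-7$ and $4x+3$, the perimeter is twice the sum of the two adjacent sides, which singles out option c. The arithmetic can be checked in a few lines (my own sketch, not part of the lesson):

```python
# Perimeter of a rectangle with sides (2x - 7) and (4x + 3):
#   2*(2x - 7) + 2*(4x + 3) = 112
# Expanding: 12x - 8 = 112, so x = 120 / 12 = 10.
x = (112 + 8) / 12
sides = (2 * x - 7, 4 * x + 3)

assert x == 10.0
assert sides == (13.0, 43.0)
assert 2 * sum(sides) == 112.0   # the perimeter checks out
print(x)  # 10.0
```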
http://en.wikipedia.org/wiki/User:NorwegianBlue/area_of_a_square_on_the_surface_of_a_sphere
User:NorwegianBlue/area of a square on the surface of a sphere

Initial presentation of the problem

How does one find the area of a square drawn on a sphere?

Simple answer: you can't. A square is 2D and a sphere is 3D, so you cannot draw a square on a sphere. Stefan 09:38, 24 May 2006 (UTC)

It might be better to say that a sphere is a surface (a two dimensional Riemannian manifold) with constant positive curvature, while the plane is a surface (two dimensional manifold) with constant zero curvature. ---CH 10:03, 24 May 2006 (UTC)

Well, there's a way to do triangles (it's in my friend's multivariable calculus book), but I dunno about squares. --M1ss1ontomars2k4 (T | C | @) 03:48, 25 May 2006 (UTC)

So.. why don't you just divide that square in two triangles.. and add up the results?

As in, an object with 4 sides whose interior angles do not add up to 360 degrees. How do you find the area?

See Theorema egregium. ---CH 10:03, 24 May 2006 (UTC)

If you are asking how to find the area of some shape on a sphere, then perhaps we can give you a helpful answer - but in order to do so, you have to define the shape 'exactly'. For example, we could start analysing the area of a square projected onto the surface of a sphere. This isn't a square; it has curved edges. So back to you... -- SGBailey 10:25, 24 May 2006 (UTC)

BTW, did you know that an equilateral "triangle" on a sphere touching the points lat=0,long=0; lat=0,long=90; lat=90,long=any has three 90 degree corners and has an area of 1/8 of the sphere? -- SGBailey 10:25, 24 May 2006 (UTC)

You will have to express the sides of the "square" mathematically to determine the boundaries of the double integral that will give you the area. You will probably want to solve it in spherical coordinates. You are going to have to know some calculus for this one. --Swift 11:11, 24 May 2006 (UTC)

Actually, the whole thing goes like this, I am tryin' to find the area of this...
A square with the side of 10 cm, and draw loci (10 cm) on each corner (a quarter of a circle in a square, to give the "square"). This is NOT a homework question, I just want a head start on what to do. If you do not understand what I said, tell me and I'll create an image in Paint. Many thanks!

I do not understand!

OK, one more 'simple' answer to your 'simple' question: between a very very very small bit more than 100 square cm and maybe about 200 square cm.

According to Square (geometry), a hemisphere would be a valid "square on a sphere" with side length = 0.25 × sphere circumference and area = 0.5 × sphere area. Indeed, presumably a hemisphere is an instance of every regular polygon with the same area and side length = 1/N × circumference. I note that this is a "valid" (?) 2-sided polygon and even a valid (?) one-sided polygon! -- SGBailey 16:12, 24 May 2006 (UTC)

I don't get this. Perhaps drawing that picture would help clear things up. --Swift 08:33, 25 May 2006 (UTC)

Is this about spherical geometry? Yanwen 20:58, 24 May 2006 (UTC)

Excuse me if I'm missing something obvious here, but M1ss1ontomars2k4 says: "Well there's a way to do triangles (it's in my friends multivariable calculus book), but I dunno about squares." So, find the area of a right-angled triangle (or sphere-surfaced equivalent) with shorter sides both length 10 cm, and double it. Grutness...wha? 05:46, 25 May 2006 (UTC)

Grutness, you are missing something. On a sphere, triangles etc. don't scale like that. -- SGBailey 11:59, 25 May 2006 (UTC)

Yes, user:grutness is sort of right - but what is missing is the radius of the sphere; otherwise answers will have to be expressed as a function of radius. Take the solid angle created by half the 'square', i.e. a spherical triangle - double it and multiply by r squared to get the surface area.
Spherical trigonometry may help, as will solid angle - see continued talk below at Wikipedia:Reference desk/Science#(continued) Area on a sphere. HappyVR 16:42, 27 May 2006 (UTC)

Followup on science section

This is the problem. It's not a homework question and I just need to know how to work it out. Length of the square is 10 cm; find the shaded area (the curved lines are the loci of the 4 corners).

As you just want to know how to do it, I will not carry out actual calculations. Call A, B, C, D the vertices of the square in a clockwise fashion starting from the bottom left one. Let E be the top point of intersection between the four circumferences, and let F be the right one. Then one can show that the segments AE and AF divide the right angle BAD into three equal angles, each of them measuring $\pi/6$. Hence, if you set up Cartesian coordinates so that A=(0,0), B=(0,1), C=(1,1) and D=(1,0), the x-coordinate of F is just cos($\pi/6$)=$\sqrt{3}/2$. The equation of the circumference centered at A being $x^2 + y^2 = 1$, the area you are looking for is $4\int_{1/2}^{\sqrt{3}/2} \left(\sqrt{1 - x^2} - 1/2\right) dx$ (using symmetry to simplify things). Cthulhu.mythos 14:48, 26 May 2006 (UTC)

That's quite funny. There's a much easier way to figure this out. I won't give the details just in case it is homework, but the approach looks like this: let a be the yellow area, and b be the area of one of the four curvy arrowhead-like shapes in the corners. Express the area of the whole square in terms of a and b. Express the area of a quarter circle in terms of a and b. This gives you simultaneous equations in a and b which you can easily solve for a. Gdr 15:33, 26 May 2006 (UTC)

Your method doesn't account for all the areas. In addition to "a" and "b" there is also the pointy area between two "b" areas. 199.201.168.100 15:37, 26 May 2006 (UTC)

Yes, you're quite right. So call the thin area at the side c and make three simultaneous equations in a, b and c.
Gdr 15:55, 26 May 2006 (UTC)

There's certainly a way to avoid calculus. It's easy enough to get the Cartesian coordinates of the four vertices of the yellow area. (E.g. the top one is at (1/2, sqrt(3)/2), just because it makes an equilateral triangle with the bottom two vertices of the main square.) Then take the yellow area to be a square joining its four vertices (aligned diagonally to the coordinate axes), plus four vaguely lens-shaped pieces. The area of each of the lens-shaped pieces is obtained by drawing straight lines connecting its two vertices to the opposite corner (i.e. to the center of the arc): it's the area of the sector of the circle minus the area of the triangle. Hope this makes some kind of sense. Arbitrary username 20:56, 26 May 2006 (UTC)

Like the person posing the question, I am not a mathematician. I suspect that the responses so far have not given enough practical detail to be helpful to the questioner. Based on the original question, and on this repost, I'll have a go at reformulating what I think the questioner had in mind: We are working on the surface of a sphere. We have two pairs of great circles. The angle between the first pair of great circles, expressed in radians, is 10 cm/r, where r is the radius of the sphere. The angle between the second pair of great circles is equal to the angle between the first pair. The plane defined by the axes corresponding to the first pair of great circles is perpendicular to the plane defined by the axes corresponding to the second pair of great circles. At two opposite locations on the surface of the sphere, "squares" are formed, as illustrated in the image. Is it possible to express the area of one of these "squares" analytically, such that the area tends to 100 cm² as r tends to infinity? --NorwegianBlue 21:26, 26 May 2006 (UTC)

• I love math problems that have multiple approaches. I'll wave my hands a bit and assert that the corners of the yellow area cut the arcs in thirds.
Call A the yellow corner on the left, and B the one on top. Construct segment AB. Figure out the area between segment AB and arc AB, and add four of 'em to a square of side AB. I think the result will look something like $(2R\sin \frac{\pi}{12})^{2} + 2R^2\left(\frac{\pi}{6} - \sin\frac{\pi}{6}\right)$, but that's only because I looked up the formulas for the circular segment on MathWorld. Signed, math degree 30 years ago next month and am rusty as all hell. --jpgordon∇∆∇∆ 05:41, 27 May 2006 (UTC)

• The question was related to the area of a square on the surface of a sphere, and the preceding answer appears to be plane geometry (correct me if I'm wrong!). I think we can be reasonably sure that this is not a homework question, because of the vague way in which it was formulated. I believe what the questioner had in mind was the area illustrated in yellow here: The red curves are supposed to represent great circles. Is anybody able to come up with a formula for the yellow area in terms of r, the radius of the sphere? Also, it would be nice if the person who posed the question confirmed that this is what he/she is looking for. --NorwegianBlue 09:19, 27 May 2006 (UTC)

• Oh, it doesn't matter what they're looking for -- this is fun! Probably belongs over in WP:RD/Math. I ignored the sphere thing, for some reason or another. But isn't there insufficient information to calculate this? (Is this a solid angle on a sphere?) --jpgordon∇∆∇∆ 16:13, 27 May 2006 (UTC)

Yes, it is a solid-angle-of-a-sphere type question - the missing info is the radius r of the sphere; without that, answers will need to be functions of r. By the way, if the interior angles of a triangle drawn on a sphere are a, b and c, then the solid angle covered by the triangle (spherical geometry here) is a+b+c−π steradians. HappyVR 16:32, 27 May 2006 (UTC)

My question, and possibly the original poster's question, was if somebody could provide a formula for the area, in terms of r.
--NorwegianBlue 16:43, 27 May 2006 (UTC)

Followup on maths section

This question was originally posted in the science section, but belongs here. The original questioner has stated clearly that it is not a homework question. It was formulated as follows: "How do one find the area of a square drawn on a sphere? A square with the side of 10 cm, and draw loci (10cm) on each corners (quarter of a circle in a square to give the "square")"

Based on the discussion that followed, I think what the questioner had in mind is the area illustrated in yellow in the drawing below: The red circles are supposed to be two pairs of great circles. The angle between the first pair of great circles, expressed in radians, is 10 cm/R, where R is the radius of the sphere. The angle between the second pair of great circles is equal to the angle between the first pair. The plane defined by the axes corresponding to the first pair of great circles is perpendicular to the plane defined by the axes corresponding to the second pair of great circles. The question is how to express the yellow area in terms of R, the radius of the sphere. Obviously, as R → ∞, the area → 100 cm².

• I am not a mathematician, but felt that it "ought to" be possible to express this area in terms of R, and decided to try to find the necessary information. I found Girard's theorem, which states that the area of a triangle on a sphere is $(A + B + C - \pi) \times R^2$, where A, B and C are the angles between the sides of the triangle, as illustrated in the second drawing. I also found the law of sines for triangles on a sphere, which relates the angles A, B and C to the angles a, b and c which define the sides of the triangle: $\frac{\sin a}{\sin A}=\frac{\sin b}{\sin B}=\frac{\sin c}{\sin C}.$ I then attempted to divide the square into two triangles, and compute the area of one of these, but am stuck because I don't know the diagonal. Since this is spherical geometry, I doubt that it is as simple as $\sqrt{2} \times 10\,\mathrm{cm}$.
I would appreciate it if somebody told me if I am on the right track, and, if so, how to complete the calculations. If my presentation of the problem reveals that I have misunderstood some of the theory, please explain. --NorwegianBlue 14:11, 28 May 2006 (UTC)

The natural way I suspect the question should presumably be answered is to take the square on the flat plane and use Jacobians to transform it onto the sphere. Those with a firmer grip of analysis would probably want to fill in the details at this point... Dysprosia 15:28, 28 May 2006 (UTC)

An easier way to tackle this might be to exploit the symmetry of the situation. Slice the sphere into 4 along z=0 and x=0. This will give four identical squares with four right angles and two sides of length 5. Then cut the squares along x+z=0, x−z=0, giving eight triangles, each with one 45 degree angle, one right angle and one side of length 5. --Salix alba (talk) 15:44, 28 May 2006 (UTC)

I think vibo is on the right track. You can use the law of sines to calculate the length of the diagonal. -lethe talk + 15:44, 28 May 2006 (UTC)

The law of cosines for spherical trig gives cos c = cos² a. (This follows from cos c = cos a cos b + sin a sin b cos C, with b = a and C = π/2.) -lethe talk + 16:06, 28 May 2006 (UTC)

From which I get, using the spherical law of sines, that sin A = sin a / √(1 − cos⁴ a). A = B and C = π/2, so I have the triangle, and hence the square. -lethe talk + 16:11, 28 May 2006 (UTC)

To lethe: How can you say that C = π/2? This is spherical geometry, and the four "right" angles in the "square" in the first drawing add up to more than 2π, don't they, or am I missing something? --NorwegianBlue 16:49, 28 May 2006 (UTC)

You may be right, I cannot assume that the angles are right angles. Let me mull it over some more. -lethe talk + 16:58, 28 May 2006 (UTC)

OK, I think the right assumption to make is that C = 2A. I can solve this triangle as well, but it's quite a bit messier. Lemme see if I can clean it up, and then I'll post it.
-lethe talk + 17:20, 28 May 2006 (UTC)

$2R^2\left(2\sin^{-1}\left(\frac{\sin a}{\sqrt{1-\cos^4 a}}\right) - \frac{\pi}{2}\right)$ for the square. Now I just have to see whether this answer works. -lethe talk + 16:14, 28 May 2006 (UTC)

And now I'm here to tell you that Mathematica assures me that this function approaches s² as the curvature goes to zero. From the series, I can say that to the leading two orders of correction, area = s² + s⁴/(6R²) + s⁶/(360R⁴). -lethe talk + 16:25, 28 May 2006 (UTC)

I got the following for the diagonal angle c of the big square from "first principles" (just analytic geometry in 3D): cos(c/2) = 1/√(1+2t²), where a = 10 cm/R and t = tan(a/2). --LambiamTalk 16:02, 28 May 2006 (UTC)

I'm afraid I didn't understand (I'm not a mathematician :-) ). If we let (uppercase) C be the "right" (i.e. 90°+something) angle in the triangle in the second figure, and (lowercase) c be the diagonal that we are trying to calculate, could you please show the steps leading to this result (or rephrase it, if I misinterpreted your choice of which of the angles A, B, C was the "right" one)? --NorwegianBlue 18:07, 28 May 2006 (UTC)

For simplicity, let's put R = 1, since you can divide all lengths first by R, and multiply the area afterwards by R². Then the equation of the sphere is x² + y² + z² = 1. Take the point nearest to the spectator in the first image to be (x,y,z) = (0,0,1), so z decreases when receding. Take the x-axis horizontal and the y-axis vertical. A great circle is the intersection of a plane through the sphere's centre (0,0,0) with the sphere. The equation of the plane that gives rise to the great circle whose arc segment gives the top side of the "square" is y = tan(a/2) × z = tz (think of it as looking sideways along the x-axis). At the top right corner of the "square" we have x = y. Solving these three equations (sphere, plane, x = y) for z, using z > 0, gives us z = 1/√(1+2t²).
Now if c is the angle between the rays from the centre of the sphere to this corner and its opposite (which, if R = 1, is also the length of the diagonal), so c/2 is the angle between one of these rays and the one through (0,0,1), then z = cos(c/2). Combining this with the other equation for z gives the result cos(c/2) = 1/√(1+2t²). Although I did not work out the details, I think you can combine this with Salix alba's "cut in eight" approach and the sines' law to figure out the missing angle and sides. --LambiamTalk 19:59, 28 May 2006 (UTC)

new calculation

As vibo correctly points out above, the square will not have right angles, so my calculation is not correct. Here is my new calculation. Assuming all angles of the square are equal, label this angle C. Then draw the diagonal, and the resulting triangle will be isosceles with two sides a and base angles A, where 2A = C. The law of sines tells me $\frac{\sin a}{\sin A} = \frac{\sin c}{\sin 2A} \,\!$ from which I have $\sin c = 2\cos A \sin a. \,\!$ From the law of cosines I have that $\cos c = \cos^2 a +\sin^2a(2\cos^2A-1). \,\!$ My goal here is to eliminate c. First I substitute cos A: $\cos c = \cos^2 a+\sin^2a\left(\frac{\sin^2 c}{2\sin^2 a}-1\right) \,\!$ which reduces to the quadratic equation $\cos^2c+2\cos c-1=2\cos 2a. \,\!$ So I have $\cos c=-1\pm\sqrt{2+2\cos 2a} \,\!$ and, using cos A = sin c/(2 sin a), I am in a position to solve the triangle: $A=\cos^{-1}\left(\frac{1}{2}\sqrt{1-\left(-1+\sqrt{2+2\cos 2a}\right)^2}\csc a\right). \,\!$ I'm pretty sure this can be simplified quite a bit, but the simplification I got doesn't agree with the one Mathematica told me. Anyway, the expansion also has the right limit of s². -lethe talk + 20:16, 28 May 2006 (UTC)

Despite the figure, which is only suggestive (and not quite correct), are we agreed on the definition of a "square on a sphere"? The question stipulates equal side lengths of 10 cm.
To avoid a rhombus we should also stipulate equal interior angles at the vertices, though we do not have the luxury of stipulating 90° angles. Food for thought: Is such a figure always possible, even on a small sphere? (Suppose the equatorial circumference of the sphere is itself less than 10 cm; what then?) Even if it happens that we can draw such a figure, is it clear what we mean by its area? Or would we prefer to stipulate a sufficiently large sphere? (If so, how large is large enough?) Figures can be a wonderful source of inspiration and insight, but we must use them with a little care. --KSmrqT 20:40, 28 May 2006 (UTC)

The figure was drawn by hand, and is obviously not quite correct, but doesn't the accompanying description: "The red circles are supposed to be two pairs of great circles. The angle between the first pair of great circles, expressed in radians, is 10 cm/R, where R is the radius of the sphere. The angle between the second pair of great circles is equal to the angle between the first pair. The plane defined by the axes corresponding to the first pair of great circles is perpendicular to the plane defined by the axes corresponding to the second pair of great circles." resolve the ambiguity with respect to the rhombus, provided that the area of the square is less than half of the area of the sphere? --NorwegianBlue 21:51, 28 May 2006 (UTC)

What is meant by "the angle between the … circles"? That's not really the same as the arc length of a side as depicted. Also note that the original post suggests that the side might be a quarter of a circle. If that is true, then the "square" is actually a great circle! Each angle will be 180°, and the area "enclosed" will be a hemisphere of a sphere with radius 20 cm/π, namely 2π(20 cm/π)² = 800 cm²/π, approximately 254.65 cm². By a series of manipulations I came up with $\cos^2 A = \frac{\cos a}{1+\cos a} ,$ where a is 10 cm/R, the side length as an angle.
The angle of interest is really C = 2A, for which $\cos C = -\tan^2 \frac{a}{2} .$ For the hemisphere case, a = π/2 produces C = π; while for the limit case, a = 0 produces C = π/2. The original question was about the area, so we should conclude with that: (4C − 2π)R². --KSmrqT 04:43, 29 May 2006 (UTC)

By "the angle between a pair of great circles", I meant the angle between the plane P1 in which the first great circle lies, and the plane P2 in which the second great circle lies. The arc length depicted was intended to represent the intersection between the surface of the sphere and a plane P3, which is orthogonal to P1 and P2, and which passes through the centre of the sphere. As previously stated, I have little mathematical training. I therefore made a physical model by drawing on the surface of a ball, before making the first image. I convinced myself that such a plane is well-defined, and that this length of arc on a unit sphere would be identical to the angle between P1 and P2. Please correct me if I am mistaken, or confirm if I am right. --NorwegianBlue 20:19, 29 May 2006 (UTC)

Every great circle does, indeed, lie in a well-defined plane through the center of the sphere. Between two such planes we do have a well-defined dihedral angle. The problem arises when we cut with a third plane. If we cut near where the two planes intersect we get a short arc; if we cut far from their intersection we get a longer arc. In other words, the dihedral angle between the two planes does not determine the arc length of the "square" side. Instead, use the fact that any two distinct points which are not opposite each other on the sphere determine a unique shortest great circle arc between them, lying in the plane containing the two points and the center. Our value a is the angle between the two points, as measured at the center of the sphere. Were we to pick two opposite points, we'd have a = π, which is half the equatorial circumference of a unit sphere.
For a sphere of radius R, the circumference is 2πR. We are told that the actual distance on the sphere is exactly 10 cm, but we are not told the sphere radius. The appearance of the "square" depends a great deal on the radius, and so does its area. When the radius is smaller, the sides "bulge out" to enclose more area, the corner angles are greater, and the sphere bulges as well. As the sphere radius grows extremely large, the square takes up a negligible portion of the surface, the sides become straighter, the angles approach perfect right angles, and the sphere bulges little inside the square.

We do not have a handy rule for the area of a square on a sphere. Luckily, the area of a triangle on a sphere follows a powerful and surprisingly simple rule, based on the idea of angular excess. Consider a triangle drawn on a unit sphere, where the first point is at the North Pole (latitude 90°, longitude irrelevant), the second point drops straight down onto the equator (latitude 0°, longitude 0°), and the third point is a quarter of the way around the equator (latitude 0°, longitude 90°). This triangle has three perfect right angles for a total of 270° (or 3π/2), and encloses exactly one octant — one eighth of the surface area — of the sphere. The total surface area is 4π, so the triangle area is π/2. This area value is exactly the same as the excess of the angle sum, 3π/2, compared to a flat triangle's π. The simple rule is: this is true for any triangle on a unit sphere. If instead the sphere radius is R, the area is multiplied by R².

Thus we simplify our area calculation by two strategies. First, we divide out the effect of the radius so that we can work on a unit sphere. Second, we split the "square" into two equal halves, two isosceles triangles, by drawing its diagonal. Of course, once we find the triangle's angular excess we must remember to double the value (undoing the split) and scale up by the squared radius (undoing the shrink).
Notice that this mental model assumes the sphere radius is "large enough", so that at worst the square becomes a circumference. We still have not considered what we should do if the sphere is smaller than that. It seems wise to ignore such challenges for now. --KSmrqT 21:23, 29 May 2006 (UTC)

Thank you. I really appreciate your taking the time to explain this to me with such detail and clarity. --NorwegianBlue 23:34, 29 May 2006 (UTC)

Coordinate Transform

What if we perform a simple coordinate transform to spherical coordinates and perform a 2-dimensional integral in phi and theta (constant r = R)? Then dA = r² sin(θ) dφ dθ, and we simply set the bounds of phi and theta sufficient to make the length of each side 10 cm. Nimur 18:11, 31 May 2006 (UTC)

Calculation completed

Thanks a million to the users who have put a lot of work into explaining this to me, and into showing me the calculations necessary. I started out based on the work of lethe. Armed with a table of trigonometric identities, I went carefully through the calculations, and am happy to report that I feel that I understood every single step. I was not able to simplify the last expression much further; the best I can come up with is $\cos A=\frac{1}{2}\csc a\sqrt{2\sqrt{2+2\cos 2a} - 2\cos 2a - 2} \, .\!$

You should probably make use of the identity $\cos 2a = 2\cos^2a -1$ here, it simplifies this expression quite a bit. -lethe talk + 02:01, 30 May 2006 (UTC)

Since the r.h.s. is based on a only, which is a known constant when the radius and length of arc are given (a = 10 cm/R for the given example), let us substitute $G=g(a) \,\!$ for the r.h.s. Note that the function is undefined at a = 0° and ±180° because of the sine function in the denominator. There is a graph of g(a) on my user page. We can now calculate the area of the triangle, and that of the square: $\cos A = G \, ,\!$ $\cos 2A = 2\cos^2 A - 1 = 2G^2-1 \, .\!$ According to Girard's formula, we then have $area_{triangle} = (A + B + C - \pi) \times R^2 \!$ $area_{triangle} = \left( 2\cos^{-1} G + \cos^{-1}(2G^2 - 1)-\pi \right) \times R^2\!$ $area_{square} = 2\times \left( 2\cos^{-1} G + \cos^{-1}(2G^2 - 1)-\pi \right) \times R^2\!$

I calculated the behaviour of this area function on a unit sphere when a is in the range (0°...180°): Seems reasonable up to 90°. The value at 90° corresponds to the "square" with four corners on a great circle that KSmrq mentions above, i.e. to a hemisphere, and the area, 2π, is correct. In the interval [90°..180°), the function returns the smaller of the two areas. I also notice that the function looks suspiciously elliptical. Are we computing a much simpler function in a roundabout way?

I next studied how the formula given by KSmrq works out: $area_{triangle} = \left( 2\cos^{-1} \left( \sqrt{\frac{\cos a}{1+\cos a} }\, \right) + \cos^{-1}(-\tan^2 \frac{a}{2})-\pi \right) \times R^2\!$ $area_{square} = 2 \times \left( 2\cos^{-1} \left( \sqrt{\frac{\cos a}{1+\cos a} }\, \right) + \cos^{-1}(-\tan^2 \frac{a}{2})-\pi \right) \times R^2\!$ I computed the area, and found that in the range (0°..90°], the formulae of lethe and KSmrq yield identical results, within machine precision. Above 90°, the formula of KSmrq leads to numerical problems (NaNs).

Finally, I would like to address the question of the original anonymous user who first posted this question on the science desk. Let us see how the area of the square behaves as R increases, using 10 cm for the length of arc in each side of the "square". The smallest "reasonable" value of R is 20 cm/π ≈ 6.366 cm, which should lead to a surface area of approximately 254.65 cm², as KSmrq points out. Driven by curiosity, I will start plotting the function at lower values than the smallest reasonable one (in spite of KSmrq's advice to "ignore such challenges for now").
Here is the graph: Unsurprisingly, the function behaves weirdly below the smallest reasonable value of R, but from R ≈ 6.366 cm onwards, the function behaves as predicted, falling rapidly from 254.65 cm² and approaching 100 cm² asymptotically. In case anybody is interested in the calculations, I have put the program on my user page. Again, thank you all. --NorwegianBlue 23:54, 29 May 2006 (UTC)

Well done. It does appear that you overlooked my simple formula for the area, which depends on C alone. Recall that when the square is split, the angle A is half of C, so the sum of the angles is A+A+C, or simply 2C. This observation applies to User:lethe's results as well, where we may use simply 4A. So, recalling that a = 10 cm/R, a better formula is $\mathrm{area}_\mathrm{square} = \left( 4 \cos^{-1}(-\tan^2 \frac{a}{2})-2\pi \right) \times R^2 = \left( 4 \cos^{-1}(-\tan^2 \frac{10\ \mathrm{cm}}{2 R})-2\pi \right) \times R^2 . \,\!$

For the arccosine to be defined, its argument must be between -1 and +1, and this fails when the radius goes below the stated limit. (A similar problem occurs with the formula for A, where a quantity inside a square root goes negative.) Both algebra and geometry are telling us we cannot step carelessly into the domain of small radii. Try to imagine what shape the "square" may take when the circumference of the sphere is exactly 10 cm; both ends of each edge are the same point! Not only do we not know the shape, we do not know what to name and measure as the "inside" of the square.

This raises an important general point about the teaching, understanding, and application of mathematics. Statements in mathematics are always delimited by their range of applicability. Every function has a stated domain; every theorem has preconditions; every proof depends on specific axioms and rules of inference.
Once upon a time, we manipulated every series with freedom, with no regard to convergence; to our chagrin, that sometimes produced nonsense results. Once it was supposed that every geometry was Euclidean, and that every number of interest was at worst a ratio of whole numbers; we now make regular use of spherical geometry and complex numbers. When we state the Pythagorean theorem, we must include the restriction of the kind of geometry in which it applies. When we integrate a partial differential equation, the boundary conditions are as important as the equation itself. It is all too easy to fall into the careless habit of forgetting the relevance of limitations, but we do so at our peril. --KSmrqT 02:36, 30 May 2006 (UTC)

Yes, I did overlook the (now painfully obvious) fact that the sum of the angles was 2C. Your final point is well taken. I understood that the reason for the NaNs was a domain error, but thanks for pointing out the exact spots. --NorwegianBlue 19:40, 30 May 2006 (UTC)

Supplementary material

Graph of the function g(a)

lethe provided the following function for cos A, where A = B represents half of the "right" (i.e. 90°+something) angle C in a "square" on the surface of a sphere: $\cos A=\frac{1}{2}\csc a\sqrt{2\sqrt{2+2\cos 2a} - 2\cos 2a - 2} \, .\!$ Since the r.h.s. is based on a only, which is a known constant when the radius and length of arc are given (a = 10 cm/R for the example that prompted my follow-up question), I will substitute $G=g(a) \,\!$ for the r.h.s. Note that the function is undefined at a = 0° and ±180° because of the sine function in the denominator. The graph of g(a) looks like this: The function appears to approach $\frac{1}{2}\sqrt{2}$ as a approaches 180°, as well as when a approaches 0° from above.
Computations

Here is the program that was used for the calculations referred to:

#include <iostream>
#include <stdlib.h>   // for exit()
#include <math.h>

const double PI = 3.1415926535897932384626433832795;

void ErrorExit(const char* msg, int lineno)
{
    std::cerr << msg << " program line: " << lineno << '\n';
    exit(2);
}
// __________________________________________________ //
double csc(double arg)
{
    double s = sin(arg);
    if (s == 0) {
        ErrorExit("Division by zero attempted!", __LINE__);
    }
    return 1.0/s;
}
// __________________________________________________ //
double pow2(double arg)
{
    return arg*arg;
}
// __________________________________________________ //
double g_lethe(double a)
{
    return 0.5*csc(a)*sqrt(2.0*sqrt(2.0 + 2.0*cos(2.0*a)) - 2.0*cos(2.0*a) - 2.0);
}
// __________________________________________________ //
double area_lethe(double a)
{
    double G = g_lethe(a);
    return 2*(2*acos(G) + acos(2*pow2(G)-1) - PI);
}
// __________________________________________________ //
double area_ksmrq(double a)
{
    return 2*(2*acos(sqrt(cos(a)/(1.0+cos(a)))) + acos(-pow2(tan(a/2.0))) - PI);
}
// __________________________________________________ //
int main()
{
    std::cout << "Calculating G as a function of a" << '\n';
    std::cout << "=================================\n\n";
    int i;
    for (i = -90; i <= 540; ++i) {
        double a = static_cast<double>(i)*PI/180.0;
        // Cheating a little to avoid division by zero
        if (i == 0) {
            a += 0.0001;
        } else if (i == 360) {
            a -= 0.0001;
        }
        double G = g_lethe(a);
        std::cout << i << "; " << G << '\n';
    }

    std::cout << "\n\n\n";
    std::cout << "Calculating area of square in a unit sphere as a function of a" << '\n';
    std::cout << "===============================================================\n\n";

    for (i = 0; i <= 180; ++i) {
        double a = static_cast<double>(i)*PI/180.0;
        if (i == 0) {
            // Cheating a little to avoid division by zero
            a += 0.0001;
        } else if (i == 180) {
            // Cheating a little because of the discontinuity at 180 degrees
            a -= 0.0001;
        }
        double S = area_lethe(a);
        double T = area_ksmrq(a);
        std::cout << i << "; " << S << "; " << T << '\n';
    }

    std::cout << "\n\n\n";
    std::cout << "Calculating area of square with 10cm side as a function of R" << '\n';
    std::cout << "=============================================================\n\n";

    for (i = 10; i < 2000; ++i) {
        double R = 0.1*static_cast<double>(i)/PI;
        double a = 10.0/R;
        double S = area_lethe(a)*pow2(R);
        std::cout << R << "; " << S << '\n';
    }
    return 0;
}

Info from KSmrq which was commented out

By a series of manipulations I came up with $\cos^2 A = \frac{\cos a}{1+\cos a} ,$ where a is 10 cm/R, the side length as an angle. The angle of interest is really C = 2A, for which $\cos C = -\tan^2 \frac{a}{2} .$ For the hemisphere case, a = π/2 produces C = π; while for the limit case, a = 0 produces C = π/2.

Calculations (best done privately)

The idea of the derivation is to start with the haversine formula relating the diagonal c to the angle C = 2A, noting that we have an isosceles triangle (a = b) so the first term vanishes. Thus $\mathrm{haversin}\ c = \sin^2 a \,\mathrm{haversin}\ 2A , \,\!$ or, noting haversin z = ½ versin z = ½(1 − cos z), $1-\cos c = (1-\cos 2A)\sin^2 a , \,\!$ or, noting cos 2A = 2cos² A − 1, $\cos c = 1 - 2 \sin^2 A\,\sin^2 a . \,\!$ We also have, as lethe observed, $\sin c = 2\cos A \sin a. \,\!$ Now we can eliminate c using the fundamental trigonometric identity, and eliminate both sin² A and sin² a as well. We obtain a quadratic equation in x = cos² A and y = cos a: $(y^2-1)x^2 + (-2y^2)x + (y^2) = 0. \,\!$ No doubt a cleaner way to this simple solution exists, but this may suffice.
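For completeness (this step is not shown in the thread), the quadratic factors nicely. With x = cos²A and y = cos a, the quadratic formula gives

```latex
x \;=\; \frac{2y^2 \pm \sqrt{4y^4 - 4y^2(y^2-1)}}{2(y^2-1)}
  \;=\; \frac{y^2 \pm y}{y^2-1}
  \;\in\; \left\{ \frac{y}{y-1},\ \frac{y}{y+1} \right\}.
```

Since x = cos²A must lie in [0, 1] while y = cos a ∈ (0, 1] for a square smaller than a hemisphere, the root y/(y − 1) is negative and is discarded, leaving cos²A = cos a/(1 + cos a), exactly the expression KSmrq quotes.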
https://www2.math.binghamton.edu/p/seminars/stat/170831
# Statistics Seminar

Department of Mathematical Sciences

DATE: Thursday, August 31, 2017, 1:15pm - 2:15pm, WH 100E

Qiqing Yu, Binghamton University

The Marginal Distribution Approach for Testing Independence and Goodness-of-Fit in Linear Models

Abstract

We propose a test to simultaneously test the assumption of independence and the goodness-of-fit of a linear regression model $Y=\beta X+W$, where $\beta\in R^p$. If $E(|Y||X)=\infty$, then all existing tests are invalid and their levels with a nominal size $0.05$ can be as large as $0.9$. Our approach is valid even if $E(|Y||X)=\infty$ or $E(||X||)=\infty$; thus it is more realistic than all the existing tests. Our approach is based on the difference between two estimators of the marginal distribution $F_Y$, and so it is called the MD approach. We establish the consistency of the MD test. We compare the MD approach to the existing tests, such as the test in the R package "gam" or the test in Sen and Sen (2014), through simulation studies. If the existing tests are valid, then neither any of the existing tests nor the MD test is uniformly more powerful than the other. We apply the MD approach to 3 real data sets.
https://www.physicsforums.com/threads/momentum-measurement-of-a-particle-in-quantum-mechanics.954148/
# Momentum measurement of a particle in Quantum Mechanics

## Homework Statement

What will a momentum measurement of a particle whose wave-function is given by $\psi = e^{i3x} + 2e^{ix}$ yield? Sketch the probability distribution of finding the particle between x = 0 and x = 2π.

## The Attempt at a Solution

The eigenfunctions of the momentum operator are given by $A e^{ikx}$, where $k = \frac{p}{\hbar}$, with eigenvalue $p = \hbar k$. Thus the eigenvalue for $e^{i3x}$ is $3\hbar$ and for $e^{ix}$ is $\hbar$. I feel tempted to take the eigenvalues of the momentum operator to be discrete and say that the momentum measurement will yield either $3\hbar$ or $\hbar$. As the eigenvalues of the momentum operator are continuous, I should use equation (3.56) to answer the question.

Assuming that the question asks for the probability distribution at t = 0, the probability density would be given by $|\psi|^2 = 3 + 2(e^{i2x} + e^{-i2x})$, a complex function. But the probability density should be a real-valued function. Is this correct?

Orodruin: Plane waves are not normalizable, so you really cannot write the probability in that manner (the wave function in momentum space is a sum of two delta functions). However, given the coefficients you should be able to deduce the probabilities (the coefficients are the probability amplitudes) by assuming that the total probability is one.

OP: How does one get to know this in the case of continuous eigenvalues?

vela: I think you made a slight error. Anyway, your expression for $|\psi|^2$ is real.

Orodruin: However, it does not answer the question, since it is the momenta that are asked for, not the position.

vela: Part of the question asked for a sketch of the probability as a function of $x$.

Orodruin: That's what I get for reading too fast ...
https://www.physicsforums.com/threads/engineering-mathematics.394503/
# Engineering Mathematics

1. Apr 12, 2010

matqkks: Does anyone know where I can find engineering exam questions on the web? I am trying to do a survey of various questions from different universities.

2. Feb 4, 2011

3. May 9, 2011

samuelarnold: Just by Googling you can find lots of sample questions, solved and unsolved.

4. Jan 7, 2012

diegojolin: I am trying to solve an equation that involves a sum series where the unknown is the number of terms I have to add. This is easy to solve just by guessing when the number of additions is small, but if it gets large... is there any analytic way to solve this kind of equation?

Form: sum(e^n) for n from b to x, equal to a.

5. Jan 7, 2012

gomunkul51: @diegojolin: find a closed expression for the sum(e^k), from 0 (or 1) to n, then equate it to the known sum, then solve for n.

6. Jan 8, 2012

diegojolin: OK, thanks. I've tried, and at least the computer seems to work faster this way.
http://mathhelpforum.com/calculus/131999-integration-find-fourier-coefficients-print.html
# Integration to find the Fourier coefficients

• March 4th 2010, 02:43 AM
Gaudium

Hi, I want to calculate the Fourier series expansion of f(x) = a sin(x) / (1 - 2a cos(x) + a^2), where |a| < 1 and -pi < x < pi, but I cannot integrate the function "cos(nx) f(x)". Is there a trick for this integration? Thanks.

• March 4th 2010, 10:22 PM
CaptainBlack

Quote: Originally Posted by Gaudium

Observe that:

$f(x)=\frac{d}{dx} \left[ \frac{1}{2}\ln(1-2a \cos(x)+a^2)\right]$

then integration by parts will help.

CB
https://math.stackexchange.com/questions/2331283/how-to-integrate-this-int-frac-cos5x-cos-4x1-2-cos-3x-dx
# How to integrate this: $\int \frac{\cos 5x+\cos 4x}{1-2\cos 3x}\,dx$ [duplicate]

How to integrate this: $\int \frac{\cos 5x+\cos 4x}{1-2\cos 3x}\,dx$

My approach: We know that $\cos A+\cos B = 2\cos\left(\frac{A+B}{2}\right)\cos\left(\frac{A-B}{2}\right)$, but it is not working here. Please suggest, thanks.

---

The integrand can be greatly simplified. If $n\in\mathbb{N}$, then $\cos(nx)$ is a polynomial in $\cos(x)$ with degree $n$. By setting $z=\cos x$ we have:
$$\cos(4x)+\cos(5x) = T_4(z)+T_5(z) = 1+5 z-8 z^2-20 z^3+8 z^4+16 z^5\tag{1}$$
$$1-2\cos(3x) = 1- 2\,T_3(z) = 1 + 6 z - 8 z^3\tag{2}$$
and it is not difficult to notice that the RHS of $(2)$ is a divisor of the RHS of $(1)$:
$$\frac{\cos(4x)+\cos(5x)}{1-2\cos(3x)} = 1 - z - 2 z^2 = -\cos(x)-\cos(2x) \tag{3}$$
so the wanted integral is simply $\color{red}{C-\sin(x)-\frac{1}{2}\sin(2x)}$.

---

$$\begin{split}\int\dfrac{\cos 5x+\cos 4x}{1-2\cos 3x}\,dx&=\int\dfrac{2\cos \left(\frac{5x+4x}{2}\right)\cos \left(\frac{5x-4x}{2}\right)}{1-2\left[2\cos^2 \left(\frac{3x}{2}\right)-1\right]}\,dx\\&=\int\dfrac{2\cos \left(\frac{9x}{2}\right)\cos \left(\frac{x}{2}\right)}{3-4\cos^2 \left(\frac{3x}{2}\right)}\,dx\\&=\int\dfrac{2\cos \left(\frac{9x}{2}\right)\cos \left(\frac{x}{2}\right)\cos \left(\frac{3x}{2}\right)}{3\cos \left(\frac{3x}{2}\right)-4\cos^3 \left(\frac{3x}{2}\right)}\,dx\\&=-\int\dfrac{2\cos \left(\frac{9x}{2}\right)\cos \left(\frac{x}{2}\right)\cos \left(\frac{3x}{2}\right)}{\cos \left(\frac{9x}{2}\right)}\,dx\\&=-\int 2\cos \left(\frac{x}{2}\right)\cos \left(\frac{3x}{2}\right)\,dx\\&=-\int\left(\cos 2x + \cos x\right)\,dx\\&=-\int\cos 2x\,dx - \int\cos x\,dx\\&=-\dfrac{\sin 2x}{2} - \sin x + C\\&=-\left(\dfrac{\sin 2x}{2} + \sin x\right) + C \end{split}$$

Remember the fact that $$\cos 3x = 4\cos^3 x - 3\cos x$$

---

I think the following is easier.
$$\cos5x+\cos4x=\cos5x+\cos{x}+\cos4x+\cos2x-\cos{x}-\cos{2x}=$$
$$=2\cos3x\cos2x+2\cos3x\cos{x}-\cos{x}-\cos{2x}=(2\cos3x-1)(\cos2x+\cos{x})$$
and the rest is smooth.
https://www.freemathhelp.com/forum/threads/109181-Simple-Bernoulli-trial-prob-of-pipe-failure-during-inspection?p=420068&mode=linear
# Thread: Simple Bernoulli trial: prob of pipe failure during inspection

1. ## Simple Bernoulli trial: prob of pipe failure during inspection

I'm really getting baffled with this question, which has taken me far too long to complete, and would love some guidance.

An accident caused the catastrophic failure of metal pipes in a factory. There were six metal pipes in the garage at any given time. Table 1 shows the numbers of metal pipe failures that had occurred on each of the 23 previous inspections.

Table 1: Number of metal pipe failures

Number of failed metal pipes:  0   1   2   3   4   5   6
Number of inspections:        16   5   2   0   0   0   0

(i) Let p be the probability that a pipe fails on an inspection. What distribution is appropriate to describe the failure or non-failure of a particular metal pipe on a particular inspection?

For this I have said that it is a Bernoulli distribution, due to there being only two possible outcomes: failure and non-failure.

(ii) A reasonable estimate of p is 3/46 or 0.065. Explain where this number comes from.

This is where I am getting stuck. I cannot work out what p is using the information they have provided.

2. Can you calculate the mean of the given distribution? What is the mean of the Binomial Distribution in terms of its parameter, p?

3. The mean of a binomial distribution is np, but I don't know how to calculate the p in this particular question.

4. In all situations, the mean is calculated by the definition: $\sum x_{i}\cdot p\left(x_{i}\right)$. In your case, you have 23 inspections. Using the formula above, we have: 0*(16/23) + 1*(5/23) + 2*(2/23). There is your Mean. Now what?

5. I genuinely don't know what to do after this.

6. n*p = Mean; n = 23; Mean = ??? Solve for p.

7. If the mean is 9/23, and n = 23, then p = (9/23)/23 = 9/529.

8. That's where I would start. Good work.

9. But the answer for p in the question is 3/46 or 0.065, whereas I got 9/529.

10. Well, that was a Binomial Approximation. Is there a distribution that you feel might be more appropriate? Poisson? Beta? Weibull?
http://mathoverflow.net/questions/88640/asymptotic-behaviour-of-int-fta-cosatdt?sort=votes
# Asymptotic behaviour of $\int f(t)^a\cos(at)\,dt$

Are there any known necessary or sufficient conditions such that $$\lim_{a\rightarrow \infty}\int_{-1}^1f(t)^a\cos(at)\,dt=0,$$ where $f:[-1,1]\rightarrow[1,\infty)$ is an even smooth concave real function such that $f(-1)=f(1)=1$?

---

Sorry for the late reply (for the last month I hardly had any time for anything like MO). The answer (somewhat vague) is that the curve $t\mapsto (t,\log f(t))$ ($-1\le t\le 1$) has to be the image of the upper half circle under some mapping $F$, analytic in $\mathbb C\setminus[(-\infty,-1]\cup[1,+\infty)]$, which is symmetric ($F(\bar z)=\overline{F(z)}$), one to one in the unit disk, and whose derivative $F'$ has decent boundary behavior. The "decent boundary behavior" is the vague part here. Unfortunately, I cannot make it less vague unless someone tells me how exactly to recognize the distributions on $[-1,1]$ whose Fourier transform tends to $0$. It is clear that they are necessarily fairly tame (of not more than the first order, etc.) and that all $L^1$-functions are there, but where exactly you are in between is a mystery to me. On the crudest level, it tells you that $f$ must be analytic. Let me know if such a description is of any interest to you. If it is, I'll post the details.

EDIT: OK, here go the details. It is a somewhat long story, so I may need more than one patch of free time to type it. I apologize in advance for bumping this thread. Also, since the integral against $\sin at$ is zero, we can just as well talk about the full Fourier transform, i.e., the integration against $e^{-iat}$.

1) It is actually quite surprising that such functions exist at all. After all, the jump discontinuity normally means that the best rate of decay of the Fourier transform is $1/a$, and that slow rate of decay is played against the exponential growth of the integrand. So, I'll start with constructing one such function.

It'll be easier to work with $g(t)=\log f(t)$, which is a smooth non-negative function with endpoint values $0$. Put $g(t)=\delta(1-t^2)$ with small $\delta>0$. Then the integral can be written as the path integral $\int_\gamma \frac{dt}{dz}e^{-iaz}\,dz$ where $\gamma$ is the curve $t\mapsto z(t)=t+ig(t)$. Note that $z(t)=t+i\delta(1-t^2)$ is an analytic function of $t$ and, for small $\delta>0$, it is invertible in a fairly large disk. Thus, we can talk about its analytic branch $t(z)$ that coincides with $\Re z$ on $\gamma$ and is analytic in (a neighborhood of) the region $D$ bounded by $[-1,1]$ and $\gamma$. So, $t'(z)$ is also analytic there and we can shift the contour of integration from $\gamma$ to $[-1,1]$, which results in the representation $\int_{-1}^1 t'(z)e^{-iaz}\,dz$, which is just the ordinary Fourier transform of the integrable (and even smooth) function $t'(z)$ restricted to $[-1,1]$, so the integral, indeed, tends to $0$ in this case.

2) What I'd like to show now is that this contour integral representation and the possibility to shift the contour is the only possible reason for this effect. The starting point is that if the integral is bounded on the entire real line (the boundedness on the negative semi-axis is trivial and the boundedness on the positive one is less than what has been requested), then there exists a distribution $T$ supported on $[-1,1]$ such that the integral equals $\langle T, e^{-iat}\rangle$ for all $a\in\mathbb C$ (that is just a version of Paley-Wiener). Thus, the difference $\langle T, e^{-iat}\rangle-\int_\gamma \frac{dt}{dz}e^{-iaz}\,dz$ vanishes for all $a\in \mathbb C$. Now, the linear span of the functions $e^{-iaz}$ is dense in the space of functions analytic in any fixed neighborhood of $D$, meaning that $\langle T, \psi(t)\rangle-\int_\gamma \frac{dt}{dz}\psi(z)\,dz=0$ for every function analytic in some neighborhood of $D$.
We will take $\psi(z)=\frac{1}{z-\zeta}$ with $\zeta\notin D$ and get the Cauchy integral plus something analytic in $\mathbb C\setminus[-1,1]$ vanish outside $D$. Note that this Cauchy integral and the distribution part are also well-defined for $\zeta\in D$ and give an analytic function of $\zeta$ there. Moreover, by Plemelj's jump formulae, the boundary values of that function on $\gamma$ are just $\frac{dt}{dz}$ (up to $2\pi i$ and $\pm$, which we aren't concerned with here). The upshot is that $\frac{dt}{dz}$ has an analytic extension to $D$ continuous up to $\gamma$ (here we use that the curve is assumed to be of some decent smoothness; otherwise we'll have to sing a long song of non-tangential boundary values a.e., etc.) The behavior on $[-1,1]$ may be more complicated in general and the boundary values there exist only in the sense of distributions. The possibility of analytic continuation to the open domain $D$ guarantees only the possibility to shift the contour to something hovering as low over $[-1,1]$ as we wish, i.e., to the subexponential growth of the integral (for which it is necessary and sufficient). However, if you settle for some more reasonable class than $C_0$, say, $L^2$, then $T$ will be just an $L^2$ function and you'll have the classical theory of boundary values that will allow you to show that our distribution is, indeed, the boundary value of the analytic extension of $\frac{dt}{dz}$ and the reason for smallness of the integral is the possibility of the ordinary contour shift. I have no idea what you are going to use all this for, so I prefer to avoid the discussion of all those technical issues. Instead, I'll discuss in detail what the possibility of this analytic extension of the derivative means for the curve $\gamma$ itself. 3) Let $Q$ be the lower unit half-disk $\{z:|z|<1,\Im z<0\}$. 
Let $\varphi$ be the conformal mapping from $Q$ to $D$ such that the interval $[-1,1]\subset\partial Q$ is mapped to $\gamma$ and the lower semicircle is mapped to $[-1,1]$. The derivative $\varphi'$ is a non-vanishing function, continuous up to the boundary (except for the points $-1,1$, where it has an easy-to-control power singularity); here we use reasonable smoothness of $f$ again. Note that after the composition with $\varphi$, the function $\frac{dt}{dz}$ on $\gamma$ becomes $\frac{(\Re\varphi)'}{\varphi'}$ on $[-1,1]$. This should be extendable analytically to the lower half-disk with "decent boundary values". Since $\varphi'$ has such an extension, we conclude that so does $(\Re\varphi)'$. But this function is real-valued, so the Schwarz reflection principle applies and we conclude that it extends analytically to the entire unit disk. Let $(\Re\varphi)'=F$ where $F$ is a symmetric analytic function in the unit disk. The function $\varphi'-F$ is purely imaginary on $[-1,1]$ and extends analytically to the lower half-circle. Thus, using the reflection principle again, we conclude that $\varphi'=F+iG$ where $F,G$ are symmetric analytic functions in the unit disk and $F$ has decent boundary values on the lower semicircle. Thus, $\varphi'$ and $\varphi$ are analytic in the unit disk. Moreover, since $F+iG=\varphi'$ is nice on the lower semicircle, $G$ also has decent boundary values there and therefore, after reflecting, we see that $\varphi'$ has decent boundary values on the upper semicircle. To get the proclaimed description, it suffices now to map the unit disk to the upper half-plane so that the lower semicircle is mapped to $[-1,1]$ and use the reflection principle again for the last time. That's it (modulo minor technicalities that I swept under the rug, but, as I said, to get into those would make no sense without knowing what exactly you are after).

- Thank you for your answer. More details would definitely be helpful. – Roland Bacher Mar 9 '12 at 16:38

---

Here is a possible beginning. Note first that $$\int_{-1}^1 f(t)^a \cos(at)\, dt= 2I_a:= 2\int_{0}^1 f(t)^a \cos(at)\, dt.$$ Hence, it suffices to investigate $I_a$. Let me first assume that $f'(t) <0$ on $(0,1)$. (Note that if $f'(t_0)=0$ for some $t_0\in (0,1)$ then $f'(t)=0$ on $[0,t_0]$.) This means that the map $t\mapsto f$ is one-to-one. We regard $t$ as a function of $f$. Then the change of variables formula implies
$$I_a= \int_1^{f(0)} f^a \cos(a t)\frac{dt}{df}\, df.$$
I can make this formula friendlier to the 21st century mathematician by changing notations,
$$t \longleftrightarrow \phi,\;\;\; f \longleftrightarrow x,$$
and we can rewrite the above as
$$I_a= \int_1^{x_0} x^a\frac{d\phi}{dx} \cos( a \phi(x) )\, dx = \frac{1}{a}\int_1^{x_0} x^a \frac{d}{dx}\Bigl( \sin\bigl(\; a\phi(x)\;\bigr) \Bigr)dx$$
$$=\frac{1}{a}\Bigl( x^a\sin\bigl( a\phi(x)\bigr)\;\Bigr)\Bigr|^{x_0}_1- \int_1^{x_0}x^{a-1}\sin\bigl(\; a\phi(x) \;\bigr)dx.$$
Now observe that $\phi(x_0) =0$, $\phi(1)=1$, so the first term above goes to zero as $a\to\infty$. At this point it may be useful to look in some books on asymptotics of integrals. A good place to start is Bleistein & Handelsman: Asymptotic Expansions of Integrals, Dover. Also, you need to keep in mind that
$$\frac{d\phi}{dx}< 0,\;\;\forall x\in (1,x_0),$$
$$\lim_{x\nearrow x_0} \frac{d\phi}{dx}=-\infty.$$

- Thank you for the reference, I will check if it is useful. – Roland Bacher Feb 16 '12 at 17:33
https://math.stackexchange.com/questions/524439/number-of-triangles-sharing-all-vertices-but-no-sides-with-a-given-octagon
# Number of triangles sharing all vertices but no sides with a given octagon

Find the number of triangles whose vertices are vertices of an octagon but none of whose sides comes from the sides of the octagon.

My attempt: Let $\{A,B,C,D,E,F,G,H\}$ be the vertices of an octagon. It is given that no side of the octagon is a side of the triangle, so we do not take consecutive points. So we take either $\{A,C,E,G\}$ or $\{B,D,F,H\}$, out of which we take only three points, because we have to form a triangle. This can be done in $\binom{4}{3}+\binom{4}{3} = 8$ ways. But the only options given are 24, 52, 48, and 16. Where have I made a mistake?

• Why would you keep an odd number of vertices between vertices on an edge? $\{A,D,G\}$ is a perfectly fine triangle. – Patrick Da Silva Oct 13 '13 at 12:14

---

If two of the vertices are $A$ and $C$, what are the possibilities for the third vertex? Look at the whole list $A,\dots,H$.

• Thanks Michael, got it. For $\{A,C,E,G\}$ we first select $2$ vertices from these four and then select the remaining one from $\{B,D,F,H\}$; this can be done in $\binom{4}{2}\times \binom{4}{1} = 24$ ways. Similarly for $\{B,D,F,H\}$: first select $2$ vertices from these four and then the remaining one from $\{A,C,E,G\}$, again in $\binom{4}{2}\times \binom{4}{1} = 24$ ways. So the total is $24+24 = 48$. – juantheron Oct 13 '13 at 12:13

---

Suppose the vertices are labelled $1,2,\dots,8$. Count the number of triangles for which one of the vertices is $1$. Then the second vertex, going around "clockwise" (let's say the octagon was represented that way), is among $3, 4$ or $5$ (if we put one at $6$, the third vertex would be $7$ or $8$, which would give the triangle a side in common with the octagon). For each case you can count the number of options: three options for $3$, two for $4$, one for $5$, for a total of $6$. This gives us $6$ triangles that have a vertex at $1$.

The group $\mathbb Z / 8 \mathbb Z$ acts on the triangles by mapping the triangle with vertices $(a,b,c)$ to the triangle with vertices $(a+k,b+k,c+k)$ (where $k \in \mathbb Z / 8 \mathbb Z$ and you can consider $a,b,c \in \mathbb Z / 8 \mathbb Z$). It is not hard to see that each triangle has an orbit of size $8$ under this action. So if we count the triangles by considering those that have a vertex at $1$ and then rotate them via the group action, we will triple count, because each triangle has three vertices. Therefore the answer is $(6 \times 8)/3 = 16$.

Hope that helps,

---

Hints for the "small" problem at hand:

(i) How many triangles are there without any restrictions?

(ii) How many triangles have exactly one side in common with the octagon?

(iii) How many triangles have exactly two sides in common with the octagon?

We now consider the more general problem: Given a regular $n$-gon $P$, how many $r$-gons $Q$ with vertices from $P$ are there that don't share a side with $P$?

An admissible $r$-gon leaves $n-r$ unused vertices. Write a string of $n-r+1$ zeros, where the first and the last zero denote the same "distinguished" unused vertex. Choose $r$ of the $n-r$ slots between the zeros and insert a $1$ into each of these slots. You then have an encoding of an admissible $r$-gon. There are ${n-r\choose r}$ ways to choose the slots, and there are $n$ ways to choose which vertex of $P$ should be the "distinguished" unused vertex. The total number $N$ of admissible $r$-gons $Q\subset P$ is then given by $$N={n\over n-r}{n-r\choose r}\ ,$$ because the choice of the "distinguished" unused vertex has to be discounted. For $n=8$ and $r=3$ one obtains $N=16$.

• Thanks, Professor, got it. The total number of triangles without any restriction is $\binom{8}{3} = 56$; the number with exactly one side in common is $\binom{4}{1}\times 8 = 32$; and the number with exactly two sides in common is $8$. – juantheron Oct 13 '13 at 12:41
# Lower bound for Euler's totient for almost all integers

Let $\varphi(n)$ be Euler's totient function. It is well known that $\liminf_{n \to \infty} \frac{\varphi(n)}{n / \log \log n} = e^{-\gamma}$, so that for $\varepsilon > 0$ we have $\frac{\varphi(n)}{n} \geq \frac{e^{-\gamma}-\varepsilon}{\log \log n}$ for large $n$. Actually, the "local minima" of $\frac{\varphi(n)}{n}$ are attained at $n = p_1 \cdots p_k$ (the product of the first $k$ primes), and the set of primorials is very sparse. I wonder whether a lower bound for $\varphi(n)$ of the following kind is known: "$\varphi(n) / n \geq f(n)$ for all $n$ outside a set of zero asymptotic density", where $f(n)$ is a function larger than $\frac{e^{-\gamma}-\varepsilon}{\log \log n}$.

• For $n/\phi(n)$, "Small values of the Euler function and the Riemann hypothesis" by Jean-Louis Nicolas might be related to your question. – joro Aug 22 '13 at 14:21

• Since the average value of $n/\phi(n)$ is bounded, it follows that for any function $f(n)$ tending to zero as $n$ tends to infinity one has $\phi(n)/n \ge f(n)$ except on a set of zero density. – Lucia Aug 22 '13 at 17:08

• @Lucia Thank you for your answer! However, I can't find a reference for the average value of $n / \varphi(n)$; I know that the average value of $\varphi(n) / n$ is $6 / \pi^2$, but for $n / \varphi(n)$ I don't know one. – user21706 Aug 22 '13 at 19:21

• Found it. The average value of $n / \varphi(n)$ is $315\zeta(3)/(2\pi^4)$. "R. Sitaramachandrarao. On an error term of Landau II, Rocky Mountain J. Math. 15 (1985), 579–588". – user21706 Aug 22 '13 at 19:59

Your question has been answered by Lucia already, but you might also be interested in looking up the Erdős–Wintner theorem. A special case (proved already by Schoenberg) is that for each $u \geq 0$, the set of $n$ with $\phi(n)/n \leq u$ has an asymptotic density $D(u)$; moreover, $D(u)$ is continuous and increasing on $[0,1]$.
There are also estimates available for the size of $D(u)$ when $u$ is near zero, and of $1-D(u)$ when $u$ is near $1$. For this, see Erdős's paper "Some remarks about additive and multiplicative functions": http://www.renyi.hu/~p_erdos/1946-11.pdf

• Sorry, but I do not understand your answer. How do you prove that if $E$ is a set of zero asymptotic density, then $\liminf_{E \not\ni n \to \infty} \varphi(n) / (n / \log\log n) = e^{-\gamma}$? Thanks. – user21706 Aug 22 '13 at 16:59
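Lucia's observation hinges on the average of $n/\phi(n)$ being bounded, and the comments pin the constant down as $315\zeta(3)/(2\pi^4)$. As a numerical illustration (a sketch assuming Python; the totient sieve is standard and not part of the thread), one can compare the empirical mean with that constant:

```python
import math

N = 200_000
phi = list(range(N + 1))            # phi[n] starts out as n
for p in range(2, N + 1):
    if phi[p] == p:                 # p is prime: phi[p] is still untouched
        for m in range(p, N + 1, p):
            phi[m] -= phi[m] // p   # multiply phi[m] by (1 - 1/p)

avg = sum(n / phi[n] for n in range(1, N + 1)) / N
zeta3 = sum(k ** -3 for k in range(1, 100_000))  # partial sum for zeta(3)
constant = 315 * zeta3 / (2 * math.pi ** 4)

print(f"mean of n/phi(n) up to {N}: {avg:.4f}")
print(f"315*zeta(3)/(2*pi^4):       {constant:.4f}")
```

The two printed values agree to a couple of decimal places already at this modest cutoff, consistent with the small error term in Landau's asymptotic for $\sum_{n \le x} n/\varphi(n)$.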