https://www.physicsforums.com/threads/fourier-analysis-determination-of-fourier-series.983620/
# Fourier analysis & determination of Fourier series

• #1

## Homework Statement:
Sketch the waveform defined below and explain how you would obtain its Fourier series (I've attached the question; its title is "wave.png").

## Relevant Equations:
As attached; the title is "form.png".

ANY AND ALL HELP IS GREATLY APPRECIATED

I have found old posts for this question; however, after reading through them several times I am having a hard time knowing where to start.

I am happy with the sketch: the function is correctly drawn and is neither odd nor even (its title is "wave1.png").

A0 = 0, as the average value of the function is 0. I have proven this by working through A0 = 1/π ∫ f(x) dx with my limits of π/2 to π and 3π/2 to 2π.

Bn and An are where I am struggling. I have worked through and got answers, but I don't think they are correct and I can't see another way. So I did as follows, with x = ωt. The limits of each integration are again π/2 to π and 3π/2 to 2π:

An = 1/π ∫ f(x)cos(nx) dx
An = 1/π ∫ sin(x)cos(nx) dx + 1/π ∫ sin(x)cos(nx) dx
An = 1/π ∫ (sin(x+nx) + sin(x-nx))/2 dx + 1/π ∫ (sin(x+nx) + sin(x-nx))/2 dx
An = 1/2π [ -cos(x+nx)/(1+n) - cos(x-nx)/(1-n) ] + 1/2π [ -cos(x+nx)/(1+n) - cos(x-nx)/(1-n) ]
An = 1/2π [[ -cos(π+nπ)/(1+n) - cos(π-nπ)/(1-n) ] - [ -cos(π/2+nπ/2)/(1+n) - cos(π/2-nπ/2)/(1-n) ]] + 1/2π [[ -cos(2π+2nπ)/(1+n) - cos(2π-2nπ)/(1-n) ] - [ -cos(3π/2+3nπ/2)/(1+n) - cos(3π/2-3nπ/2)/(1-n) ]]
An = 1/2π [ -(-cos(nπ))/(1+n) - (-cos(nπ))/(1-n) ] + 1/2π [ -cos(2nπ)/(1+n) - cos(2nπ)/(1-n) ]
An = 1/2π [ 2cos(nπ)/(1-n²) ] + 1/2π [ 2cos(2nπ)/(1-n²) ]
An = cos(nπ)/(π(1-n²)) + cos(2nπ)/(π(1-n²))
An = (cos(nπ) + cos(2nπ))/(π(1-n²))

This doesn't seem correct? Other people have Bn working out to 0, but I can't figure out how to start at all. I initially thought that, as it resembles an odd wave (but isn't odd due to the zero-valued sections), it would still only have sine terms?

Thanks in advance for any help, it really is appreciated!

• #2 scottdave (Homework Helper)

Does that describe the entire signal, or does it repeat? If it does repeat, then what will it look like from -2π to zero (in sections)?

• #3

> Does that describe the entire signal, or does it repeat? If it does repeat, then what will it look like from -2π to zero (in sections)?

I am assuming that it would repeat? There is no symmetry that I can see. It would be odd if there were no 'missing' parts of the wave, however.
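A quick numerical cross-check of the hand integration is possible. Below is a minimal sketch; the piecewise waveform (sin x on [π/2, π] and [3π/2, 2π], zero elsewhere) is an assumption inferred from the integrals in the post, not taken from the attachments:

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    # Assumed waveform: sin(x) on [pi/2, pi] and [3pi/2, 2pi], zero elsewhere
    x = np.mod(x, 2 * np.pi)
    if np.pi / 2 <= x <= np.pi or 3 * np.pi / 2 <= x <= 2 * np.pi:
        return np.sin(x)
    return 0.0

def a_n(n):
    # A_n = (1/pi) * integral of f(x) cos(nx) over one period
    val, _ = quad(lambda x: f(x) * np.cos(n * x), 0, 2 * np.pi, limit=200)
    return val / np.pi

def b_n(n):
    # B_n = (1/pi) * integral of f(x) sin(nx) over one period
    val, _ = quad(lambda x: f(x) * np.sin(n * x), 0, 2 * np.pi, limit=200)
    return val / np.pi

for n in range(1, 6):
    print(n, round(a_n(n), 6), round(b_n(n), 6))
```

Comparing these numbers against the closed-form expression derived above would quickly confirm or rule out the suspected algebra error.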
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8918132185935974, "perplexity": 4899.69349438464}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736972.79/warc/CC-MAIN-20200806151047-20200806181047-00085.warc.gz"}
https://geo.libretexts.org/Bookshelves/Oceanography/Book%3A_Oceanography_(Hill)/06%3A_The_Atmosphere_in_Motion/6.4%3A_Idealized_%22average%22_global_atmospheric_circulation
6.4: Idealized "average" global atmospheric circulation Atmospheric Circulation Global atmospheric circulation is influenced by temperature and pressure differentials, among several other factors. This section covers atmospheric circulation in idealized terms, that is, circulation dependent solely on temperature-driven fluid dynamics. • Atmospheric pressure patterns and atmospheric circulation cells Air molecules in the atmosphere move following the laws of physics and fluid dynamics: hot air rises, just as cold air sinks. The processes that set the global circulation cells in motion are as follows: 1. On land, solar radiation heats up Earth's surface and the surface atmosphere. This causes air molecules to rise through the atmospheric column. It also creates a system of low pressure at the surface, since molecules float away, and a system of high pressure at the top of the column, as molecules accumulate there. 2. In the atmosphere, air molecules are cooled at the top of the atmospheric column and begin to sink. As in the previous scenario, sinking molecules create a system of low pressure at the top of the column and a system of high pressure at the surface below. 3. Fluids move from high pressure to low pressure. At one point on Earth, molecules sink from the top of the atmosphere onto the land as they are cooled, creating low pressure aloft. At another point, molecules rising due to higher temperatures on land create high pressure at the top of the atmosphere. This pressure differential causes molecules in the high-pressure area at the top of the atmosphere to move toward the low-pressure area at the top of the atmosphere. The same happens at the surface, where molecules in the surface high-pressure area move toward the surface low-pressure area. This motion results in an atmospheric circulation cell. • Where do these cells form? These atmospheric circulation cells begin in areas where solar radiation results in increased temperatures on land. At the equator, the intense solar radiation creates low-pressure systems at the surface that initiate the circulation of air molecules. At 30N and 30S, the cooling molecules sink and form high-pressure systems at the surface. These patterns repeat every 30 degrees in both directions, with the equator as the original low-pressure area, alternating every 30 degrees thereafter.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9008345603942871, "perplexity": 709.7011525148646}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986677412.35/warc/CC-MAIN-20191018005539-20191018033039-00391.warc.gz"}
https://or.stackexchange.com/questions/996/how-to-reformulate-linearize-convexify-a-budgeted-assignment-problem
# How to reformulate (linearize/convexify) a budgeted assignment problem? I have a scheduling problem at hand. In my system, there is a service station with $$M$$ service outlets; therefore, the service station can serve $$M$$ users at a time. But there are $$N$$ users ($$N>M$$) in the system requesting service. Therefore, the scheduler needs to schedule $$M$$ users out of the $$N$$ users along with some signal processing. The service station has some money budget. At any point of operation, the money spent on serving $$M$$ users cannot exceed the budget. Let us assume that the scheduling frequency is one hour. So, each hour, the service station serves (at most) $$M$$ users. There exists a path between the service point and any user. Let vector $${\bf h}_i\in\mathbb{C}^{M\times 1}$$ define the path between the service station and user $$i$$. If user $$i$$ is scheduled, the amount of money spent on user $$i$$ is given by $$||{\bf w}_i||_2^2=P_i$$, where $${\bf w}_i\in\mathbb{C}^{M\times 1}$$ is some financial tool employed by the service station. Note that $${\bf h}_i\in\mathbb{C}^{M\times 1},i=1,2,\dots,N$$ are known. Here, $${\bf w}_i, i=1,2,\dots,M$$ are the optimization variables. The objective of the optimization is $$\underset{\mathcal{M}\subset \mathcal{N}}{\max}\hspace{2mm}\underset{{\bf w}_i,i\in \mathcal{M} }{\max}\hspace{2mm}\sum_{i\in\mathcal{M}}\alpha_i \log_2(1+\gamma_i)$$ with $${\gamma}_i = \frac{\left|\mathbf{h}_i^H\mathbf{w}_i\right|^2}{\sum\limits_{j=1,j\ne i}^N\left|\mathbf{h}_i^H\mathbf{w}_j\right|^2 + {\sigma^2}},$$ subject to $$\sum_{i\in\mathcal{M}}||{\bf w}_i||^2_2\le P.$$ Here, $$\mathcal{M}=\{1,2,\dots,M\}$$ is a finite set of $$M$$ scheduled users, and $$\mathcal{N}=\{1,2,\cdots,N\}$$ is a finite set of all users. The $$\alpha_i$$ are also known positive (>0) numbers. Therefore, I want to find the subset $$\mathcal{M}$$, i.e., schedule $$M$$ users out of $$N$$, so that the objective is maximized while fulfilling the constraint. Note that this is a complex scheduling problem. Anyway, does this formulation reflect what I just described? Note: $${\bf w}_i$$ is the interference (at user $$i$$) cancelling vector used at the service station. $${\textbf{The approach:}}$$ Let us introduce a binary variable $$x_i\in\{0,1\}$$. If user $$i$$ is scheduled, $$x_i=1$$; else $$x_i=0$$. Now, I have a mixed integer programming problem as below: $$\underset{{\bf w}_i }{\max}\sum_{i=1}^N x_i\alpha_i \log_2(1+\gamma_i)$$ subject to $$\sum_{i=1}^N x_i||{\bf w}_i||^2_2\le P$$ $$\sum_{i=1}^Nx_i=M$$ $$x_i\in\{0,1\}$$ How can we deal with the objective and the constraints to have an efficient linear/convex formulation? Can we take advantage of the monotonic behavior of the logarithm in the transformation? • in your model, I do not see "at any point in time", there is no time dimension yet. Also, this does not "feel" like a scheduling problem as you are not assigning (yet) times to events. You "just" choose subsets, one per time slice, subject to a budget. This seems to be rather standard, and the "only" complicating stuff is the objective function. My first approach would be to question the necessity of the "complex" functions. What is their goal, and are there easier ways to describe them? – Marco Lübbecke Jul 12 '19 at 8:00 • @MarcoLübbecke, the ${\bf w}_i$ is used for crosstalk cancellation at user $i$. The offered budget is included there. We can also design ${\bf w}_i$ to be a unit norm vector. – dipak narayanan Jul 12 '19 at 17:33 Cool problem!
There are a couple of things you can do to make this problem more tractable. Before starting, do you really need the variables and some parameters to be complex numbers? In particular, according to your notation, are the $$|\cdot|$$ the complex moduli of the vectors? For more details on (MI)LP over complex numbers, check this other question. There are some tools that allow you to do optimization over complex numbers using a bijective mapping between the complex and the real numbers. But well, the following reformulation stands for real or complex variables. Let's begin. ### 1) Objective function Notice that the objective function can be written as $$\sum\limits_{i=1}^N x_i\alpha_i \log_2 \left(1+\frac{|\bf{h}_i^Hw_i|^2}{\sigma^2+\sum\limits_{j=1\\ j\neq i}^N|\bf{h}_j^Hw_j|^2}\right).$$ You can write the $$\log_2$$ term as follows (just distribute the denominator and use the logarithm properties): \begin{align}\log_2 \left(1+\frac{|\mathbf{h}_i^H \mathbf{w}_i|^2}{\sigma^2+\sum\limits_{j=1\\ j\neq i}^N|\mathbf{h}_j^H \mathbf{w}_j|^2}\right)&=\log_2 \left(\frac{\sigma^2+\sum\limits_{j=1}^N|\mathbf{h}_j^H \mathbf{w}_j|^2}{\sigma^2+\sum\limits_{j=1\\ j\neq i}^N|\mathbf{h}_j^H \mathbf{w}_j|^2}\right)\\&= \log_2 \left(\sigma^2+\sum\limits_{j=1}^N|\mathbf{h}_j^H \mathbf{w}_j|^2\right)-\log_2 \left(\sigma^2+\sum\limits_{j=1\\ j\neq i}^N|\mathbf{h}_j^H \mathbf{w}_j|^2\right).\end{align} This yields an objective function that can be separated into two parts: $$\sum\limits_{i=1}^N x_i\alpha_i \log_2 \left(\sigma^2+\sum\limits_{j=1}^N|\mathbf{h}_j^H \mathbf{w}_j|^2\right) - \sum\limits_{i=1}^N x_i\alpha_i \log_2 \left(\sigma^2+\sum\limits_{j=1\\ j\neq i}^N|\mathbf{h}_j^H \mathbf{w}_j|^2\right).$$ The first term can be further simplified as $$\log_2 \left(\sigma^2+\sum\limits_{j=1}^N|\mathbf{h}_j^H \mathbf{w}_j|^2\right)\sum\limits_{i=1}^N x_i\alpha_i,$$ but let's leave it inside as if it were indexed by $$i$$ for simplicity. Since you are maximizing, you would like the terms in the objective function to be concave to make your problem easier to solve. Since you have a product of binary and continuous variables in both terms of your objective function, you would like to reformulate the product. Fortunately, this can be done. For simplicity, assume that you have the product $$x_i v_i$$ with $$x_i \in \{ 0,1 \}$$ and $$v_i \in [L_i, U_i]$$. Since you correctly commented that $$\log_2$$ is monotonic, you can derive the lower and upper bounds for each continuous part. Once you have those, for each product you introduce a variable $$z_i = x_i v_i$$ and the following constraints: $$\begin{cases}z_i \leq U_i x_i \\ z_i \geq L_i x_i \\ z_i \leq v_i - (1-x_i)L_i \\ z_i \geq v_i - (1-x_i)U_i\end{cases}$$ Notice that since you have some positive and negative values in the objective function, you can ignore two of the constraints that arise from the reformulation, since they will never be active. Your objective function becomes the linear expression $$\max\limits_{x,\mathbf{w},z} \sum\limits_{i=1}^N \alpha_i(z_i^1-z_i^2).$$ You would still have some constraints of the type $$\begin{cases}z_i^1 \leq \log_2 \left(\sigma^2+\sum\limits_{j=1}^N|\mathbf{h}_j^H \mathbf{w}_j|^2\right) - (1-x_i)L_i^1\\z_i^2 \geq \log_2 \left(\sigma^2+\sum\limits_{j=1}^N|\mathbf{h}_j^H \mathbf{w}_j|^2\right) - (1-x_i)U_i^2\end{cases}$$ with some bounds that you can find (e.g. $$L_i^1 \leq \log_2(\sigma^2)$$), which are nonlinear because of the $$\log_2$$.
The first set of constraints is convex though (assuming you can compute those complex norms as with real numbers), so that's something. ### 2) Constraints These are easier, given that the norm is already convex, meaning that after reformulating the binary-norm product with the trick above, you will obtain a set of convex inequalities. Just use $$v_i = ||\mathbf{w}_i||_2^2$$. Since according to your previous comment the $$\mathbf{w}_i$$ are unit vectors, deriving bounds on the norm is trivial. In this case, you can also ignore two of the constraints from the reformulation (given the fact that you have an inequality), and you don't need to worry about $$\mathbf{w}_i$$ being complex, given that the $$||\cdot||_2^2$$ is the same as if you had a twice-as-large real vector (in this case a matrix). ### 3) Final thoughts You are dealing with a nonconvex MINLP. Some of those nonconvexities can be easily convexified (the bilinear binary-continuous terms), while others are not so easy. It may also depend heavily on the algorithm that you are using to solve the problem and what you are interested in. If you do not mind obtaining a local optimal solution, you may just partially reformulate the problem as written here. You may even try to put all the nonlinearities and nonconvexities in the objective and see how far a local solver can get you. If you do care about global optimality, the reformulations here are valid (no relaxations or approximations were introduced), but the global solvers (e.g. BARON, ANTIGONE or SCIP) might be more successful on the original form of the problem. I'm curious how this ends up behaving, let us know!
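As a concrete illustration of the binary-continuous product linearization described above, here is a minimal sketch (toy bounds and a dummy objective are my own assumptions; any modeling layer works, PuLP is used here):

```python
import pulp

# Toy instance: linearize z = x * v with x binary and v in [L, U]
L, U = 0.0, 10.0

prob = pulp.LpProblem("product_linearization", pulp.LpMaximize)
x = pulp.LpVariable("x", cat="Binary")
v = pulp.LpVariable("v", lowBound=L, upBound=U)
z = pulp.LpVariable("z", lowBound=min(L, 0.0), upBound=max(U, 0.0))

# The four constraints from the reformulation, enforcing z = x * v
prob += z <= U * x
prob += z >= L * x
prob += z <= v - (1 - x) * L
prob += z >= v - (1 - x) * U

# Dummy linear objective, just to make the toy model solvable
prob += z - 0.1 * v

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(x), pulp.value(v), pulp.value(z))  # expect x=1, v=10, z=10
```

The same four inequalities, written once per product $z_i^1, z_i^2$, give the linearized objective in the answer; only the log-bound constraints remain nonlinear.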
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 58, "wp-katex-eq": 0, "align": 1, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.8427922129631042, "perplexity": 397.93598297131444}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522242.73/warc/CC-MAIN-20210121035242-20210121065242-00239.warc.gz"}
http://math.stackexchange.com/questions/45248/maximum-number-of-mutually-orthogonal-latin-square-pairs-definition-provided?answertab=oldest
# Maximum number of mutually orthogonal latin square pairs (definition provided) An $n\times n$ matrix is defined to be a "latin square" if each row and column is a permutation of the first $n$ natural numbers. Two squares of the same order are orthogonal if the $n^2$ pairs $(x_{ij},y_{ij})$ are "all distinct". Proof is required that the maximal number of mutually orthogonal latin squares is $n-1$. I have put "all distinct" in quotes because, if interpreted one way, this would be easier than the book projects it to be (my intuition is usually wrong in these cases). If we fix an element $x_{ij}$ then there are $n-1$ choices for $y_{ij}$, so there can be at most $n-1$ mutually orthogonal latin squares. I suspect this is incorrect. If so, any insight would be appreciated. This is incorrect. For $n=1$ no pairs are possible. For $n=2$ the possible latin squares are $$\begin{pmatrix} 1 & 2\\ 2 & 1 \end{pmatrix}\qquad \mbox{and}\qquad \begin{pmatrix} 2 & 1\\ 1 & 2 \end{pmatrix}$$ By my assumed definition the above squares would be orthogonal, but clearly I am asked to look at all $n^2$ pairs, so my reasoning is false. There are no mutually orthogonal pairs for $n=2$ either. I would appreciate any help/pointers/hints. - The following is a standard proof of this fact. I outline the logic with a series of questions. If $A$ is a latin square, you get another one by the process of renaming the entries. In this way you can put $A$ into a kind of standard form $A_{st}$ that has $(1,2,\ldots,n)$ as its first row. For example, if $$A=\left(\begin{array}{ccc} 2&1&3\\3&2&1\\1&3&2\end{array}\right),$$ then we can put this into standard form by putting a '1' wherever there is a '2' and a '2' wherever there is a '1', and get $$A_{st}=\left(\begin{array}{ccc} 1&2&3\\3&1&2\\2&3&1\end{array}\right)$$ Lemma: If $A$ and $B$ are orthogonal latin squares, then so are $A_{st}$ and $B_{st}$. You prove this! It is not difficult. Let me illustrate with an example. The above LS $A$ is orthogonal to the latin square $$B=\left(\begin{array}{ccc} 2&3&1\\3&1&2\\1&2&3\end{array}\right),$$ because the pairs of entries are (2,2),(1,3),(3,1) on the first row, (3,3),(2,1),(1,2) on the second and (1,1),(3,2),(2,3) on the last. The standard form of $B$ is (check this as a way of understanding what standard form means) $$B_{st}=\left(\begin{array}{ccc} 1&2&3\\2&3&1\\3&1&2\end{array}\right).$$ As a check that you understand orthogonality, I invite you to verify that $A_{st}$ and $B_{st}$ are, indeed, orthogonal. So if $A^{(1)}, A^{(2)},\ldots, A^{(k)}$ are mutually orthogonal latin squares, then by the lemma we can assume that they are all in standard form. Therefore the first rows contain the pairs $(i,i),i=1,2,\ldots,n$ for all the pairs of latin squares. Let us then look at the position of 2 in the first column of $A^{(1)}$. Let's say that this happens on row $j$, so $A^{(1)}_{j1}=2$. Question #1: Why is it illegal to have $A^{(i)}_{j1}=2$ for any $i=2,3,\ldots,k$? Question #2: Why is it illegal to have $A^{(i)}_{j1}=1$ for any $i=2,3,\ldots,k$? Question #3: Why is it illegal to have $A^{(i)}_{j1}=A^{(\ell)}_{j1}$ for any two indices $i$ and $\ell$ such that $2\le i<\ell\le k$? Question #4: Why do the answers to the previous questions imply $k<n$? Prove the lemma (unless it is done in the book) and answer the questions! Good luck! - It took some time, but finally I was able to do your exercises. Thanks, this should be cited as a model answer.
–  kuch nahi Jun 18 '11 at 7:35 That wasn't my question, but it helped me solve something else with the standard form hint! Thanks! –  Patrick Da Silva Sep 8 '13 at 17:08 Glad to hear that, @Patrick! –  Jyrki Lahtonen Sep 8 '13 at 18:17 Your statement is equivalent to the proposition that you may construct an $n\times n \times n$ "Latin cube", because the condition "all numbers are different from each other" along the third dimension is exactly the same as for the two original dimensions of the square. I don't know whether the result about the non-existence of the Latin cube holds for $n\geq 3$, but it surely fails in general because it fails for $n=2$: $((1,2),(2,1))$ and $((2,1),(1,2))$ are two mutually orthogonal Latin squares. Their number exceeds $n-1=1$. Maybe you wanted to add a condition that $n\geq 3$ and/or some extra conditions on the diagonal sums. - Another comment: I misread your second paragraph initially. It is wrong. There are no mutually orthogonal latin squares for the case $n=2$. So it does not "surely fail in general", because even with these cases the maximal number is at most $n−1$. My upvote remains for thinking of the cube, though. –  kuch nahi Jun 14 '11 at 5:18 For your two squares, the pairs you get are $(1,2)$ (upper left corner), $(2,1)$ (upper right corner), $(1,2)$ (bottom right corner) and $(2,1)$ (bottom left corner), which are not all distinct; so the two don't satisfy the definition of "mutually orthogonal latin squares" that is given. –  Arturo Magidin Jun 14 '11 at 5:22 I may have misunderstood what it means for "all pairs to be distinct". I thought it meant that $x_{ij}\neq y_{ij}$ for all choices of $ij$. If it means something completely different, like that the same doublet $(x_{ij},y_{ij})$ can't occur for two choices of $ij$, then of course my comment is totally irrelevant. –  Luboš Motl Jun 14 '11 at 6:02
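To make the orthogonality definition concrete, here is a minimal sketch that checks whether all $n^2$ ordered pairs $(x_{ij}, y_{ij})$ are distinct (the function name is my own):

```python
def are_orthogonal(A, B):
    """Check if two n x n Latin squares are orthogonal:
    the n^2 pairs (A[i][j], B[i][j]) must all be distinct."""
    n = len(A)
    pairs = {(A[i][j], B[i][j]) for i in range(n) for j in range(n)}
    return len(pairs) == n * n

# The two order-2 squares from the question: NOT orthogonal,
# since the pairs (1,2) and (2,1) each occur twice.
A = [[1, 2], [2, 1]]
B = [[2, 1], [1, 2]]
print(are_orthogonal(A, B))  # False

# The standard-form order-3 squares A_st and B_st from the answer.
A_st = [[1, 2, 3], [3, 1, 2], [2, 3, 1]]
B_st = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]
print(are_orthogonal(A_st, B_st))  # True
```

Running this confirms the comment thread: the $n=2$ pair fails the all-distinct test, while $A_{st}$ and $B_{st}$ pass it.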
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8585101366043091, "perplexity": 250.6914033371046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207924991.22/warc/CC-MAIN-20150521113204-00085-ip-10-180-206-219.ec2.internal.warc.gz"}
http://www.sineofmadness.co.uk/tag/tangent/
## MA101.13 Trigonometrical functions Series definitions for the sine and cosine functions are: $\sin{x} = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots$ $\cos{x} = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \dots$ These converge $\forall x \in \mathbb{R}$. If we differentiate these term by term we can see that: $\frac{d(\sin{x})}{dx} = \cos{x}$ $\frac{d(\cos{x})}{dx} = -\sin{x}$ Many other properties can be deduced from these power series. The graph of $y = \sin{x}$ is as shown: We can see that sin is not an injection (with domain $\mathbb{R}$) and so there is no inverse. However, the function $f:[-\frac{\pi}{2},\frac{\pi}{2}]\to[-1,1]$, $f:x\to \sin{x}$ [called the cut-down sine], has the graph shown, and this is an injection, and has an inverse function $f^{-1}$ with domain $[-1,1]$ and range $[-\frac{\pi}{2},\frac{\pi}{2}]$. $f^{-1}(x)$ is the unique real number (angle) in $[-\frac{\pi}{2},\frac{\pi}{2}]$ whose sine is $x$. $f^{-1}$ is symbolised by $\sin^{-1}{x}$ or $\arcsin{x}$. The graph of $\arcsin{x}$ is a reflection of $y=\sin{x}$ (cut down) in the line $y=x$. Knowing $y=\arcsin{x}$ is true, you may deduce that $x=\sin{y}$ is true. Example: $\frac{\pi}{4}=\arcsin(\frac{1}{\sqrt{2}}) \Rightarrow \sin(\frac{\pi}{4}) = \frac{1}{\sqrt{2}}$. Knowing $x = \sin(y)$ is true, you may not deduce $y = \arcsin(x)$. Example: $\frac{1}{\sqrt{2}}=\sin(\frac{3\pi}{4}) \nRightarrow \frac{3\pi}{4} = \arcsin(\frac{1}{\sqrt{2}})$. ##### Theorem $\frac{d(\arcsin(x))}{dx} = \frac{1}{\sqrt{1-x^2}}$ ###### Proof \begin{aligned} \mbox{Let: } y &= \arcsin(x) \\ \mbox{then } x &= \sin(y) \\ \frac{dx}{dy} &= \cos(y) \\ \frac{dy}{dx} &= \frac{1}{\cos(y)} \\ &= \frac{1}{\sqrt{1-\sin^2(y)}} \\ &= \frac{1}{\sqrt{1-x^2}} \end{aligned} We take the positive square root because for $-\frac{\pi}{2} \leq y \leq \frac{\pi}{2}$ we have $\cos{y} \geq 0$. Similarly we define the 'so called' cut-down cosine (cosine with the domain restricted to $[0,\pi]$) and the cut-down tangent with domain $(-\frac{\pi}{2}, \frac{\pi}{2})$. The inverses of these functions are arccos and arctan.
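A quick symbolic check of the theorem and the series definitions (a minimal sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x')

# Differentiate arcsin(x) symbolically and compare with 1/sqrt(1 - x^2)
lhs = sp.diff(sp.asin(x), x)
rhs = 1 / sp.sqrt(1 - x**2)
print(sp.simplify(lhs - rhs) == 0)  # True

# Truncated power series for sin and cos match the definitions above
print(sp.series(sp.sin(x), x, 0, 8))  # x - x**3/6 + x**5/120 - x**7/5040 + O(x**8)
print(sp.series(sp.cos(x), x, 0, 8))  # 1 - x**2/2 + x**4/24 - x**6/720 + O(x**8)
```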
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 31, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.000008463859558, "perplexity": 714.1357628676934}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315329.55/warc/CC-MAIN-20190820113425-20190820135425-00329.warc.gz"}
https://www.physicsforums.com/threads/how-do-you-find-acceleration-when-you-are-only-given-netwons.141073/
# How do you find acceleration when you are only given newtons? 1. Nov 1, 2006 ### AznBoi Okay, there is 20 N of force upward and there is 480 N of weight pulling down. How do I find the acceleration?? 2. Nov 1, 2006 ### jonlevi68 Net force is then 20 N - 480 N = -460 N. Divide by the mass (F = ma). 3. Nov 1, 2006 ### AznBoi nvm, I got it: you just divide the weight (in N) by gravity (dumb me) to get the mass, then divide the net force (in N) by the mass to get the acceleration!! Thanks anyways! 4. Nov 2, 2006 ### Andrew Mason Just so you don't confuse people: I think you get the point, but your terminology is wrong. You divide the weight by the gravitational acceleration to get the mass (W = mg, so m = W/g). You then divide the net force by the mass to get the mass' acceleration (F/m = a). AM
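For completeness, the arithmetic from the thread works out as follows (a minimal sketch; g = 9.8 m/s² is assumed):

```python
g = 9.8        # gravitational acceleration, m/s^2 (assumed)
F_up = 20.0    # upward force, N
W = 480.0      # weight, N

m = W / g              # mass from W = m*g        -> ~49 kg
F_net = F_up - W       # net force                -> -460 N
a = F_net / m          # acceleration from F = ma -> ~ -9.39 m/s^2 (downward)
print(m, F_net, a)
```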
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9528607130050659, "perplexity": 1974.1830447859047}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189083.86/warc/CC-MAIN-20170322212949-00197-ip-10-233-31-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/2-body-scattering-and-mandelstam-variables.915575/
# Homework Help: 2-body scattering and Mandelstam variables 1. May 24, 2017 ### Pizza Pasta physics 1. The problem statement, all variables and given/known data In a 2-body scattering event, A + B → C + D, it is convenient to introduce the Mandelstam variables, $$s \equiv -(P_A + P_B)^2, \qquad t \equiv -(P_A - P_C)^2, \qquad u \equiv -(P_A - P_D)^2,$$ where $P_{A,\dots,D}$ are the 4-momenta of the particles $A,\dots,D$ respectively, $(\cdots)^2 = (\cdots)\cdot(\cdots)$ denotes a scalar product, and we are using natural units in this problem. The Mandelstam variables are useful in theoretical calculations because they are invariant under Lorentz transformations. Demonstrate that in the centre of mass frame of A and B, the total CM energy, i.e., $E_{\text{total}} \equiv E_A + E_B = E_C + E_D$, is equal to $\sqrt{s}$. 2. Relevant equations $$s + t + u = m_A^2 + m_B^2 + m_C^2 + m_D^2$$ (I had to show this before, which I did; not sure if it's relevant or not.) $\vec{p}_A = -\vec{p}_B$ (3-momenta, due to being in the CM frame) 3. The attempt at a solution Using the scalar product notation for $s$, I managed to reduce $s$ to $-(E_A + E_B)^2$; however, I still can't take the square root to show $\sqrt{s} = E_A + E_B$ due to the pesky negative sign. Apart from me doing something wrong with my algebra, I was wondering if the given Mandelstam variables are correct. From all the secondary sources I've looked at, none give them with the negative signs. 2. May 25, 2017 ### Orodruin (Staff Emeritus) This depends on whether or not you use a +--- or -+++ signature for your metric. Most particle physicists use +--- and it seems like your source does not. Check what convention is used.
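For reference, the signature point works out as follows (a short worked sketch, assuming the $-+++$ metric that the problem's minus-sign definitions imply): in the CM frame $\vec{p}_A = -\vec{p}_B$, so

$$(P_A + P_B)^2 = -(E_A + E_B)^2 + |\vec{p}_A + \vec{p}_B|^2 = -(E_A + E_B)^2,$$

and therefore

$$s \equiv -(P_A + P_B)^2 = (E_A + E_B)^2, \qquad \sqrt{s} = E_A + E_B = E_{\text{total}}.$$

With the $+---$ convention one instead defines $s \equiv +(P_A + P_B)^2$ and obtains the same result.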
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.909003496170044, "perplexity": 1958.0764073761209}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864461.53/warc/CC-MAIN-20180521161639-20180521181639-00370.warc.gz"}
https://link.springer.com/article/10.1186/s13661-019-1118-z
## Introduction The recent past reveals that extra consideration has been given to the research area entitled "microfluidics", as advances in microfabrication technologies have become accessible. In a microfluidic system, electroosmotic flow (EOF) is significant. Electroosmosis is basically an electrokinetic mechanism, in which we examine the ionic motion of fluids affected by electric fields. Due to this process, a Stern layer (a charged surface with a high concentration of counter-ions) is created. The Stern layer together with the diffuse layer forms an electric double layer (EDL). The interfacial potential between the diffuse layer and the Stern layer is called the zeta potential, a prominent aspect of many electrokinetic mechanisms. Electrokinetic transport has become a vivid area of modern fluid mechanics. The combined impacts of electrokinetic and peristaltic phenomena are critical in controlling biological transport mechanisms. Electrokinetics includes electrophoresis, electroosmosis, diffusiophoresis and several other phenomena. Many microfluidic apparatuses, such as lung chips, proteomic chips, lab-on-a-chip (LOC) devices, portable blood analyzers, micro-peristaltic pumps, organ-on-a-chip devices, micro-electro-mechanical systems (MEMS), DNA chips and bioMEMS, as well as micro total analysis systems, are built upon the principles of the EDL and electroosmosis. Moreover, microfluidic apparatus is also associated with MEMS, automation and parallelization, cost-effectiveness analysis, integration, miniaturization, separation, the study of biological/chemical factors and high-efficiency processing. Bandopadhyay et al. [1] examined the peristaltic modulation of electroosmotic flow in a microfluidic channel for a viscous fluid. Shit et al. [2] analyzed the rotation of EOF in a non-uniform microfluidic channel with slip velocity. Tripathi et al. [3] discussed the combined impacts of electroosmosis and peristalsis for unsteady viscous flow. Ranjit et al. [4] worked on electro-magneto-hydrodynamic flow through a peristaltically induced microchannel with the effects of Joule heating and wall slip. Furthermore, Tripathi et al. [5] scrutinized a mathematical model of electroosmosis in peristaltic biorheological flow through an asymmetric microfluidic channel. Jhorar et al. [6] proposed the peristaltic modulation of electroosmosis in an asymmetric microfluidic channel for a viscous fluid. Ranjit et al. [7] explained the effect of zeta potential and Joule heating on peristaltic blood flow through a porous microvessel. In addition, Prakash et al. [8] investigated the EOF of Williamson ionic nano-liquids in a tapered microfluidic channel under the effects of peristalsis and thermal radiation. Tripathi et al. [9] considered the electroosmosis of microvascular blood flow. In the existing literature, traditional liquids such as water and natural oil fail to meet current demands for improved thermal conductivities. Nanofluids are currently a major research topic because they enhance the thermal conductivity of conventional liquids. Nanofluid flow problems have numerous uses in biomedical engineering, such as drug delivery using nanoparticles, heat exchangers and tumor treatment. Researchers have therefore paid great attention to all the above-mentioned phenomena. Das et al. [10] studied the effect of the electrical and thermal conductivities of the walls on nanofluid flow in a vertical channel. Hassan et al.
[11] explored the effects of wall properties in a porous channel for the peristaltic flow of an MHD nanofluid. Alghamdi et al. [12] demonstrated regularity criteria for smooth solutions of the three-dimensional Hall-MHD equations. Ahmed et al. [13] discussed the generalized time-convection of non-local nanofluids in a vertical channel. Pramuanjaroenkij et al. [14] numerically studied the enhancement of heat transfer for a hybrid thermal conductivity model of nanofluid. Moreover, Arabpour et al. [15] analyzed the influence of slip boundary conditions on the flow of a double-layer microchannel nanofluid. Akbarzadeh et al. [16] investigated the first two laws of thermodynamics for nanofluid flow with porous inserts and corrugated walls in a heat exchanger tube. In addition, Mosayebidorcheh et al. [17, 18] explained the peristaltic flow of nanofluid and heat transfer through asymmetric straight and divergent wall channels. Rahman [19] studied the expansion/contraction of MHD nanofluid through permeable walls. Prakash et al. [20] demonstrated the effect of thermal radiation on electroosmosis modulation and peristaltic transport of an ionic nano-liquid in a biological microfluidic channel. Peristaltic flow is flow generated by wave propagation along the flexible walls of a channel. Peristalsis is an inbuilt feature of many biomedical and biological systems. Physiologically, it plays a crucial role in several situations, for example, the function of the ureter, the mixing of food and transport of chyme in the gastro-intestinal tract, the transport of oocytes in the female fallopian tube and the transmission of sperm in the male reproductive tract. Moreover, it is also useful in ciliary and bile-duct transport, the movement of lymph in lymphatic vessels, the vasomotion of blood vessels, roller pump design (for pumping fluids without contact with the pumping machinery) and peristaltic and acupressure pumps for cardiopulmonary and dialysis machinery. Updated versions of hose pumps operate on the peristaltic principle. Peristalsis is particularly advantageous for the transport of slurries and chemicals that are corrosive in nature, since the fluid never touches the pump drive, preventing damage to moving parts. On a LOC device, it is generally essential to deliver a small amount of biological fluid by peristalsis (on a smaller scale than in a typical LOC system); thus, contamination of the sample is prevented. Such uses have opened a new avenue for doctors and mathematicians to refine their devices and scrutinize better results. Gala et al. [21] explained a regularity criterion for the Boussinesq equations with zero thermal conductivity. Also, Gala et al. [22] described uniqueness criteria for weak solutions of the quasi-geostrophic equations in Orlicz–Morrey spaces. Nanofluid transport in an asymmetric peristaltic flow was incorporated by Noreen [23]. Latha et al. [24] studied the impacts of heat dissipation on the peristaltic flow of Jeffery and Newtonian fluids in an asymmetric channel. Besides, Latha et al. [25] also worked on an asymmetric channel with partial slip conditions for the peristaltic transport of couple stress fluid. Noreen [26] determined the magneto-thermal hydrodynamic peristaltic transport of Eyring–Powell nanofluids through an asymmetric conduit. Furthermore, Abd Elmaboud et al. [27] developed the peristaltic transport of a couple stress fluid through a rotating channel. Bhatti et al.
[28] developed a thermal analysis of the peristaltic propulsion of solid (magnetic) particles in biological fluids. The electromagnetic transport of two-layer immiscible liquids was incorporated in [29] by Elmaboud et al. Moreover, Saravana et al. [30] depicted the effect of heat transfer and flexible walls on the peristaltic flow of a Rabinowitsch fluid through an inclined channel. The literature review showed that most earlier studies dealt with either electrokinetic or peristaltic pumping to drive fluid flow. The combined outcomes of peristalsis and electrokinetic phenomena can be critical for improving/controlling the mechanism of peristaltic transport. Inspired by the extensive uses of electroosmosis, peristalsis and nanofluids in current biomedical engineering and industry, some mathematical models of fully developed flows driven by the combined outcomes of electroosmotic and peristaltic pumping have been examined for the Newtonian fluid model and nanofluids. However, the electroosmotic peristaltic transport of MHD nanofluids based on the Buongiorno model has not been taken into account. To fill this research gap, we present a new mathematical model to study the electroosmotic peristaltic pumping of an MHD nanofluid in an asymmetric microchannel. Joule heating, the viscous dissipation effect and different values of the zeta potential are likewise included in this model. First, the relevant equations for the EOF model with an axial electric field are formulated and then simplified for long wavelength and low Reynolds number. Afterwards, the resulting equations are solved numerically by utilizing the Mathematica software. Consequences of pertinent factors on the characteristics of flow, pumping, trapping, and heat transfer are pointed out. ## Formulation and solution Consider a two-dimensional flow $$(\tilde{x}, \tilde{y})$$ of an unsteady magneto-hydrodynamic nanofluid in an asymmetric micro-channel, in which wave propagation is along the $$\tilde{x}$$ direction (Fig. 1). This flow is formed by the propagation of a sinusoidal wave at a constant speed c along the channel, which has elastic walls. The fluid is driven by the combination of an externally applied magnetic field, an electric field and a pressure gradient. It is supposed that the electric field $$E_{0}$$ is imposed axially, and the magnetic field $$B_{0}$$ transversely to the fluid flow. Let $$\tilde{y}_{1} = \tilde{h}_{1} ( \tilde{x}, \tilde{{t}} )$$ and $$\tilde{y}_{2} = \tilde{h}_{2} ( \tilde{x}, \tilde{{t}} )$$ be the upper and lower walls of the channel, respectively: \begin{aligned}& \tilde{h}_{1} ( \tilde{x}, \tilde{{t}} ) = d_{1} + a_{1} \cos ^{2} \biggl( {\frac{ ( \tilde{x} - c \tilde{t} ) \pi}{ \lambda}} \biggr), \end{aligned} (1) \begin{aligned}& \tilde{h}_{2} ( \tilde{x}, \tilde{{t}} ) = - d_{2} - a_{2} \cos ^{2} \biggl( {\frac{ ( \tilde{x} - c \tilde{t} ) \pi}{\lambda}} + \varphi \biggr), \end{aligned} (2) where $$\tilde{h}_{1} ( \tilde{x}, \tilde{t} )$$, $$\tilde{h}_{2} ( \tilde{x}, \tilde{t} )$$, $$d_{1}$$, $$d_{2}$$, $$a_{1}$$, $$a_{2}$$, φ, λ and $$\tilde{t}$$ are the upper wall, the lower wall, the constant height of the upper wall measured from $$\tilde{y}_{1} = 0$$, the constant height of the lower wall measured from $$\tilde{y}_{2} = 0$$, the amplitudes of the upper and lower walls, the phase difference, the wavelength and time, respectively. ### Distribution of potential Ion separation occurs during EOF, and an EDL is formed near the channel walls, creating an electric potential difference ϕ̃.
The Poisson–Boltzmann equation is used to describe ϕ̃ in the microchannel: \begin{aligned}& \nabla^{2} \tilde{\phi} =- \frac{\rho_{e}}{\epsilon \epsilon_{0}}, \end{aligned} (3) where $$\rho_{e}$$, ϵ, ϵ0 and ϕ̃ are the net charge density, the relative permittivity of the medium, the permittivity of free space ($$8.854 \times 10^{- 12}$$ F m−1) and the electric potential distribution. The probability of detecting ions at a specific position in the electric double layer (EDL) is proportional to the Boltzmann factor $$e^{( e z_{v} \tilde{\phi} / T_{\mathrm{av}} K_{B} )}$$. The number densities of the positive $$( n^{+} )$$ and negative ions $$( n^{-} )$$ are given by the Boltzmann equation: $$n^{\pm} = n_{0} e^{( \mp \frac{e z_{v}}{T_{\mathrm{av}} K_{B}} \tilde{\phi} )},$$ (4) where $$n_{0}$$ denotes the average number density of positive or negative ions. The distribution of ionic concentration is considered valid when there is no ionic concentration gradient in the axial direction of the microchannel. By the electrolyte symmetry assumption, the total charge density $$\rho_{e}$$ is taken as \begin{aligned}& \rho_{e} = - z_{v} e \bigl( n^{-}- n^{+} \bigr)= - 2 z_{v} e n_{0} \sinh \biggl( \frac{e z_{v}}{T_{\mathrm{av}} K_{B}} \tilde{\phi} \biggr). \end{aligned} (5) In the above, $$z_{v}$$, e, $$T_{\mathrm{av}}$$ and $$K_{B}$$ are the ion valence, the electron charge, the average temperature and the Boltzmann constant. The nonlinear terms in the Nernst–Planck equations are $$O( P_{e} \alpha^{2} )$$, where $$P_{e} = R_{e} S_{c}$$ represents the ionic Peclet number and $$S_{c}$$ is the Schmidt number. Assume that the Peclet number is very small. Now, by means of Eqs. (3)–(5), we approximate Eq. (3) as: \begin{aligned}& \frac{d^{2} \tilde{\phi}}{d \tilde{y}^{2}} = \frac{2 z_{v} e n_{0}}{\epsilon \epsilon_{0}} \sinh \biggl( \frac{e z_{v}}{T_{\mathrm{av}} K_{B}} \tilde{\phi} \biggr). \end{aligned} (6) The boundary conditions for the dimensional potential ϕ̃ can be written as \begin{aligned}& \tilde{\phi}= \tilde{\zeta}_{1}\quad \text{at } \tilde{y}_{1} = \tilde{h}_{1} ( \tilde{x}, \tilde{{t}} ), \end{aligned} (7a) \begin{aligned}& \tilde{\phi}= \tilde{\zeta}_{2}\quad \text{at } \tilde{y}_{2} = \tilde{h}_{2} ( \tilde{x}, \tilde{{t}} ), \end{aligned} (7b) where $$\tilde{\zeta}_{1}$$ and $$\tilde{\zeta}_{2}$$ are the zeta potentials at the upper and lower walls, respectively.
In order to proceed with dimensionless variables, we introduce: \begin{aligned}& \begin{aligned} &a= \frac{d_{2}}{d_{1}},\qquad b= \frac{a_{1}}{d_{1}}, \qquad c= \frac{a_{2}}{d_{2}},\qquad h_{1}= \frac{\tilde{h}_{1}}{d_{1}},\qquad h_{2} = \frac{\tilde{h}_{2}}{d_{1}}, \\ & m= \frac{d_{1}}{\lambda_{D}},\qquad p= \frac{\tilde{p} d_{1}^{2}}{c \lambda \mu_{f}},\qquad t= \frac{c \tilde{t}}{\lambda},\qquad u= \frac{\tilde{u}}{c},\qquad v= \frac{\tilde{v}}{c \alpha}, \\ &x= \frac{\tilde{x}}{\lambda},\qquad y= \frac{\tilde{y}}{d_{1}},\qquad B_{r} = E_{c}\cdot P_{r},\qquad E_{c} = \frac{c^{2}}{c_{p} ( T_{1} - T_{0} )}, \\ & H_{r} = B_{0} d_{1} \sqrt{ \frac{\sigma_{e}}{\mu_{f}}},\qquad N_{b} = \frac{\gamma_{1} ( C_{1} - C_{0} ) D_{B}}{\nu_{f}}, \\ &N_{t} = \frac{\gamma_{1} ( T_{1} - T_{0} ) D_{T}}{T_{m} \nu_{f}},\qquad P_{r}= \frac{\mu_{f} c_{p}}{k_{f}},\qquad R_{e} = \frac{\rho_{f} c d_{1}}{\mu_{f}}, \\ & S_{c} = \frac{c d_{1}}{K_{B}},\qquad U_{\mathrm{HS}} =- \frac{E_{0} \epsilon \epsilon_{0} T_{\mathrm{av}} K_{B}}{e z_{v} \mu_{f}},\qquad \alpha= \frac{d_{1}}{\lambda}, \\ &\beta= \frac{U_{\mathrm{HS}}}{c},\qquad \gamma_{1} = \frac{ ( \rho c )_{p}}{ ( \rho c )_{f}}, \qquad \gamma_{2} = \frac{\sigma_{e} d_{1}^{2} E_{0}^{2}}{k ( T_{1} - T_{0} )},\qquad \gamma_{3} =P_{r} \gamma_{2}, \\ & \nu_{f} = \frac{\mu_{f}}{\rho_{f}},\qquad \lambda_{D} = \frac{1}{e z_{v}} \sqrt{\frac{T_{\mathrm{av}} K_{B} \epsilon \epsilon_{0}}{2 n_{0}}},\qquad \zeta_{1} = \frac{e z_{v}}{T_{\mathrm{av}} K_{B}} \tilde{\zeta}_{1}, \\ &\zeta_{2} = \frac{e z_{v}}{T_{\mathrm{av}} K_{B}} \tilde{\zeta}_{2}, \qquad \phi= \frac{e z_{v}}{T_{\mathrm{av}} K_{B}} \tilde{\phi}, \\ & \varTheta= \frac{\tilde{T}- T_{0}}{T_{1} - T_{0}},\qquad \varOmega= \frac{C- C_{0}}{C_{1} - C_{0}}. \end{aligned} \end{aligned} (8) By using the dimensionless variables defined in Eq. (8), Eqs. (6) and (7a)–(7b) become \begin{aligned}& \frac{d^{2} \phi}{d y^{2}} = m^{2} \phi, \end{aligned} (9) \begin{aligned}& \begin{aligned} &\phi= \zeta_{1} \quad \text{at } y_{1} = h_{1} ( x,t ), \\ &\phi= \zeta_{2} \quad \text{at } y_{2} = h_{2} ( x,t ). \end{aligned} \end{aligned} (10) Moreover, we suppose that the zeta potential at the walls is small enough that the Debye–Hückel linearization is approximately applicable. The linear Poisson–Boltzmann equation is solved using the boundary conditions given in Eq. (10) to obtain the potential distribution function \begin{aligned} \phi =& \biggl( \frac{\zeta_{2} \sinh ( m h_{1} ) - \zeta_{1} \sinh ( m h_{2} )}{\sinh ( m h_{1} -m h_{2} )} \biggr) \cosh ( my ) \\ &{} + \biggl( \frac{\zeta_{1} \cosh ( m h_{2} ) - \zeta_{2} \cosh ( m h_{1} )}{\sinh ( m h_{1} -m h_{2} )} \biggr) \sinh ( my ). \end{aligned} (11) Here, m is the electroosmotic parameter. If we put $$\zeta_{1} = \zeta_{2}$$, then the solution of Eq. (11) reduces to the results of [8].
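As a quick numerical illustration of Eq. (11), a minimal sketch evaluating the potential profile across the channel follows (all parameter values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def phi(y, m, z1, z2, h1, h2):
    """Dimensionless EDL potential of Eq. (11) between walls y = h2 and y = h1."""
    s = np.sinh(m * h1 - m * h2)
    A = (z2 * np.sinh(m * h1) - z1 * np.sinh(m * h2)) / s
    B = (z1 * np.cosh(m * h2) - z2 * np.cosh(m * h1)) / s
    return A * np.cosh(m * y) + B * np.sinh(m * y)

# Illustrative values: electroosmotic parameter m, wall zeta potentials, wall positions
m, z1, z2 = 10.0, 1.0, 0.5
h1, h2 = 1.0, -1.0
y = np.linspace(h2, h1, 5)
print(np.round(phi(y, m, z1, z2, h1, h2), 4))
# phi equals z2 at y = h2 and z1 at y = h1, decaying toward the channel core
```

One can check directly from the formula that the boundary conditions of Eq. (10) are satisfied at both walls.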
### Analysis of flow Taking into account the viscous dissipation and Joule heating effects, the governing equations for the electrically conducting nanofluid driven by electroosmosis and peristalsis in the asymmetric microchannel are expressed as: \begin{aligned}& \frac{\partial \tilde{u}}{\partial \tilde{x}} + \frac{\partial \tilde{v}}{\partial \tilde{y}} =0, \end{aligned} (12) \begin{aligned}& \rho_{f} \biggl( \frac{\partial \tilde{u}}{\partial \tilde{t}} + \tilde{u} \frac{\partial \tilde{u}}{\partial \tilde{x}} + \tilde{v} \frac{\partial \tilde{u}}{\partial \tilde{y}} \biggr) = - \frac{\partial \tilde{p}}{\partial \tilde{x}} + \mu_{f} \biggl( \frac{\partial^{2} \tilde{u}}{\partial \tilde{x}^{2}} + \frac{\partial^{2} \tilde{u}}{\partial \tilde{y}^{2}} \biggr) + \rho_{e} E_{0} -\sigma_{e} B_{0}^{2} \tilde{u}, \end{aligned} (13) \begin{aligned}& \rho_{f} \biggl( \frac{\partial \tilde{v}}{\partial \tilde{t}} + \tilde{u} \frac{\partial \tilde{v}}{\partial \tilde{x}} + \tilde{v} \frac{\partial \tilde{v}}{\partial \tilde{y}} \biggr) =- \frac{\partial \tilde{p}}{\partial \tilde{y}} + \mu_{f} \biggl( \frac{\partial^{2} \tilde{v}}{\partial \tilde{x}^{2}} + \frac{\partial^{2} \tilde{v}}{\partial \tilde{y}^{2}} \biggr), \end{aligned} (14) \begin{aligned} \biggl( \frac{\partial \tilde{T}}{\partial \tilde{t}} + \tilde{u} \frac{\partial \tilde{T}}{\partial \tilde{x}} + \tilde{v} \frac{\partial \tilde{T}}{\partial \tilde{y}} \biggr) ={}& \frac{k_{f}}{(\rho c)_{f}} \biggl( \frac{\partial^{2} \tilde{T}}{\partial \tilde{x}^{2}} + \frac{\partial^{2} \tilde{T}}{\partial \tilde{y}^{2}} \biggr) \\ &{} +\gamma_{1} \biggl[ D_{B} \biggl( \frac{\partial \tilde{T}}{\partial \tilde{x}} \frac{\partial \tilde{C}}{\partial \tilde{x}} + \frac{\partial \tilde{T}}{\partial \tilde{y}} \frac{\partial \tilde{C}}{\partial \tilde{y}} \biggr) + \frac{D_{T}}{T_{m}} \biggl\{ \biggl( \frac{\partial \tilde{T}}{\partial \tilde{x}} \biggr)^{2} + \biggl( \frac{\partial \tilde{T}}{\partial \tilde{y}} \biggr)^{2} \biggr\} \biggr] \\ &{}+ \frac{\varPhi}{(\rho c)_{f}} + \frac{\sigma_{e} B_{0}^{2} \tilde{u}^{2}}{(\rho c)_{f}} + \frac{\sigma_{e} E_{0}^{2}}{(\rho c)_{f}}. \end{aligned} (15) Here Φ represents the viscous dissipation, and the concentration equation closes the system: \begin{aligned} &\varPhi = \mu_{f} \biggl[ 2 \biggl( \frac{\partial \tilde{u}}{\partial \tilde{x}} \biggr)^{2} +2 \biggl( \frac{\partial \tilde{v}}{\partial \tilde{y}} \biggr)^{2} + \biggl( \frac{\partial \tilde{u}}{\partial \tilde{y}} + \frac{\partial \tilde{v}}{\partial \tilde{x}} \biggr)^{2} \biggr], \\ & \biggl( \frac{\partial \tilde{C}}{\partial \tilde{t}} + \tilde{u} \frac{\partial \tilde{C}}{\partial \tilde{x}} + \tilde{v} \frac{\partial \tilde{C}}{\partial \tilde{y}} \biggr) = D_{B} \biggl( \frac{\partial^{2} \tilde{C}}{\partial \tilde{x}^{2}} + \frac{\partial^{2} \tilde{C}}{\partial \tilde{y}^{2}} \biggr) + \frac{D_{T}}{T_{m}} \biggl( \frac{\partial^{2} \tilde{T}}{\partial \tilde{x}^{2}} + \frac{\partial^{2} \tilde{T}}{\partial \tilde{y}^{2}} \biggr). \end{aligned} (16)
Here $$( \tilde{u}, \tilde{v} )$$ are the components of velocity along the $$\tilde{x}$$ and $$\tilde{y}$$ directions, respectively. Also, $$\rho_{f}$$, $$\mu_{f}$$, $$\sigma_{e}$$, $$\tilde{p}$$, $$\tilde{T}$$, $$k_{f}$$, $$(\rho c)_{f}$$, $$\tilde{C}$$, $$\gamma_{1}$$, $$D_{B}$$ and $$D_{T}$$ represent the density of the fluid, the dynamic viscosity of the fluid, the electrical conductivity, the pressure field, the temperature, the thermal conductivity of the fluid, the heat capacity of the fluid, the concentration field, the ratio of the effective heat capacity of the nanoparticles to the heat capacity of the fluid, the coefficient of Brownian diffusion and the coefficient of thermophoretic diffusion, respectively. The terms appearing on the left-hand side of Eq. (13) are the inertial forces (due to convection, or bulk motion); on the right-hand side, the first term is due to the pressure gradient, the second and third terms are due to viscous diffusion, the fourth term is the electrical force per unit volume, and the last term is the magnetic body force per unit volume. Furthermore, the last three terms appearing on the right-hand side of Eq. (15) represent dissipation due to friction, the magnetic field and the electric field, respectively. Using the non-dimensional variables of Eq. (8) in Eqs. (12)–(16), Eq. (12) is identically satisfied and Eqs. (13)–(16) become \begin{aligned}& R_{e} \alpha \biggl( \frac{\partial}{\partial t} + u \frac{\partial}{\partial x} + v \frac{\partial}{\partial y} \biggr) u = - \frac{\partial p}{\partial x} + \biggl( \alpha^{2} \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} \biggr) u + \beta \phi m^{2}- H_{r}^{2} u, \end{aligned} (17) \begin{aligned}& R_{e} \alpha^{3} \biggl( \frac{\partial}{\partial t} +u \frac{\partial}{\partial x} +v \frac{\partial}{\partial y} \biggr) v=- \frac{\partial p}{\partial y} + \alpha^{2} \biggl( \alpha^{2} \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} \biggr) v, \end{aligned} (18) \begin{aligned} R_{e} \alpha \biggl( \frac{\partial}{\partial t} + u \frac{\partial}{\partial x} + v \frac{\partial}{\partial y} \biggr) \varTheta ={}& \frac{1}{P_{r}} \biggl( \alpha^{2} \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} \biggr) \varTheta + E_{c} \biggl[ 2 \alpha^{2} \biggl( \frac{\partial u}{\partial x} \biggr)^{2} +2 \alpha^{2} \biggl( \frac{\partial v}{\partial y} \biggr)^{2} + \biggl( \frac{\partial u}{\partial y} + \alpha^{2} \frac{\partial v}{\partial x} \biggr)^{2} \biggr] \\ &{}+ \biggl[ N_{b} \biggl( \alpha^{2} \frac{\partial \varOmega}{\partial x} \frac{\partial \varTheta}{\partial x} + \frac{\partial \varOmega}{\partial y} \frac{\partial \varTheta}{\partial y} \biggr) + N_{t} \biggl\{ \alpha^{2} \biggl( \frac{\partial \varTheta}{\partial x} \biggr)^{2} + \biggl( \frac{\partial \varTheta}{\partial y} \biggr)^{2} \biggr\} \biggr] \\ &{}+\gamma_{2} + E_{c} H_{r}^{2} u^{2}, \end{aligned} (19) \begin{aligned}& \alpha \biggl( \frac{\partial}{\partial t} + u \frac{\partial}{\partial x} + v \frac{\partial}{\partial y} \biggr) \varOmega = \frac{1}{S_{c}} \biggl( \alpha^{2} \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} \biggr) \varOmega + \frac{1}{S_{c}} \frac{N_{t}}{N_{b}} \biggl( \alpha^{2} \frac{\partial^{2}}{\partial x^{2}} + \frac{\partial^{2}}{\partial y^{2}} \biggr) \varTheta. \end{aligned} (20)
Here, $$R_{e}$$, α, β, $$H_{r}$$, $$P_{r}$$, $$E_{c}$$, $$N_{b}$$, $$N_{t}$$, $$S_{c}$$, Θ and Ω are the Reynolds number, wave number, mobility of the medium, Hartmann number, Prandtl number, Eckert number, Brownian motion parameter, thermophoresis parameter, Schmidt number, dimensionless temperature and concentration field, respectively. Applying the long-wavelength approximation and ignoring terms with high powers of α, Eqs. (17)–(20) reduce to \begin{aligned}& \frac{\partial p}{\partial x} = \frac{\partial^{2} u}{\partial y^{2}} + \beta \phi m^{2}- H_{r}^{2}u, \end{aligned} (21) \begin{aligned}& \frac{\partial p}{\partial y} =0, \end{aligned} (22) \begin{aligned}& \frac{\partial^{2} \varTheta}{\partial y^{2}} + B_{r} \biggl( \frac{\partial u}{\partial y} \biggr)^{2} + P_{r} N_{b} \biggl( \frac{\partial \varOmega}{\partial y} \frac{\partial \varTheta}{\partial y} \biggr) + P_{r} N_{t} \biggl( \frac{\partial \varTheta}{\partial y} \biggr)^{2} +\gamma_{3} + B_{r} H_{r}^{2} u^{2} = 0, \end{aligned} (23) \begin{aligned}& \frac{1}{S_{c}} \frac{\partial^{2} \varOmega}{\partial y^{2}} + \frac{1}{S_{c}} \frac{N_{t}}{N_{b}} \frac{\partial^{2} \varTheta}{\partial y^{2}} = 0. \end{aligned} (24) By cross-differentiation we eliminate the pressure term from the dimensionless Eqs. (21) and (22) and obtain a single differential equation for the stream function. Let us define the stream function Ψ by $${u} = \frac{\partial \varPsi}{\partial y}$$, $$v = - \frac{\partial \varPsi}{\partial x}$$, which satisfies the continuity Eq. (12). Equations (21), (23) and (24) can then be expressed in terms of the stream function as: \begin{aligned}& \frac{\partial p}{\partial x} = \frac{\partial^{3} \varPsi}{\partial y^{3}} + \beta \phi m^{2}- H_{r}^{2} \frac{\partial \varPsi}{\partial y}, \end{aligned} (25) \begin{aligned}& \frac{\partial^{4} \varPsi}{\partial y^{4}} - H_{r}^{2} \frac{\partial^{2} \varPsi}{\partial y^{2}} + \beta m^{2} \frac{\partial \phi}{\partial y} =0, \end{aligned} (26) \begin{aligned}& \frac{\partial^{2} \varTheta}{\partial y^{2}} + P_{r} N_{b} \biggl( \frac{\partial \varOmega}{\partial y} \frac{\partial \varTheta}{\partial y} \biggr) + P_{r} N_{t} \biggl( \frac{\partial \varTheta}{\partial y} \biggr)^{2} +\gamma_{3} + B_{r} \biggl( \frac{\partial^{2} \varPsi}{\partial y^{2}} \biggr)^{2} + H_{r}^{2} B_{r} \biggl( \frac{\partial \varPsi}{\partial y} \biggr)^{2} =0, \end{aligned} (27) \begin{aligned}& \frac{\partial^{2} \varOmega}{\partial y^{2}} + \frac{N_{t}}{N_{b}} \frac{\partial^{2} \varTheta}{\partial y^{2}} =0. \end{aligned} (28) The boundary conditions in terms of the stream function Ψ are: \begin{aligned}& \frac{\partial \varPsi}{\partial y} =0,\qquad \varPsi = \frac{F}{2},\qquad \varTheta =0, \qquad \varOmega =0 \quad \text{at } y= h_{1} ( x,t ), \end{aligned} (29a) \begin{aligned}& \frac{\partial \varPsi}{\partial y} =0,\qquad \varPsi =- \frac{F}{2}, \qquad \varTheta =1,\qquad \varOmega =1 \quad \text{at } y= h_{2} ( x,t ). \end{aligned} (29b) Here the no-slip conditions are imposed at the walls of the channel. Also, we have introduced two extra stream-function boundary conditions for the purpose of solving a fourth-order differential equation. The flow rate F in its non-dimensional form is defined as $$F = A_{0} e^{-Bt}$$, where B and $$A_{0}$$ are constants. Negative or positive flow rates depend on the value of the constant $$A_{0}$$.
If $$A_{0} <0$$ then $$F <0$$; similarly, $$F >0$$ if $$A_{0} >0$$. A positive flow rate indicates that the flow is in the direction of peristaltic pumping. A negative flow rate means that the flow opposes the peristaltic motion, which is known as reverse pumping. It was found experimentally in [9] that the blood flow rate decreases exponentially with the passage of time. The authors of that paper also showed that the variation of the blood flow rate is independent of the structural aspects of the microchannel.

## Solution methodology

An exact solution of the above PDEs (25)–(28), together with the related boundary conditions (29a) and (29b), is not possible because the equations are nonlinear and highly coupled. Therefore, solutions of the above equations are computed numerically using the Mathematica software.
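As a complement, the following is an illustrative numerical sketch (not the authors' Mathematica code) of how a reduced momentum balance of the form (21) can be solved with SciPy. Everything here is assumed purely for demonstration: a symmetric channel of half-width h, a Debye–Hückel-type potential φ(y) = cosh(my)/cosh(mh), a prescribed pressure gradient, and arbitrary parameter values.

    import numpy as np
    from scipy.integrate import solve_bvp

    m, beta, Hr, dpdx, h = 3.0, 1.0, 1.0, -1.0, 1.0   # assumed illustrative values

    def phi(y):
        # assumed Debye-Hueckel-type potential profile, phi(+-h) = 1
        return np.cosh(m * y) / np.cosh(m * h)

    def rhs(y, U):
        # U[0] = u, U[1] = u'; from Eq. (21): u'' = dp/dx - beta*m^2*phi + Hr^2*u
        return np.vstack([U[1], dpdx - beta * m**2 * phi(y) + Hr**2 * U[0]])

    def bc(Ua, Ub):
        # no-slip at both walls: u(-h) = u(h) = 0
        return np.array([Ua[0], Ub[0]])

    y = np.linspace(-h, h, 101)
    sol = solve_bvp(rhs, bc, y, np.zeros((2, y.size)))
    u = sol.sol(y)[0]   # axial velocity profile across the channel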
## Graphical analysis

The effects of the relevant parameters on the flow profiles (velocity, temperature, and concentration) are discussed graphically in this section. The effects of various parameters, namely the Hartmann number $$H_{r}$$, electroosmotic parameter m, mobility of the medium β, different zeta potentials $$\zeta_{1}$$ and $$\zeta_{2}$$, Joule heating parameter $$\gamma_{3}$$, thermophoresis parameter $$N_{t}$$, Brownian motion parameter $$N_{b}$$, Prandtl number $$P_{r}$$ and Brinkman number $$B_{r}$$, on the flow quantities, i.e., velocity u, temperature Θ, concentration Ω and pressure gradient $$dp/dx$$, are exhibited in Figs. 2–9.

### Characteristics of flow

This subsection gives a detailed analysis of the velocity distribution. Figures 2(a)–(e) show the changes in the velocity profile across the microfluidic channel under the influence of the Hartmann number $$H_{r}$$, electroosmotic parameter m, mobility of the medium β, and zeta potentials $$\zeta_{1}$$ and $$\zeta_{2}$$. Figure 2(a) demonstrates that the axial velocity decreases in the central region of the channel as $$H_{r}$$ increases, whilst the reverse trend is seen near the channel walls: the axial velocity falls as the magnetic field strength rises. Since the magnetic field and the axial velocity are perpendicular to each other, they produce a Lorentz force, which has a propensity to slow the movement of the fluid. Thus increasing $$H_{r}$$ decelerates the axial flow in the middle area of the channel, with a compensating acceleration close to the channel walls. Figure 2(b) shows that the velocity increases in the subregion $$-0.1\leq y\leq0.1$$ as m grows. Since m is the ratio of the conduit height to the Debye length $$\lambda_{D}$$, an increase in m corresponds to a decrease in $$\lambda_{D}$$ and hence a thinner EDL, so that a large amount of fluid flows rapidly in the central region. Figure 2(c) portrays that the axial velocity increases with the mobility of the medium in the middle region of the conduit, while it decreases near the walls, because β is directly proportional to the Helmholtz–Smoluchowski velocity $$U_{\mathrm{HS}}$$. Physically this can be interpreted as follows: the fluid velocity is reduced by a thicker EDL, i.e., the presence of the EDL retards the flow. Figure 2(d) shows the effect of the zeta potential $$\zeta_{1}$$ of the upper microchannel wall on the axial velocity distribution. It is observed that a change in $$\zeta_{1}$$ significantly alters the axial velocity distribution of the EOF. As the zeta potential of the upper wall increases, the velocity increases at the lower wall, while the opposite behavior is observed at the upper wall of the channel. In Fig. 2(e), a parallel behavior of the velocity distribution is observed for the zeta potential $$\zeta_{2}$$. For different values of $$\zeta_{1}$$ and $$\zeta_{2}$$, the point of intersection is not exactly in the middle of the channel. Clearly, a higher value of the zeta potential produces a stronger EDL and thus a decrease in the fluid velocity. This zeta-potential phenomenon produces different flow rates at different locations in the microchannel, thus changing the momentum flux. The presence of a zeta potential in an EOF is a key mechanism for controlling fluid flow in the microchannel. We verified that our results are consistent with previous studies without zeta potential [11].

### Characteristics of pumping

Transport by peristalsis is closely related to the concept of mechanical pumping; consequently, it is worthwhile to study the pumping behavior from the present perspective. Figure 3(a) highlights that increasing the Hartmann number $$H_{r}$$ increases the magnitude of the pressure gradient. Since $$H_{r}$$ is the ratio of the Lorentz (electromagnetic) force to the viscous force, higher values of the Hartmann number indicate a stronger Lorentz force; hence more pressure is needed to overcome it. Figure 3(b) shows that raising the value of m increases the magnitude of the pressure gradient. Likewise, it is worth noting that the pumping features can be modified through the EDL phenomenon: the pumping process can be regulated by thickening or thinning the EDL. Similarly, increasing the mobility of the medium β increases the magnitude of the pressure gradient, as shown in Fig. 3(c). Figures 3(d) and 3(e) explain the impact of the zeta potentials $$\zeta_{1}$$ and $$\zeta_{2}$$ on the axial pressure gradient. It is also found that the pressure gradient declines significantly when the zeta potential of the upper and lower walls is increased.

### Characteristics of trapping

Trapping is a peristaltic pumping mechanism in which a set of flow streamlines forms a closed circulating region, called a bolus, at a certain volumetric flow rate. Figures 4–6 give an insight into the structural changes of the streamlines caused by the Hartmann number $$H_{r}$$, the mobility of the medium β and the electroosmotic parameter m in the microfluidic channel. They also illustrate that bolus formation occurs near the central line of the channel. Furthermore, it is shown that the size of the trapped bolus decreases as the strength of the magnetic field increases, and the bolus disappears for a sufficiently strong magnetic field. Moreover, since a higher zeta potential is applied at the upper wall than at the lower wall, the streamlines circulate significantly at the upper wall. Similarly, Figs. 7(a)–(c) depict that the number of trapped boluses increases at the upper wall, while Figs. 8(a)–(c) show that the number of trapped boluses increases at the lower wall but decreases at the upper wall. Figures 4–8 demonstrate that the accumulation of streamlines is far away from the center when the EDL is strong. It is concluded that as the zeta potential increases, the width of the EDL also increases. Thus the streamlines form a strongly closed region, which is transmitted at the wave velocity in the forward direction.
This phenomenon will help enhance the flow in the microfluidic device.

### Heat and concentration characteristics

The generation of Joule heating during electroosmotic flow is an inherent characteristic; this effect is caused by the electrical resistance produced by the electrolyte. Figures 9–14 show the influence of the Joule heating parameter $$\gamma_{3}$$, electroosmotic parameter m, Brinkman number $$B_{r}$$, Hartmann number $$H_{r}$$, Prandtl number $$P_{r}$$, Brownian motion parameter $$N_{b}$$, and thermophoresis parameter $$N_{t}$$ on the temperature and concentration profiles. Figure 9(a) shows that the temperature rises very quickly in the central area of the channel as the Joule heating parameter $$\gamma_{3}$$ increases, whereas the effect is insignificant near the channel walls. The Joule heating parameter $$\gamma_{3}$$ is directly proportional to the square of the electric field, hence a stronger electric field results in a rise in the temperature, while the concentration Ω falls with the increase in $$\gamma_{3}$$, as shown in Fig. 9(b). Figure 10(a) illustrates the effect of the Brinkman number $$B_{r}$$ on the temperature distribution near the middle section of the conduit. Since $$B_{r}$$ is the ratio of viscous dissipation to molecular conduction, a higher $$B_{r}$$ value means that less of the heat generated by viscous dissipation is conducted away; thus the temperature rises remarkably. Physically, the dominating aspect in $$B_{r}$$ is the viscosity, which produces resistance. This resistance causes fluid particles to collide, and the collisions of fluid particles are responsible for the increase in temperature. Figure 10(b) shows that the concentration declines as $$B_{r}$$ increases. The variation of the temperature distribution with the Hartmann number $$H_{r}$$ can be observed in Fig. 11(a): the temperature increases with increasing Hartmann number, and the converse is shown for the concentration in Fig. 11(b). This rise is more significant in the central area of the channel for higher values of $$H_{r}$$, because $$H_{r}$$ describes the Lorentz forces, which are resistive forces; they are used here to control the turbulence in the fluid flow. Figure 12(a) shows the effect of $$P_{r}$$ on the temperature distribution. The Prandtl number is directly proportional to the viscosity and specific heat, and inversely proportional to the thermal conductivity. It is observed that the temperature increases with the Prandtl number, most visibly in the middle area of the conduit. Physically, the temperature depends strongly on both thermal and momentum diffusivity. For the concentration, however, the reverse trend is observed in Fig. 12(b): the concentration decreases for increasing values of $$P_{r}$$. Figure 13(a) illustrates that the temperature increases as the magnitude of $$N_{b}$$ increases, since $$N_{b}$$ plays an accelerating role in the temperature profile. This situation arises from the random motion of molecules, which raises the temperature profile. Similarly, the concentration increases as $$N_{b}$$ increases, as shown in Fig. 13(b): since $$N_{b}$$ is directly proportional to the concentration gradient and inversely proportional to the viscosity and the diffusion coefficient, an increase in $$N_{b}$$ corresponds to a decrease in the diffusion coefficient and hence an increase in the concentration gradient.
Therefore, the concentration profile increases. Figure 14(a) shows that the temperature distribution rises as $$N_{t}$$ increases, since $$N_{t}$$ is directly proportional to the temperature gradient. As $$N_{t}$$ increases, the temperature gradient also increases due to the increase in internal energy; therefore the temperature profile increases. On the other hand, Fig. 14(b) illustrates that the concentration decreases with an increase in $$N_{t}$$: when the temperature increases, the number of collisions between particles increases, and these collisions disturb the concentration of the fluid. Thus the concentration decreases. Our work achieves better approximations than [7].

## Summary and conclusions

The purpose of this study was to investigate the electroosmotic peristaltic pumping of an MHD nanofluid in an asymmetric microfluidic channel with different zeta potentials. Joule heating and viscous dissipation effects were likewise considered in this model. Suitable boundary conditions have been utilized to obtain the solution of the highly nonlinear and coupled PDEs. The significant results of this study are summarized as follows:

• The axial velocity increases in the middle section of the channel, with a reduction in the vicinity of the walls, as the electroosmotic parameter and the mobility of the medium increase.

• The magnitude of the axial pressure gradient first decreases and then increases with the increase of the electroosmotic parameter, Hartmann number, mobility of the medium and the different zeta potentials.

• Since a higher potential is applied at the upper wall than at the lower wall, the streamlines circulate significantly close to the wall where the potential is higher.

• The structure of the trapped bolus depends strongly on the electroosmotic parameter, Hartmann number, mobility of the medium and high zeta potentials.

• The heat transfer rate is influenced by the energy dissipation caused by the Joule heating effect.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9755990505218506, "perplexity": 3322.6785016589292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710808.72/warc/CC-MAIN-20221201085558-20221201115558-00625.warc.gz"}
https://ask.sagemath.org/questions/47240/revisions/
### Problem with integrating the expression of M

Hi. I have the following code that I want to integrate and differentiate, but I am stuck at the expression of M, which Sage is unable to integrate. Is my coding wrong?

    c,t = var('c t')
    Pi = RR.pi()
    G = integrate(sqrt(1-t^2)*(t+c),t,-0.9,0.9); G
    H = G.diff(c); H
    L = integrate(-(t+c)/(sqrt(1-t^2)),t,-0.9,0.9); L
    M = integrate(1/(sqrt(1-t^2)*(t-c)),t,-0.9,0.9); M  # cannot seem to integrate this wrt t
    I = (c^2-1)*(L+(1-c^2)*M); I  # equation I that involves M
    P = I.diff(c); P  # differentiate I wrt c to obtain equation P

I have tried integrating M by hand, which gives me a closed form involving a log function, but I can't seem to integrate it here using Sage.
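Not an authoritative fix, but one thing worth noting: the integrand of M has a pole at t = c, so for c between −0.9 and 0.9 the integral genuinely diverges, and Sage/Maxima cannot return a closed form without knowing where c lies. Declaring an assumption often unblocks the symbolic integration; the following is a sketch, and the exact closed form returned may vary with the Maxima version:

    c, t = var('c t')
    assume(c > 1)   # assumed: places the pole t = c outside [-0.9, 0.9]
    M = integrate(1/(sqrt(1 - t^2)*(t - c)), t, -0.9, 0.9)
    show(M)

    # numerical value for a concrete c outside the integration range
    Mnum = numerical_integral(1/(sqrt(1 - t^2)*(t - 1.5)), -0.9, 0.9)[0]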
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.994045078754425, "perplexity": 1129.3670579409916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540515344.59/warc/CC-MAIN-20191208230118-20191209014118-00365.warc.gz"}
http://clay6.com/qa/11339/a-block-is-placed-on-a-rough-horizontal-plane-attached-with-an-elastic-spri
# A block is placed on a rough horizontal plane attached to an elastic spring as shown. Initially the spring is unstretched. If the plane is now gradually lifted from $\theta=0^{\circ}$ to $\theta=90^{\circ}$, then the graph showing extension in the spring (x) versus angle $(\theta)$ is

$x=0$ as long as $mg \sin \theta < \mu mg \cos\theta$, i.e., no extension takes place while the gravitational component along the plane is less than the frictional force.

For angles $\theta > \tan ^{-1} \mu$, x increases gradually, with the force balance
$kx+\mu mg \cos \theta =mg \sin \theta$
$x=\large\frac{mg \sin \theta-\mu mg \cos \theta}{k}$
where k is the spring constant.

Hence a is the correct answer.
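To see the shape of the graph, here is a small numerical sketch; the values of m, mu and k are illustrative assumptions, not given in the problem:

    import numpy as np

    m, g, mu, k = 1.0, 9.8, 0.5, 100.0        # assumed illustrative values
    theta = np.radians(np.linspace(0, 90, 91))
    x = (m * g * np.sin(theta) - mu * m * g * np.cos(theta)) / k
    x[theta < np.arctan(mu)] = 0.0            # no extension below the friction angle
    # x is zero up to theta = arctan(mu) and then increases smoothly,
    # matching graph (a).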
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8292406797409058, "perplexity": 1600.1281571902437}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818686465.34/warc/CC-MAIN-20170920052220-20170920072220-00668.warc.gz"}
http://www.him.uni-bonn.de/programs/current-trimester-program/topology-2016/workshop-fusion-systems-and-equivariant-algebraic-topology/schedule/
# Schedule of the Workshop "Fusion systems and equivariant algebraic topology"

## Monday, November 21

10:30 - 11:00 Registration & Welcome coffee
11:00 - 12:00 Radu Stancu: Fusion systems: survival kit
12:00 - 13:50 Lunch break
13:50 - 14:50 Antonio Díaz Ramos: Mackey functors for fusion systems
15:00 - 16:00 Sejong Park: Double Burnside rings and Mackey functors with applications to fusion systems
16:00 - 16:30 Tea and cake
16:30 - 17:30 Oihana Garaialde: Cohomology of the J2 group over F3 using a spectral sequence in fusion systems
afterwards Reception

## Tuesday, November 22

09:30 - 10:30 Jesper Grodal: Burnside rings in algebra and topology (part 1)
10:30 - 11:00 Group photo and coffee break
11:00 - 12:00 Radu Stancu: Saturation and the double Burnside ring
12:00 - 15:00 Lunch break and free time
15:00 - 16:00 Matthew Gelvin: Minimal characteristic bisets of fusion systems
16:00 - 16:30 Tea and cake
16:30 - 17:30 Nathaniel Stapleton: Transchromatic character theory for fusion systems

## Wednesday, November 23

09:30 - 10:30 Benjamin Böhme: The Dress splitting and equivariant commutative multiplications
10:30 - 11:00 Coffee break
11:00 - 12:00 Bob Oliver: Local structure of finite groups and of their p-completed classifying spaces
12:00 - 15:00 Lunch break and free time
15:00 - 16:00 Justin Lynd: Control of fixed points and centric linking systems
16:00 - 16:30 Tea and cake
16:30 - 17:30 Isabelle Laude: Maps between (uncompleted) classifying spaces of p-local finite groups

## Thursday, November 24

09:30 - 10:30 Jesper Grodal: Burnside rings in algebra and topology (part 2)
10:30 - 11:00 Coffee break
11:00 - 12:00 Rémi Molinier: Cohomology with twisted coefficients of linking systems and stable elements
12:00 - 14:00 Lunch break
14:00 - 15:00 Ergün Yalcin: Representation rings for fusion systems and dimension functions
16:00 - 16:30 Tea and cake

# Abstracts

## Benjamin Böhme: The Dress splitting and equivariant commutative multiplications

Let G be a finite group. The p-local Burnside ring of G splits into a product of rings which can be described in terms of Dress' classification of idempotent elements. The "first" factor is the Grothendieck ring of G-sets with isotropy a p-group and coincides with the Burnside ring of the p-fusion system of G upon p-localization. It plays an important role in Grodal's work on the uncompleted Segal conjecture. On the level of G-spectra, the Dress splitting induces a wedge decomposition of the p-local G-equivariant sphere spectrum, but only little is known about the multiplicative structure of the factors. Grodal showed that the first summand is a G-commutative ring spectrum in the strongest possible sense, but this is not true for the other summands, which in fact become contractible upon restriction to any p-subgroup. In light of recent work of Blumberg, Hill and Hopkins, it is clear that the existence of genuinely equivariant commutative multiplications on the wedge summands (so-called N∞ ring structures) is obstructed by the behaviour of co-induction of finite G-sets.

Video recording

## Antonio Díaz Ramos: Mackey functors for fusion systems

Mackey functors naturally appear in the context of (stable) equivariant cohomology. In this talk, we will introduce Mackey functors for fusion systems and comment on some of their applications. We will treat in more detail how to use Mackey functors to construct spectral sequences. In particular, we will explain how to build a "Lyndon-Hochschild-Serre"-type spectral sequence from a strongly closed subgroup. 
Recorded Talk

## Oihana Garaialde: Cohomology of the J2 group over F3 using a spectral sequence in fusion systems

Let p be a prime number, let Fp denote the finite field of p elements and let G be a p-group. Our aim is to compute the cohomology algebra H*(G; Fp) using spectral sequences. When G contains a non-trivial normal subgroup N, the Lyndon-Hochschild-Serre spectral sequence allows us to compute H*(G; Fp) from H*(N; Fp) and H*(G/N; Fp). However, if G is a simple group, no such spectral sequence can be used any more. Recently, in [1], the author constructs a new spectral sequence in fusion systems that can be used for certain simple groups. In this talk, we shall compute the cohomology algebra of the second sporadic Janko group J2 over F3 [2] using the aforementioned spectral sequence in fusion systems.

## Matthew Gelvin: Minimal characteristic bisets of fusion systems

If G is a finite group with Sylow p-subgroup S, the left and right multiplications of S on G give it a biset structure that is closely connected to the p-fusion system of G. The key properties of this biset were axiomatized by Linckelmann and Webb, resulting in the notion of a characteristic biset for an arbitrary saturated fusion system. In this talk I will outline joint work with Sune Reeh, which begins by parameterizing the characteristic bisets of a fusion system. An important consequence of this parameterization is the existence of a unique minimal characteristic biset. I will describe how in several respects, the MCB is the smallest group-like structure that induces the fusion system. Of particular note are the cases of constrained fusion systems — where the model actually is the MCB — and the centric linking system associated to a fusion system, which can be viewed as the centric part of the MCB. I will also describe how the construction of normalizers and centralizers can be realized in the context of MCBs, which leads to a pleasing coherence of definition in the case of centric subgroups.

Video recording

## Jesper Grodal: Burnside rings in algebra and topology

The Burnside ring of a finite group is the group completion of the semi-ring of finite G-sets under direct sum and cartesian product. This ring made its debut into algebraic topology via Segal's equivariant Hopf theorem, identifying the zeroth equivariant stable homotopy group as this ring — it has been a central object in equivariant algebraic topology ever since. In my two talks I'll survey some of the ways this ring, and its variants, show up in equivariant stable and unstable homotopy theory. In particular I'll look at the difference between "genuine" and "derived" equivariant homotopy theory. Stably this is dictated by the classical Segal conjecture proved by Carlsson in the 80's, whose modern formulation involves fusion systems. I'll also explain a more refined "uncompleted" version of this result, lying between stable and unstable, that I recently obtained.

Video recording (Part 2)

## Isabelle Laude: Maps between (uncompleted) classifying spaces of p-local finite groups

In the literature there are many results concerning the space of maps between p-completed classifying spaces of p-local finite groups, most notably work of Dwyer-Zabrodsky, Mislin and Broto-Levi-Oliver, but very little is known in the uncompleted case. In this talk I will present some of the first complete calculations in the uncompleted case and relate them to previously known results. 
## Justin Lynd: Control of fixed points and centric linking systems

The centric linking system of a saturated fusion system is an extension category that provides the bridge to the classifying space of the fusion system. The unique existence of linking systems was shown by Chermak, and Oliver subsequently showed how to interpret Chermak's proof within the homological obstruction theory for existence/uniqueness of centric linking systems that was outlined earlier by Broto, Levi, and Oliver. I will discuss some group/representation theoretic aspects of joint work with G. Glauberman that, once plugged into the Chermak-Oliver framework, help to give a proof of Chermak's theorem that does not depend on the classification of the finite simple groups. If time permits, I will explain how Chermak's method of proof and an old result of Glauberman help to shed some additional light on automorphisms of linking systems.

Video Recording

## Rémi Molinier: Cohomology with twisted coefficients of linking systems and stable elements

A theorem of Broto, Levi and Oliver describes the cohomology of the geometric realization of a linking system, with trivial coefficients, as the submodule of stable elements in the cohomology of the Sylow subgroup. When we are looking at twisted coefficients, the formula cannot be true in general, as pointed out by Levi and Ragnarsson, but we can try to understand under which conditions it holds. In this talk we will see some conditions under which we can express the cohomology of a linking system as stable elements.

Video recording [Unfortunately there is no sound starting from minute 30]

## Bob Oliver: Local structure of finite groups and of their p-completed classifying spaces

I will describe the close connection between the homotopy theoretic properties of the p-completed classifying space of a finite group G and the p-local group theoretic properties of G. One way in which this arises is in the following theorem, originally conjectured by Martino and Priddy: for finite groups G and H, BG^∧_p ≃ BH^∧_p if and only if G and H have the same p-local structure (the same conjugacy relations among p-subgroups). Another involves a description, in terms of the p-local properties of G, of the group Out(BG^∧_p) of homotopy classes of self equivalences of the space BG^∧_p. After stating some general results, I'll give a few examples and applications of both of these, especially in the case where G and H are finite simple groups of Lie type.

Video recording

## Sejong Park: Double Burnside rings and Mackey functors with applications to fusion systems

(Globally defined) Mackey functors appear naturally as, for example, cohomology and representation rings for finite groups. They can be viewed as additive functors defined on certain categories of finite groups whose endomorphism rings of objects are double Burnside rings. Mackey functors can be defined for fusion systems; also fusion systems can be viewed as idempotents in the double Burnside rings of finite p-groups. Using Mackey functors for fusion systems we will extend Dwyer's sharpness result on homology decomposition of classifying spaces of finite groups to some exotic fusion systems (joint with Antonio Díaz). Also, we will study the structure of the double Burnside rings of some finite groups with "ghost maps", identifying their idempotents and simple and projective modules (joint with Goetz Pfeiffer). 
Video recording

## Radu Stancu: Fusion systems: survival kit

For p a prime number, fusion systems on a finite p-group were introduced by Puig as an axiomatization of the p-local structure of a finite group and of a block algebra of a finite group. Broto, Levi and Oliver, aiming to solve the Martino-Priddy conjecture, independently developed the notion of fusion systems and constructed their homotopy theory. With this new approach, which helped in reformulating the conjecture, and using the classification of finite simple groups, Oliver succeeded in proving the Martino-Priddy conjecture. Since then, works by Chermak, Oliver and, jointly, by Glauberman and Lynd removed the dependence of the proof on the classification. Broadly speaking, the Martino-Priddy conjecture, now a theorem, claims that the p-local structure of a finite group G, i.e. the fusion system on a Sylow p-subgroup given by the conjugations in G, is equivalent to the p-local structure on the classifying space BG, i.e. its p-completion. In this introductory talk we define fusion systems and their saturation, and aim to give, through examples, their basic properties and some topological sides of the story.

Video recording

## Radu Stancu: Saturation and the double Burnside ring

When creating the homotopy theory of fusion systems, Broto, Levi and Oliver introduced the notion of a characteristic biset of a fusion system. As a basic example, a finite group G with Sylow p-subgroup S is a characteristic (S,S)-biset for the fusion system of the group G on S. In general, such a characteristic biset always exists for a saturated fusion system, even though it need not be unique. If one allows p-local coefficients, Ragnarsson constructed a characteristic idempotent in the double Burnside ring, and proved it is unique. In fact, the saturation of a fusion system and the existence of a characteristic biset are equivalent, as shown in joint work with Ragnarsson. In this talk we'll introduce the notion of the double Burnside ring and try to explain the strong connection between this ring and the saturation of a fusion system.

Video recording

## Nathaniel Stapleton: Transchromatic character theory for fusion systems

The transchromatic character maps for Morava E-theory are a generalization of the classical character map from the representation ring of a finite group to class functions. In this talk I will present joint work with Sune Precht Reeh and Tomer Schlank extending the Morava E-theory transchromatic character maps from finite groups to fusion systems. One of the key technical ingredients is a functorial evaluation map from the free loop space of a fusion system times the circle back to the fusion system.

## Ergün Yalcin: Representation rings for fusion systems and dimension functions

I will give a talk on recent joint work with Sune Precht Reeh. In this work, we define the representation ring of a saturated fusion system ℱ as the Grothendieck ring of the semiring of ℱ-stable representations, and study the dimension functions of ℱ-stable representations using the transfer map induced by the characteristic idempotent of ℱ. We find a list of conditions for an ℱ-stable super class function to be realized as the dimension function of an ℱ-stable virtual representation. The main motivation for studying this problem is to find new methods for constructing finite group actions on homotopy spheres with a given isotropy type.

Video recording
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8073931932449341, "perplexity": 957.7039203542719}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647322.47/warc/CC-MAIN-20180320072255-20180320092255-00642.warc.gz"}
https://tex.stackexchange.com/questions/532211/latex-tikz-compare-two-def-arguments
# LaTeX Tikz: Compare two \def arguments

I am writing a macro definition that draws dimensions in the style of technical drawing. I need to compare two of the arguments to know which coordinate is higher up, but the console doesn't understand the condition I used as the \ifthenelse argument.

    % (#1,#2): Starting coordinate
    % (#3,#4): Ending coordinate
    % #5: Vertical upwards distance from the body
    % #6: Dimension text
    \usetikzlibrary{calc}
    \def\DimensionTop(#1,#2)(#3,#4)[#5,#6]{
      \ifthenelse{#2>#4}
      { % If point (#1,#2) is higher than (#3,#4)
        \coordinate (D1) at ($(#1,#2) + (0,#5)$);
        \coordinate (D2) at ($(#3,#4) + (0,#2-#4) + (0,#5)$);
      }
      { % If point (#3,#4) is higher than (#1,#2)
        \coordinate (D1) at ($(#1,#2) + (0,#4-#2) + (0,#5)$);
        \coordinate (D2) at ($(#3,#4) + (0,#5)$);
      }
      \draw (#1,#2) -- (D1) -- ++(0,0.2);
      \draw (#3,#4) -- (D2) -- ++(0,0.2);
      \draw[<->, >=latex, thin] (D1) -- (D2) node[fill=white, midway] {$\mathtt{#6}$};
    }

The function is used like this in the main document:

    \documentclass[border=2pt,convert={outext=.png}]{standalone}
    \usepackage{tikz}
    \usetikzlibrary{patterns}
    \usetikzlibrary{arrows}
    \begin{document}
    \begin{tikzpicture}
    % Custom command for the background grid, ignore
    \GuideCartesian(-2,-2)(8,8);
    \draw[very thick] (0,4) -- (2,5);
    \DimensionTop(0,4)(2,5)[1,1.50];
    \end{tikzpicture}
    \end{document}

Since the console doesn't understand the condition, it goes directly to the else case, which works when the point on the right is higher than the point on the left, as in the picture. What would be a correct way to express this condition?

• Don't show only snippets. Make a complete example, that makes testing much easier. – Ulrike Fischer Mar 11 at 16:43

You only need to compute the maximum of two y coordinates. Since you are using calc, you may just do

    \documentclass[tikz,border=3mm]{standalone}
    \usetikzlibrary{calc}
    \begin{document}
    \begin{tikzpicture}[pics/dimension top/.style={code={
        \tikzset{dimension top/.cd,#1}
        \def\pv##1{\pgfkeysvalueof{/tikz/dimension top/##1}}%
        \draw let \p1=\pv{first},\p2=\pv{second},\n1={max(\y1,\y2)} in
          (\x1,\y1) -- (\x1,\n1+0.2cm)
          (\x2,\y2) -- (\x2,\n1+0.2cm)
          (\x1,\n1) -- node[above,midway]{\pv{text}} (\x2,\n1);}},
      dimension top/.cd,first/.initial={(0,0)},second/.initial={(1,0)},
      text/.initial=]
    \draw (0,4) -- (2,5);
    \pic{dimension top={first={(0,4)},second={(2,5)},text=1.5}};
    \end{tikzpicture}
    \end{document}
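As a side note, if one wants to keep the structure of the original macro, a minimal sketch of an alternative fix (assuming the coordinates are plain decimal numbers) is to compare them with TeX's \ifdim primitive, which handles decimal comparisons once a unit is appended:

    \def\DimensionTop(#1,#2)(#3,#4)[#5,#6]{%
      \ifdim #2pt>#4pt % point (#1,#2) is higher than (#3,#4)
        \coordinate (D1) at ($(#1,#2) + (0,#5)$);
        \coordinate (D2) at ($(#3,#4) + (0,#2-#4) + (0,#5)$);
      \else % point (#3,#4) is at least as high as (#1,#2)
        \coordinate (D1) at ($(#1,#2) + (0,#4-#2) + (0,#5)$);
        \coordinate (D2) at ($(#3,#4) + (0,#5)$);
      \fi
      % ... rest of the macro as in the question ...
    }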
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9527761936187744, "perplexity": 4459.471383743593}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00366.warc.gz"}
http://anon-gap.tk/cheat-sheet-for-differential-calculus-formulas.html
# Cheat sheet for differential calculus formulas

## Calculus cheat sheet

Follow this link for a comprehensive formula sheet. You are strongly encouraged to do the included exercises to reinforce the ideas. With parametric and polar coordinates you will always need to substitute. To make studying and working out problems in calculus easier, make sure you know the basic formulas for geometry, trigonometry, integral calculus, and differential calculus. As *Calculus For Dummies* by Mark Ryan puts it, calculus requires knowledge of other math disciplines.

Formulas you need to know for calculus include the quadric surfaces:

- Hyperboloid of one sheet: $x^2/a^2 + y^2/b^2 - z^2/c^2 = 1$
- Hyperboloid of two sheets: $z^2/c^2 - x^2/a^2 - y^2/b^2 = 1$ (major axis: $z$, because it is the one not subtracted)
- Elliptic paraboloid: $z = x^2/a^2 + y^2/b^2$ (major axis: $z$, because it is the variable not squared)
- Hyperbolic paraboloid: $z = y^2/b^2 - x^2/a^2$ (major axis: $z$, because it is the variable not squared)
- Elliptic cone: $z^2/c^2 = x^2/a^2 + y^2/b^2$ (major axis: $z$)

Vector calculus formulas (fundamental theorems). Here $F(x,y,z) = P(x,y,z)\,i + Q(x,y,z)\,j + R(x,y,z)\,k$:

- Fundamental theorem of line integrals: if $F = \nabla f$ and the curve $C$ has endpoints $A$ and $B$, then $\int_C F \cdot dr = f(B) - f(A)$.
- Green's theorem (circulation-curl form): $\iint_D (\nabla \times F) \cdot k \, dA = \oint_C F \cdot dr$.
- Stokes' theorem: $\iint_S (\nabla \times F) \cdot n \, d\sigma = \oint_C F \cdot dr$, where $C$ is the edge curve of the surface $S$.

Basic properties and formulas: if $f(x)$ and $g(x)$ are differentiable functions (the derivative exists) and $c$ and $n$ are any real numbers, then $(c f(x))' = c f'(x)$. For powers of trigonometric functions in integrals, use double- and/or half-angle formulas to reduce the integrand; for line integrals with respect to arc length, match the differential in the $ds$. Other useful topics include elementary differential equations and slope fields.

These notes are absolutely not intended to be a substitute for a one-year freshman course in differential and integral calculus. Important mathematical terms are in boldface; key formulas and concepts are boxed and highlighted. Related resources include "Calculus 1 Cheat Sheet with Notebook: All formulas and equations from first semester calculus + bonus notebook with over 100 quotes from famous scientists" (Quotebook Notebook) by Jonathan Tullis.
Limit definitions:

- Precise definition: we say $\lim_{x \to a} f(x) = L$ if for every $\varepsilon > 0$ there is a $\delta > 0$ such that whenever $0 < |x - a| < \delta$, then $|f(x) - L| < \varepsilon$.
- Limit at infinity: we say $\lim_{x \to \infty} f(x) = L$ if we can make $f(x)$ as close to $L$ as we want by taking $x$ large enough and positive.

See also the Matrix Differential Calculus Cheat Sheet, Blue Note 142, by Stefan Harmeling (rules for differentials: let $\alpha$, $a$, $A$ be constants and $\varphi$, $\psi$, $u$, $v$, $x$, $f$, $U$, $V$, $F$ be functions), and the AP Calculus AB Cram Sheet.
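As a quick symbolic sanity check of the constant-multiple rule listed above (SymPy assumed available):

    import sympy as sp

    x, c = sp.symbols('x c')
    f = sp.Function('f')
    # constant-multiple rule: (c*f(x))' = c*f'(x)
    assert sp.diff(c * f(x), x) == c * sp.Derivative(f(x), x)
    # power rule example: (x^5)' = 5*x^4
    assert sp.diff(x**5, x) == 5 * x**4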
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8947166800498962, "perplexity": 3304.096300723408}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998369.29/warc/CC-MAIN-20190617022938-20190617044938-00063.warc.gz"}
http://math.stackexchange.com/questions/325779/let-x-a-random-variable-with-a-strictly-increasing-distrubution-function-f-x/325787
# Let $X$ be a random variable with a strictly increasing distribution function $F_X$. Show that $Y=F_X(X)$ has a uniform distribution on $(0,1)$.

Let $X$ be a random variable with a strictly increasing distribution function $F_X$. Show that the random variable $Y=F_X(X)$ has a uniform distribution on $(0,1)$.

Here is what I thought:

\begin{align} F_Y(y)&=P(Y\leq y) \\ &=P(F_X(X)\leq y) \\ &=P(P(X\leq X)\leq y) \end{align}

The chance that $X\leq X$ is always $1$, right? Therefore I would say that:

$$F_Y(y)=P(1\leq y)=1_{[1,\infty)}(y)$$

But this is wrong... Why is this?

-

\begin{align} F_Y(y)&=P(Y\leq y) \\ &=P(F_X(X)\leq y) \\ &=P(X\leq F_X^{-1}(y)) \text{ (the inverse exists as $F_X$ is strictly increasing)}\\ &=F_X(F_X^{-1}(y))\\ &=y, \ 0\le y \le 1 \end{align}

Your move from step 2 to step 3 is illogical. $P(\omega:F_X(X)(\omega)\le y)$ is the exact meaning of the second statement, which can be rewritten using the inverse. Your 3rd equation does not have any such set-based notion.

I don't understand your last equality. Why must $y$ be in $[0,1]$? – Kasper Mar 9 '13 at 20:00

$Y=F_X(X)$, so the random variable $Y$ can take values only in $[0,1]$. – Bravo Mar 9 '13 at 20:02

My brain gets completely confused when I read $F_X(X)(\omega)$. So this means $P(X\leq X)(\omega)$? How can I look at this? Is this a probability function or a function from $\Omega \to \Bbb R$? – Kasper Mar 9 '13 at 20:07

@Bravo, why can't one write $P(F_X(X) \leq y) = P(P(X\leq X)\leq y)$? Isn't this the exact meaning? – user111854 Apr 5 at 9:04
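A quick simulation illustrating the result (an informal check, not part of the proof), using an exponential $X$ so that $F_X$ is strictly increasing:

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.exponential(size=100_000)   # X ~ Exp(1), F(x) = 1 - exp(-x) strictly increasing
    y = 1 - np.exp(-x)                  # Y = F_X(X)
    print(y.mean(), y.var())            # ~0.5 and ~1/12, as for Uniform(0,1)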
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 2, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9996343851089478, "perplexity": 742.3520079764447}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507452681.5/warc/CC-MAIN-20141017005732-00071-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.martinorr.name/blog/2015/10/26/period-relations-on-abelian-varieties/
# Martin's Blog

## Period relations on abelian varieties

Posted by Martin Orr on Monday, 26 October 2015 at 11:00

The Legendre period relation is a classical equation relating the periods and quasi-periods of an elliptic curve, as defined last time. I will discuss this relation, and then more generally discuss how the existence of polarisations implies relations between the periods of higher-dimensional abelian varieties.

These examples motivate the introduction of the geometric motivic Galois group, which gives an upper bound for the transcendence degree of periods of an abelian variety (or indeed any algebraic variety). This upper bound is conjectured to be equal to the actual transcendence degree. I had intended to discuss the geometric motivic Galois group in this post too, but I decided that it was getting too long so I will postpone that to another time.

### The Legendre period relation

Let $E$ be an elliptic curve over $\mathbb{C}$, given by a Weierstrass equation. Recall that there is a basis for $H^1_{dR}(E)$ represented by the differential forms of the second kind $\omega = dx/y$ (which is regular) and $\eta = x\,dx/y$ (which has a double pole at infinity). If we choose a basis $\gamma_1$, $\gamma_2$ for $H_1(E(\mathbb{C}), \mathbb{Z})$, then we can define the fundamental periods of $E$ as $\omega_1 = \int_{\gamma_1} \omega$, $\omega_2 = \int_{\gamma_2} \omega$ and the quasi-periods as $\eta_1 = \int_{\gamma_1} \eta$, $\eta_2 = \int_{\gamma_2} \eta$. The Legendre period relation asserts that $$\omega_1 \eta_2 - \omega_2 \eta_1 = \pm 2\pi i.$$ In the language of the previous post, the determinant of the extended period matrix of $E$ is $\pm 2\pi i$. Note that the sign in this equation ($+2\pi i$ or $-2\pi i$) depends on the ordering of $\gamma_1$ and $\gamma_2$ - this is chosen based on the standard orientation of $E(\mathbb{C})$ to ensure that we end up with $+2\pi i$.

Following the introduction to Deligne's paper on absolute Hodge classes, I want to give a simple proof that if $E$ is defined over a subfield $k$ of $\mathbb{C}$, then the Legendre period relation holds up to multiplication by a scalar in $k^\times$: $$\omega_1 \eta_2 - \omega_2 \eta_1 \in 2\pi i \cdot k^\times.$$ Note that if what we are really interested in is transcendence properties of the periods and quasi-periods, then an identity which holds up to multiplication by an element of $k^\times$ is as good as an exact identity.

The key point is that the de Rham cohomology classes represented by $\omega$ and $\eta$ are defined over $k$. Hence the extended period matrix expresses a basis for $H^1_{dR}(E/k)$ in terms of a basis for $H^1(E(\mathbb{C}), \mathbb{Q})$, via the standard comparison isomorphism $$H^1_{dR}(E/k) \otimes_k \mathbb{C} \cong H^1(E(\mathbb{C}), \mathbb{Q}) \otimes_{\mathbb{Q}} \mathbb{C}.$$ Since $H^2 \cong \bigwedge^2 H^1$, both in de Rham cohomology and in singular cohomology, it follows that the determinant of the extended period matrix is the coordinate of a $k$-basis element of $H^2_{dR}(E/k)$ relative to a $\mathbb{Q}$-basis element of $H^2(E(\mathbb{C}), \mathbb{Q})$.

But there is a pair of bases for $H^2_{dR}(E/k)$ and $H^2(E(\mathbb{C}), \mathbb{Q})$ which we already know how to compare. We can take the cycle class of a point in each cohomology theory, and we have $$cl_{dR}(pt) = 2\pi i \cdot cl_{B}(pt). \quad (*)$$ Recall that we proved this relation for $\mathbb{P}^1$, and it motivates the definition of Tate twists of Hodge structures. We can deduce that the same relation holds on an elliptic curve (or indeed any smooth projective curve) by considering a finite morphism $f : E \to \mathbb{P}^1$, say of degree $n$. The pullback of a point in $\mathbb{P}^1$ is $n$ points in $E$, and so $$f^* cl_{B}(pt) = n \cdot cl_{B}(pt)$$ and similarly for $cl_{dR}$. Dividing by $n$, the relation (*) for $\mathbb{P}^1$ implies the same relation for $E$.

Since $cl_{dR}(pt)$ spans $H^2_{dR}(E/k)$ over $k$ and $cl_{B}(pt)$ spans $H^2(E(\mathbb{C}), \mathbb{Q})$ over $\mathbb{Q}$, we conclude that $$\omega_1 \eta_2 - \omega_2 \eta_1 \in 2\pi i \cdot k^\times.$$ Observe that $cl_{B}(pt)$ is a generator for the $\mathbb{Z}$-module $H^2(E(\mathbb{C}), \mathbb{Z})$, and so in fact showing that the Legendre period relation holds exactly is equivalent to showing that $\omega \wedge \eta = \pm\, cl_{dR}(pt)$ in $H^2_{dR}(E/k)$. This is a purely algebraic result (it no longer involves integration as does the classical statement of the period relation). But I think that a purely algebraic proof of it is hard. Deligne sketches an analytic proof of this algebraic result.
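As an aside, the relation can be checked numerically in its classical form for complete elliptic integrals, $E(m)K(1-m) + E(1-m)K(m) - K(m)K(1-m) = \pi/2$, which is the elliptic-integral incarnation of the period relation above. A short sketch (SciPy's convention of taking the parameter $m$ rather than the modulus is assumed):

    import numpy as np
    from scipy.special import ellipk, ellipe

    m = 0.3   # any parameter value in (0, 1)
    lhs = (ellipe(m) * ellipk(1 - m) + ellipe(1 - m) * ellipk(m)
           - ellipk(m) * ellipk(1 - m))
    print(lhs - np.pi / 2)   # ~0 up to floating-point error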
### Algebraic cycles and relations between periods

At first sight it might look like the Legendre period relation provides a lower bound for the transcendence degree of the periods of an elliptic curve - it implies that the periods and quasi-periods cannot all be algebraic, since $2\pi i$ is transcendental. But really one should think of it as providing an upper bound for the transcendence degree over the field $\mathbb{Q}(2\pi i)$: indeed, it does not just tell us that there exists an algebraic relation between the periods and quasi-periods over $\mathbb{Q}(2\pi i)$, it tells us exactly what form that relation takes. The presence of $2\pi i$ in the field generated by periods is inevitable due to the issue of Tate twists.

A similar argument to the argument for the Legendre period relation shows that, for any abelian variety $A$ of dimension $g$ defined over $k$, the determinant of the extended period matrix is in $(2\pi i)^g \cdot k^\times$.

The Legendre period relation and the more general relation for the determinant of the extended period matrix are examples of the principle that algebraic cycles on a variety imply algebraic relations between the periods. In particular, the determinant relation comes from the fact that $\bigwedge^{2g} H^1(A) \cong H^{2g}(A)$, which is spanned by the cycle class of a point.

For another example of relations between periods implied by an algebraic cycle, consider a polarisation on the abelian variety $A$. A polarisation is a symplectic pairing on $H_1(A, \mathbb{Z})$, or equivalently an element of $H^2(A, \mathbb{Z})$. The definition of polarisation requires that it must be the cycle class of an ample divisor $D$ on $A$.

The algebraic cycle $D$ will also induce an element of $H^2_{dR}(A/k)$ and hence a symplectic pairing on $H^1_{dR}(A/k)$. The compatibility of the cycle class maps implies that, under the comparison isomorphism between de Rham and singular cohomology, the two pairings agree up to a factor of $2\pi i$.

Now if we choose symplectic bases for $H^1(A(\mathbb{C}), \mathbb{Q})$ and $H^1_{dR}(A/k)$ with respect to the respective symplectic forms induced by $D$, and use these bases to calculate the extended period matrix, then we will get a matrix in the general symplectic group $\mathrm{GSp}_{2g}$. This implies that the transcendence degree over $\mathbb{Q}$ of the extended period matrix must be at most $\dim \mathrm{GSp}_{2g} = 2g^2 + g + 1$, and for $g \geq 2$ this is better than the trivial upper bound of $4g^2$.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9927080273628235, "perplexity": 234.9575277907622}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039745800.94/warc/CC-MAIN-20181119150816-20181119172816-00257.warc.gz"}
https://www.physicsforums.com/threads/centripetal-acceleration-on-earth.405375/
# Homework Help: Centripetal acceleration on Earth

1. May 24, 2010

### daysrunaway

1. The problem statement, all variables and given/known data
An object orbits the earth at a constant speed in a circle of radius 6.38 x 10^6 m, very close to but not touching the earth's surface. What is its centripetal acceleration?

2. Relevant equations
a = v^2/r = 4π^2 r/T^2
v = 2πr/T

3. The attempt at a solution
I plugged in r = 6.38 x 10^6 m and T = 24.0 h = (24.0 h x 3600 s / 1 h) = 86,400 s into the equation above and found a = 3.37 x 10^-2 m/s^2. However, I looked up the centripetal acceleration on the earth's surface and found out it is 0.006 m/s^2. I can't understand why my answer is wrong. Can anyone point out the error in my logic? Thanks!

2. May 24, 2010

Try using this formula: T = 2π/ω, with ω = 360°/(24 x 3600 s). Let me know if you got the answer.

3. May 24, 2010

### daysrunaway

Thanks for the suggestion RoughRoad but I don't really understand how I am supposed to use this equation. I found that my calculation for the velocity (2πR/T) is approximately equal to the wikipedia value, so I really am confused now because I can't see what I'm doing wrong in just squaring that value and dividing by 6.38 x 10^6.

4. May 24, 2010

Is the mass of the satellite given?

5. May 24, 2010

### daysrunaway

No, it isn't.

6. May 24, 2010

### jyothsna pb

try using the equation GMm/r^2 = mv^2/r
m - mass of satellite
M - mass of earth
v^2/r is the centripetal acceleration

7. May 24, 2010

### jyothsna pb

gravitational force of earth is utilised for centripetal force

8. May 24, 2010

You are right. But what about the velocity?

9. May 24, 2010

### jyothsna pb

you are asked to find the centripetal acceleration; you just have to find the value of v^2/r, which gives the centripetal acceleration

10. May 24, 2010

Oh yeah! How can I be so foolish. Thanks for helping!

11. May 24, 2010

### jyothsna pb

12. May 24, 2010

### jyothsna pb

u r welcome

13. May 24, 2010

And reply to my visitor msg pls

14. May 24, 2010

### daysrunaway

I didn't, though, and that's the source of my confusion. I know I have the right value for v but when I plug it in to v^2/r, I get 3.37 x 10^-2 m/s^2. This answer is not equal to the answer I found for the actual acceleration, which is 0.006 m/s^2. My question is why is my answer different from the real value?

15. May 24, 2010

### jyothsna pb

what is the value of v?

16. May 24, 2010

### daysrunaway

465.1 m/s

17. May 24, 2010

### jyothsna pb

there is some error in the velocity value

18. May 24, 2010

### jyothsna pb

we get acceleration value almost equal to g

19. May 24, 2010

### jyothsna pb

velocity in dis orbit must b approximately equal to 7.9*10^3 m/s

20. May 24, 2010

### daysrunaway

I don't understand why that is, especially since Wikipedia says it is 451 m/s en.wikipedia.org/wiki/Earth
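A short numerical sketch separating the two scenarios being conflated in this thread (r as given in the problem; g = 9.8 m/s^2 assumed):

    import numpy as np

    r = 6.38e6            # m, radius given in the problem
    g = 9.8               # m/s^2, assumed value

    # Scenario 1: moving with the rotating Earth (T = 24 h), the OP's calculation
    T = 24 * 3600.0
    a_rot = 4 * np.pi**2 * r / T**2     # ~3.37e-2 m/s^2
    v_rot = 2 * np.pi * r / T           # ~464 m/s, the "465.1 m/s" of post #16

    # Scenario 2: actually orbiting at the surface, gravity supplies a = g
    v_orbit = np.sqrt(g * r)            # ~7.9e3 m/s, as stated in post #19
    print(a_rot, v_rot, v_orbit)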
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8774954676628113, "perplexity": 1752.876675464127}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827097.43/warc/CC-MAIN-20181215200626-20181215222626-00320.warc.gz"}
http://cms.math.ca/cmb/msc/22A15?fromjnl=cmb&jnl=CMB
# Search: MSC category 22A15 (Structure of topological semigroups)

Results 1 - 1 of 1

1. CMB 2011 (vol 56 pp. 442)

Zelenyuk, Yevhen: Closed Left Ideal Decompositions of $U(G)$

Let $G$ be an infinite discrete group and let $\beta G$ be the Stone–Čech compactification of $G$. We take the points of $\beta G$ to be the ultrafilters on $G$, identifying the principal ultrafilters with the points of $G$. The set $U(G)$ of uniform ultrafilters on $G$ is a closed two-sided ideal of $\beta G$. For every $p\in U(G)$, define $I_p\subseteq\beta G$ by $I_p=\bigcap_{A\in p}\operatorname{cl} (GU(A))$, where $U(A)=\{p\in U(G):A\in p\}$. We show that if $|G|$ is a regular cardinal, then $\{I_p:p\in U(G)\}$ is the finest decomposition of $U(G)$ into closed left ideals of $\beta G$ such that the corresponding quotient space of $U(G)$ is Hausdorff.

Keywords: Stone–Čech compactification, uniform ultrafilter, closed left ideal, decomposition
Categories: 22A15, 54H20, 22A30, 54D80
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9099874496459961, "perplexity": 470.5749706396809}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823996.40/warc/CC-MAIN-20160723071023-00214-ip-10-185-27-174.ec2.internal.warc.gz"}
https://math.stackexchange.com/questions/3512488/adjoints-to-the-forgetful-functor-ua-mathbfc-to-mathbfc
# Adjoint(s) to the forgetful functor $U:A/\mathbf{C}\to \mathbf{C}$.

I am preparing for my exam in Category Theory, and came across the following exercise in an old exam.

Let $$\mathbf{C}$$ be a category with finite coproducts. For a fixed object $$A$$, consider the coslice category consisting of objects $$f:A\to C$$. Morphisms are $$\alpha:C\to D$$ making the triangle commute. We have to determine whether the forgetful functor $$U$$ has a left and/or a right adjoint.

A (rather unfounded) approach I had in mind for the right adjoint was the functor $$F$$ which maps an object $$C$$ to $$i_A:A\to A\sqcup C$$, where $$i_A$$ denotes the inclusion map. A morphism $$\alpha:C\to D$$ is then mapped to the unique $$u:A\sqcup C\to A \sqcup D$$ which arises when considering the maps $$i_A:A\to A\sqcup D$$ and $$i_D\circ \alpha:C\to A\sqcup D$$, by the universal property of the coproduct. Since this functor does not preserve the terminal object it can't be the left adjoint. To show it is indeed a right adjoint we need to show the following isomorphism of Hom sets: $$\hom_{\mathbf{C}}(D,U(f:A\to C))\cong \hom_{A/\mathbf{C}}(i_A:A\to A\sqcup D,f:A\to C)$$ However, I failed to show this and do not have an alternative idea so far. Neither do I have an idea for a possible left adjoint, if it exists. Any kind of help is welcome!

• I corrected a small typo and added some notation; maybe now it is clearer what you have to do? – jeanmfischer Jan 17 at 15:05
• Also the functor you describe $[D \mapsto (i_A : A \to A \sqcup D)]$ is a left adjoint. – jeanmfischer Jan 17 at 15:52
• But it does not preserve the terminal object, does it? – EBP Jan 17 at 16:07
• Sorry, I didn't correct everything: the functor $[D \mapsto (i_A : A \to A \sqcup D)]$ preserves the initial object, indeed $(i_A : A \to A \sqcup 0) = id_A$, and $id_A$ is the initial object of $A/\mathbf{C}$. – jeanmfischer Jan 17 at 16:08
• A left adjoint has to preserve colimits, and so the initial object, since it is the empty colimit; but there is nothing to be said about limits, and your category $\mathbf{C}$ may not have a final object. – jeanmfischer Jan 17 at 16:13

For the fact that it admits a left adjoint, your (not unfounded at all) discussion gives you the answer (the only problem is you were trying the wrong side): $$\text{Hom}_{\mathbf{C}}(D, U(f:A \to C)) \cong \text{Hom}_{A/\mathbf{C}}(i_A : A \to A \sqcup D, f:A\to C).$$ Indeed, having a map $$g : D \to C$$ will give, by the universal property of $$A\sqcup D$$ and the given data $$f:A \to C$$, a map $$\overline g : A\sqcup D \to C$$ that verifies $$\overline g \circ i_A = f$$, i.e. $$\overline g$$ is a morphism in $$A/\mathbf C$$ from $$i_A : A \to A\sqcup D$$ to $$f:A\to C$$.

For the right adjoint part: if $$U$$ admitted a right adjoint, this would mean that $$U$$ is a left adjoint, and so it should at least preserve the initial object; but the initial object of $$A/\mathbf{C}$$ is $$id_A : A \to A$$, and $$U(id_A)= A$$, which is not a priori the initial object of $$\mathbf{C}.$$
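For completeness (a standard reformulation, not part of the original thread): the adjunction $F \dashv U$ with $F(D) = (i_A : A \to A \sqcup D)$ can also be packaged by its unit and counit, which makes the hom-set bijection above mechanical to check.

```latex
% Unit at D: the coproduct inclusion into the underlying object of F(D)
\eta_D = i_D \colon D \longrightarrow UF(D) = A \sqcup D
% Counit at (f : A \to C): the copairing of f with the identity, a morphism
% FU(f) = (i_A : A \to A \sqcup C) \longrightarrow (f : A \to C) in A/C
\varepsilon_f = [f, \mathrm{id}_C] \colon A \sqcup C \longrightarrow C
% Both triangle identities follow from the universal property of the coproduct.
```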
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 30, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9748149514198303, "perplexity": 107.86503095588586}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506121.24/warc/CC-MAIN-20200401192839-20200401222839-00481.warc.gz"}
https://open.library.ubc.ca/soa/cIRcle/collections/ubctheses/24/items/1.0401957
# Open Collections

## UBC Theses and Dissertations

### Analytical and numerical results for phase field, implicit free boundary, and fluid models

Cheng, Xinyu

#### Abstract

In this dissertation, we study analytical and numerical methods for three topics in the area of partial differential equations (PDE). These topics are: the Allen-Cahn dynamics (AC) in the study of phase field models for materials science problems, the oxygen depletion model (OD) in the study of free boundary problems, and the stationary surface quasi-geostrophic equation (SQG) in the study of fluid dynamics. We first study the behaviour of AC in the meta-stable regime and show, by computational evidence and asymptotic analysis, that the backward Euler method satisfies energy stability with large time steps. We also give a rigorous proof for the two-dimensional radially symmetric case. In the second project, we show several mathematical formulations of OD from the literature and give a new formulation based on a gradient flow with constraint. We prove the equivalence of all formulations and study the numerical approximations of the problem that arise from the different formulations. More general (vector, higher order) implicit free boundary value problems are discussed. In the final project, we develop a new framework of "convex integration scheme" and construct a non-trivial solution to the stationary SQG. We thus prove the non-uniqueness of solutions to the stationary SQG.
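As a concrete illustration of the first project's theme (large-time-step backward Euler for Allen-Cahn), here is a minimal sketch. The specific form of the equation (u_t = eps^2 u_xx + u - u^3 on a periodic 1-D grid), the parameter values, and the plain Newton solver are all illustrative assumptions; they are not taken from the dissertation itself.

```python
import numpy as np

# 1-D Allen-Cahn u_t = eps^2 u_xx + u - u^3, periodic BCs, advanced by
# fully implicit backward Euler steps solved with Newton's method.
N, eps, dt = 128, 0.05, 2.0        # dt far above the explicit stability limit
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = x[1] - x[0]

# Periodic second-difference matrix (dense, for simplicity)
L = (-2.0 * np.eye(N) + np.eye(N, k=1) + np.eye(N, k=-1)
     + np.eye(N, k=N - 1) + np.eye(N, k=-(N - 1))) / h**2

def backward_euler_step(u_old):
    """Solve u - dt*(eps^2 * L@u + u - u**3) = u_old for u (plain Newton)."""
    u = u_old.copy()
    for _ in range(50):
        F = u - dt * (eps**2 * (L @ u) + u - u**3) - u_old
        J = np.eye(N) - dt * (eps**2 * L + np.diag(1.0 - 3.0 * u**2))
        du = np.linalg.solve(J, -F)
        u += du
        if np.linalg.norm(du) < 1e-10:
            break
    return u

def energy(u):
    """Ginzburg-Landau energy whose decay is the 'energy stability' at issue."""
    ux = (np.roll(u, -1) - u) / h
    return float(np.sum(0.5 * eps**2 * ux**2 + 0.25 * (1.0 - u**2) ** 2) * h)

u = np.cos(x) + 0.1 * np.random.default_rng(0).standard_normal(N)
for step in range(5):
    u = backward_euler_step(u)
    print(f"step {step}: energy = {energy(u):.6f}")  # expected: non-increasing
```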
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9322481751441956, "perplexity": 499.81465072663923}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950528.96/warc/CC-MAIN-20230402105054-20230402135054-00099.warc.gz"}
http://en.wikipedia.org/wiki/Hodgkin-Huxley_model
# Hodgkin–Huxley model

Basic components of Hodgkin–Huxley-type models. Hodgkin–Huxley-type models represent the biophysical characteristics of cell membranes. The lipid bilayer is represented as a capacitance (Cm). Voltage-gated and leak ion channels are represented by nonlinear (gn) and linear (gL) conductances, respectively. The electrochemical gradients driving the flow of ions are represented by batteries (E), and ion pumps and exchangers are represented by current sources (Ip).

The Hodgkin–Huxley model (or "conductance-based model") is a mathematical model (a type of scientific model) that describes how action potentials in neurons are initiated and propagated. It is a set of nonlinear differential equations that approximates the electrical characteristics of excitable cells such as neurons and cardiac myocytes, and hence it is a continuous-time model, unlike the Rulkov map for example. Alan Lloyd Hodgkin and Andrew Huxley described the model in 1952 to explain the ionic mechanisms underlying the initiation and propagation of action potentials in the squid giant axon.[1] They received the 1963 Nobel Prize in Physiology or Medicine for this work.

## Basic components

The typical Hodgkin–Huxley model treats each component of an excitable cell as an electrical element (as shown in the figure). The lipid bilayer is represented as a capacitance (Cm). Voltage-gated ion channels are represented by electrical conductances (gn, where n is the specific ion channel) that depend on both voltage and time. Leak channels are represented by linear conductances (gL). The electrochemical gradients driving the flow of ions are represented by voltage sources (En) whose voltages are determined by the ratio of the intra- and extracellular concentrations of the ionic species of interest. Finally, ion pumps are represented by current sources (Ip).[clarification needed] The membrane potential is denoted by Vm.

Mathematically, the current flowing through the lipid bilayer is written as

$I_c = C_m\frac{{\mathrm d} V_m}{{\mathrm d} t}$

and the current through a given ion channel is

$I_i = {g_i}(V_m - V_i) \;$

where $V_i$ is the reversal potential of the i-th ion channel. Thus, for a cell with sodium and potassium channels, the total current through the membrane is given by:

$I = C_m\frac{{\mathrm d} V_m}{{\mathrm d} t} + g_K(V_m - V_K) + g_{Na}(V_m - V_{Na}) + g_l(V_m - V_l),$

where I is the total membrane current per unit area, Cm is the membrane capacitance per unit area, gK and gNa are the potassium and sodium conductances per unit area, respectively, VK and VNa are the potassium and sodium reversal potentials, respectively, and gl and Vl are the leak conductance per unit area and leak reversal potential, respectively. The time-dependent elements of this equation are Vm, gNa, and gK, where the last two conductances depend explicitly on voltage as well.

## Ionic current characterization

In voltage-gated ion channels, the channel conductance gi is a function of both time and voltage (gn(t, V) in the figure), while in leak channels gi is a constant (gL in the figure). The current generated by ion pumps is dependent on the ionic species specific to that pump. The following sections will describe these formulations in more detail.
### Voltage-gated ion channels

Using a series of voltage clamp experiments and by varying extracellular sodium and potassium concentrations, Hodgkin and Huxley developed a model in which the properties of an excitable cell are described by a set of four ordinary differential equations.[2] Together with the equation for the total current mentioned above, these are:

$I = C_m\frac{{\mathrm d} V_m}{{\mathrm d} t} + \bar{g}_Kn^4(V_m - V_K) + \bar{g}_{Na}m^3h(V_m - V_{Na}) + \bar{g}_l(V_m - V_l),$

$\frac{dn}{dt} = \alpha_n(V_m)(1 - n) - \beta_n(V_m) n$

$\frac{dm}{dt} = \alpha_m(V_m)(1 - m) - \beta_m(V_m) m$

$\frac{dh}{dt} = \alpha_h(V_m)(1 - h) - \beta_h(V_m) h$

where I is the current per unit area, and $\alpha_i$ and $\beta_i$ are rate constants for the i-th ion channel, which depend on voltage but not time. $\bar{g}_n$ is the maximal value of the conductance. n, m, and h are dimensionless quantities between 0 and 1 that are associated with potassium channel activation, sodium channel activation, and sodium channel inactivation, respectively. For $p = (n, m, h)$, $\alpha_p$ and $\beta_p$ take the form

$\alpha_p(V_m) = p_\infty(V_m)/\tau_p$

$\beta_p(V_m) = (1 - p_\infty(V_m))/\tau_p$.

$n_\infty$, $m_\infty$, and $h_\infty$ are the steady-state values for activation and inactivation, and are usually represented by Boltzmann equations as functions of $V_m$. In the original paper by Hodgkin and Huxley,[1] the functions $\alpha$ and $\beta$ are given by

$\begin{array}{lll} \alpha_n(V_m) = \frac{.01(V_m - 10)}{\exp\big(\frac{V_m - 10}{10}\big)-1} & \alpha_m(V_m) = \frac{.1(V_m - 25)}{\exp\big(\frac{V_m - 25}{10}\big)-1} & \alpha_h(V_m) = .07\exp\bigg(\frac{V_m}{20}\bigg)\\ \beta_n(V_m) = .125\exp\bigg(\frac{V_m}{80}\bigg) & \beta_m(V_m) = 4\exp\bigg(\frac{V_m}{18}\bigg) & \beta_h(V_m) = \frac{1}{\exp\big(\frac{V_m - 30}{10}\big) + 1} \end{array}$

while in many current software programs,[3] Hodgkin–Huxley-type models generalize $\alpha$ and $\beta$ to

$\frac{A_p(V_m-B_p)}{\exp\big(\frac{V_m-B_p}{C_p}\big)-D_p}$

In order to characterize voltage-gated channels, the equations are fit to voltage clamp data. For a derivation of the Hodgkin–Huxley equations under voltage-clamp, see [4]. Briefly, when the membrane potential is held at a constant value (i.e., voltage-clamp), for each value of the membrane potential the nonlinear gating equations reduce to equations of the form:

$m(t) = m_{0} - [ (m_{0}-m_{\infty})(1 - e^{-t/\tau_m})]\,$

$h(t) = h_{0} - [ (h_{0}-h_{\infty})(1 - e^{-t/\tau_h})]\,$

$n(t) = n_{0} - [ (n_{0}-n_{\infty})(1 - e^{-t/\tau_n})]\,$

Thus, for every value of membrane potential $V_{m}$ the sodium and potassium currents can be described by

$I_{Na}(t)=\bar{g}_{Na} m(V_m)^3h(V_m)(V_m-E_{Na}),$

$I_K(t)=\bar{g}_K n(V_m)^4(V_m-E_K).$

In order to arrive at the complete solution for a propagated action potential, one must write the current term I on the left-hand side of the first differential equation in terms of V, so that the equation becomes an equation for voltage alone. The relation between I and V can be derived from cable theory and is given by

$I = \frac{a}{2R}\frac{\partial^2V}{\partial x^2}$,

where a is the radius of the axon, R is the specific resistance of the axoplasm, and x is the position along the nerve fiber. Substitution of this expression for I transforms the original set of equations into a set of partial differential equations, because the voltage becomes a function of both x and t.
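The gating formalism above is straightforward to integrate numerically. Below is a minimal forward-Euler sketch in Python. Note that it uses the common shifted parameterization (resting potential at 0 mV, depolarization positive), so the signs inside the exponentials differ from the display above, and the maximal conductances, reversal potentials, and injected current are the classic squid-axon values — assumptions here, not quantities stated in the text.

```python
import numpy as np

# Classic squid-axon constants (assumed; not listed in the article text).
# Voltages in mV relative to rest, conductances in mS/cm^2, C_m in uF/cm^2.
C_m = 1.0
g_K, g_Na, g_l = 36.0, 120.0, 0.3
V_K, V_Na, V_l = -12.0, 115.0, 10.613

# Rate functions alpha_p, beta_p (1/ms); the removable singularities at
# V = 10 and V = 25 are ignored for brevity.
a_n = lambda V: 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
b_n = lambda V: 0.125 * np.exp(-V / 80)
a_m = lambda V: 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
b_m = lambda V: 4.0 * np.exp(-V / 18)
a_h = lambda V: 0.07 * np.exp(-V / 20)
b_h = lambda V: 1.0 / (np.exp((30 - V) / 10) + 1)

dt, T, I_ext = 0.01, 50.0, 10.0    # ms, ms, uA/cm^2 of injected current
V = 0.0                             # start at rest
# Gating variables start at their steady states p_inf = a/(a+b)
n = a_n(V) / (a_n(V) + b_n(V))
m = a_m(V) / (a_m(V) + b_m(V))
h = a_h(V) / (a_h(V) + b_h(V))

trace = []
for _ in range(int(T / dt)):
    # Total ionic current, as in the membrane equation above
    I_ion = g_K * n**4 * (V - V_K) + g_Na * m**3 * h * (V - V_Na) + g_l * (V - V_l)
    V += dt * (I_ext - I_ion) / C_m            # forward Euler on C_m dV/dt
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)  # the three gating ODEs
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    trace.append(V)

print(f"peak depolarization: {max(trace):.1f} mV")  # repetitive spiking, ~100 mV peaks
```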
The Levenberg–Marquardt algorithm,[5][6] a modified Gauss–Newton algorithm, is often used to fit these equations to voltage-clamp data.[citation needed] While the original experiments treated only sodium and potassium channels, the Hodgkin–Huxley model can also be extended to account for other species of ion channels.

### Leak channels

Leak channels account for the natural permeability of the membrane to ions and take the form of the equation for voltage-gated channels, where the conductance $g_i$ is a constant.

### Pumps and exchangers

The membrane potential depends upon the maintenance of ionic concentration gradients across it. The maintenance of these concentration gradients requires active transport of ionic species. The sodium-potassium and sodium-calcium exchangers are the best known of these. Some of the basic properties of the Na/Ca exchanger have already been well established: the stoichiometry of exchange is 3 Na+ : 1 Ca2+ and the exchanger is electrogenic and voltage-sensitive. The Na/K exchanger has also been described in detail, with a 3 Na+ : 2 K+ stoichiometry.[7][8]

## Mathematical properties

The Hodgkin–Huxley model can be thought of as a differential equation with four state variables, v(t), m(t), n(t), and h(t), that change with respect to time t. The system is difficult to study because it is a nonlinear system and cannot be solved analytically. However, there are many numerical methods available to analyze the system. Certain properties and general behaviors, such as limit cycles, can be proven to exist.

A simulation of the Hodgkin–Huxley model in phase space, in terms of voltage v(t) and potassium gating variable n(t). The closed curve is known as a limit cycle.

### Center manifold

Because there are four state variables, visualizing the path in phase space can be difficult. Usually two variables are chosen, voltage v(t) and the potassium gating variable n(t), allowing one to visualize the limit cycle. However, one must be careful because this is an ad-hoc method of visualizing the 4-dimensional system. This does not prove the existence of the limit cycle.

A better projection can be constructed from a careful analysis of the Jacobian of the system, evaluated at the equilibrium point. Specifically, the eigenvalues of the Jacobian are indicative of the center manifold's existence. Likewise, the eigenvectors of the Jacobian reveal the center manifold's orientation. The Hodgkin–Huxley model has two negative eigenvalues and two complex eigenvalues with slightly positive real parts. The eigenvectors associated with the two negative eigenvalues will reduce to zero as time t increases. The remaining two complex eigenvectors define the center manifold. In other words, the 4-dimensional system collapses onto a 2-dimensional plane. Any solution starting off the center manifold will decay towards the center manifold. Furthermore, the limit cycle is contained on the center manifold.

The voltage v(t) (in millivolts) of the Hodgkin–Huxley model, graphed over 50 milliseconds. The injected current varies from -5 nanoamps to 12 nanoamps. The graph passes through 3 stages: an equilibrium stage, a single-spike stage, and a limit cycle stage.

### Bifurcations

If we use the injected current $I$ as a bifurcation parameter, then the Hodgkin–Huxley model undergoes a Hopf bifurcation. As with most neuronal models, increasing the injected current will increase the firing rate of the neuron. One consequence of the Hopf bifurcation is that there is a minimum firing rate.
This means that either the neuron is not firing at all (corresponding to zero frequency), or firing at the minimum firing rate. Because of the all-or-none principle, there is no smooth increase in action potential amplitude, but rather there is a sudden "jump" in amplitude. The resulting transition is known as a classical canard phenomenon, or simply a canard.

## Improvements and alternative models

The Hodgkin–Huxley model is regarded as one of the great achievements of 20th-century biophysics. Nevertheless, modern Hodgkin–Huxley-type models have been extended in several important ways:

• Additional ion channel populations have been incorporated based on experimental data.
• The Hodgkin–Huxley model has been modified to incorporate transition state theory and produce thermodynamic Hodgkin–Huxley models.[9]
• Models often incorporate highly complex geometries of dendrites and axons, often based on microscopy data.
• Stochastic models of ion-channel behavior, leading to stochastic hybrid systems[10]

Several simplified neuronal models have also been developed (such as the FitzHugh–Nagumo model), facilitating efficient large-scale simulation of groups of neurons, as well as mathematical insight into dynamics of action potential generation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 35, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8934346437454224, "perplexity": 851.6479629307735}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1412037663718.7/warc/CC-MAIN-20140930004103-00423-ip-10-234-18-248.ec2.internal.warc.gz"}
http://mathhelpforum.com/latex-help/77078-sdf.html
# Math Help - sdf

1. ## sdf

$s^*=(1-H)s^*(1+R(1-\frac{s^*}{K}))$

$1=(1-H)(1+R(1-\frac{s^*}{K}))$

$\frac{1}{1-H}-1=R(1-\frac{s^*}{K})$

$\frac{1}{R}\left(\frac{1}{1-H}-1\right)=1-\frac{s^*}{K}$

$s^* = K - \frac{K}{R(1-H)} + \frac{K}{R}$

Edit: Sorry, I'm just using this to do a test. I meant to preview this.
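The algebra above is easy to machine-check. A short sympy verification follows, assuming (as the first line suggests) that $s^*$ is a nonzero fixed point of the harvested logistic update $s \mapsto (1-H)\,s\,(1+R(1-s/K))$; that reading of the post is an inference, since the original rendering was garbled.

```python
import sympy as sp

s, H, R, K = sp.symbols('s H R K')

# Fixed-point equation s = (1-H)*s*(1+R*(1-s/K))
fixed_point = sp.Eq(s, (1 - H) * s * (1 + R * (1 - s / K)))
claimed = K - K / (R * (1 - H)) + K / R   # the post's final expression

solutions = sp.solve(fixed_point, s)       # s = 0 and the nontrivial branch
print([sp.simplify(sol - claimed) for sol in solutions if sol != 0])  # -> [0]
```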
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9550743699073792, "perplexity": 4201.0501277292005}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119652530.43/warc/CC-MAIN-20141024030052-00083-ip-10-16-133-185.ec2.internal.warc.gz"}
https://nargaque.com/2014/03/02/mathematicians-answer/
The Mathematician’s Answer is a meta-joke about how mathematicians usually behave in jokes. From tvtropes: If you ask someone a question, and he gives you an entirely accurate answer that is of no practical use whatsoever, he has just given you a Mathematician’s Answer. It goes further on to say: “A common form of giving a Mathematician’s Answer is to fully evaluate the logic of the question and give a logically correct answer. Such a response may prove confusing for someone who interpreted what they said colloquially.” Perhaps the most famous example is the hot-air balloon joke, where a man in a hot-air balloon asks someone where he is, to which the response is, “You’re in a hot-air balloon!” The rider concludes that the responder must be a mathematician, because the answer given was absolutely correct but utterly useless. The tvtropes site contains a bunch of examples of Mathematician’s Answer in dialog. But this kind of joke also sometimes pokes fun at actions as well as words. My favorite is the hotel joke (this version from the Cherkaev “Math Jokes” collection): An engineer, a physicist and a mathematician are staying in a hotel. The engineer wakes up and smells smoke. He goes out into the hallway and sees a fire, so he fills a trash can from his room with water and douses the fire. He goes back to bed. Later, the physicist wakes up and smells smoke. He opens his door and sees a fire in the hallway. He walks down the hall to a fire hose and after calculating the flame velocity, distance, water pressure, trajectory, etc. extinguishes the fire with the minimum amount of water and energy needed. Later, the mathematician wakes up and smells smoke. He goes to the hall, sees the fire and then the fire hose. He thinks for a moment and then exclaims, “Ah, a solution exists!” and then goes back to bed. In line with the engineer/physicist/mathematician trio, another great one is the Scottish sheep joke: A mathematician, a physicist, and an engineer were traveling through Scotland when they saw a black sheep through the window of the train. “Aha,” says the engineer, “I see that Scottish sheep are black.” “Hmm,” says the physicist, “You mean that some Scottish sheep are black.” “No,” says the mathematician, “All we know is that there is at least one sheep in Scotland, and that at least one side of that one sheep is black!” And then, we have the infamous examples where it was the students ironically who used the Mathematician’s Answer on their math teachers: Now, aside from the meta-joke status of the Mathematician’s Answer, is there any truth to it? Do math-minded people really say, “You’re in a hot air balloon,” in real life? From all the math classes I’ve taken at college, I have never witnessed a professor respond unwittingly with a Mathematician’s Answer. Every time it was used, it was clear that it was meant as a joke. Sure, some live up to mathematician archetype, but they’re all normal people, not John Nashes. In high school, my favorite form of humor was the pun. Starting junior or senior year of college, however, I had somehow transitioned to the Mathematician’s Answer as my go-to response when I can’t think of anything to say. It is extremely easy to use, as almost every situation can lead to this kind of joke. It’s really fun to use and really versatile. It doesn’t even need to be used in response to a question. Just yesterday, someone remarked that it was March 1st already. 
Immediately, I added, "Oh yeah, that's exactly one month away from April 1st." The same person later asked how far 10 yards was, and, like a true mathematician, I answered by saying it was like 5 yards but double that. Our campus Internet has one network called "RedRover" and another called "RedRover-Secure." Someone asked what the difference between these was, and I quickly responded, "Well, they're the same, except one of them is secure." I think it interests me because I'm generally fond of logical and tautological humor. The only downside of the Mathematician's Answer is that it doesn't really work in anything that is related to mathematics. The language of math is designed to minimize ambiguity, and even when situations do arise where there are two interpretations, it's much harder to distinguish between a literal and a figurative meaning. One of the few mathematical ambiguities I know of is when someone writes $1 \leq x, y \leq 10$: do we choose x and y such that x is at least 1 and y is at most 10, or is it that both x and y are between 1 and 10? On the other hand, Mathematician's Answer works really well in areas as far removed from mathematics as possible. Anyway, here is one last example:

An engineer, a physicist and a mathematician find themselves in an anecdote, indeed an anecdote quite similar to many that you have no doubt already heard. After some observations and rough calculations the engineer realizes the situation and starts laughing. A few minutes later the physicist understands too and chuckles to himself happily as he now has enough experimental evidence to publish a paper. This leaves the mathematician somewhat perplexed, as he had observed right away that he was the subject of an anecdote, and deduced quite rapidly the presence of humor from similar anecdotes, but considers this anecdote to be too trivial a corollary to be significant, let alone funny.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.860697329044342, "perplexity": 1319.2238712218518}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578641278.80/warc/CC-MAIN-20190424114453-20190424140453-00489.warc.gz"}
http://www.yusufozturk.info/tag/vm-uptimes-with-powershell
Posted in Virtual Machine Manager, Windows Powershell | No Comment | 21,259 views | 28/04/2009 01:52

I haven't posted any scripts for a long time because I'm writing IIS7 scripts nowadays. Using the IIS7 Powershell Snapin is real fun. But let's get back to SCVMM and do something different. How about VM uptimes? As far as I know, there is no Powershell command to see VM uptimes using the SCVMM snapin (or I don't know that command). So I'll use the Hyper-V snapin to get the uptimes, and I'll write them to XML files. Let's see the code.

```powershell
Add-PSSnapin -Name Microsoft.SystemCenter.VirtualMachineManager
Get-VMMServer localhost

# (The original script also referenced the Hyper-V library's Get-VM,
# Get-VMSettingData and Get-VMSummary filters at this point.)

$VMProp = Get-VM | Select-Object -Property Name,VMHost
foreach ($i in $VMProp)
{
    $VMName = $i.Name
    Write-Host $VMName
    $VMHost = $i.VMHost
    Write-Host $VMHost

    # Hyper-V snapin call: uptime summary for one VM
    $Uptime   = Get-VMSummary -VM $VMName -Server $VMHost
    $VMUptime = $Uptime.UptimeFormatted

    # NOTE: the XML element names below are placeholders; the original tags
    # were stripped when this post was scraped.
    $XmlPath = "C:\Program Files\Microsoft System Center Virtual Machine Manager 2008\wwwroot\Uptime\$VMName.xml"
    $contentStatus = '<Uptime>' + $VMUptime + '</Uptime>'
    Clear-Content -Path $XmlPath
    Add-Content -Path $XmlPath -Value '<?xml version="1.0"?>'
    Add-Content -Path $XmlPath -Value '<VM>'
    Add-Content -Path $XmlPath -Value $contentStatus
    Add-Content -Path $XmlPath -Value '</VM>'
}
```

As you see, using the Hyper-V snapin's Get-VMSummary command is really easy. But should I run that command by hand for every single VM? We have more than 300 VMs, and I don't have time for that. So what did I do? First, I used the SCVMM snapin to fetch all the VMs. Then I used "foreach" to run the Hyper-V snapin command against every VM in my environment. I made it a batch file so I can refresh the VM uptimes periodically with this script.

You can download the SCVMM Uptime Script from here: http://www.yusufozturk.info/wp-content/uploads/2009/04/uptime.ps1

You can parse those XML files and list them in a web application; a sketch of that follows below.
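For the parsing step, a minimal Python sketch. The element names `<VM>` and `<Uptime>` are the placeholders from the repaired script above — the original tag names were lost when this post was scraped — so adjust them to whatever uptime.ps1 actually writes.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Directory the PowerShell script writes into (from the post above)
uptime_dir = Path(r"C:\Program Files\Microsoft System Center Virtual Machine "
                  r"Manager 2008\wwwroot\Uptime")

for xml_file in sorted(uptime_dir.glob("*.xml")):
    root = ET.parse(xml_file).getroot()       # expected root element: <VM>
    uptime = root.findtext("Uptime", default="?")
    print(f"{xml_file.stem}: {uptime}")       # one VM name + uptime per line
```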
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8067223429679871, "perplexity": 4214.027866376832}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670948.64/warc/CC-MAIN-20191121180800-20191121204800-00151.warc.gz"}
http://tex.stackexchange.com/questions/11336/macro-for-figure-position
# Macro for figure position?

Why is the following minimal example not working? I.e., what do I need to do in order to define the float placement specifier of a figure within a macro? (I know that I should use \newcommand, but in the "full" example the error is part of an involved "superfigure" command. \newcommand is not working either, btw.)

\documentclass{scrreprt}
\usepackage{blindtext}

\def\PosNotWorking{h}
\def\FramboxNotWorking{5cm}
\def\CaptionWorking{Caption is working.}

\begin{document}
\Blindtext[1]
\begin{figure}[\PosNotWorking]
\centering
\framebox[5cm]{My special figure. (Pos: \PosNotWorking)}
\caption{\CaptionWorking}
\end{figure}
\Blindtext[1]
\end{document}

If I compile this, the figure is at the top (default behaviour). If I enter \figure[h] instead of \figure[\PosNotWorking], the figure is of course between the two paragraphs.

- Hi B3ret, please note that you should use back-ticks not " to mark your inline code. –  Martin Scharrer Feb 17 '11 at 19:12
- Thanks for telling me, didn't know that. –  B3ret Feb 17 '11 at 19:23

The optional argument is not expanded, but used inside a loop which checks every letter it contains. In your case it sees only the macro, not the letter inside it. However, if you want to simply change the default positioning you can do this by redefining \fps@figure (for tables \fps@table):

\makeatletter
% Make all figures use 'h' position by default:
\def\fps@figure{h}
\makeatother

If you don't want to change it globally AND still want to use a \begin{figure} environment you could use \edef to expand it first:

\documentclass{scrreprt}
\usepackage[english]{babel}
\usepackage{blindtext}

\def\PosNowWorking{h}
\def\FramboxNotWorking{5cm}
\def\CaptionWorking{Caption is working.}

\begin{document}
\Blindtext[1]
\edef\efigure{\noexpand\begin{figure}[\PosNowWorking]}%
\efigure
\centering
\framebox[5cm]{My special figure. (Pos: \PosNowWorking)}
\caption{\CaptionWorking}
\end{figure}
\Blindtext[1]
\end{document}

or:

\def\efigure{\begin{figure}}%
\expandafter\efigure\expandafter[\PosNowWorking]

Or redefine the figure environment to expand its optional argument first:

\let\origfigure\figure
\let\endorigfigure\endfigure
\renewenvironment{figure}[1][h]{%
  \expandafter\origfigure\expandafter[#1]%
}{%
  \endorigfigure
}

This way you can also easily set h as the default value or put \PosNowWorking in there.

- Thanks. No, I don't want to set the default. I want to hand the position over to a "superfigure" using the keyval package. –  B3ret Feb 17 '11 at 19:22
- @B3ret: See my updated answer –  Martin Scharrer Feb 17 '11 at 19:33
- The second thing is perfect for me. I set h as the default anyway using the xkeyval package. Your answer is more elaborate, so I switched the checkmark to you. –  B3ret Feb 17 '11 at 19:44
- @MartinScharrer The \edef solution works great! Though I think the last one would be more elegant. Unfortunately, I can't seem to get it to work. If I put the redefinition in the preamble of the document, how should I call the new figure environment? The command \begin{figure}{\PosNotWorking} does not seem to work. –  Adriaan Oct 26 '13 at 10:28
- @Adriaan: The syntax is as normal: \begin{figure}[\PosNotWorking]. You used { } instead of [ ].
–  Martin Scharrer Oct 27 '13 at 23:23

Redefine the \begin{figure} and \end{figure} as shown below:

\documentclass{article}
\usepackage[english,ngerman]{babel}
\usepackage{blindtext}

\def\beginmyfigure#1{\begin{figure}[#1]}
\def\endmyfigure{\end{figure}}
\def\FramboxNotWorking{5cm}
\def\CaptionWorking{Caption is working.}

\begin{document}
\Blindtext[1]
\beginmyfigure{b}
\centering
\framebox[5cm]{My special figure. (Pos: )}
\caption{\CaptionWorking}
\endmyfigure
\Blindtext[1]
\end{document}

- Unfortunately that is not an option. The above was just a minimal example. I really need the figure environment. The real stuff I'm working on has, for example, subfigures, and it should float for sure too. –  B3ret Feb 17 '11 at 19:07

Try it this way:

\documentclass{scrreprt}
\usepackage[english]{babel}
\usepackage{blindtext}

\def\PosNotWorking{h}
\def\FramboxNotWorking{5cm}
\def\CaptionWorking{Caption is working.}

\begin{document}
\Blindtext[1]
\expandafter\figure\expandafter[\PosNotWorking]
\centering
\framebox[5cm]{My special figure. (Pos: \PosNotWorking)}
\caption{\CaptionWorking}
\endfigure
\Blindtext[1]
\end{document}

The optional argument isn't expanded by default.

- \expandafter\figure[\PosNotWorking] -> not working, \figure\expandafter[\PosNotWorking] -> puts an additional [h] before the fbox, but does not change the floating, \figure[\expandafter\PosNotWorking] -> not working. Thanks anyway. –  B3ret Feb 17 '11 at 19:09
- @B3ret: It's \expandafter\figure\expandafter[\PosNotWorking], exactly like Herbert wrote it. –  Martin Scharrer Feb 17 '11 at 19:13
- You two are right. I did \expandafter\begin{figure}\expandafter[\PosNotWorking] instead of what you said. Is there any difference between \begin{figure} and \figure? –  B3ret Feb 17 '11 at 19:18
- @B3ret: In short, \begin{xxx} opens a group (like \begingroup) and then calls \xxx. Note that \expandafter only jumps over one token (e.g. macro, brace or letter). \figure is one token, \begin{figure} are nine (1 macro, 2 braces, 6 letters). –  Martin Scharrer Feb 17 '11 at 19:24
- So I could write \begingroup\expandafter\figure\expandafter[...]...\endfigure\endgroup to be safe? –  B3ret Feb 17 '11 at 19:34
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8801155686378479, "perplexity": 4364.604183141432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535925433.20/warc/CC-MAIN-20140901014525-00209-ip-10-180-136-8.ec2.internal.warc.gz"}
http://www.cper.org/mod/glossary/showentry.php?eid=25
# Autopoiesis ## Explanation Autopoiesis refers to a system that is capable of creating, maintaining and reproducing itself. Autopoietic mechanisms can operate as self-generating feedback systems. ## Historical Frame The term was introduced in 1972 by Chilean biologists Humberto Maturana and Francisco Varela to define the self-maintaining chemistry of living cells. Since then the concept has been also applied to the fields of systems theory and sociology. Autopoiesis was originally presented as a system description that was said to define and explain the nature of living systems. A canonical example of an autopoietic system is the biological cell. The eukaryotic cell, for example, is made of various biochemical components such as nucleic acids and proteins, and is organized into bounded structures such as the cell nucleus, various organelles, a cell membrane and cytoskeleton. These structures, based on an external flow of molecules and energy, produce the components which, in turn, continue to maintain the organized bounded structure that gives rise to these components. ## Related concepts ### Allopoietic system An autopoietic system is to be contrasted with an allopoietic system, such as a car factory, which uses raw materials (components) to generate a car (an organized structure) which is something other than itself (the factory). However, if the system is extended from the factory to include components in the factory's 'environment', such as supply chains, plant / equipment, workers, dealerships, customers, contracts, competitors, cars, spare parts and so on, then as a total viable system it could be considered to be autopoietic. Thus, an autopoietic system is a closed topological space that continuously generates and specifies its own organization. It maintains this through its operation as a system of production of its own components, and does this in an endless turnover of components. Autopoietic systems are thus distinguished from allopoietic systems, which have as the product of their functioning something different from themselves. ### Practopoiesis A theory of how autopoietic systems operate is named Practopoiesis (praxis + poiesis, meaning creation of actions). The theory presumes that, although the system as a whole is autopoietic, the components of that system may have allopoietic relations. For example, the genome combined with the operations of the gene expression mechanisms create proteins, but not the other way around; proteins do not create genomes. In that case poiesis occurs only in one direction. Practopoietic theory presumes such one-directional relationships of creation to take place also at other levels of system organisation. ### Self-organizing Intelligence Many scientists have often used the term autopoiesis as a synonym for self-organization. An autopoietic system is autonomous and operationally closed, in the sense that there are sufficient processes within it to maintain the whole. Autopoietic systems are "structurally coupled" with their medium, embedded in a dynamic of changes that can be recalled as sensory-motor coupling. This continuous dynamic is considered as a rudimentary form of knowledge or cognition and can be observed throughout life-forms. Autopoiesis would be the process of the emergence of necessary features out of chaotic contingency, causing contingency's gradual self-organisation, thus leading to the gradual rise of order out of chaos. 
## Linguistic derivation

The term Autopoiesis is derived from the ancient Greek words auto- (αὐτο-), meaning "self", and poiesis (ποίησις), meaning "creation" or "production".

## External sources

Book: Maturana, H., & Varela, F. (1992). The tree of knowledge: The biological roots of human understanding. Boston: Shambhala.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.811482846736908, "perplexity": 2724.811449206152}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195529276.65/warc/CC-MAIN-20190723105707-20190723131707-00365.warc.gz"}
https://www.livmathssoc.org.uk/cgi-bin/sews_diff.py?ConstructingTheIntegers
## Most recent change of ConstructingTheIntegers

Edit made on January 25, 2010 by GarethMcCaughan at 04:08:17

Deleted text in red / Inserted text in green

Suppose we have the counting numbers

0, 1, 2, 3, 4, ...

but we don't have the negative numbers. That means we can solve the equation x+5=13, but we can't solve x+13=5. How can we fix this?

What we want is some new, larger system of things we call "numbers" that somehow extends the existing idea, and allows these other, similar problems to be solved. Here's one way to do that.

Consider all pairs (a,b) of counting numbers, and declare that two pairs (a,b) and (c,d) are equivalent if (and only if) a+d=b+c. (The idea here is that (a,b) "means" a-b, but we can't say that formally because a-b is a possibly-negative integer and at this point we're supposed to know only about the counting numbers.)

We can check that this is a proper equivalence relation, and so we have equivalence classes. Any pair (a,b) is in exactly one equivalence class. It's the equivalence classes we're interested in.

Given two equivalence classes A and C we define their sum A+C as follows. Take a representative (a,b) of A, and a representative (c,d) of C, and define A+C to be the equivalence class containing (a+c,b+d). We need to check that the result is always the same no matter which representatives you choose, but it turns out that this definition is "well-defined."

Further, given an equivalence class A we define the negative of A as follows. Let (a,b) be any representative of A and define -A to be the equivalence class containing (b,a). Again, we need to check that it doesn't matter what representative we choose, we always get the same answer, but again, the concept is well-defined.

We can now show that for any counting number a the equivalence class that contains (a,a) plays the role of a zero. Given any other pair (c,d), (a+c,a+d) is in the same equivalence class as (c,d). Adding (a,a) has no effect on which equivalence class we're in.

Now we can see that (b,a) acts as the negative of (a,b) because (a,b)+(b,a)=(a+b,a+b) and we get the equivalence class that plays the role of zero.

We also have a natural embedding of the counting numbers into the collection of equivalence classes

x → (x, 0)

Thus the collection of equivalence classes acts as the integers.

----

The method above is not the only way to construct something that behaves as the integers should, starting with only the counting numbers. The usual way of writing integers suggests another way: say that an integer is either a counting number (0,1,2,...) or a nonzero counting number with "-" stuck in front of it (-1,-2,...); and then define arithmetic operations on these things case by case. So, for instance, to define addition on the integers we need to consider the following cases:

* (a)+(b) = (a+b)
* (-a)+(-b) = -(a+b)
* (a)+(-b) = (a-b) when a >= b
* (a)+(-b) = -(b-a) when a < b
* (-a)+(b) = (b-a) when a <= b
* (-a)+(b) = -(a-b) when a > b

This approach has the advantage of familiarity, and avoids the technical machinery of equivalence classes; but it requires a great deal of case-splitting. In contrast, the equivalence-classes-of-ordered-pairs approach above just says: (a,b) + (c,d) = (a+c,b+d), and that's that.

Which is better? Most mathematicians would choose the first way. It's harder to understand at first, but the ideas it uses -- ordered pairs, equivalence relations, etc. -- turn out to be useful throughout mathematics. (For instance, we can use a very similar idea for constructing the rationals.) And the payoff in simplicity and elegance is considerable.

----
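As a programming companion to the ordered-pairs construction (not part of the original page), here is a small Python sketch in which the pair (a, b) plays the role of a - b:

```python
class Int:
    """An 'integer' made of two counting numbers; (a, b) stands for a - b."""
    def __init__(self, a, b):
        self.a, self.b = a, b          # both are counting numbers 0, 1, 2, ...

    def __eq__(self, other):           # (a,b) ~ (c,d)  iff  a + d == b + c
        return self.a + other.b == self.b + other.a

    def __add__(self, other):          # representative-wise sum; well-defined
        return Int(self.a + other.a, self.b + other.b)

    def __neg__(self):                 # the class of (b, a)
        return Int(self.b, self.a)

    def __repr__(self):
        return f"Int({self.a}, {self.b})"

def embed(x):                          # the natural embedding x -> (x, 0)
    return Int(x, 0)

# x + 5 = 13 and x + 13 = 5 both become solvable:
assert embed(13) + (-embed(5)) == embed(8)
y = embed(5) + (-embed(13))            # "negative eight": the class of (5, 13)
assert y + embed(13) == embed(5)
print(y)                               # Int(5, 13)
```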
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9057925939559937, "perplexity": 2273.1402821720935}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00636.warc.gz"}
http://electricaltheorem.blogspot.com/p/maximum-power-transfer.html
Maximum Power Transfer

The Maximum Power Transfer Theorem is not so much a means of analysis as it is an aid to system design. Simply stated, the maximum amount of power will be dissipated by a load resistance when that load resistance is equal to the Thevenin/Norton resistance of the network supplying the power. If the load resistance is lower or higher than the Thevenin/Norton resistance of the source network, its dissipated power will be less than maximum.

This is essentially what is aimed for in radio transmitter design, where the antenna or transmission line "impedance" is matched to the final power amplifier "impedance" for maximum radio frequency power output. Impedance, the overall opposition to AC and DC current, is very similar to resistance, and must be equal between source and load for the greatest amount of power to be transferred to the load. A load impedance that is too high will result in low power output. A load impedance that is too low will not only result in low power output, but possibly overheating of the amplifier due to the power dissipated in its internal (Thevenin or Norton) impedance.

Taking our Thevenin equivalent example circuit, the Maximum Power Transfer Theorem tells us that the load resistance resulting in greatest power dissipation is equal in value to the Thevenin resistance (in this case, 0.8 Ω):

With this value of load resistance, the dissipated power will be 39.2 watts:

If we were to try a lower value for the load resistance (0.5 Ω instead of 0.8 Ω, for example), our power dissipated by the load resistance would decrease:

Power dissipation increased for both the Thevenin resistance and the total circuit, but it decreased for the load resistor. Likewise, if we increase the load resistance (1.1 Ω instead of 0.8 Ω, for example), power dissipation will also be less than it was at 0.8 Ω exactly:

If you were designing a circuit for maximum power dissipation at the load resistance, this theorem would be very useful. Having reduced a network down to a Thevenin voltage and resistance (or Norton current and resistance), you simply set the load resistance equal to that Thevenin or Norton equivalent (or vice versa) to ensure maximum power dissipation at the load. Practical applications of this might include radio transmitter final amplifier stage design (seeking to maximize power delivered to the antenna or transmission line), a grid-tied inverter loading a solar array, or electric vehicle design (seeking to maximize power delivered to the drive motor).

What the Maximum Power Transfer Theorem is not: maximum power transfer does not coincide with maximum efficiency. Application of the Maximum Power Transfer theorem to AC power distribution will not result in maximum or even high efficiency. The goal of high efficiency is more important for AC power distribution, which dictates a relatively low generator impedance compared to load impedance.

Similar to AC power distribution, high fidelity audio amplifiers are designed for a relatively low output impedance and a relatively high speaker load impedance. As a ratio, "load impedance" : "output impedance" is known as damping factor, typically in the range of 100 to 1000. [rar] [dfd]

Maximum power transfer does not coincide with the goal of lowest noise. For example, the low-level radio frequency amplifier between the antenna and a radio receiver is often designed for lowest possible noise.
This often requires a mismatch of the amplifier input impedance to the antenna as compared with that dictated by the maximum power transfer theorem.

• REVIEW:
• The Maximum Power Transfer Theorem states that the maximum amount of power will be dissipated by a load resistance if it is equal to the Thevenin or Norton resistance of the network supplying power.
• The Maximum Power Transfer Theorem does not satisfy the goal of maximum efficiency.

Maximum Power Transfer

We have seen in the previous tutorials that any complex circuit or network can be replaced by a single energy source in series with a single internal source resistance, RS. Generally, this source resistance (or impedance, if inductors or capacitors are involved) has a fixed value in ohms. However, when we connect a load resistance, RL across the output terminals of the power source, the impedance of the load will vary from an open-circuit state to a short-circuit state, so the power absorbed by the load becomes dependent on the impedance of the actual power source. Then for the load resistance to absorb the maximum power possible it has to be "matched" to the impedance of the power source, and this forms the basis of Maximum Power Transfer.

Maximum Power Transfer is another useful analysis method to ensure that the maximum amount of power will be dissipated in the load resistance when the value of the load resistance is exactly equal to the resistance of the power source. The relationship between the load impedance and the internal impedance of the energy source will give the power in the load. Consider the circuit below.

Thevenin's Equivalent Circuit.

In our Thevenin equivalent circuit above, the maximum power transfer theorem states that "the maximum amount of power will be dissipated in the load resistance if it is equal in value to the Thevenin or Norton source resistance of the network supplying the power". In other words, the load resistance resulting in greatest power dissipation must be equal in value to the equivalent Thevenin source resistance, so RL = RS; but if the load resistance is lower or higher in value than the Thevenin source resistance of the network, its dissipated power will be less than maximum.

For example, find the value of the load resistance, RL that will give the maximum power transfer in the following circuit.

Example No1.

Where:
RS = 25Ω
RL is variable between 0 - 100Ω
VS = 100V

Then by using the Ohm's Law equations I = VS / (RS + RL) and P = I²RL, we can complete the following table to determine the current and power in the circuit for different values of load resistance.

Table of Current against Power

RL (Ω)   I (A)   P (W)
   0     4.0       0
   5     3.3      55
  10     2.8      78
  15     2.5      93
  20     2.2      97
  25     2.0     100
  30     1.8      97
  40     1.5      94
  60     1.2      83
 100     0.8      64

Using the data from the table above, we can plot a graph of load resistance, RL against power, P for different values of load resistance. Also notice that power is zero for an open-circuit (zero current condition) and also for a short-circuit (zero voltage condition).

Graph of Power against Load Resistance

From the above table and graph we can see that the Maximum Power Transfer occurs in the load when the load resistance, RL is equal in value to the source resistance, RS, so then: RS = RL = 25Ω. This is called a "matched condition", and as a general rule, maximum power is transferred from an active device such as a power supply or battery to an external device when the impedance of the external device matches that of the source.
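The Ohm's-law arithmetic behind the table is easy to check; here is a short Python sweep using the Example No1 values:

```python
# Load-resistance sweep for Example No1: VS = 100 V, RS = 25 ohms.
VS, RS = 100.0, 25.0

print(f"{'RL':>5} {'I':>6} {'P':>7}")
for RL in [0, 5, 10, 15, 20, 25, 30, 40, 60, 100]:
    I = VS / (RS + RL)   # series loop current
    P = I**2 * RL        # power dissipated in the load
    print(f"{RL:>5} {I:>6.2f} {P:>7.1f}")

# P(RL) = VS^2 * RL / (RS + RL)^2 peaks at RL = RS (set dP/dRL = 0);
# note P = 0 at RL = 0 (a short circuit) even though the current is maximal.
```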
Improper impedance matching can lead to excessive power use and dissipation.

Transformer Impedance Matching

One very useful application of impedance matching to provide maximum power transfer is in the output stages of amplifier circuits, where the speaker's impedance is matched to the amplifier output impedance to obtain maximum sound power output. This is achieved by using a matching transformer to couple the load to the amplifier's output as shown below.

Transformer Coupling

The maximum power transfer can be obtained even if the output impedance is not the same as the load impedance. This can be done using a suitable "turns ratio" on the transformer, chosen so that the ratio of output impedance, ZOUT to load impedance, ZLOAD matches the square of the ratio of the transformer's primary turns to secondary turns, since a resistance on one side of the transformer appears as a different value on the other. If the load impedance, ZLOAD is purely resistive and the source impedance, ZOUT is purely resistive, then the matching condition is given as:

ZOUT / ZLOAD = ( NP / NS )²

Where: NP is the number of primary turns and NS the number of secondary turns on the transformer. Then by varying the value of the transformer's turns ratio the output impedance can be "matched" to the source impedance to achieve maximum power transfer. For example,

Example No2.

If an 8Ω loudspeaker is to be connected to an amplifier with an output impedance of 1000Ω, calculate the turns ratio of the matching transformer required to provide maximum power transfer of the audio signal. Assume the amplifier source impedance is Z1, the load impedance is Z2, with the turns ratio given as N.

Generally, small transformers used in low power audio amplifiers are usually regarded as ideal, so any losses can be ignored.
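The worked answer to Example No2 appears to have been lost in extraction (it was most likely an image), but with the relation just given it is a one-liner:

```latex
N = \frac{N_P}{N_S} = \sqrt{\frac{Z_1}{Z_2}} = \sqrt{\frac{1000\,\Omega}{8\,\Omega}}
  = \sqrt{125} \approx 11.2
```

So a step-down transformer with a turns ratio of roughly 11.2:1 matches the 1000Ω amplifier output to the 8Ω loudspeaker.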
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8085994124412537, "perplexity": 598.3713749202429}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246633512.41/warc/CC-MAIN-20150417045713-00016-ip-10-235-10-82.ec2.internal.warc.gz"}
https://math.libretexts.org/TextMaps/Calculus/Supplemental_Modules_(Calculus)/Integral_Calculus/1%3A_Area_and_Volume/1.1%3A_Area_Between_Two_Curves
# 1.1: Area Between Two Curves

Recall that the area under a curve and above the x-axis can be computed by the definite integral. If we have two curves $$y = f(x)$$ and $$y=g(x)$$ such that $f(x) > g(x)$, then the area between them, bounded by the vertical lines $$x = a$$ and $$x = b$$, is $\text{Area}=\int_{a}^{b} \left [ f(x) - g(x) \right ] \;dx.$ To remember this formula we write $\text{Area}=\int_{a}^{b}\text{(Top-Bottom)}\;dx$

Example 1

Find the area between the curves $$y=x^2$$ and $$y=x^3$$.

Solution

First we note that the curves intersect at the points $$(0,0)$$ and $$(1,1)$$. Then we see that $x^3 < x^2$ on this interval. Hence the area is given by \begin{align} \int_{0}^{1} \left( x^2 - x^3 \right) dx &= \left[ \frac{1}{3}x^3 - \frac{1}{4}x^4 \right]_0^1 \\ &= \dfrac{1}{3} - \dfrac{1}{4} \\ &= \dfrac{1}{12}. \end{align}

### Area Bounded by Two Functions of $$y$$

Example 2

Find the area between the curves $$x = 1 - y^2$$ and $$x = y^2-1$$.

Solution

Here the curves bound the region from the left and the right. We use the formula $\text{Area}=\int_{c}^{d}\text{(Right-Left)}\;dy.$ For our example, the curves intersect at $$y=-1$$ and $$y=1$$: \begin{align} \int_{-1}^{1}\big[ (1-y^2)-(y^2-1) \big] dy &= \int_{-1}^{1}(2-2y^2)\, dy \\ &= \left[2y-\dfrac{2}{3}y^3\right]_{-1}^1 \\ &=\left(2-\dfrac{2}{3}\right)-\left(-2+\dfrac{2}{3} \right) \\ &= \dfrac{8}{3}. \end{align}

Example 3

Find the area between the curves $$y =0$$ and $$y = 3 \left( x^3-x \right)$$.

Solution

When we graph the region, we see that the curves cross each other at $$x=-1$$, $$x=0$$, and $$x=1$$, so that the top and bottom switch. Hence we split the integral into two integrals: \begin{align} \int_{-1}^{0}\big[ 3(x^3-x)-0\big] dx +\int_{0}^{1}\big[0-3(x^3-x) \big] dx &= \left[\dfrac{3}{4}x^4-\dfrac{3}{2}x^2\right]_{-1}^0 - \left[\dfrac{3}{4}x^4-\dfrac{3}{2}x^2\right]_0^1 \\ &=\left(-\dfrac{3}{4}+\dfrac{3}{2}\right) - \left(\dfrac{3}{4}-\dfrac{3}{2}\right) \\ &=\dfrac{3}{2}. \end{align}

### Application

Let $$y = f(x)$$ be the demand function for a product and $$y = g(x)$$ be the supply function. Then we define the equilibrium point to be the intersection of the two curves. The consumer surplus is defined by the area above the equilibrium value and below the demand curve, while the producer surplus is defined by the area below the equilibrium value and above the supply curve.

Example 4

Find the producer surplus for the demand curve $f(x) = 1000 - 0.4x^2$ and the supply curve $g(x) = 42x.$

Solution

We first find the equilibrium point: We set $1000 - 0.4x^2 = 42x$ or $0.4x^2 + 42x - 1000 = 0.$ We get $x=20$, hence $y=42(20)=840.$ We integrate $\int_{0}^{20} \left ( 840 - 42x \right ) dx = \left[ 840x-21x^2 \right]_0^{20} = 8400.$

Exercises

1. Find the area between the curves $$y = x^2$$ and $$y =\sqrt{x}$$.
2. Find the area between the curves $$y = x^2 - 4$$ and $$y = -2x$$.
3. Find the area between the curves $$y = 2/x$$ and $$y = -x + 3$$.
4. Find the area between the curves $$y = x\,3^x$$ and $$y = 2x +1$$.

Larry Green (Lake Tahoe Community College)

• Integrated by Justin Marshall.
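A quick symbolic check of Example 1 (my addition, not part of the original page), using the sympy library:

```python
# Verify Example 1 with symbolic integration.
import sympy as sp

x = sp.symbols('x')
top, bottom = x**2, x**3          # on [0, 1], x**2 >= x**3

# Intersection points of the two curves: x**2 = x**3 -> x = 0, 1
pts = sp.solve(sp.Eq(top, bottom), x)

area = sp.integrate(top - bottom, (x, pts[0], pts[-1]))
print(area)                        # prints 1/12
```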
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 3, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000048875808716, "perplexity": 784.6568575580234}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512395.23/warc/CC-MAIN-20181019103957-20181019125457-00242.warc.gz"}
https://www.physicsforums.com/threads/work-done-moving-two-charges-together.573074/
# Homework Help: Work done moving two charges together

1. Feb 1, 2012

### phosgene

1. The problem statement, all variables and given/known data

Two point charges of magnitude +10 μC each are placed 0.2 m away from each other. a) How much work is done in placing the second charge? b) Is there any point at which the electric field and electric potential are both 0?

2. Relevant equations

Work = (charge of second point charge)(ΔV due to the first charge)

E=kQ/(d^2), where E = electric field, k = proportionality constant, Q = charge of point charge, d = distance from point charge

V=kQ/r, where V = voltage, Q = charge and r = distance from charge

V=Ed, where V = voltage, E = electric field and d = distance from point charge

3. The attempt at a solution

a) I assumed that the point charge was being moved from a spot far enough away that it could be approximated as being moved there from an effectively infinite distance. Then the work done can be determined as follows:

Work = (10 μC)(kQ/r2 - kQ/r1), where r2 = 0.2 m and r1 = infinity

This simplifies to Work = (10 μC)(k·10 μC)/0.2 m

b) I said that at a point very far away the electric field and electric potential will be 0. This is because E=kQ/(r^2) at a very far away point will effectively be E=kQ/(infinity), which is approximately 0. As the potential difference is equal to the electric field multiplied by the distance, it too will effectively be 0 at that same far away point. Is my reasoning correct?? I don't really get voltage, potential difference and electrical potential energy.

EDIT: Sorry! I forgot to put a title! It should be 'Work done moving two charges together and possible point of zero electric field and electric potential'. But I can't edit a title in.

2. Feb 1, 2012

### tiny-tim

hi phosgene! ok (but you need it in joules) i'm not sure that we can talk about a point actually being at infinity … perhaps it would be easier to make use of the fact that both the potential (as a scalar) and the field (as a vector) are additive?

3. Feb 2, 2012

### phosgene

Thanks for the reply. I don't quite understand what you mean:(. But I think I have a better answer for b). The point exactly in-between the charges has an electric field of zero because the electric field vectors from both charges are exactly equal and opposite. The electric potential at this point is also zero because if a test charge is placed there, it will not move because the forces on it are equal.

4. Feb 2, 2012

### tiny-tim

hi phosgene! no … zero potential means that it is at the same potential as at infinity (ie that no net work would be done moving it there from infinity) compare this with a ball at the top of a mountain … it won't move, but its gravitational potential is a lot more than at the bottom of the mountain (ie a lot of work would be done moving it there) oh, and you can just add potentials​
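For a quick numerical check of part (a) (my own sketch, not part of the thread), the work to bring the second charge in from effectively infinite distance is W = kq₁q₂/r:

```python
# Work done placing the second +10 uC charge 0.2 m from the first.
k  = 8.99e9      # Coulomb constant, N m^2 / C^2
q1 = q2 = 10e-6  # charges, coulombs
r  = 0.2         # final separation, metres

W = k * q1 * q2 / r    # W = q2 * V(r), with V(r) = k*q1/r and V(inf) = 0
print(W)               # about 4.5 J
```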
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9101747870445251, "perplexity": 533.9065828652523}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039742322.51/warc/CC-MAIN-20181114232605-20181115014605-00462.warc.gz"}
http://wiki.planetmath.org/biquadraticextension
A biquadratic extension of a field $F$ is a Galois extension $K$ of $F$ such that $\operatorname{Gal}(K/F)$ is isomorphic to the Klein 4-group. It receives its name from the fact that any such $K$ is the compositum of two distinct quadratic extensions of $F$. The name can be somewhat misleading, however, since biquadratic extensions of $F$ have exactly three distinct subfields that are quadratic extensions of $F$. This is easily seen to be true by the fact that the Klein 4-group has exactly three distinct subgroups of order 2. Note that, if $\alpha,\beta\in F$, then $F(\sqrt{\alpha},\sqrt{\beta})$ is a biquadratic extension of $F$ if and only if none of $\alpha$, $\beta$, and $\alpha\beta$ are squares in $F$.
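A standard concrete example (added here for illustration; it is not part of the original entry):

```latex
% The prototypical biquadratic extension of Q: none of 2, 3, 6 is a
% square in Q, so
\[
  K=\mathbb{Q}(\sqrt{2},\sqrt{3}),\qquad
  \operatorname{Gal}(K/\mathbb{Q})\cong
  \mathbb{Z}/2\mathbb{Z}\times\mathbb{Z}/2\mathbb{Z},
\]
\[
  \text{with the three quadratic subfields }
  \mathbb{Q}(\sqrt{2}),\ \mathbb{Q}(\sqrt{3}),\ \mathbb{Q}(\sqrt{6}).
\]
```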
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 15, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9461607336997986, "perplexity": 42.27628093518317}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257647545.84/warc/CC-MAIN-20180320224824-20180321004824-00410.warc.gz"}
https://cms.math.ca/cmb/msc/45P05?fromjnl=cmb&jnl=CMB
Search results

Search: MSC category 45P05 ( Integral operators [See also 47B38, 47G10] )

Results 1 - 3 of 3

1. CMB 2014 (vol 58 pp. 128)
Marković, Marijan
A Sharp Constant for the Bergman Projection
For the Bergman projection operator $P$ we prove that \begin{equation*} \|P\colon L^1(B,d\lambda)\rightarrow B_1\| = \frac {(2n+1)!}{n!}. \end{equation*} Here $\lambda$ stands for the hyperbolic metric in the unit ball $B$ of $\mathbb{C}^n$, and $B_1$ denotes the Besov space with an adequate semi-norm. We also consider a generalization of this result. This generalizes some recent results due to Perälä.
Keywords: Bergman projections, Besov spaces
Categories: 45P05, 47B35

2. CMB 2013 (vol 57 pp. 794)
Fang, Zhong-Shan; Zhou, Ze-Hua
New Characterizations of the Weighted Composition Operators Between Bloch Type Spaces in the Polydisk
We give some new characterizations for compactness of weighted composition operators $uC_\varphi$ acting on Bloch-type spaces in terms of the power of the components of $\varphi,$ where $\varphi$ is a holomorphic self-map of the polydisk $\mathbb{D}^n,$ thus generalizing the results obtained by Hyvärinen and Lindström in 2012.
Keywords: weighted composition operator, compactness, Bloch type spaces, polydisk, several complex variables
Categories: 47B38, 47B33, 32A37, 45P05, 47G10

3. CMB 2008 (vol 51 pp. 618)
Valmorin, V.
Vanishing Theorems in Colombeau Algebras of Generalized Functions
Using a canonical linear embedding of the algebra ${\mathcal G}^{\infty}(\Omega)$ of Colombeau generalized functions in the space of $\overline{\C}$-valued $\C$-linear maps on the space ${\mathcal D}(\Omega)$ of smooth functions with compact support, we give vanishing conditions for functions and linear integral operators of class ${\mathcal G}^\infty$. These results are then applied to the zeros of holomorphic generalized functions in dimension greater than one.
Keywords: Colombeau generalized functions, linear integral operators, generalized holomorphic functions
Categories: 32A60, 45P05, 46F30
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9806206226348877, "perplexity": 1327.0088980816247}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607849.21/warc/CC-MAIN-20170524173007-20170524193007-00579.warc.gz"}
http://math.stackexchange.com/questions/150/are-there-any-functions-that-are-always-continuous-yet-not-differentiable-or/151
# Are there any functions that are (always) continuous yet not differentiable? Or vice-versa? It seems like functions that are continuous always seem to be differentiable, to me. I can't imagine one that is not. Are there any examples of functions that are continuous, yet not differentiable? The other way around seems a bit simpler -- a differentiable function is obviously always going to be continuous. But are there any that do not satisfy this? - +1: I think this is a good on-topic question if this site is to be useful to undergraduate math majors. The Weierstrass function has been very nicely identified in the answers below, and it is an important counter-example that comes up immediately in advanced calculus. –  Tom Stephens Jul 21 '10 at 3:50 It's easy to find a function which is continuous but not differentiable at a single point, e.g. f(x) = |x| is continuous but not differentiable at 0. Moreover, there are functions which are continuous but nowhere differentiable, such as the Weierstrass function. On the other hand, continuity follows from differentiability, so there are no differentiable functions which aren't also continuous. If a function is differentiable at $x$, then the limit $(f(x+h)-f(x))/h$ must exist (and be finite) as $h$ tends to 0, which means $f(x+h)$ must tend to $f(x)$ as $h$ tends to 0, which means $f$ is continuous at $x$. - In your last sentence, beginning "If a function is continuous at...", I think you mean "If a function is differentiable at..." –  Isaac Jul 20 '10 at 22:18 @Isaac: oops! You're right, of course. Corrected. –  Simon Nickerson Jul 20 '10 at 22:22 +1 for the Weierstrass function, I knew what it was but not what it was called. –  Jason S Jul 20 '10 at 23:01 Moreover, the set of continuous functions which are nowhere differentiable is residual, so one can prove their existence without actually constructing an example. –  Akhil Mathew Jul 20 '10 at 23:06 @Akhil Mathew: on the other hand, from the proof, via Baire's theorem, that the set of functions with one differentiability point is meager we can construct an example of a nowhere differentiable function (e.g. a series of see-saw functions with appropriate slopes running off to infinity). –  G. Rodrigues Feb 23 '11 at 14:45 Actually, in some sense, almost all of the continuous functions are nowhere differentiable: http://en.wikipedia.org/wiki/Weierstrass_function#Density_of_nowhere-differentiable_functions - A natural class of examples would be paths of Brownian motion. These are continuous but non-differentiable everywhere. You may also be interested in fractal curves such as the Takagi function, which is also continuous but nowhere differentiable. (I think Wikipedia calls it the "Blancmange curve".) I like this one better than the Weierstrass function, but this is personal preference. Brownian Motion Takagi function - I don't mean to nitpick but I think we should be specific about the norm/topology we are using when we state that these functions are a dense subset. When we work in [a,b], I think we use the sup norm. Sorry, all the info about norm/topology, etc. is included in the above link. -
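To get a feel for the Weierstrass function mentioned above, here is a small sketch (mine, not from the thread) that evaluates a truncated version of the classical series $W(x)=\sum_{n\ge 0} a^n\cos(b^n\pi x)$, with $0<a<1$ and $ab>1+3\pi/2$:

```python
# Partial sums of the Weierstrass function with a = 0.5, b = 13
# (so 0 < a < 1 and a*b = 6.5 > 1 + 3*pi/2 ~ 5.71).
import math

def weierstrass(x, a=0.5, b=13, terms=12):
    # Truncated series; the a**n factor makes later terms negligible.
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# Sampling nearby points shows the wild local oscillation that makes
# difference quotients fail to converge anywhere.
for x in (0.30, 0.3001, 0.3002):
    print(x, weierstrass(x))
```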
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9023447632789612, "perplexity": 362.60176961230195}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416931008215.58/warc/CC-MAIN-20141125155648-00198-ip-10-235-23-156.ec2.internal.warc.gz"}
http://mathoverflow.net/questions/128217/flatness-and-tensor-product-of-rings
# Flatness and tensor product of rings

Let $R_1$ and $R_2$ be two subrings of the ring $R$ which commute in $R$, so that we have a ring homomorphism $R_1\otimes_\mathbb{Z} R_2\rightarrow R$. Assume that $R$ is flat over $R_1$ and $R_2$. Is $R$ then also flat over $R_1\otimes_\mathbb{Z} R_2$? Is there an easy counterexample?

- Take $R_1 = R_2 = R = {\mathbb Z}[x]$. Then $R_1\otimes_{\mathbb Z} R_2 = {\mathbb Z}[x_1,x_2]$ and $R = {\mathbb Z}[x]$ is not flat over it. Use the free resolution $$0 \to {\mathbb Z}[x_1,x_2] \xrightarrow{x_1-x_2} {\mathbb Z}[x_1,x_2] \to {\mathbb Z}[x] \to 0$$ to compute $Tor_1^{{\mathbb Z}[x_1,x_2]}({\mathbb Z}[x],{\mathbb Z}[x]) = {\mathbb Z}[x] \ne 0$. –  Sasha Apr 21 '13 at 10:41
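To spell out the Tor computation in the comment (my addition): tensoring the resolution with ${\mathbb Z}[x]$ over ${\mathbb Z}[x_1,x_2]$ sends $x_1-x_2$ to $x-x=0$, since both $x_1$ and $x_2$ act as $x$ on ${\mathbb Z}[x]$:

```latex
% Apply  - \otimes_{\mathbb{Z}[x_1,x_2]} \mathbb{Z}[x]  to
%   0 -> Z[x_1,x_2] --(x_1 - x_2)--> Z[x_1,x_2] -> Z[x] -> 0.
% The induced map is multiplication by x - x = 0, so
\[
  \operatorname{Tor}_1^{{\mathbb Z}[x_1,x_2]}\!\big({\mathbb Z}[x],{\mathbb Z}[x]\big)
  = \ker\!\big({\mathbb Z}[x] \xrightarrow{\;0\;} {\mathbb Z}[x]\big)
  = {\mathbb Z}[x] \neq 0,
\]
% and a flat module would have vanishing Tor_1 against every module.
```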
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9603864550590515, "perplexity": 132.96031157704238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500823528.84/warc/CC-MAIN-20140820021343-00247-ip-10-180-136-8.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/78078/confusion-about-indicators
1. Let $X$ be the number of spades in $7$ cards dealt from a well-shuffled deck of $52$ cards containing $13$ spades. Find $E(X)$.
2. Let $X$ be the number of aces in a $5$-card poker hand. Find $E(X)$.

From the solution manual, I got $$7\times \frac{1}{13} = \frac{7}{13}$$ for the first problem. But for the second, it is calculated as the following: $$5 \times \frac{4}{52} = \frac{5}{13} .$$ I am confused about how to use indicators in problems like these.

- Let $X_i$ be $1$ if the "color" of the $i^{th}$ card is spades and $0$ if not. (In the sense that "spades" is a color.) Then you can compute the expectation of $X$ by computing $$E(X) = E(X_1 + X_2 + \dots + X_7) = E(X_1) + E(X_2) + \dots + E(X_7).$$ Now, since the 7 cards are dealt at the same time, each of the expectations up there is easily computed, since it doesn't matter that the $i^{th}$ card is picked amongst $7$ cards; it only matters to know what the probability is that this particular card is spades or not. Since you can see that this chance is $1/4$, the expectation you are looking for is $7/4$. ($7/13$ is just WRONG; you cannot expect to have less than one card being spades when $1/4$ of the deck is...) Another way to see this is that 7 is the number of cards in your hand, but it is also the number of hearts + the number of spades + ... , etc., and those have equal probability to show up, so it makes sense that 1/4 of those 7 cards pop up as hearts, 1/4 as spades, etc., which again justifies my $7 \times 1/4$ answer over your manual's $1/13$. In the same manner, in the second problem you can compute $$E(X) = E(X_1 + X_2 +... + X_5) = E(X_1) + E(X_2) + ... + E(X_5)$$ by letting $X_i$ be $1$ if the $i^{th}$ card is an ace and $0$ if not. Since there are $4$ cards out of $52$ that are aces, the probability that $X_i$ is $1$ is $4/52$, which allows you to compute $E(X) = 5 \times 4/52$. Hope that helps, - that makes a lot of sense. Then the solution I got for the first part is apparently wrong. –  geraldgreen Nov 2 '11 at 5:27 Indeed. Simple intuition says that if you pick 7 cards then one fourth of the cards will look like one fourth of the cards in the deck. –  Patrick Da Silva Nov 2 '11 at 18:06
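A Monte Carlo check of both expectations (my addition, not from the thread):

```python
# Cards 0..51; call 0..12 "spades" and 0..3 "aces".
import random

def avg_count(hand_size, good, trials=100_000):
    total = 0
    for _ in range(trials):
        hand = random.sample(range(52), hand_size)   # deal without replacement
        total += sum(1 for c in hand if c in good)
    return total / trials

print(avg_count(7, set(range(13))))  # spades in 7 cards: ~1.75  = 7/4
print(avg_count(5, set(range(4))))   # aces in 5 cards:  ~0.385 = 5/13
```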
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9397008419036865, "perplexity": 92.6329939952002}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737904854.54/warc/CC-MAIN-20151001221824-00238-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.intechopen.com/books/applications-of-optical-fibers-for-sensing/whispering-gallery-modes-for-accurate-characterization-of-optical-fibers-parameters
Open access peer-reviewed chapter

# Whispering Gallery Modes for Accurate Characterization of Optical Fibers' Parameters

By Martina Delgado-Pinar, Xavier Roselló-Mechó, Emmanuel Rivera-Pérez, Antonio Díez, José Luis Cruz and Miguel V. Andrés

Submitted: May 28th 2018. Reviewed: September 1st 2018. Published: November 5th 2018

DOI: 10.5772/intechopen.81259

## Abstract

Whispering gallery modes (WGMs) are surface modes that propagate azimuthally around resonators with rotational symmetry (toroidal, spherical, or, as in our case, cylindrically shaped, since the optical fiber itself plays the role of the microresonator). These modes are resonant in optical wavelength, and the spectral position of the resonances depends on the radius and the refractive index of the microresonator material. Due to the high quality factor of the resonances (as high as 10⁷ in cylindrical microresonators), they allow measuring different parameters with high sensitivities and very low detection limits. Here, we report the use of WGMs to characterize the properties of the material that forms the microresonator. In particular, we highlight the use of this technique to measure temperature profiles along conventional and special fibers (such as photosensitive or doped fibers), elasto-optic coefficients, and UV-induced absorption loss coefficients of different photosensitive fibers. These parameters of the fibers set the optical response of fiber-based components and may change when the device is in use in an optical system; thus, this technique allows an accurate characterization of the devices and leads to proper designs of components with specific optical responses.

### Keywords

• whispering gallery modes
• surface modes
• microresonators
• optical fibers
• fiber Bragg gratings
• elasto-optic effect
• thermo-optic effect

## 1. Introduction

Whispering gallery modes are surface modes that propagate azimuthally around resonators with rotational symmetry, generally made of a dielectric. This phenomenon was first described by Lord Rayleigh in the nineteenth century, when studying the propagation of acoustic waves at curved interfaces [1]. St. Paul's Cathedral (London, UK), the Temple of Heaven (Beijing, China), the Pantheon (Rome, Italy), the Tomb of Agamemnon (Mycenae, Greece), and the Whispering Gallery in the Alhambra (Granada, Spain) are examples of architectural structures that support acoustic modes which propagate guided by the surface of the walls. It was at the beginning of the twentieth century that the study of this guiding mechanism was extended to electromagnetic waves, as Mie developed his theory of plane electromagnetic waves scattered by spheres with diameters of the same order as the optical wavelength [2]. Shortly after, Debye established the equations for the optical resonances of dielectric and metallic spheres based on Mie's scattering theory [3]. The detailed study of the mathematical equations of WGMs was performed by Richtmyer [4] and Stratton [5], who predicted high quality factors Q for these resonances, which led to their implementation in different technologies based on microwave and acoustic waves. In the microscopic world, light can be guided by the same mechanism, when the resonator has dimensions of tens to hundreds of microns and the wavelength of the light is in the visible-infrared range. In 1989, Braginsky et al. set the beginning of optical WGMs by reporting the technique to excite optical modes in microresonators with spherical shape [6].
Since then, many researchers have studied the propagation of WGMs in structures with different symmetries [7] and have reported efficient methods based on microtapers to excite these modes in the optical range [8]. Due to their intrinsic low losses, WGMs show very high Q factors. For example, they can achieve values of 10¹⁰ in spheres [9], 10⁸ in silicon microtoroids [10], or 10⁶–10⁷ in cylindrical microresonators [11]. At resonance, the light guided by a WGM recirculates in the microresonator many times, which provides a mechanism for decreasing the detection limit of sensors based on them. This enhanced detection limit has been demonstrated to be low enough to measure a single molecule on the surface of a microtoroid [12]. WGM resonances shift in wavelength as the refractive index of the external medium changes. The sensitivity of WGMs to these variations is significant: for a silica cylindrical microresonator of 125 μm in diameter, immersed in water (n = 1.33), the calculated shift in wavelength of the resonance is 77 nm/RIU. For a typical resonance width of 0.5 pm, this leads to a detection limit of 6 × 10⁻⁶ RIU. It is worth noting that the light guided by WGMs is mainly confined in the microresonator. Thus, their sensitivity to variations of the material refractive index is even higher. For example, it can reach values as high as 1.1 μm/RIU for variations of the refractive index of the silica. In this example, the detection limit of the WGM decreases down to 4 × 10⁻⁷ RIU. In this chapter, we report the use of WGMs in silica cylindrical microresonators (an optical fiber) to measure and characterize the properties of the microresonator itself. There are a number of parameters, such as temperature or strain, which modify the refractive index of the material. Thus, this technique allows measuring with accuracy variations of temperature along conventional and special fibers (such as photosensitive or doped fibers) and optical devices such as fiber Bragg gratings (FBGs), the elasto-optic coefficients of conventional silica fibers, and the absorption coefficient of photosensitive optical fibers, for example. We report here the fundamentals of the technique, as well as the experimental results we obtained in these experiments.

## 2. Fundamentals

The guiding mechanism of WGMs in the azimuthal direction of a microresonator (MR) is total internal reflection, just as in the case of axial propagation in a conventional waveguide; see Figure 1a. Resonance occurs when the guided wave travels along the perimeter of the MR and drives itself coherently by returning in phase after every revolution. On its way, the wave continuously follows the surface of the MR, and the optical path in one circumnavigation must be equal to an integer multiple of the optical wavelength, λ. When this condition is fulfilled, resonances appear, and a series of discrete modes at specific wavelengths shows up. The resonant condition can be written as [13]

λ_R = 2πa·n_eff/m    (1)

where λ_R is the resonant wavelength, a is the radius of the MR, n_eff is the effective index of the WGM, and m is the azimuthal order of the mode (i.e., the number of wavelengths in the perimeter of the MR). The effective indices of the different modes are calculated, as usual, by solving Maxwell's equations and applying the proper boundary conditions [5]. In our case, we will deal with cylindrical, dielectric MRs with translational symmetry in the axial direction (see Figure 1b).
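As a quick illustration of Eq. (1), the following sketch (my addition; the radius a = 62.5 μm is that of a standard fiber, and the effective index n_eff ≈ 1.38 is an assumed value in line with the figures given later in the chapter) estimates the azimuthal order whose resonance falls near 1550 nm:

```python
# Eq. (1): lambda_R = 2*pi*a*n_eff / m, solved for m near 1550 nm.
import math

a     = 62.5e-6   # microresonator radius, m (standard fiber)
n_eff = 1.38      # effective index of the WGM (assumed)

def lambda_R(m):
    """Resonant wavelength of azimuthal order m, from Eq. (1)."""
    return 2 * math.pi * a * n_eff / m

m = round(2 * math.pi * a * n_eff / 1550e-9)      # ~350
print(m, lambda_R(m) * 1e9, "nm")                 # resonance near 1548 nm
```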
Two zones can be identified, regions I (of radius a) and II (which extends to infinity), with refractive indices n₁ and n₂, respectively, with n₁ > n₂. The magnetic permeability of the material and of the external medium is equal to that of vacuum, μ₀, and both media are homogeneous although, in general, they present an anisotropy in the dielectric permittivity. In the axial direction, we consider a refractive index of the material, n₁z, which is different from the refractive index in the transversal directions, n₁t (see Eq. (2) for the refractive index tensor):

n = diag(n₁t, n₁t, n₁z)    (2)

We do not intend to give a full description of the solution of this problem, which can be found in [14], but we summarize the main equations and features of WGMs. If we solve Maxwell's equations with this uniaxial tensor, the modes split into two families that, analogously to the case of axial waveguides, are denoted as TE-WGMs, which show a transversal electric field (e_z = 0), and TM-WGMs, with transversal magnetic field (h_z = 0). Each family of modes is ruled by a transcendental equation that must be solved: Eq. (3) for TM modes and Eq. (4) for TE modes. The solutions consist of a series of discrete wavelengths, which correspond to the different radial orders l for each value of m. With these values, it is possible to calculate the effective indices of each WGM resonance using Eq. (1):

n₁z · J′_m(k₀n₁z a)/J_m(k₀n₁z a) = n₂ · H⁽²⁾′_m(k₀n₂ a)/H⁽²⁾_m(k₀n₂ a)    (3)

(1/n₁t) · J′_m(k₀n₁t a)/J_m(k₀n₁t a) = (1/n₂) · H⁽²⁾′_m(k₀n₂ a)/H⁽²⁾_m(k₀n₂ a)    (4)

In Eqs. (3) and (4), k₀ is the wavenumber in vacuum, k₀ = 2π/λ, J_m is the Bessel function of order m and J′_m is its first derivative, H⁽²⁾_m is the Hankel function of the second kind of order m, and H⁽²⁾′_m is its first derivative. We have considered that the external medium does not present any anisotropy (in our case, it will be air). By following this procedure, it is possible to calculate the dispersion curves of several WGMs propagating in a cylindrical, silica MR of 125 μm diameter (the dimensions of conventional optical fibers). Sellmeier dispersion of the silica was taken into account for the refractive index of the material. It is worth noting that the dispersion curves are not truly curves, but series of discrete solutions, each with a particular radial order l and azimuthal order m. For a standard optical fiber and 1550 nm optical wavelength, the azimuthal orders will be relatively high (m ≈ 300). Figure 2 shows the calculated resonant wavelengths for the first radial orders, as a function of the azimuthal order m, for the TM polarization. The curves for the TE polarization follow the same trend, but the values of the resonant wavelengths are slightly different. Using Eq. (1), it is possible to relate the resonant wavelength to the effective index of the WGM resonance. For the azimuthal order m = 360 and the first radial order, l = 1, the resonant wavelengths and effective indices of the two polarization families are λ_R(TM) = 1508.25 nm, n_eff(TM) = 1.3826 and λ_R(TE) = 1505.39 nm, n_eff(TE) = 1.3800. Thus, the resonances of the two polarizations do not overlap in wavelength. Regarding the distribution of the fields, Figure 3a shows the amplitude of the electric field of the first radial order TM-WGM, propagating in a cylindrical, silica MR of 10 μm diameter (the order m of the mode is 40; a low-order mode was considered in order to show the details of the field). As can be observed, the field is well confined within the MR material (although its evanescent field is high enough to enable the use of these modes for sensing).
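Equations (3) and (4) must be solved numerically. The sketch below is my own illustration, not the authors' code: it assumes a fixed silica index n₁ = 1.444, ignores the complex (quasi-normal) nature of the exact leaky resonances, and simply scans the wavelength for minima of the magnitude of the TE residual of Eq. (4), using SciPy's Bessel and Hankel routines:

```python
# Approximate TE-WGM resonances (Eq. (4)) for a silica rod in air.
import numpy as np
from scipy.special import jv, jvp, hankel2, h2vp

a, n1, n2, m = 62.5e-6, 1.444, 1.0, 350   # assumed parameters

def te_residual(lam):
    k0 = 2 * np.pi / lam
    lhs = jvp(m, k0 * n1 * a) / (n1 * jv(m, k0 * n1 * a))
    rhs = h2vp(m, k0 * n2 * a) / (n2 * hankel2(m, k0 * n2 * a))
    return abs(lhs - rhs)

lams = np.linspace(1500e-9, 1600e-9, 20001)
res = np.array([te_residual(l) for l in lams])

# Local minima of the residual approximate the resonant wavelengths.
idx = (res[1:-1] < res[:-2]) & (res[1:-1] < res[2:])
print(lams[1:-1][idx] * 1e9)   # candidate resonances, nm
```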
As the azimuthal order of the WGM gets higher, the field becomes more localized near the interface between the MR and the external medium. Also, it should be noted that, as the radial order of the WGM increases, the evanescent tail in the outer medium is larger; thus, the quality factor of the corresponding resonance will be poorer. Figure 3b shows the field amplitude along the radial coordinate of the MR. As can be observed, the optical power is localized in the outer region of the MR, near the interface, and shows a weak evanescent field in the outer medium, especially for the l = 1 mode.

## 3. Experimental setup

The general setup used in the experiments is shown in Figure 4a. The light source is a tunable diode, linearly polarized laser (TDL) with a narrow linewidth (<300 kHz). The tuning range covers from 1515 to 1545 nm. The laser integrates a piezoelectric-based fine frequency tuning facility that allows continuous scanning of the emitted signal around a given wavelength, with subpicometer resolution. A polarization controller (PC) after the laser allows rotating the polarization of the light and, as a consequence, exciting TE- and TM-WGMs separately. The optical signal is then launched through an optical circulator, which enables measuring the WGM resonances in reflection by means of a photodetector (PD). The MR consists of a section of the bare optical fiber under test (FUT). Depending on the experiment, it will be a conventional telecom fiber, a rare-earth-doped fiber, a photosensitive fiber, or a fiber in which a grating has been previously inscribed. It is carefully cleaned and mounted on a three-axis flexure stage. WGMs are excited around the FUT by using the evanescent optical field of an auxiliary microtaper with a waist of 1–2 μm in diameter and a few millimeters in length. This is not the only method that allows exciting WGMs in MRs: for example, one of the first techniques consisted of using a prism to excite the resonances in a spherical MR [15], but the efficiency was very poor. More recently, a fused-tapered fiber tip fabricated using a conventional fiber splicer was demonstrated to be capable of exciting WGMs in a cylindrical MR [16]. However, the highest efficiencies are achieved by using microtapers, with coupling efficiencies higher than 99% [8]. These microtapers are fabricated by the fuse-and-pull technique from conventional telecom fiber [17]. The microtaper and the MR are placed perpendicularly (see the inset of Figure 4a). Since the optical field of the WGMs is not axially localized (its extension is around 200 μm in length [7]), this setup allows exciting the WGM at different positions along the MR: by sweeping the microtaper along the MR, it is possible to detect variations of the parameters of the MR in the axial direction (radius [13, 18], temperature, or strain) by measuring the shift of the resonances. Variations can be characterized along several centimeters of the MR. The transmission of the taper was measured using a photodetector, and the signal was registered by an oscilloscope synchronized with the TDL. A typical transmission trace consists of a signal that presents a series of notches at the resonant wavelengths. For MRs of 125 μm in diameter, the free spectral range between two consecutive azimuthal orders m is ≈4 nm at 1550 nm, and it is the same for both polarizations. Figure 4b shows the reflection spectrum of a resonance in an optical fiber (a = 62.5 μm): its linewidth is 36 fm, which corresponds to a loaded Q factor of 4 × 10⁷.
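Two of the figures quoted in this section are easy to re-derive; a small sketch (mine; the 36 fm linewidth is the value given in the text, while the effective index is an assumed value):

```python
# Loaded Q from the measured linewidth, and FSR from the geometry.
import math

lam  = 1.55e-6    # resonance wavelength, m
dlam = 36e-15     # measured linewidth, 36 fm
print(f"Q ~ {lam / dlam:.1e}")                   # ~4e7, as quoted

# FSR between consecutive azimuthal orders: FSR ~ lam**2 / (2*pi*a*n)
a, n = 62.5e-6, 1.38                             # radius and index (assumed)
print(f"FSR ~ {lam**2 / (2 * math.pi * a * n) * 1e9:.1f} nm")   # ~4 nm
```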
As mentioned before, the position of the resonances depends on the value of the refractive index of the material. In the next sections, we will study the characterization of different fibers and fiber components by means of the measurement of the shift of WGM resonances as the effective index of the MR is modified.

## 4. Measurement of temperature profiles in doped fibers and fiber gratings

When a silica fiber is heated up, two effects occur. First, the expansion of the fiber leads to a change of the diameter. Second, the thermo-optic effect induces a change in the refractive index of the material due to the variation of temperature. This variation modifies the spectral position of the WGM. From Eq. (1), it is possible to evaluate the shift of the resonant wavelength, Δλ_R, of a WGM due to a variation of temperature, ΔT:

Δλ_R/λ_R = [(1/a)(∂a/∂T) + (1/n_eff)(∂n_eff/∂T)]·ΔT    (5)

In the case of optical fibers as MRs, it is a good approximation to assume that the thermo-optic coefficient (i.e., the second term in Eq. (5)) can be replaced by that of pure silica, since the optical field of the WGMs is mainly localized in the fiber cladding (see Figure 3). The high sensitivity of WGMs to variations of temperature has been demonstrated for different geometries of the MR, such as microspheres [19, 20] or cylinders [21]. Moreover, the propagation of an optical signal of moderate power (1 W or higher) in a fiber generally induces a variation of temperature of the material. Due to this variation of temperature, the optical response of fibers, or fiber components, may change when they are in operation. Thus, a detailed characterization of this effect is of interest to properly design fiber-based optical systems. The use of WGMs allows achieving a very low detection limit: Rivera et al. claimed a detection limit of two thousandths of a degree [21]. Here, we present the characterization of temperature variations in two different examples: (i) rare-earth-doped active fibers and (ii) fiber gratings inscribed in commercial photosensitive fibers.

### 4.1. Measurement of temperature in rare-earth doped fibers

Heating of rare-earth-doped fibers can be an issue in fiber-based lasers and amplifiers. For example, thermal effects can limit the maximum output power that these systems can provide [22]. Another example is the shift in wavelength observed in distributed Bragg reflector (DBR) and distributed feedback (DFB) lasers due to a pump-induced increment of temperature [23]. The heat is due to the non-radiative processes related to the electronic relaxation of some dopants: for example, this effect is less important in ytterbium-doped fibers, while Er/Yb-codoped and erbium-doped fibers exhibit a high increase of temperature with pump power, due to their specific electronic-level systems [24]. Thus, it is an intrinsic characteristic of the doped fibers that one needs to evaluate in order to design a proper optical system. In the experiments presented here, several commercially available single-mode, core-pumped doped fibers from Fibercore were investigated. Specifically, the FUTs were three Er-doped fibers (DF-1500-F-980, M12-980/125, and I25-980/125), a Yb-doped fiber (DF-1100), and an Er/Yb-codoped fiber (DF-1500 Y). The values of the absorption coefficients at the pump wavelength were 5.5 dB/m (DF-1500-F-980), 12 dB/m (M12-980/125), 21.9 dB/m (I25-980/125), 1700 dB/m (DF-1100), and 1000 dB/m (DF-1500 Y). Short sections, 2 cm in length, of each FUT were used as the MR where the WGMs were excited.
The FUTs were pumped with a single-mode, fiber-pigtailed laser diode that emitted a maximum power of 380 mW at 976 nm. As the pump launched into the FUT was increased, the WGM resonance shifted toward longer wavelengths in all cases, as expected, since the thermo-optic and thermal expansion coefficients of silica are both positive. As an example, Figure 5 shows the shift in wavelength of a resonance as a function of the pump launched into the fiber DF-1500-F-980. In our experiments, we did not investigate in detail the temporal response of the phenomenon, which is ruled by the mechanisms that convert the pump power to heat, the heat conduction in silica, and the transfer of heat to the air. Typically, it will be in the range of a few tens of microseconds [25].

At this point, several features of this technique must be clarified. First, it is worth pointing out that the shift in wavelength is virtually independent of the particular resonance used for the measurements; that is, it does not depend on its radial and azimuthal order nor on its polarization. The sensitivity to thermal variations of different WGM resonances was theoretically calculated around 1.53 μm, taking into account both the thermal expansion of the fiber and the thermo-optic effect. The results showed that the sensitivity of different resonances differs by less than 1/10000 per ºC of temperature increase. This simplifies the use of this technique. The second aspect to highlight is related to the fact that the dopants in active fibers are located in their core, while WGMs are highly confined in the outer region of the cladding (see Figure 3). From the study of heat conduction in doped fibers carried out by Davis et al. [25], it is possible to calculate that, at the steady state, the increase of temperature at the core of the fiber is just 1.5% larger than at the outer surface. In order to calibrate the shift in wavelength of the WGM resonances against the heating, a FBG inscribed in the core of a doped fiber was used for comparison. The procedure is described in [21]. The WGM resonances shift at a rate of 8.2 pm/ºC. With this calibration, it is possible to correlate the shifts in wavelength with the increase of temperature in the core of the fiber. For the example shown in Figure 5, the maximum increment of temperature, achieved for a pump of 370 mW, was 3.7 ºC.

Figure 6 summarizes the measurements performed for the different doped fibers. A similar trend can be observed in all cases: the resonances shift quickly in wavelength for low pump powers and, beyond a certain pump level, the heating tends to saturate. It can be observed that the Yb fiber DF 1100 shows an increase of temperature similar to those of the Er-doped fibers, although the concentration of dopants in the Yb fiber is much larger (note the absorption coefficients around 975 nm). Also, the highest temperature increment corresponds to the Er/Yb-codoped fiber (DF 1500 Y), despite the fact that it shows a lower absorption coefficient than its equivalent Yb-doped fiber (DF 1100). These results are in accordance with the fact that the heating is related to the existence of non-radiative transitions for the relaxation of electrons in the active medium.

### 4.2. Measurement of temperature profiles in fiber components

As mentioned before, WGMs are axially localized: their extension along the fiber is typically 200 μm for a MR of 62.5 μm radius. Thus, this technique provides spatial resolution.
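With the 8.2 pm/ºC calibration quoted above, converting measured resonance shifts into temperature increments is immediate. The sketch below is mine, and the profile data in it are hypothetical, purely for illustration of how an axial temperature profile would be built up from taper positions:

```python
# Convert WGM resonance shifts (pm) measured at successive taper
# positions into temperature increments, using the 8.2 pm/degC rate.
CAL = 8.2   # pm per degree Celsius (calibration from the text)

# Hypothetical measured data: (position along fiber in mm, shift in pm)
profile = [(0.0, 30.3), (1.0, 30.1), (2.0, 29.8), (3.0, 15.2), (4.0, 2.1)]

for z_mm, shift_pm in profile:
    print(f"z = {z_mm:3.1f} mm  ->  dT = {shift_pm / CAL:4.1f} degC")
```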
The taper can be swept along the MR in order to characterize the parameters of the FUT point by point. This feature was used to characterize the temperature profile along fiber components [26]. The FBGs used in the experiments were written in germanosilicate, boron-codoped photosensitive fibers from Fibercore, using a frequency-doubled argon laser emitting in the UV and a uniform phase mask. The length of all the gratings was 10 mm. The WGMs were excited at different positions along the FBG while, simultaneously, the FBG was illuminated by optical signals of moderate power, tuned within or outside the reflection band but in the vicinity of the Bragg wavelength. This illumination signal was provided by an amplified tunable laser (range, 1520–1560 nm) that provided up to 1 W of CW light.

As a preliminary experiment, a section of Fibercore PS980 fiber was uniformly irradiated (i.e., there was no grating inscribed). The length was 5 mm, and the UV fluence used in the irradiation was 150 J/mm². The wavelength shift of the resonances was measured as the MR was illuminated with a 1550 nm optical signal, relative to the original position of the resonances with no illumination of the FUT. Figure 7a shows the results. The data show a clear difference between the irradiated length (z < 3 mm) and the non-irradiated length (z > 3 mm). A temperature gradient can be observed in an intermediate region, due to the heat conduction in silica and the transfer of heat to the air. It should be noted that this transition section is far larger than the length of the focused UV beam (700 μm); thus, the beam size is not the cause of this transition length. In the irradiated section, the temperature increases at a rate higher than 10 ºC/W for this sample, while the pristine fiber heats up at a rate lower than 1 ºC/W. The increment of temperature was linear with power over the available power range. This experiment shows that the technique allows characterizing the variations of temperature along components with a resolution of tenths of a millimeter. This feature is useful when one needs to detect, evaluate, and correct smooth, undesired non-homogeneities that may occur during the fabrication of FBGs and long-period gratings (LPGs), which are usually short components. As an example, Figure 7b shows the measurement of the temperature profile of a section of an irradiated fiber (length, 5 mm) that suffered from some misalignment during the UV irradiation process. For this sample, a variation of 4 ºC is measured over such a short irradiated length.

The temperature profile along a FBG with strong reflectivity was measured using this technique. The FBG had a reflectivity higher than 99.9%; the Bragg wavelength was 1556 nm, its length was 12 mm, and it was fabricated in PS1250 fiber (Fibercore). First, the illumination signal was tuned well outside the reflection band, at 1540 nm; in this case, there is no reflection of the optical signal, which just propagates through the FBG. The power launched into the MR was 800 mW. Curve (i) in Figure 8 shows the obtained results. As expected, a result similar to the case shown in Figure 7a was obtained: the heating over the length of the FBG was fairly constant, 5.5 ºC. It should be noted that the axial resolution of the technique is much larger than the grating period. Thus, the average increment of temperature should be similar to that obtained in the case of the uniformly irradiated fiber, for the same UV fluence and fiber characteristics. Two transition zones were clearly observed at both ends of the grating.
Finally, the temperature profile was measured when the optical signal was tuned to the Bragg wavelength (power, 1 W); see curve (ii) in Figure 8. In this case, one should take into account that the UV irradiation is constant over the grating length, so the temperature gradient is due to the fact that the optical signal is reflected as it penetrates into the grating. A sharp increment of temperature at the beginning of the grating, at the illuminated end, can be observed. The maximum is located in the vicinity of the point where the FBG begins. The decay of temperature extends over a length of ≈5 mm, which is shorter than the grating itself (12 mm). This is consistent with the high reflectivity of this FBG. Moreover, it should be noted that, at the beginning of the curve, that is, z = 0–3 mm, the temperature increase is ≈2 ºC, that is, roughly twice the value obtained for a pristine fiber. On the contrary, in the section after the grating (and even in the last millimeters of the FBG), the increment of temperature is below the detection limit of the technique. The origin of this asymmetry is the reflection of the optical signal: the amount of light that reaches the last millimeters of the FBG is very small. This technique, then, provides information about the effective length of gratings of different reflectivities, information that can be relevant for the design of optical systems that require short cavities, or cavities with a very precise length, as in the case of mode-locked fiber lasers.

## 5. Measurement of absorption coefficients in photosensitive fibers

In the previous section, the gradient of temperature induced in fiber-optic components by an illumination signal was characterized and discussed. It was shown that there is a difference in temperature between the sections that have been irradiated with UV light and the pristine fiber. It is well known that UV irradiation induces a change in the index of photosensitive fibers, which is employed to fabricate FBGs and LPGs. According to the Kramers-Kronig relations, the change in the refractive index is associated with a variation of the absorption coefficient. In addition, the exposure of the fiber to the levels of UV light usually employed in grating fabrication induces mechanical deformations in the fiber [27]. This leads to an increase of the loss due to scattering. Thus, when a fiber is UV-irradiated, its loss, α, increases due to two causes: absorption, which will be quantified by α_abs, and scattering, α_scat. The increase of α introduced by UV irradiation has been measured before [28], since this is a parameter of interest to optimize the fabrication of FBGs, especially in the case of long or superimposed gratings with many reflection bands [29, 30]. That type of measurement provides an averaged value of the attenuation loss along the irradiated section, which includes both the absorption and the scattering contributions. The technique based on the measurement of the shift of WGM resonances measures only the absorption coefficient; thus, by combining the two types of measurements, it is possible to evaluate the two contributions separately. Different types of photosensitive fibers were studied [11]: (i) Fibercore PS980, (ii) Fibercore PS1250, (iii) Fibercore SM1500, and (iv) Corning SMF28; the latter was hydrogenated for 15 days (pressure: 30 bar) to increase its photosensitivity.
The setup used in the experiments was the same as in the previous experiments of this chapter. In this case, the FUTs were short sections of the different fibers, which were exposed to a UV fluence of 150 J/mm². Temperature profiles similar to that shown in Figure 7a were obtained for all of them, but with different temperature increments, since the photosensitivity was also different for each fiber. The different increases of temperature between the irradiated fiber and the pristine fiber provide the information needed to quantify the variation in α_abs due to UV irradiation. It will be assumed that the heating over the transversal section of the fiber, at a given axial position, is set by the absorption coefficient, α_abs. According to the analysis reported by Davis et al. [25], the heating at the steady state, ΔT, is given by

ΔT/P = α_abs/(2πah)    (6)

where h is the heat transfer coefficient (81.4 W·m⁻²·K⁻¹ for a silica fiber). Then, the ratio of ΔT between two different points along the FUT, 1 and 2, is given by

ΔT₂/ΔT₁ = α₂_abs/α₁_abs    (7)

Thus, with this analysis and the experimental data obtained from the measurement of the wavelength shift of WGM resonances at irradiated points and pristine points of the FUT, the ratio between the respective α_abs can be calculated.

Direct measurements of the transmission loss variation as the fibers were irradiated were carried out for a PS980 fiber. First, the loss of the pristine fiber was measured at 1550 nm by means of the cutback method: the obtained value was 120.0 ± 0.5 dB/km. Then, the UV laser was swept back and forth along a 5-cm-long section of the fiber, repeatedly. The full procedure is described in [11]. Figure 9a shows the data obtained in this experiment. The final loss was 6.2 ± 0.4 dB/m; thus, the ratio between the loss coefficients, α₂/α₁, increased 52 ± 3 times. Please note that this loss coefficient includes both the absorption and the scattering contributions (α = α_abs + α_scat). The contribution of the absorption mechanism to the loss was measured using the WGM technique (see Figure 9b). In this case, a 1550 nm laser (maximum power, 1 W) was launched into the FUT, and the thermal shift of the resonances was measured as the laser power was increased, at two different points, one within the irradiated section and one outside it. The data do not show any sign of saturation of the heating in this power range. The temperature of the irradiated section increased linearly, at a rate of 26.48 ± 0.15 ºC/W, and that of the pristine region at 0.718 ± 0.014 ºC/W. The ratio between these values, that is, the ratio α₂_abs/α₁_abs, is 36.9 ± 0.7. This process was repeated for all the fibers mentioned before (PS1250, SM1500, and hydrogenated SMF28) at 1550 nm. Table 1 includes the results from the measurements and the corresponding analysis: α₂/α₁ was obtained for each fiber from the direct measurement of the loss, while α₂_abs/α₁_abs was calculated with the technique based on WGMs.

| Fiber | α₂_abs/α₁_abs (WGM) | ΔT/P irradiated (ºC/W) | ΔT/P pristine (ºC/W) | α₂/α₁ (direct) | α irradiated (dB/km) | α pristine (dB/km) |
|---|---|---|---|---|---|---|
| PS980 | 36.9 ± 0.7 | 26.48 ± 0.15 | 0.718 ± 0.014 | 52 ± 3 | 6200 ± 400 | 120 ± 2 ³ |
| PS1250 | 40.1 ± 0.8 | 30.80 ± 0.17 | 0.768 ± 0.014 | 50 ± 3 | 6600 ± 400 | 131.1 ⁴ |
| SM1500 | >40 | 11.20 ± 0.03 | <0.03 ² | 190 ± 50 | 370 ± 90 | 1.95 ⁴ |
| H2-SMF28 | 28.8 ± 0.5 | 23.48 ± 0.13 | 0.815 ± 0.012 | n/a ¹ | 5600 ± 400 | n/a ¹ |

### Table 1.

Measurement of thermal heating and loss coefficients of the different fibers. ¹ Not available. ² Below detection limit. ³ Cutback measurement. ⁴ Nominal value.

The results compiled in Table 1 allow establishing several conclusions of interest; before listing them, a numerical cross-check of Eq. (6) is sketched below.
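This is my own sketch, with h and a as given in the text: the measured heating rate of the irradiated PS980 section converts, via Eq. (6), into an absolute absorption coefficient consistent with the value listed in Table 2 below.

```python
# Absolute absorption coefficient from Eq. (6):
#   dT/P = alpha_abs / (2*pi*a*h)  ->  alpha_abs = 2*pi*a*h*(dT/P)
import math

a = 62.5e-6          # fiber radius, m
h = 81.4             # heat transfer coefficient, W m^-2 K^-1

dT_per_P = 26.48     # measured heating rate, degC/W (irradiated PS980)
alpha_abs = 2 * math.pi * a * h * dT_per_P           # in 1/m

# Convert 1/m to dB/km: multiply by 10*log10(e) ~ 4.343, then by 1000.
print(f"{alpha_abs * 4.343 * 1e3:.0f} dB/km")        # ~3680 dB/km
```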
First, as expected, the absorption coefficient is substantially increased by the UV irradiation. As a consequence, even for signals of moderate power, FBGs may experience shifts and chirps that should be taken into account [31]. Second, the results show that α₂/α₁ is systematically higher than α₂_abs/α₁_abs. Roselló-Mechó et al. analyzed the measurements to show that these results imply that the scattering loss increases at a higher rate than the absorption loss [11]. Finally, Eq. (6) can be used to calculate the absolute values of the absorption and scattering coefficients, taking into account the values of h and a for a silica fiber [25] and the measurements of α. The resulting contributions are compiled in Table 2. Both contributions are of the same order of magnitude, but α_scat is smaller for three of the four FUTs. These values confirm that the scattering loss increases faster than the absorption loss.

| Fiber | α_abs irradiated (dB/km) | α_abs pristine (dB/km) | α_scat irradiated (dB/km) | α_scat pristine (dB/km) |
|---|---|---|---|---|
| PS980 | 3680 ± 20 | 99.7 ± 1.9 | 2500 ± 400 | 20 ± 3 |
| PS1250 | 4280 ± 20 | 106.6 ± 1.9 | 2300 ± 400 | 24.5 ± 1.9 |
| SM1500 | 167 ± 4 | <1.95 ¹ | 200 ± 90 | <1.95 ¹ |
| H2-SMF28 | 3260 ± 18 | 113.1 ± 1.7 | 2300 ± 400 | n/a ² |

### Table 2.

Absorption and scattering contributions to the overall attenuation coefficient. ¹ Nominal value. ² Not available, hydrogenated fiber.

Thus, by combining both techniques, it is possible to quantify the different contributions to the loss, even for short sections of fiber. This information might be useful, for example, in the design of novel active doped fibers, since it makes it possible to evaluate whether the doping technique increases the scattering loss unnecessarily, rather than the absorption.

## 6. Measurement of Pockels coefficients in optical fibers

The elasto-optic effect consists of the variation in the refractive index generated by a strain applied to the fiber. The corresponding elasto-optic coefficients are usually determined by measuring the optical activity induced by a mechanical twist and the phase change induced by longitudinal strain [32, 33]. That technique relies on the conventional axial modes propagating through the fiber. Since these modes are essentially transverse to the axis of the fiber [34], the anisotropy of the elasto-optic effect does not show up. On the contrary, WGMs have a significant longitudinal component; hence, their optical fields experience the anisotropy of the elasto-optic effect intrinsically. In recent years, researchers have demonstrated a number of fiber devices in which the longitudinal components of the electromagnetic modes are significant, such as microfibers [35] and microstructured optical fibers with a high air-filling fraction [36]. For these cases, the measurement and characterization of the anisotropy of the elasto-optic effect and its Pockels coefficients are of high interest. Roselló-Mechó et al. reported a technique based on the different wavelength shifts of TE- and TM-WGM resonances in a fiber under axial strain to measure these coefficients [37]. This technique has the additional advantage that, since it does not involve the conventional modes of the fiber, there is no need for the FUTs to be single-mode in order to carry out the measurements. Hence, the coefficients can be measured at different wavelengths to determine their dispersion; this limitation of the usual technique, based on optical activity, is overcome by means of the WGM technique [38]. According to Eq. (1), a variation in the refractive index will tune the WGM resonances in wavelength.
## 6. Measurement of Pockels coefficients in optical fibers

The elasto-optic effect consists of the variation in the refractive index generated by strain applied to the fiber. The corresponding elasto-optic coefficients are usually determined by measuring the optical activity induced by a mechanical twist and the phase change induced by longitudinal strain [32, 33]. This technique relies on the use of the conventional axial modes propagating through the fiber. Since these modes are essentially transverse to the axis of the fiber [34], the anisotropy of the elasto-optic effect does not show up. On the contrary, WGMs have a significant longitudinal component; hence, their optical fields intrinsically experience the anisotropy of the elasto-optic effect.

In recent years, researchers have demonstrated a number of fiber devices in which the longitudinal components of the electromagnetic modes are significant, such as microfibers [35] and microstructured optical fibers with a high air-filling fraction [36]. For these cases, the measurement and characterization of the anisotropy of the elasto-optic effect and its Pockels coefficients are of high interest. Roselló-Mechó et al. reported a technique to measure these coefficients based on the different wavelength shifts of TE- and TM-WGM resonances in a fiber under axial strain [37]. This technique has the additional advantage that, since it does not involve the conventional modes of the fiber, the FUTs do not need to be single mode in order to carry out the measurements. The coefficients can then be measured at different wavelengths to determine their dispersion; this is a limitation of the usual technique based on optical activity which is overcome by the WGM technique [38].

According to Eq. (1), a variation in the refractive index will tune the WGM resonances in wavelength. In this case, an axial strain will be applied to the FUT in order to induce this variation in the index through the elasto-optic effect. This feature was exploited in different works in order to tune the WGM resonances [39, 40]; however, no mention was made there of the different behaviors of TE- and TM-WGMs. An axial strain introduces a refractive index perturbation in an isotropic, cylindrical MR, due to the elasto-optic effect, which will be different for the axial (Δnz) and transversal (Δnt) directions:

$$\frac{\Delta n_t}{n_0} = -p_{et}\,\varepsilon; \qquad p_{et} \equiv \frac{n_0^2}{2}\left[p_{12} - \nu\left(p_{11} + p_{12}\right)\right], \tag{8}$$

$$\frac{\Delta n_z}{n_0} = -p_{ez}\,\varepsilon; \qquad p_{ez} \equiv \frac{n_0^2}{2}\left(p_{11} - 2\nu\, p_{12}\right), \tag{9}$$

where n0 is the unperturbed index of the MR, pij are the elasto-optic coefficients, ν is Poisson's ratio, and ε is the strain applied to the MR. The coefficients pet and pez are effective elasto-optic coefficients, defined for simplicity. According to the reported values of the elasto-optic coefficients for fused silica (p11 = 0.121, p12 = 0.27 [41], ν = 0.17 [42]), the ratio Δnt/Δnz ≈ 6.97; hence, it is expected that the strain introduces a significant anisotropy.

With this in mind, Maxwell's equations will be solved considering the uniaxial tensor given by Eq. (2). The solutions, as mentioned before, split into two families of WGMs, the TE and TM modes, whose resonant frequencies are obtained by solving Eqs. (3) and (4). The refractive index perturbation is not the only factor to take into account when evaluating the wavelength shift of WGM resonances due to strain: the radius a of the MR also varies with strain according to Poisson's ratio, Δa/a = −νε. With all these ideas in mind, the relative shift of the WGM resonances, ΔλR/λR, can be characterized as a function of the strain, for TE- and TM-WGMs.

Figure 10a shows an example of the anisotropic behavior of TE- and TM-WGMs. The strain applied to the MR was 330 με for both polarizations, and the measured wavelength shift was different for each of them: 0.18 nm for the TE-WGM and 0.11 nm for the TM-WGM. ΔλR/λR was measured in detail as a function of the strain at 1531 nm, for both polarizations; the results are shown in Figure 10b. A linear trend can be observed in both cases: the slopes of the linear regressions that fit the experimental values are sTE = 0.369 ± 0.006 με⁻¹ for the TE- and sTM = 0.201 ± 0.004 με⁻¹ for the TM-WGM. The ratio sTE/sTM = 1.84 shows the anisotropy of the elasto-optic effect.

From these values, it is possible to calculate the elasto-optic coefficients pij with their uncertainties (see [37] for a more detailed description of the procedure), by taking into account the Sellmeier coefficients for the value of the refractive index at 1531 nm and a Poisson's ratio of ν = 0.17 ± 0.01. The measurements were repeated at 1064 nm, to study the dispersion of the elasto-optic effect. Results at both wavelengths are compiled in Table 3 and compared with those reported in the literature. Both sets of measurements are in good agreement; the small differences might be due to the fact that the WGM-based technique measures the pij of the cladding material (i.e., fused silica), while the other techniques determine the coefficients of the fiber core material, which is usually silica doped with other elements.

|  | Present work, 1531 nm | Present work, 1064 nm | Literature |
| --- | --- | --- | --- |
| p11 | 0.116 | 0.131 | 0.113 @ 633 nm [32]; 0.121 @ 633 nm [41] |
| p12 | 0.255 | 0.267 | 0.252 @ 633 nm [32]; 0.270 @ 633 nm [41] |

### Table 3.

Comparison of experimental pij values with those reported in the literature.
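Going back to Eqs. (8) and (9), the anisotropy ratio quoted there follows directly from the tabulated silica coefficients; a one-line check (the common factor n0²/2 and the strain ε cancel in the ratio):

# Anisotropy of the strain-induced index perturbation, Eqs. (8)-(9).
# Fused-silica values quoted in the text: p11, p12 from [41], nu from [42].
p11, p12, nu = 0.121, 0.27, 0.17
p_et = p12 - nu * (p11 + p12)    # proportional to the transverse coefficient
p_ez = p11 - 2 * nu * p12        # proportional to the axial coefficient
print(round(p_et / p_ez, 2))     # -> 6.97, the ratio dn_t/dn_z quoted in the text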
## 7. Conclusions

In this chapter, we described a technique based on the excitation of WGMs around cylindrical MRs to measure properties of the MR material. The resonant nature of the WGMs confers this technique with high sensitivity and low detection limits. The technique also allows measuring these parameters with axial resolution; hence, it is possible to detect changes of the parameters point by point along the MR.

The technique has been applied to different experiments. Mainly, the thermo-optic and elasto-optic effects have been investigated in silica fibers. The variation in the index, due to a change in temperature or strain, rules the shift in wavelength of the WGM resonances. When the technique was applied to different types of fibers and components, different information was obtained from the experiments. In particular, we measured temperature profiles in pumped, rare-earth-doped fibers and in FBGs; the absorption coefficient in irradiated photosensitive fibers; and the Pockels coefficients in telecom fibers. Novel results were obtained: for example, it was possible to measure absorption and scattering loss coefficients separately, and the anisotropy of the elasto-optic effect was observed experimentally. The information provided by the WGM-based technique might help to optimize the fabrication procedures of doped fibers and fiber components such as FBGs or LPGs.

## Acknowledgments

This work was funded by Ministerio de Economía y Competitividad of Spain and FEDER funds (Ref: TEC2016-76664-C2-1-R), Generalitat Valenciana (Ref: PROMETEOII/2014/072), and Universitat de València (UV-INV-AE16-485280). X. Roselló-Mechó's contract is funded by the FPI program (MinECo, Spain, BES-2014-068607). E. Rivera-Pérez's contract is funded by the Postdoctoral Stays in Foreign Countries program (291121, CONACYT, Mexico).

### Cite this chapter

Martina Delgado-Pinar, Xavier Roselló-Mechó, Emmanuel Rivera-Pérez, Antonio Díez, José Luis Cruz and Miguel V. Andrés (November 5th 2018). Whispering Gallery Modes for Accurate Characterization of Optical Fibers' Parameters, Applications of Optical Fibers for Sensing, Christian Cuadrado-Laborde, IntechOpen, DOI: 10.5772/intechopen.81259.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8569785356521606, "perplexity": 1153.2723040176259}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347435987.85/warc/CC-MAIN-20200603175139-20200603205139-00095.warc.gz"}
http://tex.stackexchange.com/questions/13749/short-title-that-is-not-displayed-in-the-toc-hyperref
# Short title that is not displayed in the ToC {hyperref}

I am using TeXShop, and because there is nothing like a navigation pane (or at least I don't know how to create one) and my document comprises about 200-300 pages, I try to build my own navigation in the footer of my PDF file by using the package fancyhdr and \nameref. The problem is that my headlines are very long. I used the short title to cut them down, but now my ToC is cut down as well. I wonder if there is a possibility to use the short title just for \nameref while putting the full title in the ToC. If someone knows a solution it would be really great. Thanks in advance.

As you asked for the minimal code, here it is (I hope it is minimal; I am relatively new to these things):

\documentclass{book}
\usepackage{fancyhdr}
\pagestyle{fancy}
\lfoot{\nameref{S:A}, \nameref{S:B}, \nameref{S:C}}
\begin{document}
\tableofcontents
\newpage
\section[Intro]{Vorbemerkung}\label{S:A}
\newpage
\section[Paul]{Paul ist ein Mann von Format}\label{S:B}
\newpage
\section[Klaus]{Klaus}\label{S:C}
\end{document}

@Caramdir: My TeXShop displays the document in its own PDF viewer, which enables me to switch from the PDF to the tex file and back by clicking "apple + mouse"; this is one of my navigation tools. It works as follows: I go to the ToC of my PDF, follow a hyperlink to the section I want, and there I click "apple + mouse" in the document to get to exactly the same point in the tex source. Many steps, I know, but I see no other way. By placing hypertext in the footer I can improve this process, because I can easily jump back close to the ToC (exactly, to the first chapter). The "apple + mouse" option doesn't work when I open the document with other programs, and that's why your suggestion doesn't help in this case.

If you use hyperref and compile with pdflatex, you should get a toc in the side bar of your pdf viewer. – Caramdir Mar 17 '11 at 19:09
Which document class are you using? Could you post some minimal, complete and compilable code showing us your current settings? – Gonzalo Medina Mar 17 '11 at 19:15
Welcome to tex.sx! It's not necessary to sign your questions (as there is already a box with your username below it) or to begin them with a greeting. – Martin Scharrer Mar 17 '11 at 19:46
To add to @Caramdir's pointer, click the icon in the menu bar of the preview window that looks like two photos. The sidebar will open up with the navigation. – Matthew Leingang Mar 17 '11 at 19:47

If you want to keep the contents of the mandatory argument of the sectional units in the ToC while using the contents of the optional argument in the headers/footers, you need to redefine the commands \@part (which controls the information for parts), \@chapter (which controls the information for chapters) and \@sect (which controls the information for the other sectional units).
Here's an example of the redefinitions (the lines between \makeatletter and \makeatother); \usepackage{hyperref} is added to the preamble so that \nameref is available:

\documentclass{book}
\usepackage{fancyhdr}
\usepackage{hyperref}% provides \nameref
\pagestyle{fancy}
\lfoot{\nameref{S:A}, \nameref{S:B}, \nameref{S:C}}

\makeatletter
\def\@part[#1]#2{%
    \ifnum \c@secnumdepth >-2\relax
      \refstepcounter{part}%
      \addcontentsline{toc}{part}{\thepart\hspace{1em}#2}%
    \else
      \addcontentsline{toc}{part}{#2}%
    \fi
    \markboth{}{}%
    {\centering
     \interlinepenalty \@M
     \normalfont
     \ifnum \c@secnumdepth >-2\relax
       \huge\bfseries \partname~\thepart
       \par
       \vskip 20\p@
     \fi
     \Huge \bfseries #2\par}%
    \@endpart}
\def\@chapter[#1]#2{\ifnum \c@secnumdepth >\m@ne
                      \if@mainmatter
                        \refstepcounter{chapter}%
                        \typeout{\@chapapp\space\thechapter.}%
                        \addcontentsline{toc}{chapter}%
                                  {\protect\numberline{\thechapter}#2}%
                      \else
                        \addcontentsline{toc}{chapter}{#2}%
                      \fi
                    \else
                      \addcontentsline{toc}{chapter}{#2}%
                    \fi
                    \chaptermark{#1}%
                    \addtocontents{lof}{\protect\addvspace{10\p@}}%
                    \addtocontents{lot}{\protect\addvspace{10\p@}}%
                    \if@twocolumn
                      \@topnewpage[\@makechapterhead{#2}]%
                    \else
                      \@makechapterhead{#2}%
                      \@afterheading
                    \fi}
\def\@sect#1#2#3#4#5#6[#7]#8{%
  \ifnum #2>\c@secnumdepth
    \let\@svsec\@empty
  \else
    \refstepcounter{#1}%
    \protected@edef\@svsec{\@seccntformat{#1}\relax}%
  \fi
  \@tempskipa #5\relax
  \ifdim \@tempskipa>\z@
    \begingroup
      #6{%
        \@hangfrom{\hskip #3\relax\@svsec}%
          \interlinepenalty \@M #8\@@par}%
    \endgroup
    \csname #1mark\endcsname{#7}%
    \addcontentsline{toc}{#1}{%
      \ifnum #2>\c@secnumdepth \else
        \protect\numberline{\csname the#1\endcsname}%
      \fi
      #8}%
  \else
    \def\@svsechd{%
      #6{\hskip #3\relax
      \@svsec #8}%
      \csname #1mark\endcsname{#7}%
      \addcontentsline{toc}{#1}{%
        \ifnum #2>\c@secnumdepth \else
          \protect\numberline{\csname the#1\endcsname}%
        \fi
        #8}}%
  \fi
  \@xsect{#5}}
\makeatother

\begin{document}
\tableofcontents
\part[Short part title]{A part title not so short}
\chapter[Short title]{A really really really really really really really long title}
\section[Intro]{Vorbemerkung}\label{S:A}
\newpage
\section[Paul]{Paul ist ein Mann von Format}\label{S:B}
\newpage
\section[Klaus]{Klaus}\label{S:C}
\end{document}

Hi, this way is really great. It works perfectly well. But I have one more question: can you provide such a definition for \part{asdfadsf}\label{P:A} as well? That would really help. I thought there would be an easy solution which I could apply to other things (like parts) as well, but what you created transcends my abilities by far. And one more thing: this version doesn't show the page numbers. How do I change that? – Philip Mar 18 '11 at 0:18
@Philip: I added the redefinition for parts. I also suppressed \fancyhf{} so now the page numbers will be shown. – Gonzalo Medina Mar 18 '11 at 1:07
Hi, this way it satisfies all my needs. Thank you very much! – Philip Mar 18 '11 at 8:17
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9135499596595764, "perplexity": 2412.3110314432893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413558066290.7/warc/CC-MAIN-20141017150106-00233-ip-10-16-133-185.ec2.internal.warc.gz"}
http://www.mathisfunforum.com/post.php?tid=18279&qid=236438
bobbym 2012-10-22 01:25:47
It looks like it comes from the total derivative but I can not understand their notation so I can not derive it.

zetafunc. 2012-10-21 23:28:31
Hmm. I think maybe that comes from an illustration.

bobbym 2012-10-21 23:05:34
Hi; Over here he calls it a definition. http://www.ltcconline.net/greenl/course … iffere.htm

zetafunc. 2012-10-21 22:59:31
As part of the proof, they are saying that df = (∂f/∂x)dx + (∂f/∂y)dy is the total differential of the function f(x,y). Why?

bobbym 2012-10-21 22:44:37
Hi; Check here about half way down.

zetafunc. 2012-10-21 22:15:51
Thanks. I am particularly confused by the minus sign.

bobbym 2012-10-21 22:03:33
Hi; That I do not. I will see if I can find anything.

zetafunc. 2012-10-21 21:56:55
Yes, I agree. It was useful for solving the ODEs we were given in class when we had to differentiate annoying functions twice and sub initial conditions, etc... do you remember how it is derived?

bobbym 2012-10-21 21:55:38
Hi zetafunc.; I know about that one. It is useful for implicit functions.

zetafunc. 2012-10-21 21:11:01
I came across an interesting tool which would help for differentiating functions involving x and y: dy/dx = −(∂f/∂x)/(∂f/∂y), where f is a function of x and y and the deltas denote partial derivatives. But why is this the case?
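For completeness, the derivation zetafunc. is asking about is short; here is a sketch in LaTeX (my own summary of the standard implicit-differentiation argument, not text from the thread):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
Let $f(x,y)=c$ define $y$ implicitly as a function of $x$. Along the level
curve, the total differential vanishes:
\[
  \mathrm{d}f
    = \frac{\partial f}{\partial x}\,\mathrm{d}x
    + \frac{\partial f}{\partial y}\,\mathrm{d}y
    = 0 .
\]
Solving for $\mathrm{d}y/\mathrm{d}x$ (with $\partial f/\partial y \neq 0$) gives
\[
  \frac{\mathrm{d}y}{\mathrm{d}x}
    = -\,\frac{\partial f/\partial x}{\partial f/\partial y},
\]
which is where the minus sign comes from.
\end{document}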
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9131486415863037, "perplexity": 2370.2920573904853}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00025-ip-10-147-4-33.ec2.internal.warc.gz"}
https://waseda.pure.elsevier.com/en/publications/measurement-of-the-tt-production-cross-section-using-e%CE%BC-events-wi
# Measurement of the tt¯ production cross-section using eμ events with b-tagged jets in pp collisions at √s = 13 TeV with the ATLAS detector

The ATLAS Collaboration

Research output: Contribution to journal › Article › peer-review
96 Citations (Scopus)

## Abstract

This paper describes a measurement of the inclusive top quark pair production cross-section (σtt¯) with a data sample of 3.2 fb⁻¹ of proton–proton collisions at a centre-of-mass energy of √s = 13 TeV, collected in 2015 by the ATLAS detector at the LHC. This measurement uses events with an opposite-charge electron–muon pair in the final state. Jets containing b-quarks are tagged using an algorithm based on track impact parameters and reconstructed secondary vertices. The numbers of events with exactly one and exactly two b-tagged jets are counted and used to determine simultaneously σtt¯ and the efficiency to reconstruct and b-tag a jet from a top quark decay, thereby minimising the associated systematic uncertainties. The cross-section is measured to be σtt¯ = 818 ± 8 (stat) ± 27 (syst) ± 19 (lumi) ± 12 (beam) pb, where the four uncertainties arise from data statistics, experimental and theoretical systematic effects, the integrated luminosity and the LHC beam energy, giving a total relative uncertainty of 4.4%. The result is consistent with theoretical QCD calculations at next-to-next-to-leading order. A fiducial measurement corresponding to the experimental acceptance of the leptons is also presented.

Original language: English
Pages: 136-157 (22 pages)
Journal: Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, Volume 761
DOI: https://doi.org/10.1016/j.physletb.2016.08.019
Published: 2016 Oct 10
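As a quick arithmetic check, the quoted 4.4% total relative uncertainty is recovered by combining the four components in quadrature (assuming, as is standard, that they are independent; the abstract does not state this explicitly):

\[
  \frac{\sqrt{8^2 + 27^2 + 19^2 + 12^2}}{818}
  = \frac{\sqrt{1298}}{818}
  \approx \frac{36.0}{818}
  \approx 4.4\% .
\]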
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9905567169189453, "perplexity": 4391.399587971461}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00555.warc.gz"}
http://mathhelpforum.com/algebra/73957-simplification.html
# Math Help - Simplification

1. ## Simplification

This is the original formula: $p\left(\frac{p}{3w_1}\right)^{\frac{1}{2}}x_2^{\frac{1}{2}} - w_1\left(\frac{p}{3w_1}\right)^{\frac{3}{2}}x_2^{\frac{1}{2}} - w_2x_2$

This is how far I've gotten: $\frac{2p}{3}\left(\frac{p}{3w_1}\right)^{\frac{1}{2}}x_2^{\frac{1}{2}} - w_2x_2$

And this is the final answer I need to get to: $\left(\frac{4p^3}{27w_1}\right)^{\frac{1}{2}}x_2^{\frac{1}{2}} - w_2x_2$

Any help would be greatly appreciated, thank you!

2. All you need to do is get the outer $\frac{2p}{3}$ rewritten as $\sqrt{4p^2/9}$. Then it would have the same power as the other item raised to the 1/2 power, so they could combine.
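Making the hint in the reply explicit (a worked step of my own, not from the thread):

\[
  \frac{2p}{3}\left(\frac{p}{3w_1}\right)^{1/2}
  = \left(\frac{4p^2}{9}\right)^{1/2}\left(\frac{p}{3w_1}\right)^{1/2}
  = \left(\frac{4p^2}{9}\cdot\frac{p}{3w_1}\right)^{1/2}
  = \left(\frac{4p^3}{27w_1}\right)^{1/2},
\]

so the partially simplified expression becomes the target form once this factor is absorbed into the square root.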
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8926031589508057, "perplexity": 639.4142608587915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500824562.83/warc/CC-MAIN-20140820021344-00426-ip-10-180-136-8.ec2.internal.warc.gz"}
https://hal.inria.fr/hal-00708096
# Stochastic Gene Expression in Cells: A Point Process Approach

Abstract: This paper investigates the stochastic fluctuations of the number of copies of a given protein in a cell. This problem has already been addressed in the past, and closed-form expressions of the mean and variance have been obtained for a simplified stochastic model of gene expression. These results were obtained under the assumption that the durations of all the protein production steps are exponentially distributed. In such a case, a Markovian approach (via Fokker-Planck equations) is used to derive analytic formulas for the mean and the variance of the number of proteins at equilibrium. This assumption is, however, not totally satisfactory from a modeling point of view, since the distribution of the duration of some steps is more likely to be Gaussian, if not almost deterministic. In such a setting, Markovian methods can no longer be used. A finer characterization of the fluctuations of the number of proteins is therefore of primary interest to understand the general economy of the cell. In this paper, we propose a new approach, based on marked Poisson point processes, which allows the exponential assumption to be removed. This is applied in the framework of the classical three-stage model of the literature: transcription, translation and degradation. The interest of the method is shown by recovering the classical results under the assumption that all the durations are exponentially distributed, but also by deriving new analytic formulas when some of the distributions are no longer exponential. Our results show in particular that the exponential assumption may, surprisingly, significantly underestimate the variance of the number of proteins when some steps are in fact not exponentially distributed. This counter-intuitive result stresses the importance of the statistical assumptions in the protein production process.

Document type: Journal article

Vincent Fromion, Emanuele Leoncini, Philippe Robert. Stochastic Gene Expression in Cells: A Point Process Approach. SIAM Journal on Applied Mathematics, Society for Industrial and Applied Mathematics, 2013, 73 (1), pp. 195-211. DOI: 10.1137/120879592. hal-00708096
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8619123101234436, "perplexity": 499.51477640729723}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824119.26/warc/CC-MAIN-20181212203335-20181212224835-00439.warc.gz"}
https://www.maplesoft.com/support/help/Maple/view.aspx?path=Statistics/OrderByRank
OrderByRank - Maple Help

Statistics

OrderByRank - order data items according to their ranks

Calling Sequence
OrderByRank(X, R, options)

Parameters
X - data set
R - ranks
options - (optional) equation(s) of the form option=value where option is one of order or unique; specify options for the OrderByRank function

Description
• The OrderByRank command orders the elements/rows of X according to their ranks.
• The first parameter X is the data set - given as e.g. a Vector.
• The second parameter R is the ranks data (also specified as a data set).

Options
The options argument can contain one or more of the options shown below.
• order = ascending or descending -- Indicate whether the elements of X should be sorted in the ascending or descending order. The default value is order=ascending.
• unique=truefalse -- If this option is set to true, all multiple occurrences of elements from X will be removed. The default value is unique=false.

Notes
• The OrderByRank command creates a copy of the original Array.

Examples
> with(Statistics):
> A := Array([a, b, c, d, e, f, g, h])
  A := [a b c d e f g h]   (1)
> B := Array([5, 4, 3, 7, 6, 2, 8, 1])
  B := [5 4 3 7 6 2 8 1]   (2)
> OrderByRank(A, B)
  [h f c b a e d g]   (3)

Sort two arrays simultaneously.
> A := Array([5., 4., 3., 7., 6., 2., 8., 1.])
  A := [5. 4. 3. 7. 6. 2. 8. 1.]   (4)
> B := Array([a, b, c, d, e, f, g, h])
  B := [a b c d e f g h]   (5)
> C := Rank(A)
  C := [5 4 3 7 6 2 8 1]   (6)
> OrderByRank(A, C)
  [1. 2. 3. 4. 5. 6. 7. 8.]   (7)
> OrderByRank(B, C)
  [h f c b a e d g]   (8)
> X := Array([a, b, c, d])
  X := [a b c d]   (9)
> Y := Array([2, 3, 4, 1])
  Y := [2 3 4 1]   (10)
> OrderByRank(X, Y)
  [d a b c]   (11)
> Z := Array([2, 3, 4, 3])
  Z := [2 3 4 3]   (12)
> OrderByRank(X, Z)
> OrderByRank(X, Z, unique = true)
  [a d c]   (13)
> A := Array(1..6, 1..2, [[10, a], [5, b], [-2, c], [3, d], [7, e], [20, f]])
  A := [[10 a], [5 b], [-2 c], [3 d], [7 e], [20 f]]   (14)
> R := Rank(A[1..-1, 1])
  R := [5 3 1 2 4 6]   (15)
> OrderByRank(A, R)
  [[-2 c], [3 d], [5 b], [7 e], [10 a], [20 f]]   (16)
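For comparison, here is a rough Python/NumPy sketch of the same basic operation (the function name and the handling of descending order and duplicates are my own choices; Maple's exact behavior for tied ranks is not reproduced):

import numpy as np

def order_by_rank(x, r, order="ascending", unique=False):
    # Place the element whose rank is smallest first (ranks are 1-based, as in Maple).
    idx = np.argsort(r, kind="stable")
    if order == "descending":
        idx = idx[::-1]
    out = np.asarray(x, dtype=object)[idx]
    if unique:
        _, first = np.unique(out, return_index=True)
        out = out[np.sort(first)]  # drop repeated elements, keeping first occurrences
    return out

print(order_by_rank(list("abcdefgh"), [5, 4, 3, 7, 6, 2, 8, 1]))
# -> ['h' 'f' 'c' 'b' 'a' 'e' 'd' 'g'], matching example (3) above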
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 34, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9062020778656006, "perplexity": 2036.524015181068}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571950.76/warc/CC-MAIN-20220813111851-20220813141851-00285.warc.gz"}
https://www.physicsforums.com/threads/electrostatic-potential-for-two-concentric-cylindrical-shells.377994/
# Electrostatic Potential for Two Concentric Cylindrical Shells

## Homework Statement

Two very long hollow conducting cylindrical shells are situated along the x-axis. The shells are concentric and have negligible thickness. The inner shell has a radius a and a linear charge density +lambda, while the outer shell has a radius b and a linear charge density -lambda. Take the zero of electrostatic potential to be at r = 0. The coordinate r measures the distance from the common axis of the two cylinders in a region far from either end.
a) Determine the electrostatic potential V(r) for all values of r.
b) Sketch V(r) vs. r for all r.
c) Determine the potential difference Delta V between r = a and r = b.
d) If a positive charge +q is released from rest at r = a, what will be its kinetic energy when it reaches the outer cylinder at r = b?

## Homework Equations

V(r) = q/(4 pi E0 r)
Delta V = Vb - Va = q/(4 pi E0) (1/rb - 1/ra)
KE = (1/2) m v^2
U = k q1 q2/r
KEf = Ui

## The Attempt at a Solution

a) At r = a, V = q/(4 pi E0 a); at r = b, V = q/(4 pi E0 b).
b) As the radius increases the potential goes down. It starts at some positive y value and ends at some negative y value.
c) Delta V = Vb - Va = q/(4 pi E0) (1/b - 1/a)
d) U = k q1 q2/a; (1/2) m v^2 = k q1 q2/r, so v^2 = 2 k q1 q2/(m r) and v = (2 k q1 q2/(m r))^(1/2)

#2 (original poster): I'm just wanting to know if I got this right.

#3 ehild (Homework Helper): The formula you quoted refers to a point charge. These are very long cylinders.

#4 (original poster): V(r) = 1/(4 pi E0) Int dq/r

#5 (original poster): Part c) E A = Q_enc/E0, E = lambda/(pi r E0), Delta V = -(lambda/(pi E0)) Int dr/r, Delta V = (lambda/(pi E0)) ln(b/a)

#6 ehild: You have started part c well, but it is still wrong. The first questions were: "a) Determine the electrostatic potential V(r) for all values of r; b) Sketch V(r) vs. r for all r." So what is the potential for r < a? For r > b?

#7 (original poster): What equation am I supposed to use to figure out the potential? So confused. V(r) = 1/(4 pi E0) Int dq/r?

#8 ehild: Find the electric field first. What is it inside the inner cylinder? Between a and b? For r > b? Use Gauss' law. For r < a, the enclosed charge is 0. If a < r < b, the enclosed charge is +lambda times the length of the cylinder. For a Gaussian cylinder with r > b, the enclosed charge is 0. E is the negative gradient of the potential. What is the potential like if E = 0?
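Working ehild's hints through to the end gives the following result (my own worked sketch, not posted in the thread). With V(0) = 0 and E = 0 for r < a, the potential is flat inside the inner shell, logarithmic between the shells, and flat again outside:

\[
V(r) =
\begin{cases}
0, & r \le a,\\[4pt]
-\dfrac{\lambda}{2\pi\varepsilon_0}\,\ln\dfrac{r}{a}, & a \le r \le b,\\[4pt]
-\dfrac{\lambda}{2\pi\varepsilon_0}\,\ln\dfrac{b}{a}, & r \ge b,
\end{cases}
\qquad
\Delta V = V(b) - V(a) = -\frac{\lambda}{2\pi\varepsilon_0}\ln\frac{b}{a},
\]

so a charge $+q$ released from rest at $r = a$ arrives at $r = b$ with kinetic energy $KE = q\,[V(a) - V(b)] = \dfrac{q\lambda}{2\pi\varepsilon_0}\ln\dfrac{b}{a}$.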
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9214807748794556, "perplexity": 2167.013112458338}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141169606.2/warc/CC-MAIN-20201124000351-20201124030351-00230.warc.gz"}
http://www.researchgate.net/researcher/34603653_Kazimierz_Borkowski
# K. J. Borkowski

North Carolina State University, Raleigh, North Carolina, United States

## Publications (170), 438.82 total impact

• ##### Article: Spitzer Observations of the Type Ia Supernova Remnant N103B: Kepler's Older Cousin?

ABSTRACT: We report results from Spitzer observations of SNR 0509-68.7, also known as N103B, a young Type Ia supernova remnant in the Large Magellanic Cloud that shows interaction with a dense medium in its western hemisphere. Our images show that N103B has strong IR emission from warm dust in the post-shock environment. The post-shock gas density we derive, 45 cm$^{-3}$, is much higher than in other Type Ia remnants in the LMC, though a lack of spatial resolution may bias measurements towards regions of higher than average density. This density is similar to that in Kepler's SNR, a Type Ia interacting with a circumstellar medium. Optical images show H$\alpha$ emission along the entire periphery of the western portion of the shock, with [O III] and [S II] lines emitted from a few dense clumps of material where the shock has become radiative. The dust is silicate in nature, though standard silicate dust models fail to reproduce the "18 $\mu$m" silicate feature that peaks instead at 17.3 $\mu$m. We propose that the dense material is circumstellar material lost from the progenitor system, as with Kepler. If the CSM interpretation is correct, this remnant would become the second member, along with Kepler, of a class of Type Ia remnants characterized by interaction with a dense CSM hundreds of years post-explosion. A lack of N enhancement eliminates symbiotic AGB progenitors. The white dwarf companion must have been relatively unevolved at the time of the explosion. The Astrophysical Journal 06/2014; 790(2). · 6.73 Impact Factor

• Source ##### Article: Spitzer IRS Observations of the XA Region in the Cygnus Loop Supernova Remnant

ABSTRACT: We report on spectra of two positions in the XA region of the Cygnus Loop supernova remnant obtained with the InfraRed Spectrograph on the Spitzer Space Telescope. The spectra span the 10-35 micron wavelength range, which contains a number of collisionally excited forbidden lines. These data are supplemented by optical spectra obtained at the Whipple Observatory and an archival UV spectrum from the International Ultraviolet Explorer. Coverage from the UV through the IR provides tests of shock wave models and tight constraints on model parameters. Only lines from high ionization species are detected in the spectrum of a filament on the edge of the remnant. The filament traces a 180 km/s shock that has just begun to cool, and the oxygen to neon abundance ratio lies in the normal range found for Galactic H II regions. Lines from both high and low ionization species are detected in the spectrum of the cusp of a shock-cloud interaction, which lies within the remnant boundary. The spectrum of the cusp region is matched by a shock of about 150 km/s that has cooled and begun to recombine. The post-shock region has a swept-up column density of about 1.3E18 cm^-2, and the gas has reached a temperature of 7000 to 8000 K. The spectrum of the Cusp indicates that roughly half of the refractory silicon and iron atoms have been liberated from the grains. Dust emission is not detected at either position. 03/2014; 787(1).
• ##### Article: Observation of Dust Grain Sputtering in a Shock

ABSTRACT: We have detected emission in C IV λλ1548,1551 from C atoms sputtered from dust in the gas behind a shock wave in the Cygnus Loop using COS on HST. The intensity agrees approximately with predictions from model calculations that match the Spitzer 24 μm and the X-ray intensity profiles. Thus these observations confirm the overall picture of dust destruction in SNR shocks and the sputtering rates assumed. However, the CIV intensity 10" behind the shock is too high compared to the intensities at the shock and 25" behind it. Projection effects and a complex geometry are probably responsible for the discrepancy. 01/2014;

• ##### Article: The Young Core-Collapse Supernova Remnant G11.2-0.3: An Asymmetric Circumstellar Medium and a Variable Pulsar Wind Nebula

Kazimierz J. Borkowski, A. Moseby, S. P. Reynolds

ABSTRACT: G11.2-0.3 is a young supernova remnant (SNR) that has been suggested to be associated with a historical supernova of 386 AD. In addition to a bright radio and X-ray shell, it contains a pulsar wind nebula (PWN) and a 65 ms pulsar. We present first results from new deep (about 400 ks in duration) Chandra observations from 2013 May and September. Ahead of the main shell, there are a number of outlying X-ray protrusions surrounded by bow shocks, presumably produced by dense ejecta knots. Pronounced spectral variations are seen in thermal X-ray spectra of the main shell, indicating the presence of shocks with a wide range in shock speeds and large spatial variations in intervening absorption. A band of soft X-ray emission is clearly seen at the remnant's center. We interpret this band as a result of the interaction of supernova ejecta with the strongly asymmetric wind produced by a red supergiant SN progenitor shortly before its explosion. We study interstellar absorption in the central region of the remnant, finding high absorption everywhere. This rules out the association of G11.2-0.3 with SN 386. The PWN is dominated by a bright "jet" whose spatial morphology is markedly different between our May and September observations. 01/2014;

• Source ##### Article: Grain Destruction in a Supernova Remnant Shock Wave

ABSTRACT: Dust grains are sputtered away in the hot gas behind shock fronts in supernova remnants, gradually enriching the gas phase with refractory elements. We have measured emission in C IV $\lambda$1550 from C atoms sputtered from dust in the gas behind a non-radiative shock wave in the northern Cygnus Loop. Overall, the intensity observed behind the shock agrees approximately with predictions from model calculations that match the Spitzer 24 micron and the X-ray intensity profiles. Thus these observations confirm the overall picture of dust destruction in SNR shocks and the sputtering rates used in models. However, there is a discrepancy in that the CIV intensity 10" behind the shock is too high compared to the intensities at the shock and 25" behind it. Variations in the density, hydrogen neutral fraction and the dust properties over parsec scales in the pre-shock medium limit our ability to test dust destruction models in detail. The Astrophysical Journal 10/2013; 778(2). · 6.73 Impact Factor
• Source ##### Article: Supernova Ejecta in the Youngest Galactic Supernova Remnant G1.9+0.3

ABSTRACT: G1.9+0.3 is the youngest known Galactic supernova remnant (SNR), with an estimated supernova (SN) explosion date of about 1900, and most likely located near the Galactic Center. Only the outermost ejecta layers with free-expansion velocities larger than about 18,000 km/s have been shocked so far in this dynamically young, likely Type Ia SNR. A long (980 ks) Chandra observation in 2011 allowed spatially-resolved spectroscopy of heavy-element ejecta. We denoised Chandra data with the spatio-spectral method of Krishnamurthy et al., and used a wavelet-based technique to spatially localize thermal emission produced by intermediate-mass elements (IMEs: Si and S) and iron. The spatial distribution of both IMEs and Fe is extremely asymmetric, with the strongest ejecta emission in the northern rim. Fe Kalpha emission is particularly prominent there, and fits with thermal models indicate strongly oversolar Fe abundances. In a localized, outlying region in the northern rim, IMEs are less abundant than Fe, indicating that undiluted Fe-group elements (including 56Ni) with velocities larger than 18,000 km/s were ejected by this SN. But in the inner west rim, we find Si- and S-rich ejecta without any traces of Fe, so high-velocity products of O-burning were also ejected. G1.9+0.3 appears similar to energetic Type Ia SNe such as SN 2010jn where iron-group elements at such high free-expansion velocities have been recently detected. The pronounced asymmetry in the ejecta distribution and abundance inhomogeneities are best explained by a strongly asymmetric SN explosion, similar to those produced in some recent 3D delayed-detonation Type Ia models. The Astrophysical Journal Letters 05/2013; 771(1). · 6.35 Impact Factor

• ##### Article: Azimuthal Density Variations Around the Rim of Tycho's Supernova Remnant

ABSTRACT: {\it Spitzer} images of Tycho's supernova remnant in the mid-infrared reveal limb-brightened emission from the entire periphery of the shell and faint filamentary structures in the interior. As with other young remnants, this emission is produced by dust grains, warmed to $\sim 100$ K in the post-shock environment by collisions with energetic electrons and ions. The ratio of the 70 to 24 $\mu$m fluxes is a diagnostic of the dust temperature, which in turn is a sensitive function of the plasma density. We find significant variations in the 70/24 flux ratio around the periphery of Tycho's forward shock, implying order-of-magnitude variations in density. While some of these are likely localized interactions with dense clumps of the interstellar medium, we find an overall gradient in the ambient density surrounding Tycho, with densities 3-10 times higher in the NE than in the SW. This large density gradient is qualitatively consistent with the variations in the proper motion of the shock observed in radio and X-ray studies. Overall, the mean ISM density around Tycho is quite low ($\sim 0.1-0.2$ cm$^{-3}$), consistent with the lack of thermal X-ray emission observed at the forward shock. We perform two-dimensional hydrodynamic simulations of a Type Ia SN expanding into a density gradient in the ISM, and find that the overall round shape of the remnant is still easily achievable, even for explosions into significant gradients.
However, this leads to an offset of the center of the explosion from the geometric center of the remnant of up to 20%, although lower values of 10% are preferred. The best match with hydrodynamical simulations is achieved if Tycho is located at a large (3-4 kpc) distance in a medium with a mean preshock density of $\sim 0.2$ cm$^{-3}$. Such preshock densities are obtained for highly ($\ga 50$%) porous ISM grains. The Astrophysical Journal 05/2013; 770(2). · 6.73 Impact Factor

• ##### Article: Asymmetric Circumstellar Matter in Type Ia Supernova Remnants

Kazimierz J. Borkowski, S. P. Reynolds, J. M. Blondin

ABSTRACT: The progenitors of Type Ia supernovae (SNe) are not well understood, but are likely to be of diverse origin, including single- and double-degenerate binary systems. Among single-degenerate progenitors, substantial amounts of circumstellar material (CSM) are expelled prior to the SN explosions by asymptotic giant branch (AGB) companions to the accreting white dwarfs. A subsequent collision of SN ejecta with the dense AGB wind has been detected among several distant SNe such as SN 2002ic, SN 2008J, and more recently PTF11kx. Dense CSM ejected by an AGB companion is present in the remnant of Kepler's SN of 1604, a Type Ia event. Observations of distant SNe hint at strongly asymmetric CSM distributions. A recent study of the CSM in Kepler's SNR by Burkey et al. indicates a large (factor of 10) density contrast between the dense, disk-like equatorial outflow and the more tenuous AGB wind above the orbital plane. A significant fraction of mature Type Ia SNRs in the Large Magellanic Cloud (LMC) shows the presence of dense Fe-rich ejecta in their interiors that cannot be explained by standard models of Type Ia explosions in a uniform ambient interstellar medium. We explore the hypothesis that these remnants originated in Type Ia explosions with strongly asymmetric CSM distributions such as found in Kepler's SNR. We present results of 2-D hydrodynamical simulations of the interaction of SN ejecta with asymmetric, disk-like AGB winds throughout the whole adiabatic stage of SNR evolution. Dense, asymmetric, and highly-ionized Fe-rich ejecta are indeed present in the simulated remnants, while the blast wave assumes a spherical shape shortly after passage through the ambient CSM. We also present simulated X-ray images and spectra and compare them with X-ray observations of selected remnants in the LMC. These remnants include DEM L238 and L249, recently observed by Suzaku, whose X-ray emission is strongly dominated by dense Fe-rich ejecta in their interiors. We contrast these remnants to more typical mature Type Ia SNRs such as 0534-69.9 and 0548-70.4 whose Suzaku spectra can be satisfactorily modeled with standard (without any CSM) X-ray models for Type Ia SNRs. 01/2013;

• ##### Article: Chandra and Spitzer Observations of the NW Filament of SN 1006

ABSTRACT: We present results from Chandra and Spitzer observations of the NW region of SN1006. Deep X-ray observations from Chandra (companion paper by Winkler et al.) allow us to study the variation in shock velocity around the shell and elucidate the physics of diffusive shock acceleration, and both non-thermal and thermal X-ray emission, in unprecedented detail. Along the thermally-dominated NW limb, X-ray proper motions over an 11-yr baseline indicate a shock velocity of about 3000 km/s, consistent with measurements from optical studies.
But even in the NW we find a few regions dominated by non-thermal emission, and proper motions of these small filaments show a velocity of 5000 km/s, virtually identical to that seen along the synchrotron-dominated NE limb. Higher shock speeds in the non-thermal regions than in thermal ones are consistent with the theoretical view of diffusive shock acceleration that faster shocks can enhance synchrotron X-ray emission. The existence of thermal and non-thermal regions, with strongly contrasting X-ray spectra and proper motions, in close proximity to one another indicates that interstellar density inhomogeneities exist on pc scales, even at the location of SN 1006, 550 pc above the Galactic plane. Spitzer IR imaging and spectroscopic observations also indicate an inhomogeneous ISM surrounding SN1006, where the shock has recently encountered a denser region to the NW. The 24 micron image from MIPS clearly shows faint filamentary emission just interior to the NW Balmer filaments that delineate the present position of the expanding shock. This is the first detection of IR radiation from SN 1006 and clearly indicates an origin in shock-heated interstellar dust grains. The spectrum confirms a warm dust origin for the IR emission, and a model of the dust spectrum is consistent with the pre-shock density of 1 cm^-3 derived from optical and X-ray studies. The dust-to-gas mass ratio in the pre-shock ambient medium is a factor of several lower than expected in the Galactic ISM, and radial profiles of the IR emission may indicate an overabundance of small grains at the location of SN1006. This work has been supported by NASA through grant (GO2-13066A) and contract RSA 1330031 (Spitzer). 01/2013;

• Source ##### Article: X-ray Emission from Strongly Asymmetric Circumstellar Material in the Remnant of Kepler's Supernova

ABSTRACT: Kepler's supernova remnant resulted from a thermonuclear explosion, but is interacting with circumstellar material (CSM) lost from the progenitor system. We describe a statistical technique for isolating X-ray emission due to CSM from that due to shocked ejecta. Shocked CSM coincides well in position with 24 $\mu$m emission seen by {\sl Spitzer}. We find most CSM to be distributed along the bright north rim, but substantial concentrations are also found projected against the center of the remnant, roughly along a diameter with position angle $\sim 100^\circ$. We interpret this as evidence for a disk distribution of CSM before the SN, with the line of sight to the observer roughly in the disk plane. We present 2-D hydrodynamic simulations of this scenario, in qualitative agreement with the observed CSM morphology. Our observations require Kepler to have originated in a close binary system with an AGB star companion. The Astrophysical Journal 12/2012; 764(1). · 6.73 Impact Factor

• Source ##### Article: The First Reported Infrared Emission from the SN 1006 Remnant

ABSTRACT: We report results of infrared imaging and spectroscopic observations of the SN 1006 remnant, carried out with the Spitzer Space Telescope. The 24 micron image from MIPS clearly shows faint filamentary emission along the northwest rim of the remnant shell, nearly coincident with the Balmer filaments that delineate the present position of the expanding shock. The 24 micron emission traces the Balmer filaments almost perfectly, but lies a few arcsec within, indicating an origin in interstellar dust heated by the shock.
Subsequent decline in the IR behind the shock is presumably due largely to grain destruction through sputtering. The emission drops far more rapidly than current models predict, however, even for a higher proportion of small grains than would be found closer to the Galactic plane. The rapid drop may result in part from a grain density that has always been lower -- a relic effect from an earlier epoch when the shock was encountering a lower density -- but higher grain destruction rates still seem to be required. Spectra from three positions along the NW filament from the IRS instrument all show only a featureless continuum, consistent with thermal emission from warm dust. The dust-to-gas mass ratio in the pre-shock interstellar medium is lower than that expected for the Galactic ISM -- as has also been observed in the analysis of IR emission from other SNRs but whose cause remains unclear. As with other SN Ia remnants, SN 1006 shows no evidence for dust grain formation in the supernova ejecta. The Astrophysical Journal 12/2012; 764(2). · 6.73 Impact Factor

• Source ##### Article: Dust in a Type Ia Supernova Progenitor: Spitzer Spectroscopy of Kepler's Supernova Remnant

ABSTRACT: Characterization of the relatively poorly-understood progenitor systems of Type Ia supernovae is of great importance in astrophysics, particularly given the important cosmological role that these supernovae play. Kepler's Supernova Remnant, the result of a Type Ia supernova, shows evidence for an interaction with a dense circumstellar medium (CSM), suggesting a single-degenerate progenitor system. We present 7.5-38 $\mu$m infrared (IR) spectra of the remnant, obtained with the {\it Spitzer Space Telescope}, dominated by emission from warm dust. Broad spectral features at 10 and 18 $\mu$m, consistent with various silicate particles, are seen throughout. These silicates were likely formed in the stellar outflow from the progenitor system during the AGB stage of evolution, and imply an oxygen-rich chemistry. In addition to silicate dust, a second component, possibly carbonaceous dust, is necessary to account for the short-wavelength IRS and IRAC data. This could imply a mixed chemistry in the atmosphere of the progenitor system. However, non-spherical metallic iron inclusions within silicate grains provide an alternative solution. Models of collisionally-heated dust emission from fast shocks ($>$ 1000 km s$^{-1}$) propagating into the CSM can reproduce the majority of the emission associated with non-radiative filaments, where dust temperatures are $\sim 80-100$ K, but fail to account for the highest temperatures detected, in excess of 150 K. We find that slower shocks (a few hundred km s$^{-1}$) into moderate density material ($n_{0} \sim 50-250$ cm$^{-3}$) are the only viable source of heating for this hottest dust. We confirm the finding of an overall density gradient, with densities in the north being an order of magnitude greater than those in the south. The Astrophysical Journal 06/2012; 755(1). · 6.73 Impact Factor

• ##### Article: Detailed X-Ray Study of O-Rich Supernova Remnant 0049-73.6 in the Small Magellanic Cloud

ABSTRACT: Based on our deep 450 ks Chandra observation, we present our preliminary analysis of the oxygen-rich supernova remnant (SNR) 0049-73.6 in the Small Magellanic Cloud (SMC). We performed image and spectral analyses of the central ejecta nebula and the outer blast wave shock.
Our line equivalent width maps of several elements (e.g., O, Ne, Mg, and Si) show a differential spatial structure of ejecta enriched in these species. Our detailed spatially-resolved spectral analysis of the central ejecta features shows radial and azimuthal structures of ejecta elements and their thermal states. We also investigate the true 3-D nature of the central ejecta ("ring" vs spherical shell) by studying the surface brightness profile and applying an image de-projection method. 05/2012;

• ##### Article: Circumstellar Dust in the Remnant of Kepler's Type Ia Supernova

ABSTRACT: Kepler's Supernova Remnant, the remains of the supernova of 1604, is widely believed to be the result of a Type Ia supernova, and shows IR, optical, and X-ray evidence for an interaction of the blast wave with a dense circumstellar medium (CSM). We present low-resolution 7.5-38 μm IR spectra of selected regions within the remnant, obtained with the Spitzer Space Telescope. Spectra of those regions where the blast wave is encountering circumstellar material show strong features at 10 and 18 μm. These spectral features are most consistent with various silicate particles, likely formed in the stellar outflow from the progenitor system during the AGB stage of evolution. While it is possible that some features may arise from freshly formed ejecta dust, morphological evidence suggests that it is more likely that they originate from dust in the CSM. We isolate the dust grain absorption efficiencies for several regions in Kepler and compare them to laboratory data for dust particles of various compositions. The hottest dust in the remnant originates in the regions of dense, radiatively shocked clumps of gas, identified in optical images. Models of collisionally heated dust show that such shocks are capable of heating grains to temperatures of > 150 K. We confirm the finding that Kepler's SNR is still interacting with CSM in at least part of the remnant after 400 years. The significant quantities of silicate dust are consistent with a relatively massive progenitor. 01/2012;

• ##### Article: Shock Acceleration Efficiency in Kepler's Supernova Remnant

ABSTRACT: Fast shock waves like those in young supernova remnants put some fraction of their energy into fast particles, and another fraction into magnetic field. These fractions are not well determined typically, because synchrotron emission from relativistic electrons depends on roughly the product of the two, while the shock energy density depends on gas density and shock speed. Shock speeds can be difficult to determine from thermal X-ray spectra, as electrons and ions may have different temperatures, and significant energy may be lost to the fast particles. Most importantly, accurate thermal-gas densities are often unknown, or only roughly known from X-ray emission measures. All these quantities may vary at different locations in a supernova remnant. We present new determinations of gas densities at various points around the periphery of Kepler's supernova remnant, from modeling Spitzer IRS spectra from shock-heated dust. In combination with shock velocities from proper motions, radio brightnesses, and magnetic-field determinations from X-ray synchrotron morphology, we can then estimate the fractions of shock energy in relativistic electrons and in magnetic field, at different points around the remnant periphery.
Furthermore, X-ray synchrotron emission visible around much of the periphery allows the determination of maximum electron energies. We present spatially resolved estimates of these quantities and discuss their significance for theoretical models of shock acceleration. 01/2012;
• Source
##### Article: RCW 86: A Type Ia Supernova in a Wind-blown Bubble
ABSTRACT: We report results from a multi-wavelength analysis of the Galactic supernova remnant RCW 86, the proposed remnant of the supernova of 185 A.D. We show new infrared observations from the Spitzer Space Telescope and the Wide-Field Infrared Survey Explorer, where the entire shell is detected at 24 and 22 μm. We fit the infrared flux ratios with models of collisionally heated ambient dust, finding post-shock gas densities in the non-radiative shocks of 2.4 and 2.0 cm^-3 in the southwest (SW) and northwest (NW) portions of the remnant, respectively. The Balmer-dominated shocks around the periphery of the shell, the large amount of iron in the X-ray-emitting ejecta, and the lack of a compact remnant support a Type Ia origin for this remnant. From hydrodynamic simulations, the observed characteristics of RCW 86 are successfully reproduced by an off-center explosion in a low-density cavity carved by the progenitor system. This would make RCW 86 the first known case of a Type Ia supernova in a wind-blown bubble. The fast shocks (>3000 km s^-1) observed in the northeast are propagating in the low-density bubble, where the shock is just beginning to encounter the shell, while the slower shocks elsewhere have already encountered the bubble wall. The diffuse nature of the synchrotron emission in the SW and NW is due to electrons that were accelerated early in the lifetime of the remnant, when the shock was still in the bubble. Electrons in a bubble could produce gamma rays by inverse-Compton scattering. The wind-blown bubble scenario requires a single-degenerate progenitor, which should leave behind a companion star. The Astrophysical Journal 10/2011; 741(2):96. · 6.73 Impact Factor
• ##### Article: RCW 86: The Remnant of a Type Ia Explosion in a Wind-blown Bubble
ABSTRACT: The identification of the progenitor systems of type Ia supernovae is an ongoing effort, with implications for many astronomical fields. In the single-degenerate scenario, the accreting system may leave some imprint on the surrounding medium, which should affect the dynamics of the expanding remnant. We present X-ray and infrared observations of RCW 86, the likely remnant of the supernova of 185 A.D., which for several decades has presented a dynamical puzzle. Its size (D = 25 pc) implies an explosion into a cavity, yet optically determined shock speeds are slower than the average expansion speed by an order of magnitude. We use Spitzer observations of the northwest and southwest rims to determine the postshock density in these regions to be 2 cm^-3, and X-ray spectra to determine the ionization state of the gas and relative abundances of ejecta products. Oxygen lines in X-ray spectra are consistent with solar abundances from shocked ISM, while iron K-shell lines imply significant amounts of reverse-shocked iron. Combined with the existence of Balmer-dominated shocks around the entire periphery and the lack of a compact remnant, this strongly implies a type Ia origin.
If the age of the remnant is 1825 years, hydrodynamic simulations require an explosion into a low-density (n_0 = 0.002 cm^-3) bubble, surrounded by a higher-density ambient medium (n_0 = 0.5 cm^-3). This model reproduces the observed ionization state of the gas, as well as the radius and velocity of the forward shock. This makes RCW 86 the first case of a type Ia SN exploding into a cavity carved by the progenitor system. The high shock speeds in the eastern limb imply an off-center explosion. Gamma rays have been detected from this remnant, and we find that the spectrum below 1 TeV must harden to a photon index Γ < 2. 09/2011;
• ##### Article: Radioactivity, Particle Acceleration, And Supernova Ejecta In The Youngest Galactic SNR G1.9+0.3
ABSTRACT: A supernova explosion around 1900 produced a young remnant, G1.9+0.3, presumably located near the Galactic center, that can now be studied at X-ray and radio wavelengths. A deep (980 ks) Chandra X-ray observation of G1.9+0.3 has just been completed (May and July 2011). We report first results based on this observation. In the interior, there is an excess of counts near 4.1 keV over a nonthermal continuum. A preliminary Markov chain Monte Carlo modeling of this feature with a Gaussian line and an underlying power-law continuum gives a line energy of 4.1 keV and a FWHM of 35000 km/s (but with a large 90% confidence interval from 11000 to 59000 km/s). The estimated line strength is nearly identical to our previous estimate based on the shorter-duration Chandra observations from 2007 and 2009. If this line is identified with radioactive 44Sc, 10^-5 solar masses of 44Ti was produced in the explosion. The radio-bright northern shell shows K lines of Si, S, Ar, and Fe, with no Sc present. The Fe line is moderately broad (FWHM of 12000 km/s). A plane shock fit to the spectrum indicates an oversolar (1.6 solar) Fe abundance and a plasma temperature of 3.7 keV. These are presumably heavy-element ejecta heated to high temperatures in the reverse shock. The total X-ray flux increased by 2.8% between the 2009 and 2011 Chandra observations. G1.9+0.3 is the only Galactic supernova remnant that is brightening at X-ray and radio wavelengths. We also present preliminary results of a spectro-spatial analysis of the Chandra data cube, based on a method described by Krishnamurthy, Raginsky, & Willett (2010). Our aim is to study the spatial distribution of the supernova ejecta, and variations in the nonthermal synchrotron continuum that dominates the total X-ray spectrum. 09/2011;
• Source
##### Article: Expansion of the Youngest Galactic Supernova Remnant G1.9+0.3
ABSTRACT: We present a measurement of the expansion and brightening of G1.9+0.3, the youngest Galactic supernova remnant, comparing Chandra X-ray images obtained in 2007 and 2009. A simple uniform expansion model describes the data well, giving an expansion rate of 0.642 +/- 0.049 % yr^-1, and a flux increase of 1.7 +/- 1.0 % yr^-1. Without deceleration, the remnant age would then be 156 +/- 11 yr, consistent with earlier results. Since deceleration must have occurred, this age is an upper limit; we estimate an age of about 110 yr, or an explosion date of about 1900. The flux increase is comparable to reported increases at radio wavelengths. G1.9+0.3 is the only Galactic supernova remnant increasing in flux, with implications for the physics of electron acceleration in shock waves.
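The undecelerated age quoted in the last abstract follows directly from the measured expansion rate: for uniform free expansion the radius grows linearly with time, so the fractional expansion rate per year is the reciprocal of the age. A quick check of the arithmetic (our addition, using the numbers stated above):

```latex
% Free expansion: R = v t, so (dR/dt)/R = 1/t.
\frac{\dot R}{R} = \frac{1}{t}
\quad\Longrightarrow\quad
t = \frac{1}{0.00642\ \mathrm{yr^{-1}}} \approx 156\ \mathrm{yr},
```

consistent with the quoted 156 +/- 11 yr upper limit on the age.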
The Astrophysical Journal Letters 06/2011; 737(1). · 6.35 Impact Factor
• Source
##### Article: Dusty Blast Waves of Two Young Large Magellanic Cloud Supernova Remnants: Constraints on Post-shock Compression
ABSTRACT: We present results from mid-IR spectroscopic observations of two young supernova remnants (SNRs) in the Large Magellanic Cloud made with the Spitzer Space Telescope. We imaged SNRs B0509-67.5 and B0519-69.0 with Spitzer in 2005, and follow-up spectroscopy presented here confirms the presence of warm, shock-heated dust, with no lines present in the spectrum. We use model fits to Spitzer Infrared Spectrograph (IRS) data to estimate the density of the post-shock gas. Both remnants show asymmetries in the infrared images, and we interpret bright spots as places where the forward shock is running into material that is several times denser than elsewhere. The densities we infer for these objects depend on the grain composition assumed, and we explore the effects of differing grain porosity on the model fits. We also analyze archival XMM-Newton RGS spectroscopic data, where both SNRs show strong lines of both Fe and Si, coming from ejecta, as well as strong O lines, which may come from ejecta or shocked ambient medium. We use model fits to IRS spectra to predict X-ray O line strengths for various grain models and values of the shock compression ratio. For 0509-67.5, we find that compact (solid) grain models require nearly all O lines in X-ray spectra to originate in reverse-shocked ejecta. Porous dust grains would lower the strength of ejecta lines relative to those arising in the shocked ambient medium. In 0519-69.0, we find significant evidence for a higher-than-standard compression ratio of 12, implying efficient cosmic-ray acceleration by the blast wave. A compact grain model is favored over porous grain models. We find that the dust-to-gas mass ratio of the ambient medium is significantly lower than what is expected in the interstellar medium. The Astrophysical Journal 02/2011; 729(1):65. · 6.73 Impact Factor

#### Publication Stats

1k Citations
438.82 Total Impact Points

#### Institutions

• ###### North Carolina State University • Department of Physics • Raleigh, North Carolina, United States
• ###### 2010 • Department of Earth and Space Science
• ###### Massachusetts Institute of Technology • Kavli Institute for Astrophysics and Space Research • Cambridge, Massachusetts, United States
• ###### University of Maryland, College Park • Department of Astronomy • College Park, MD, United States
• ###### Hampden-Sydney College • Hampden Sydney, Virginia, United States
• ###### University of Colorado at Boulder • Center for Astrophysics and Space Astronomy
• ###### Middlebury College • Department of Physics • Middlebury, Vermont, United States
• ###### Polytechnic University of Catalonia • Department of Physics and Nuclear Engineering (FEN) • Barcelona, Catalonia, Spain
• ###### North Carolina School of Science and Mathematics • Durham, North Carolina, United States
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9117431044578552, "perplexity": 3459.472451299751}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657135549.24/warc/CC-MAIN-20140914011215-00158-ip-10-234-18-248.ec2.internal.warc.gz"}
https://www2.math.binghamton.edu/p/seminars/comb/abstract.200704beck
# Generating Functions of Rational Polyhedra and Dedekind-Carlitz Polynomials

## Abstract for the Combinatorics Seminar 2007 April 10

We study higher-dimensional analogs of the Dedekind-Carlitz polynomials, $c(u,v;a,b) := \sum_{k=1}^{a-1} u^{k-1} v^{\lfloor kb/a \rfloor}$, where $u$ and $v$ are indeterminates and $a$ and $b$ are positive integers. These polynomials satisfy the reciprocity law $(u-1)\,c(u,v;a,b) + (v-1)\,c(v,u;b,a) = u^{a-1} v^{b-1} - 1$, from which one easily deduces many classical reciprocity theorems for the Dedekind sum and its generalizations, most notably by Hardy and Berndt-Dieter. Dedekind-Carlitz polynomials appear naturally in generating functions of rational cones. We use this fact to give geometric proofs of the Carlitz reciprocity law. Our approach gives rise to new reciprocity theorems and a multivariate generalization of the Mordell-Pommersheim theorem on the appearance of Dedekind sums in Ehrhart polynomials of 3-dimensional lattice polytopes. I will not assume familiarity with Dedekind sums or discrete geometry and I will carefully define all the terminology used above. The talk will be accessible to a beginning graduate student. This is joint work with Asia Matthews.
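To see the reciprocity law in action, here is a worked check (our addition, not part of the abstract) in the smallest nontrivial case $a = 2$, $b = 3$:

```latex
% c(u,v;2,3) = \sum_{k=1}^{1} u^{k-1} v^{\lfloor 3k/2 \rfloor} = v
% c(v,u;3,2) = \sum_{k=1}^{2} v^{k-1} u^{\lfloor 2k/3 \rfloor} = 1 + uv
(u-1)\,v + (v-1)(1 + uv)
  = uv - v + v + uv^2 - 1 - uv
  = uv^2 - 1
  = u^{2-1} v^{3-1} - 1 ,
```

exactly as the reciprocity law predicts.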
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8911057710647583, "perplexity": 1457.014165695863}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711552.8/warc/CC-MAIN-20221209213503-20221210003503-00107.warc.gz"}
https://brilliant.org/problems/and-youll-be-on-the-walls-of-the-hall-of-fame/
And you'll be on the walls of the hall of fame.

Algebra Level 3

If $$x$$ and $$a_1, a_2, a_3, \ldots, a_N$$ are real numbers, then $$(x - a_1)^2 + (x - a_2)^2 + (x - a_3)^2 + \cdots + (x - a_N)^2$$ assumes its least value at $$x =$$ ?
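For reference, a minimal derivation of the answer (our addition; the problem page shows only the statement): the objective is a quadratic in $$x$$, so setting the derivative to zero gives the minimizer.

```latex
f(x) = \sum_{i=1}^{N} (x - a_i)^2, \qquad
f'(x) = 2\sum_{i=1}^{N} (x - a_i) = 0
\;\Longrightarrow\;
x = \frac{1}{N}\sum_{i=1}^{N} a_i ,
```

i.e. the arithmetic mean of the $$a_i$$; since $$f''(x) = 2N > 0$$, this is indeed a minimum.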
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8205585479736328, "perplexity": 768.6195449950062}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720475.79/warc/CC-MAIN-20161020183840-00058-ip-10-171-6-4.ec2.internal.warc.gz"}
https://zx31415.wordpress.com/2016/05/31/%e6%9c%88%e6%97%a6-iv/
# 月旦 IV

For May, 2016

The L-functions and modular forms database has officially launched!

"I wish I could just casually hand Paul Erdos a copy of Annals of Math 181-1. 4 of the 7 papers are: solution to the Erdos distance conjecture by Guth and Nets Katz, solution to the Erdos covering congruence conjecture by Hough, Maynard's paper on bounded gaps between primes, and the Bhargava-Shankar paper proving that the average rank of elliptic curves is bounded."

Every effective 2-cycle class of a compact Calabi-Yau contains a holomorphic curve representative: this conjecture of the physicists is very interesting and deserves to be studied as a variant of the Hodge conjecture. Given that we do not know how a variant of the Hodge conjecture should even be stated on Kähler manifolds (famously, Voisin disproved the classical candidate variant), a variant on Calabi-Yau manifolds is all the more valuable.

Galois Representations

More on China's giant accelerator program:

In July, China may launch the world's first quantum satellite. When that happens, the work of Pan Jianwei's team on quantum communication and quantum computing will surely receive more media attention.

The man who knew elliptic integrals, prime number theorems, and black holes

The man who knew partition asymptotics

## One thought on "月旦 IV"

1. Anonymous says: It feels like you know about such a broad range of things.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 1, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936066627502441, "perplexity": 4401.668190277038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542668.98/warc/CC-MAIN-20161202170902-00036-ip-10-31-129-80.ec2.internal.warc.gz"}
http://mathonline.wikidot.com/vector-subspaces-of-homogenous-systems-of-rn
# Vector Subspaces of Homogeneous Systems of Rn

We will now look at some more vector subspaces and verify that they are in fact subspaces of another vector space. Instead of verifying axioms 9 and 10, however, we will utilize the following lemma (proven on the Vector Subspaces page) to show that these sets are subspaces of a larger vector space:

Lemma: A nonempty subset $U$ of an $\mathbb{F}$-vector space $V$ is a subspace of $V$ if and only if for all $a, b \in \mathbb{F}$ (where $\mathbb{F} = \mathbb{R}$ or $\mathbb{F} = \mathbb{C}$) and for all vectors $\mathbf{x}, \mathbf{y} \in U$, we have $( a\mathbf{x} + b\mathbf{y} ) \in U$.

## Example 1

Let $V = \mathbb{R}^4$ be a vector space. Is $U = \{ (x_1, x_2, x_3, x_4 ) : x_1 - 2x_2 + x_3 + 4x_4 = 0 \}$ a subspace of $V$?

$U$ is a vector subspace of $V$ if it satisfies all of the vector space axioms. We need to check that this set is closed under addition and scalar multiplication, which we will show using the lemma above. First let $a, b \in \mathbb{F}$ and let $\mathbf{x}, \mathbf{y} \in U$ such that $\mathbf{x} = (x_1, x_2, x_3, x_4)$ and $\mathbf{y} = (y_1, y_2, y_3, y_4)$. From the lemma above, we want to show that $(a \mathbf{x} + b \mathbf{y} ) \in U$. Expanding $a\mathbf{x} + b\mathbf{y}$ we get:

(1)
\begin{align} a\mathbf{x} + b\mathbf{y} = a(x_1, x_2, x_3, x_4) + b(y_1, y_2, y_3, y_4) \\ = (ax_1, ax_2, ax_3, ax_4) + (by_1, by_2, by_3, by_4) \\ = (ax_1 + by_1, ax_2 + by_2, ax_3 + by_3, ax_4 + by_4) \end{align}

Now we want to check whether this vector satisfies the condition defining our set, that is, whether $(ax_1 + by_1) - 2(ax_2 + by_2) + (ax_3 + by_3) + 4(ax_4 + by_4) = 0$. We note that:

(2)
\begin{align} (ax_1 + by_1) - 2(ax_2 + by_2) + (ax_3 + by_3) + 4(ax_4 + by_4) \\ = ax_1 - 2ax_2 + ax_3 + 4ax_4 + by_1 - 2by_2 + by_3 + 4by_4 \\ = a\underbrace{(x_1 - 2x_2 + x_3 + 4x_4)}_{=0} + b\underbrace{(y_1 - 2y_2 + y_3 + 4y_4)}_{=0} \\ = a (0) + b(0) \\ = 0 \end{align}

The underbraced expressions vanish because $\mathbf{x}, \mathbf{y} \in U$. Therefore we conclude that $U$ is a vector subspace of $V$.

## Example 2

Let $V = \mathbb{R}^4$ and let $U = \{ (x_1, x_2, x_3, x_4) : x_1 + x_2 - x_3 + x_4 = 0 \: \mathrm{and} \: 3x_1 -2x_2 + 4x_4 = 0 \}$. Is $U$ a subspace of $V$?

We first let $\mathbf{x}, \mathbf{y} \in U$ such that $\mathbf{x} = (x_1, x_2, x_3, x_4)$ and $\mathbf{y} = (y_1, y_2, y_3, y_4)$. Now let $a, b \in \mathbb{R}$. We want to show that $a\mathbf{x} + b\mathbf{y} \in U$, that is, we want to show that $(ax_1 + by_1, ax_2 + by_2, ax_3 + by_3, ax_4 + by_4) \in U$. We will have to show this vector satisfies the two conditions defining the set $U$. For the first condition we need to show that $(ax_1 + by_1) + (ax_2 + by_2) - (ax_3 + by_3) + (ax_4 + by_4) = 0$:

(3)
\begin{align} (ax_1 + by_1) + (ax_2 + by_2) - (ax_3 + by_3) + (ax_4 + by_4) \\ = a\underbrace{(x_1 + x_2 - x_3 + x_4)}_{=0} + b\underbrace{(y_1 + y_2 - y_3 + y_4)}_{=0} \\ = a \cdot 0 + b \cdot 0 \\ = 0 \end{align}

Now for the second condition we need to show that $3(ax_1 + by_1) - 2(ax_2 + by_2) + 4(ax_4 + by_4) = 0$:

(4)
\begin{align} 3(ax_1 + by_1) - 2(ax_2 + by_2) + 4(ax_4 + by_4) \\ = a\underbrace{(3x_1 - 2x_2 + 4x_4)}_{=0} + b\underbrace{(3y_1 - 2y_2 + 4y_4)}_{=0} \\ = a \cdot 0 + b \cdot 0 \\ = 0 \end{align}

Therefore $U$ is a vector subspace of $V$.

## Example 3

We will now look at a subset of $V = \mathbb{R}^3$ that is not a vector subspace; its defining linear equation is not homogeneous. Consider the set $U = \{ (x_1, x_2, x_3) : x_1 + x_2 + x_3 = 4 \}$.
We can think of $U$ as the set of all coordinates $(x_1, x_2, x_3)$ that satisfy the linear equation $x_1 + x_2 + x_3 = 4$. This collection of points forms a plane. But notice that the zero vector $(0, 0, 0) \not \in U$ since $0 + 0 + 0 \neq 4$, and so this plane does not pass through the origin. Recall that if $0 \not \in U$ then $U$ cannot be a subspace of $V$.

There is also another reason why $U$ is not a subspace of $V$. We note that the point $(1, 1, 2) \in U$, and so any scalar multiple $k(1, 1, 2)$ should be contained in $U$, since $U$ needs to be closed under scalar multiplication to be a vector subspace of $V$. If $k = 2$, then the point $2(1,1,2) = (2,2,4) \not \in U$ since $2 + 2 + 4 \neq 4$. Therefore $U$ is not a vector subspace of $\mathbb{R}^3$.

## Example 4

At minimum, how many vector subspaces must a vector space $V$ have?

We note that the set containing only the zero element, that is $U_{0} = \{ 0 \}$, and the entire vector space, $U_{V} = V$, are vector subspaces of $V$ that satisfy all the axioms. We will show this now. Clearly $U_{V} = V \subseteq V$ is a vector space since $V$ is a vector space. We now need to show $U_{0} = \{ 0 \}$ is a vector subspace. Let $a, b \in \mathbb{F}$ and let $\mathbf{x}, \mathbf{y} \in U_{0}$. Clearly $\mathbf{x} = 0$ and $\mathbf{y} = 0$ since $U_{0}$ contains only one element, namely $0$. Therefore:

(5)
\begin{align} a\mathbf{x} + b\mathbf{y} = a \cdot 0 + b \cdot 0 = 0 \in U_0 \end{align}

And so $U_{0}$ is a vector subspace of $V$. We thus conclude that every vector space $V$ has at minimum $2$ vector subspaces.

## Example 5

List all of the vector subspaces of $\mathbb{R}^3$.

Every vector subspace must contain the zero element $(0, 0, 0) \in \mathbb{R}^3$. The vector subspaces of $\mathbb{R}^3$ are: the set $\mathbb{R}^3$ itself, $\{ 0 \}$, any line $L$ that passes through the origin, and any plane $\Pi$ that also passes through the origin.
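The subspaces in Examples 1 and 2 are exactly the null spaces of small matrices, so their dimension and closure properties can also be checked numerically. Below is a minimal sketch (an illustration added here, not part of the original page) using NumPy and SciPy for the system of Example 2:

```python
import numpy as np
from scipy.linalg import null_space

# Example 2: U = {x in R^4 : x1 + x2 - x3 + x4 = 0 and 3x1 - 2x2 + 4x4 = 0}
A = np.array([[1.0,  1.0, -1.0, 1.0],
              [3.0, -2.0,  0.0, 4.0]])

# Orthonormal basis of the null space: two independent constraints
# on R^4 leave a 2-dimensional subspace.
B = null_space(A)
print(B.shape)  # (4, 2)

# Closure check: any linear combination a*u + b*v of null-space
# vectors still satisfies A @ x = 0 (up to floating-point error).
rng = np.random.default_rng(0)
a, b = rng.standard_normal(2)
x = a * B[:, 0] + b * B[:, 1]
assert np.allclose(A @ x, 0.0)
```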
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 5, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999363422393799, "perplexity": 232.64551000771462}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575541317967.94/warc/CC-MAIN-20191216041840-20191216065840-00355.warc.gz"}
http://math.stackexchange.com/users/8343/sam-lisi?tab=activity&sort=posts
Sam Lisi
Reputation 2,070

- Mar 10 answered symplectic surfaces in 4-manifolds
- May 17 answered When is symplectic pullback bundle trivial
- Apr 3 answered Symplectic submanifolds in $\mathbb{R}^{4}$
- Mar 15 answered vector bundles and their cross-sections
- Mar 15 answered What does it mean by saying that $u^n, J^n$ "$C^{\infty}$ converges" to u, J?
- Feb 17 answered Second Hirzebruch surface as Delzant space associated to trapezoid
- Feb 14 answered On the definition/notation for pseudoholomorphic curves
- Feb 10 answered About symplectic embedding
- Feb 10 answered Why is the dividing set nonempty when a convex surface has Legendrian boundary?
- Sep 17 answered is any hamiltonian system with just one degree of freedom completely integrable?
- Sep 16 answered Can the system $\partial_x f(x,y) = \dot{y}$, $\partial_y f(x,y) = \dot{x}$ be related to some Hamiltonian system?
- Sep 16 answered Understanding the definition and meaning of cotangent space
- Jul 25 answered symplectic strucutre
- May 24 answered Does this IVP have a unique solution for all $x \in \mathbb R$
- May 7 answered Determining the embedding space:
- May 7 answered When does a vector field admit orthogonal fields?
- May 7 answered An alternative description of the first Stiefel-Whitney class
- May 1 answered Question about symplectic tranformations
- Apr 30 answered Implicit Function Theorem and Rank Theorem Misunderstandings.
- Apr 27 answered Non-degenerate solutions to constant Hamiltonian flow
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9699400663375854, "perplexity": 4725.059878670224}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042990114.79/warc/CC-MAIN-20150728002310-00051-ip-10-236-191-2.ec2.internal.warc.gz"}
http://www.ss-pub.org/jmss/reinitiated-laplace-homotopy-ananlysis-method-for-solving-integral-equations/
• Reinitiated Laplace Homotopy Analysis Method For Solving Integral Equations

Kong Hoong Lem

Abstract: The complexity of the deformation equation increases exponentially with the order of approximation. Consequently, implementing the Laplace homotopy analysis method (LHAM) at high deformation order can be computationally costly and lengthy, and in some cases can even cause computational paralysis. Here, the LHAM is modified in a reinitiated manner, where the low-order results are used to initiate further approximation via truncated Maclaurin expansions. This modified approach avoids high-order approximation while still yielding an accurate approximate series solution, which greatly improves the efficiency of LHAM in solving integral equations.

Keywords: Laplace transform, homotopy analysis method (HAM), integral equations.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9820535182952881, "perplexity": 3027.9928173222156}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104375714.75/warc/CC-MAIN-20220704111005-20220704141005-00100.warc.gz"}
http://mathoverflow.net/users/30491/a-b
# A.B.

reputation 12
age 30
member for 1 year, 7 months
seen Jul 28 at 11:31
profile views 121

# 7 Questions

- 12 Largest subset of $GL_n(p)$ in which pairwise subtraction is also in $GL_n(p)$
- 7 Sets from $(F_2)^n$ which are not fixed by any non-identity isomorphism
- 5 In which fixed-point free representations is the sum of every 3 elements invertible?
- 5 Orthogonal orthomorphisms of order 2
- 4 Dimension of irreducible representations in characteristic p

# 77 Reputation

- +25 In which fixed-point free representations is the sum of every 3 elements invertible?
- +10 Is there a system of quasigroup equations implying non-associativity?
- +5 Orthogonal orthomorphisms of order 2
- +5 Sets from $(F_2)^n$ which are not fixed by any non-identity isomorphism
- 4 Sets which are not fixed by any non-identity isomorphism
- 1 Orthogonal orthomorphisms of order 2

# 9 Tags

- 4 linear-algebra × 3
- 0 characteristic-p × 2
- 0 gr.group-theory × 5
- 0 matrices
- 0 finite-groups × 5
- 0 characteristic-2
- 0 co.combinatorics × 4
- 0 non-associative-algebras
- 0 rt.representation-theory × 3

# 1 Account

MathOverflow 77 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.803246021270752, "perplexity": 1661.451805700978}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500815050.22/warc/CC-MAIN-20140820021335-00366-ip-10-180-136-8.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-lat/9608040/
WUB 96-27
HLRZ 53/96

# Evaluating Sea Quark Contributions to Flavour-Singlet Operators in Lattice QCD

SESAM-Collaboration: N. Eicker, U. Glässner, S. Güsken, H. Hoeber, Th. Lippert, G. Ritzenhöfer, K. Schilling, G. Siegert, A. Spitz, P. Ueberholz, and J. Viehoff

HLRZ c/o KFA Jülich, D-52425 Jülich and DESY, D-22603 Hamburg, Germany; Physics Department, University of Wuppertal, D-42097 Wuppertal, Germany.

###### Abstract

In a full QCD lattice study with Wilson fermions, we seek to optimize the signals for the disconnected contributions to the matrix element of flavour-singlet operators between nucleon states, which are indicative of sea quark effects. We demonstrate, in the form of a fluctuation analysis of the noisy estimator technique, that – in order to achieve a tolerable signal-to-noise ratio in full QCD – it is advantageous to work with a $Z_2$-noise source rather than to rely only on gauge invariance to cancel non-gauge-invariant background. In the case of the $\pi$N $\sigma$-term, we find that 10 $Z_2$-noise sources suffice on our sample (about 150 independent QCD configurations) to achieve decent signals and adequate fluctuations, rather than the 300 such sources recently used in quenched simulations.

## 1 Introduction

The direct computation of full-fledged hadronic matrix elements containing flavour-singlet operators, such as the $\pi$N $\sigma$-term or the singlet axial vector current forward matrix element between nucleon states, has posed serious problems to lattice gauge theory. As a result there is up to now only faint evidence for sea quark effects from full QCD lattice computations of such matrix elements[1]. The practical bottleneck is given by the computation of disconnected diagrams, i.e. ubiquitous insertions of the operator into quark loops disconnected from the valence quark graph, say the nucleon propagator, in the vacuum background field. Technically, the disconnected insertion must be calculated configurationwise, i.e. in correlation with the hadronic propagator. But the cost to compute all quark loops individually grows with the volume, which renders a direct evaluation prohibitively expensive[2]. In the framework of quenched QCD, two groups [3, 4, 5] have recently tackled the problem by applying two different variants of the noisy estimator technique proposed some time ago for computing the chiral condensate [6]. The trick is to induce the extensive character of the quark-loop insertion by inverting the Dirac operator on a source extending over the entire (spatial) volume. This estimator technique is biased in the form of non-closed, i.e. bilocal and therefore non-gauge-invariant, contributions that need to be controlled. The Kentucky group[3, 4] pioneered the use of a stochastic volume source with $Z_2$-noise (SETZ) to get rid of the 'bilocal crossterms'. On their small sample of 50 configurations, they used about 300 such noise sources per configuration, at the expense of a substantial overhead in their computational cost. The authors of Ref.[5], on the other hand, utilize a homogeneous volume source (VST) and appeal to the gauge fluctuations to suppress the non-loop bias contributions via the Elitzur theorem. This approach, however, adds a substantial amount of noise to the data and may therefore require a very high statistics ensemble average to yield a reasonable signal. In exploratory quenched applications, where chromofield configurations can be created at low cost, both approaches have led to rather encouraging results on signal quality[7].
But in the context of a full QCD sea-quark computation, computer time prevents us at present from doing very high statistics sampling of gauge fields. So the question is whether effects from disconnected diagrams are accessible at all in small/medium size sampling of full QCD. This motivates us to look more deeply into the systematics of the stochastic estimator technique (SET) on actual full QCD configurations with Wilson fermions. To be specific, we will proceed by studying the quality (size and fluctuations) of the signal from disconnected contributions to the scalar density. The strategy in applying estimator techniques should be such that – with minimal overhead – both the systematic bias (from remaining nonlocal pollution) and the fluctuations of the signal become negligible compared to the statistical accuracy attainable from the size of the field sample. On the basis of the current statistics from our hybrid Monte Carlo production[8, 9] we will find that it is realistic to exploit the stochastic estimator technique for the study of quark loop effects in full QCD.

## 2 Asymptotic Expectations

In order to determine the disconnected part of the matrix element one needs to calculate the expectation value $\langle P\,\mathrm{Tr}(M^{-1})\rangle$, where $P$ denotes the (momentum-zero) proton correlation function and $M$ is the fermion matrix. With VST one calculates $\sum_{i,j} M^{-1}_{i,j}(C)$ on each gauge configuration by solving

$$M(C)\,X = h, \tag{1}$$

where $h$ is a volume source vector. The average over the gauge configurations

$$A = \frac{1}{N_C}\sum_{C=1}^{N_C} P(C) \sum_{i,j} M^{-1}_{i,j}(C) \tag{2}$$

can be separated into local and non-local contributions

$$A = \frac{1}{N_C}\sum_{C=1}^{N_C} P(C)\sum_{i} M^{-1}_{i,i}(C) \;+\; \frac{1}{N_C}\sum_{C=1}^{N_C} P(C)\sum_{i\neq j} M^{-1}_{i,j}(C). \tag{3}$$

As the latter are not gauge invariant they vanish (only) in the large-$N_C$ limit:

$$\lim_{N_C\to\infty} A = \big\langle P\,\mathrm{Tr}(M^{-1})\big\rangle. \tag{4}$$

In the stochastic estimator technique, one uses (complex) random source vectors $\eta$ with the property

$$\lim_{N_E\to\infty} \frac{1}{N_E}\sum_{E=1}^{N_E} \eta^*_i(E,C)\,\eta_j(E,C) = \delta_{i,j} \tag{5}$$

and computes the quantity $\eta^\dagger M^{-1}\eta$ repeatedly ($N_E$ times) on each configuration. It appears natural to choose $\eta$ according to a Gaussian distribution (SETG) [6]. A decomposition similar to eq. (3) then yields

$$\frac{1}{N_C}\sum_{C=1}^{N_C}\frac{1}{N_E}\sum_{E=1}^{N_E}\eta^\dagger(E,C)\,M^{-1}(C)\,\eta(E,C)\,P(C) = \frac{1}{N_C}\sum_{C=1}^{N_C}\Big\{\Big[\sum_i M^{-1}_{i,i}(C) + \bar T_{on}(N_E,C) + \bar T_{off}(N_E,C)\Big]P(C)\Big\}, \tag{6}$$

where

$$\bar T_{on}(N_E,C) = \frac{1}{N_E}\sum_{E=1}^{N_E}\sum_i \big[\eta^*_i(C,E)\,\eta_i(C,E) - 1\big]\,M^{-1}_{i,i}(C),$$
$$\bar T_{off}(N_E,C) = \frac{1}{N_E}\sum_{E=1}^{N_E}\sum_{i\neq j} \eta^*_i(C,E)\,\eta_j(C,E)\,M^{-1}_{i,j}(C). \tag{7}$$

Unfortunately the term $\bar T_{on}$ on the right-hand side of eq. (6) introduces a gauge invariant bias, whose suppression definitely requires the number of estimates per configuration to be in the asymptotic regime, where eq. (5) becomes valid. This dangerous bias is removed from the beginning if one samples the components of the random vector according to a $Z_2$ distribution. In this case, the relation $\eta^*_i\eta_i = 1$ holds for any single estimate, and $\bar T_{on}$ vanishes on each gauge configuration. For the remaining nonlocal bias, $\bar T_{off}$, SETZ combines two additive suppression mechanisms: (a) the gauge fluctuations cancel it as a non-gauge-invariant object on a sufficiently large sample of gauge fields, even for a single estimate; (b) the $Z_2$-noise kills it in the large-$N_E$ limit, even on a single gauge configuration.

In order to understand the efficiency of the competing methods for rendering good signals in more detail, we consider the variance of the estimate in each case. Asymptotically one finds

$$\sigma^2 = \sigma^2_{gauge} + \begin{cases} \dfrac{1}{N_E}\big\langle \sigma^2_{off}\,P^2\big\rangle & \text{for SETZ}\\[4pt] \dfrac{1}{N_E}\big\langle(\sigma^2_{on}+\sigma^2_{off})\,P^2\big\rangle & \text{for SETG}\\[4pt] \Big\langle\big(\textstyle\sum_{i\neq j}M^{-1}_{i,j}\big)^2 P^2\Big\rangle & \text{for VST}\end{cases} \;+\; 2\,\mathrm{COV}. \tag{8}$$

$\sigma^2_{gauge}$ is the variance calculated with the exact value of $\mathrm{Tr}(M^{-1})$ on each configuration, and $\sigma^2_{on}$ and $\sigma^2_{off}$ are the variances due to the distribution of $\bar T_{on}$ and $\bar T_{off}$ within the process of stochastic estimation.
The cornered brackets stand for the average over gauge configurations. The abbreviation COV in eq. (8) reads in its full length

$$\mathrm{COV} = \mathrm{cov}\begin{cases}\big(P\,\mathrm{Tr}(M^{-1}),\; P\,\bar T_{off}\big) & \text{for SETZ}\\[4pt] \big(P\,\mathrm{Tr}(M^{-1}),\; P\,(\bar T_{on}+\bar T_{off})\big) & \text{for SETG}\\[4pt] \big(P\,\mathrm{Tr}(M^{-1}),\; P\textstyle\sum_{i\neq j}M^{-1}_{i,j}\big) & \text{for VST.}\end{cases} \tag{9}$$

These formulae show how the signals will be blurred by the fluctuations of the estimates. (A similar analysis has been carried out in [10] for a modified version of VST, applied to the calculation of the scattering length.) Note that the terms in the large curly brackets of eq. (8) are all gauge invariant. Hence they survive even in the limit of an infinite sample of gauge configurations unless suppressed otherwise: within SETZ and SETG they can be fought through the explicit $1/N_E$ suppression, but there is no way to influence them at all within VST. (The terms collected in COV are likewise suppressed with growing $N_E$ for SETZ and SETG. In the case of SETG, COV contains a gauge invariant part due to $\bar T_{on}$, which can be removed only in the limit $N_E \to \infty$; all other terms in COV vanish for $N_C \to \infty$, as they are not gauge invariant.) So far the discussion is qualitative only, as we do not know the relative size of these additional terms in eq. (8). Furthermore, the calculation of the matrix element requires the investigation of the ratio $\langle P\,\mathrm{Tr}(M^{-1})\rangle/\langle P\rangle$ rather than the trace itself. This can entail cancellations due to additional correlations between numerator and denominator. Last but not least we have to keep in mind that these considerations are only valid in the asymptotic limits of the gauge and estimator samples. In the next section we will therefore present a numerical study of the situation, under actual working conditions of full QCD.

## 3 Numerical Results

Our present analysis is based on 157 configurations from our ongoing Hybrid Monte Carlo run[8, 9] with two degenerate flavours of dynamical Wilson fermions.

1. To set the stage, we consider first the implications of the stochastic estimator technique on the chiral condensate, and compare the performance of Gaussian and $Z_2$ noises in full QCD. In Fig. 1 we show – on a given configuration – the standard error resulting from the two methods as a function of the number of estimates, $N_E$. Clearly SETZ is superior (by about a factor of two) in obtaining a good signal for the condensate; we note that this is similar to the related quenched situation[3]. For this reason we pursue SETZ in the following.

2. An obvious way to economize is to properly adjust the accuracy in solving[11] eq. (1). We therefore compute in the next step the chiral condensate in its dependence on the inversion accuracy. Fig. 2 shows the convergence behaviour. The two horizontal lines in the plot refer to the margin from the stochastic noise on a single field configuration. It can be seen that below a certain accuracy the result is safely inside this margin; it is therefore sufficient to operate with a correspondingly weak convergence condition.

3. The next question relates to the size of the sample of gauge configurations required to observe a signal of the scalar density matrix element. This quantity (throughout this work we applied gauge invariant Gaussian smearing [12] to the proton operator at the source) is extracted from

$$R(t)_{disc} = \frac{\big\langle P(0\to t)\,\mathrm{Tr}(M^{-1})\big\rangle}{\big\langle P(0\to t)\big\rangle} - \big\langle \mathrm{Tr}(M^{-1})\big\rangle \;\overset{t\ \mathrm{large}}{\longrightarrow}\; \mathrm{const} + t\,\langle P|\bar q q|P\rangle^{latt}_{disc}. \tag{10}$$

To start we choose a large number of stochastic sources, as proposed for quenched simulations in Ref.[4], in order to avoid bias from the nonlocal pollutions to the trace.
In Fig. 3 we show how the linear rise in $R(t)_{disc}$ evolves more and more clearly as the sampling is increased from 50 to 100 and 157 gauge configurations. The linear fits to these data yield the values 5.89(2.37), 3.44(1.45) and 2.51(0.77) for the slope. We conclude that it appears mandatory to work on samples of at least 100 configurations. Under this condition we (a) retrieve a reasonable signal and (b) find the statistical errors on the data points to follow the expected $1/\sqrt{N_C}$ behaviour. For comparison we show in Fig. 4 $R(t)_{disc}$ calculated with VST on 50, 100 and 157 gauge configurations. Note that the signal-to-noise ratio is much worse in this case. For the slope we find 7.13(4.01), 5.40(2.8) and 3.50(2.20), consistent with SETZ, although with much larger statistical errors.

4. Given the signal on our present sample of 157 configurations we can now ask whether and how far we can actually relax the number of stochastic estimates, $N_E$, without deteriorating its quality. The starting point is the observation that the fluctuations from SET add to the inherent fluctuations from the gauge field sample (see eq. (8)). For economics, $N_E$ should be chosen just large enough to suppress this undesired effect. Fig. 5 shows the response of the mean values as well as of the corresponding variances to changes in $N_E$. We find that the mean values are not affected by the variation of $N_E$ within the statistical accuracy. The variances, however, display a significant decrease at small $N_E$, which ends in a somewhat noisy plateau. Obviously it does not pay to increase $N_E$ beyond a value of about 10. For comparison we also determined the corresponding results with VST on the identical sample of gauge configurations. Based on the variance we find SETZ to outperform VST substantially. In terms of statistics we recover a gain of a factor 2.5 to 3 from using SETZ instead of VST in the present application. The full $N_E$-dependence of $R(t)_{disc}$ is displayed in Fig. 6, showing again that 10 estimates within SETZ are sufficient to produce a reasonable signal on our sample of 157 configurations, while the signal is nil with one estimate only. On the other hand 300 estimates are definitely (and fortunately!) unnecessary.

## 4 Conclusion and Outlook

We have presented a full QCD study of the stochastic estimator technique applied to the disconnected diagrams of the scalar density. We found that it is indeed possible to achieve clean signals from as few as 10 $Z_2$-noise sources, with a weak convergence requirement on the iterative solver. This saves a factor of 30 in computer time compared to previous applications of this technique in quenched simulations. With as few as 157 dynamical field configurations we obtain a reliable signal on the disconnected contribution to the scalar density. This opens the door to a detailed analysis of the $\pi$N $\sigma$-term as well as the axial vector matrix elements in the proper setting, i.e. without recourse to the quenched approximation. Work along this line is in progress. It is obvious that in full QCD simulations the use of appropriately optimized $Z_2$-noise techniques will be of utmost importance when it comes to the estimate of more complex quantities in flavour-singlet channels, where the underlying correlators contain two disconnected fermion loops. Exploratory quenched applications of VST to compute the annihilation diagrams in the isospin-zero channel at threshold have attained good signals[13, 14]. This makes us hope that medium size sampling will lead to reliable signals in full QCD as well, once optimized estimator techniques are applied to grasp the loops.
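To make the variance comparison of Section 2 concrete, here is a small self-contained sketch (our illustration, not part of the paper) of the stochastic trace estimator with real $Z_2$ versus Gaussian noise on a random test matrix standing in for $M^{-1}$. Because $|\eta_i|^2 = 1$ holds exactly for $Z_2$ noise, the diagonal ($\bar T_{on}$-type) fluctuation is absent:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical stand-in for M^{-1} (not an actual lattice Dirac operator):
# eta^dagger A eta estimates Tr(A) on average.
A = rng.standard_normal((n, n)) / np.sqrt(n) + 2.0 * np.eye(n)

def estimate_trace(n_est, noise):
    """Stochastic trace estimate and its standard error from n_est noise vectors."""
    vals = []
    for _ in range(n_est):
        if noise == "z2":
            eta = rng.choice([-1.0, 1.0], size=n)   # real Z_2 noise: eta_i^2 = 1
        else:
            eta = rng.standard_normal(n)            # Gaussian noise: eta_i^2 fluctuates
        vals.append(eta @ A @ eta)
    return np.mean(vals), np.std(vals) / np.sqrt(n_est)

print("exact Tr(A) =", np.trace(A))
for noise in ("z2", "gaussian"):
    mean, err = estimate_trace(100, noise)
    print(f"{noise:8s}: {mean:.2f} +/- {err:.2f}")
```

In this toy setting the $Z_2$ error bar typically comes out markedly smaller than the Gaussian one, qualitatively mirroring the roughly factor-of-two advantage the paper reports in Fig. 1.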
Acknowledgements. We are grateful to DESY, DFG and KFA for substantial computer time on their QH2 Quadrics systems at DESY/Zeuthen, University of Bielefeld and on the Q4 and CRAY T90 systems at ZAM/KFA. Thanks to Hartmut Wittig, Markus Plagge and Norbert Attig for their kind support. This research has been supported by DFG (grants Schi 257/1-4 and Schi 257/3-3) and by EU contracts SC1*-CT91-0642 and CHRX-CT92-0051. ## References • [1] R. Altmeyer, M. Göckeler, R. Horseley, E. Laermann, and G. Schierholz, Nucl. Phys. B (Proc.Suppl.) 34(1994)376. • [2] J.E. Mandula, M.C. Ogilvie, Phys. Lett.B312(1993)327. • [3] S.J. Dong and K.F. Liu, Nucl.Phys. B (Proc.Suppl.) 26(1992)353; Phys. Lett.B328(1994)130. • [4] S.J. Dong and K.F. Liu, Nucl.Phys. B (Proc.Suppl.) 26(1993)487; K.F. Liu, S.J. Dong, T. Draper and W. Wilcox, Phys. Rev. Lett.74(1995)2172; S.J. Dong, J.F. Lagaë, and K.F. Liu, Phys. Rev. Lett.75(1995)2096. • [5] Y. Kuramashi, M. Fukugita, H. Mino, M. Okawa, and A. Ukawa, Phys. Rev. Lett.75(1995)2092; Phys. Rev. D 51 (1995)5319. • [6] Bitar et al., Nucl. Phys. B313(1989)348. • [7] for a review, see M. Okawa, Nucl. Phys.B (Proc.Suppl.) 47(1996)160. • [8] SESAM Collaboration: U. Glässner, S. Güsken, H. Hoeber, Th. Lippert, X. Luo, G. Ritzenhöfer, K. Schilling, and G. Siegert, Nucl. Phys.B (Proc.Suppl.) 47(1996)386. • [9] SESAM Collaboration: N. Eicker, U. Glässner, S. Güsken, H. Hoeber, Th. Lippert, X. Luo, G. Ritzenhöfer, K. Schilling, G. Siegert, and J. Viehoff, in preparation. • [10] M. Fukugita, Y. Kuramashi, M. Okawa, H. Mino and A. Ukawa, Phys. Rev. D 52 (1995)3003. • [11] A. Frommer, V. Hannemann, B. Nöckel, T. Lippert and K. Schilling, Int. J. Mod. Phys. C5 (1994) 1073. • [12] S. Güsken, U. Löw, K.H. Mütter, R. Sommer, A. Patel and K. Schilling, Nucl. Phys. B (Proc.Suppl.) 17 (1990)361. • [13] Y. Kuramashi, M. Fukugita, H. Mino, M. Okawa, and A. Ukawa, Phys. Rev. Lett.72(1994)3448. • [14] Y. Kuramashi, M. Fukugita, H. Mino, M. Okawa, and A. Ukawa, Phys. Rev. Lett.75(1993)2387.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8591367602348328, "perplexity": 1760.0559024945876}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00009.warc.gz"}
http://math.stackexchange.com/questions/100461/on-the-completeness-of-the-generalized-laguerre-polynomials
On the completeness of the generalized Laguerre polynomials

I am trying to prove that the generalized Laguerre polynomials form a basis in the Hilbert space $L^2(\mathbb{R})$.

1. Orthogonality $$\int_0^{\infty} e^{-x}x^kL_n^k(x)L_{m}^k(x)dx=\dfrac{(n+k)!}{n!}\delta_{mn}$$ 2. Completeness (?) $$\sum_{n=0}^{+\infty}L_n^k(x)L_{n}^k(y)=?\delta(x-y)$$ I am having trouble with the second relation; can anyone give a reference where it is proven, or a hint for a proof?

- As $L^k_n=x^n/n!+$ lower degree terms, the sequence $L^k_n$, $n=0,1,2,\dots$ can be obtained from $1,x,x^2,x^3,\dots$ by the Gram-Schmidt orthonormalization process. The completeness is therefore equivalent to the completeness of polynomials in $L^2(\mathbb{R}_+, e^{-x}x^k\,dx)$. – user8268 Jan 19 '12 at 22:08
- Of course, $L^2(\mathbb{R})$ should read $L^2(\mathbb{R}_+)$. – ˈjuː.zɚ79365 Jun 12 '13 at 1:10

Completeness of an orthogonal sequence of functions is a bit tricky on unbounded intervals, while it is relatively straightforward on bounded intervals. In the case of Laguerre and Hermite polynomials, there is a nice trick due to von Neumann that allows the reduction to bounded intervals. There seems to be a bit of confusion about the interval in the statement of the question. Here's a correct statement: For any real number $\alpha \gt -1$ the functions $\langle e^{-x/2} x^{\alpha/2} L_{n}^{(\alpha)}(x)\rangle_{n=0}^\infty$ obtained from the Laguerre polynomials $L_{n}^{(\alpha)}(x)$ are a complete orthogonal system in $L^2(0,\infty)$. The Hermite polynomials $H_n(x)$ yield the complete orthogonal system $\langle e^{-x^2/2} H_n(x)\rangle_{n=0}^\infty$ in $L^2(\mathbb{R})$. This is proved in detail in the classic book Gábor Szegő, Orthogonal polynomials, Chapter 5. The entire chapter discusses the main properties of the Laguerre polynomials $L^{(\alpha)}_n(x)$ for an arbitrary real number $\alpha \gt -1$ and proves their completeness in Section 5.7. More precisely, Szegő shows in Theorem 5.7.1 on pages 108f that for fixed $\alpha \gt -1$ the functions $f_n(x) = e^{-x/2}x^{\alpha/2} x^n$ span a dense subspace of $L^2(0,\infty)$. The first idea is to use a change of variables $y = e^{-x}$ in order to use the case of $L^2(0,1)$ where density of the span of $(\log1/y)^{\alpha/2} y^n$ is not too hard to prove (see Theorem 3.1.5). Write a function in $L^2(0,\infty)$ as $e^{-x/2} x^{\alpha/2} f(x)$. Then we have that $(\log1/y)^{\alpha/2} f(\log(1/y)) \in L^2(0,1)$ can be approximated by functions of the form $(\log1/y)^{\alpha/2} p(y)$ where $p$ is a polynomial. Transforming back to $(0,\infty)$ this shows that $$\int_{0}^\infty e^{-x} x^\alpha (f(x) - p(e^{-x}))^2 \,dx \lt \varepsilon$$ for a suitable polynomial $p$. This reduces the task to proving that for all natural $k$ there exists a polynomial $q$ such that $$\tag{\ast} \int_{0}^\infty e^{-x} x^\alpha (e^{-kx} - q(x))^2\,dx$$ is as small as we wish. To do this, von Neumann's trick is to use the generating function of the Laguerre polynomials $L_{n}^{(\alpha)}(x)$ $$(1-w)^{-\alpha-1} \exp\left(-\frac{xw}{1-w}\right) = \sum_{n=0}^\infty L_n^{(\alpha)}(x) w^n.$$ Choosing $w = \frac{k}{k+1}$ we have $\exp\left(-\frac{xw}{1-w}\right) = \exp{(-kx)}$. Thus, a natural choice for $q$ is $q_N(x) = (1-w)^{\alpha+1} \sum_{n=0}^N L_n^{(\alpha)}(x) w^n$ with large enough $N$.
Plugging this into $(\ast)$ we obtain using the orthogonality relations \begin{align*} \int_{0}^\infty e^{-x} x^\alpha (e^{-kx} - q_N(x))^2\,dx & = (1-w)^{2\alpha+2} \int_{0}^\infty e^{-x} x^\alpha \left(\sum_{n=N+1}^\infty L_{n}^{(\alpha)}(x) w^{n}\right)^2\,dx \\ &= (1-w)^{2\alpha+2} \Gamma(\alpha+1) \sum_{n=N+1}^\infty \binom{n+\alpha}{n} w^{2n} \end{align*} where term-wise integration is justified using an application of Cauchy-Schwarz. It remains to observe that the last expression tends to $0$ as $N \to \infty$. Another reference discussing the case of $\alpha = 0$ nicely is Courant and Hilbert, Methods of mathematical physics, I, §9, sections 5 and 6. They discuss ordinary Laguerre and Hermite polynomials and their completeness.
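As a quick numerical sanity check of the orthogonality relation quoted in the question (an addition of ours, using the same weight $e^{-x}x^k$ and normalization $(n+k)!/n!$), one can integrate the weighted product of generalized Laguerre polynomials with SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_genlaguerre, gamma

def overlap(n, m, k):
    """integral_0^inf e^{-x} x^k L_n^k(x) L_m^k(x) dx"""
    f = lambda x: np.exp(-x) * x**k * eval_genlaguerre(n, k, x) * eval_genlaguerre(m, k, x)
    val, _ = quad(f, 0, np.inf)
    return val

k = 2
print(overlap(3, 5, k))                 # ~ 0, by orthogonality
print(overlap(4, 4, k))                 # ~ (4+k)!/4! = 6!/4! = 30
print(gamma(4 + k + 1) / gamma(4 + 1))  # 30.0, the predicted normalization
```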
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9871561527252197, "perplexity": 118.2210071843648}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701163438.83/warc/CC-MAIN-20160205193923-00235-ip-10-236-182-209.ec2.internal.warc.gz"}
https://www.khanacademy.org/math/multivariable-calculus/integrating-multivariable-functions
# Integrating multivariable functions

Contents

There are many ways to extend the idea of integration to multiple dimensions: line integrals, double integrals, triple integrals, surface integrals, etc. Each one lets you add infinitely many infinitely small values, where those values might come from points on a curve, points in an area, points on a surface, etc. These are all very powerful tools, relevant to almost all real-world applications of calculus. In particular, they are an invaluable tool in physics.

## Line integrals for scalar functions (videos)

With traditional integrals, our "path" was straight and linear (most of the time, we traversed the x-axis). Now we can explore taking integrals over any line or curve (called line integrals).

## Line integrals for scalar functions (articles)

Rather than integrating along a straight line, such as the x-axis, we will now start thinking about meandering through space. This topic starts with arc length, which leads in nicely to the broader idea of line integration.

## Line integrals in vector fields (videos)

You've done some work with line integrals of scalar functions and you know something about parameterizing position-vector-valued functions. In that case, welcome! You are now ready to explore a core tool of math and physics: the line integral for vector fields. Need to know the work done as a mass is moved through a gravitational field? No sweat with line integrals.

## Line integrals in vector fields (articles)

After introducing line integrals in the context of scalar-valued functions, we see how to integrate along curves which wander through a vector field. This leads to a very beautiful extension of the fundamental theorem of calculus, known as the fundamental theorem of line integrals.

## Double integrals (videos)

A single definite integral can be used to find the area under a curve. With double integrals, we can start thinking about the volume under a surface!

## Double integrals (articles)

A single definite integral can be used to find the area under a curve. With double integrals, we can start thinking about the volume under a surface! More generally, double integrals are useful anytime you feel the need to add up infinitely many infinitely small quantities inside some two-dimensional region.

## Triple integrals (videos)

This is about as many integrals as we can use before our brains explode. Now we can sum variable quantities in three dimensions (what is the mass of a wacky 3-D object that has variable density?)!

## Triple integrals (articles)

Triple integrals are a way of integrating throughout a three-dimensional region in space.

## Surface integral preliminaries (videos)

Here, Sal covers some of the skills you need to be able to understand surface integrals.

## Surface integrals (articles)

Just as line integrals give you the ability to add up points on a line, and double integrals give you the ability to add up points in a two-dimensional region, surface integrals are a mechanism for adding up points on a curved surface in three-dimensional space.

## Flux in 3D (videos)

Flux can be viewed as the rate at which "stuff" passes through a surface. Imagine a net placed in a river, and imagine the water that is flowing directly across the net in a unit of time--this is flux (and it would depend on the orientation of the net, the shape of the net, and the speed and direction of the current). It is an important idea throughout physics and is key for understanding Stokes' theorem and the divergence theorem.
## Flux in 3D (articles) Learn how to compute surface integrals in a vector field, which involves constructing a unit normal vector to a surface. This lets you compute how much fluid flows through a given surface, which provides an intuition much more broadly applicable than just fluids.
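To tie the "volume under a surface" description above to something executable (our example, not part of the course page), SciPy's dblquad evaluates a double integral over a rectangle directly:

```python
from scipy.integrate import dblquad

# Volume under f(x, y) = x^2 + y^2 over the unit square [0,1] x [0,1].
# Exact value: 1/3 + 1/3 = 2/3.
f = lambda y, x: x**2 + y**2                      # dblquad expects f(y, x)
volume, abs_err = dblquad(f, 0, 1, lambda x: 0.0, lambda x: 1.0)
print(volume)  # ~ 0.6666666666666667
```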
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9214860200881958, "perplexity": 442.7375743275025}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718426.35/warc/CC-MAIN-20161020183838-00317-ip-10-171-6-4.ec2.internal.warc.gz"}
Research Article

# Genome-Wide Effects of Long-Term Divergent Selection

• Affiliation: Department of Animal Breeding and Genetics, Swedish University of Agricultural Sciences, Uppsala, Sweden
• Affiliation: Department of Animal Breeding and Genetics, Swedish University of Agricultural Sciences, Uppsala, Sweden
• Affiliation: Department of Animal and Poultry Sciences, Virginia Polytechnic Institute and State University, Blacksburg, Virginia, United States of America
• [email protected] Affiliations: Department of Animal Breeding and Genetics, Swedish University of Agricultural Sciences, Uppsala, Sweden, and Department of Cell and Molecular Biology, Uppsala University, Uppsala, Sweden
• Published: November 04, 2010
• DOI: 10.1371/journal.pgen.1001188

## Abstract

To understand the genetic mechanisms leading to phenotypic differentiation, it is important to identify genomic regions under selection. We scanned the genome of two chicken lines from a single-trait selection experiment, in which 50 generations of selection have resulted in a 9-fold difference in body weight. Analyses of nearly 60,000 SNP markers showed that the effects of selection on the genome are dramatic. As a result of selection, the lines were fixed for alternative alleles in more than 50 regions. Another 10 regions displayed strong evidence for ongoing differentiation during the last 10 generations. Many more regions across the genome showed large differences in allele frequency between the lines, indicating that the phenotypic evolution of the lines over 50 generations is the result of an exploitation of standing genetic variation at 100s of loci across the genome.

## Author Summary

Evolution is the process of change in response to selection. Typically, this results in more or less obvious changes to the appearance and physical properties—the phenotype—of an organism. However, these changes reflect underlying changes to the genome of that organism—the genotype. We examine the genomes of two lines of chickens that share a very recent ancestry but have been subjected to 50 generations of selection for high or low body weight, respectively. The effect of selection on the phenotype was dramatic: on average, high-line birds are nine times heavier than low-line ones. The effect on the genotype was equally dramatic, with a large number of changes distributed all across the genome. We observed more than 100 regions where different genetic variants were established in the two lines of chickens, which is considerable given the number of generations involved.

### Introduction

Evolution is the process by which populations adapt genetically in response to selection. Understanding the genetic mechanisms leading to phenotypic differentiation requires identification of the regions in a genome that are, or have been, under selection. Maynard Smith and Haigh [1] proposed to find these loci by searching for genetic hitch-hiking (now also called “selective sweeps” [2]). Most reported selective sweeps surround novel, major-effect mutations that appeared on a single haplotype before sweeping through a population. A potentially more common type of sweep starts from standing genetic variation present at the onset of selection - the “soft sweep” [3]–[6]. Domestic animals and plants have been used as models to study both simple monogenic and complex polygenic traits.
One of the unique features of these populations is that their reproduction has been under human control for a long time, and planned selection of individuals has led to an exceptionally wide range of phenotypes within species. Here, we report the results of a genome-wide scan, using a 60 K SNP chip, in two chicken lines from a long-term, bi-directional, single-trait selection experiment. In the Virginia chicken lines used in this study, 40 generations of selection resulted in a nine-fold difference in 56-day body weight (the selected trait) between the lines [7]. Long-term selection experiments, where animal and plant breeders have subjected populations to very strong and meticulously documented directional selection for generations, provide a valuable resource for studying the effects of selection [8], [9]. The resulting populations are examples of accelerated evolution, where the genetic and phenotypic changes correspond to those that would most likely take centuries to achieve under the selection pressures in natural populations.

The Virginia lines are a chicken resource population for studying the genetic, genomic and phenotypic effects of long-term, single-trait, divergent artificial selection [10]. In 1957, founders for one high- and one low-body-weight line were selected from a 7-way cross between partially inbred White Plymouth Rock chickens. Once a year, with some restrictions imposed to minimise inbreeding, the birds with the highest and lowest 8-week body weight within each respective line were selected as parents for the next generation. After more than 40 generations of selection, there was a 9-fold difference in body weight between the lines [7], and a significant selection response continues through 50 generations of selection. Sublines, where selection was relaxed, were established periodically within both the high and low body weight lines to serve as unselected controls. After some generations, the relaxed lines originating from the high line had lower body weights than the line continuously selected for high body weight, and the relaxed lines originating from the low line had heavier body weights than the selected low line [10]. This pattern reinforces the notion that the observed change in phenotype is indeed due to the continuous selection process.

The Virginia lines are a valuable resource for studying the effects of selection on the genome. Of particular importance is that the experiment involved bi-directional selection and that the population history, including population sizes, selection intensities, and expected and observed selection responses each generation, is known. This information allows a better separation of the genomic effects of selection and drift than would otherwise be the case. Together with the advent of a new high-density chicken SNP chip, the Virginia lines allow a detailed investigation of the effect of selection on the genome that was not previously possible.

One current paradigm for identifying selective sweeps (hitch-hiking) is to scan the genome of a selected population for regions of homozygosity (e.g. Sabeti and co-workers [11]). In these analyses, it is assumed that the selected allele was present on a single haplotype at the beginning of selection, which is the case when selection acts on a novel mutation. When the beneficial allele is present on multiple haplotypes, effects of selection will not be detected using this approach.
If there is standing (or cryptic) genetic variation in a population, which is likely when selection acts on mutations that have existed in a population for some time before the onset of selection, the expected pattern of fixation is different [3]–[6], [12]. Although little is known about how common it is for selection to start from standing variation, initial studies of soft sweeps based on limited marker sets and partial genome coverage [13], [14] indicate that they might be common. In the Virginia lines, selection started from a mixed population where, at each selected locus, the selected allele might be present on haplotypes from any of the founder lines of the base population. The selected allele might thus be in high linkage disequilibrium (LD) with some marker alleles (i.e., SNPs) and in lower LD with other marker alleles that are physically close on a chromosome. Therefore, we would not necessarily expect to observe regions with complete fixation of all SNPs around the selected loci, but instead regions where some SNPs display large frequency differences between lines (in the extreme case, fixation for different alleles) while other adjacent SNPs display little frequency difference between lines. Because evidence for selection is strong in these lines, as shown by the selection response and the results from the relaxed lines, our aim was to identify the genetic elements most likely to have been under intense selection by finding the regions in the genome with the most extreme allele frequency differences between the lines. Here, we report on a genome-wide scan for soft sweeps designed to identify those SNPs that are in LD with regions of the chicken genome that have been under selection during the breeding of the Virginia lines. Analysing 57,636 SNPs in individuals from both the high- and low-body-weight lines after 40 and 50 generations of selection provides a detailed picture of both past and present genomic effects of selection, as well as insights into how selection has acted on the genome to achieve the considerable response to selection.

### Results

#### Fixations in the two lines

Genotypes from both the high and the low lines were studied at two time points, namely after 40 and 50 generations of selection. 57,636 SNPs were genotyped in 20 individuals from each line after 40 generations of selection, and in 10 individuals from the low line and 49 from the high line after 50 generations of selection. The 60 K SNP chip provides a marker density of approximately 1 marker/15 kb. The extent of LD for the SNPs on this chip in the population is not known, but estimates from genome re-sequencing of the lines suggest an LD block size in these populations of 30 kb (microchromosomes) to 60 kb (macrochromosomes). The extent of LD is expected to be relatively large due to three relatively recent bottlenecks in these populations: breed formation, inbreeding of the lines used to create the base population, and the limited size of the base population. It is unlikely that any of the SNPs on the chip is causative, but most causative mutations are likely to be linked with at least one marker. 56,586 SNPs had genotypes in both lines after 40 generations and 56,561 after 50 generations of selection. Of the 32,846 SNPs that were polymorphic in generation 40, 13,579 were polymorphic in both lines, 10,237 only in the low line, 8,032 only in the high line, and 998 were fixed for alternative alleles in the two lines.
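A minimal sketch of how such a per-SNP fixation classification can be tallied is given below. The `freq_high` and `freq_low` vectors (reference-allele frequencies per SNP in each line) are hypothetical stand-ins for the real genotype data, which is not reproduced here; the logic simply mirrors the four categories reported above.

```r
# Sketch: classify SNPs by fixation status in two lines, given vectors of
# reference-allele frequencies (one element per SNP) in each line.
classify_fixation <- function(freq_high, freq_low) {
  fixed_high <- freq_high %in% c(0, 1)
  fixed_low  <- freq_low  %in% c(0, 1)
  ifelse(fixed_high & fixed_low & freq_high != freq_low, "different alleles",
  ifelse(fixed_high & fixed_low,                         "same allele",
  ifelse(fixed_high,                                     "high only",
  ifelse(fixed_low,                                      "low only",
                                                         "segregating in both"))))
}

# Toy example with 5 SNPs:
freq_high <- c(1, 0, 1, 0.35, 0.50)
freq_low  <- c(0, 0, 0.6, 0.35, 1)
table(classify_fixation(freq_high, freq_low))
```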
There were more fixed SNPs in the sample from the high line, which was expected based on the empirical observation that the phenotypic response to selection ceased in the low line at about generation 30 (Figure 1). In generation 50, an additional 748 SNPs were fixed for different alleles in the two lines – an increase of 75% – most of which were already fixed in one line at generation 40 (Tables S1 and S2).

#### Allele frequency differences between lines and generations

Figure 2 illustrates the different samples included in the study and the two types of comparisons made using these data. First, allele frequencies at all SNPs were compared across time within each line (arrows labelled A in Figure 2). This comparison identifies the regions within each line with the largest changes in allele frequencies between generations 40 and 50. Then, allele frequencies for all SNPs were compared between the high and low lines at two different time points, generations 40 and 50 (arrows labelled B in Figure 2), to identify where in the genome the SNPs indicate the strongest divergence between the lines. To evaluate the significance of observed differences in allele frequencies between lines and between sample points within a line, association analyses were performed using PLINK [15]. Within-line comparisons of frequencies at generations 40 and 50 (comparisons A in Figure 2) reveal the effects of recent and ongoing selection. The analyses identified significant differences in many regions dispersed over the entire genome. In the high line, there are highly significant changes in allele frequencies (p<0.001) on 10 chromosomes and significant changes (p<0.05) on 6 additional chromosomes. For example, on chromosome 1 (Figure 3) there were six regions with significant differences between generations 40 and 50 in the high line; those regions are thus the most likely to have been under intense recent selection within this line. The low line only shows significant differences (p<0.05) on two chromosomes (for details, see Figure S1). This lower number of currently affected regions is expected given the low response to selection since about generation 30. Comparisons between the high and low lines at generations 40 and 50 (comparisons B in Figure 2) revealed many highly significant differences between them across the genome at both time points (Figure S2). For example, there were at least ten regions with highly significant allele frequency differences between the lines on chromosome 4, both at generation 40 and at generation 50. These regions were likely to have been under intense selection earlier in the selection process. An example of a region with recent divergence between the lines was between 60 Mb and 80 Mb on chromosome 4 (Figure 4). This could be an interesting region to study further, as the different selection response in the lines could be caused by the region containing one or several genes that display genetic-background-dependent effects (i.e., epistasis). It is noteworthy that, despite the relatively low number of individuals, a test for allele frequency differences yields a χ2 value of 80 for a SNP fixed for different alleles in the two lines, which is highly significant even with full Bonferroni correction for multiple testing. For comparisons with other studies, it is also useful to realize that the χ2 and p values from the allelic χ2-test are the same as those from a χ2-test of Fst; i.e., Fst was also highly significant at all the identified regions across the entire genome.
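The χ2 value of 80 follows directly from the 2×2 allelic table: with 20 birds per line at generation 40, a SNP fixed for alternative alleles gives complete separation of 40 allele copies per line, and for a 2×2 table with complete association the χ2 statistic equals the total allele count. A small sketch, using base R's `chisq.test` in place of PLINK's assoc option (which performs the same allelic test):

```r
# Allelic 2x2 test for a SNP fixed for alternative alleles.
# Generation 40: 20 birds per line = 40 allele copies per line.
gen40 <- matrix(c(40, 0,    # high line: 40 copies of "A", 0 of "a"
                  0, 40),   # low line:   0 copies of "A", 40 of "a"
                nrow = 2, byrow = TRUE)
chisq.test(gen40, correct = FALSE)  # X-squared = 80

# Generation 50: 49 high-line + 10 low-line birds = 98 + 20 allele copies,
# giving the maximum chi-square of 118 quoted for Figure S2. R will warn
# about small expected counts here -- exactly why the authors also checked
# their results with Fisher's exact test.
gen50 <- matrix(c(98, 0, 0, 20), nrow = 2, byrow = TRUE)
chisq.test(gen50, correct = FALSE)  # X-squared = 118
```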
To measure the dynamics within the genomes of the low and high lines, allele frequency changes resulting from 10 generations of selection (from generation 40 to 50) were studied. The loci with the highest rates of allele frequency change are the regions most likely to contain genes under current selection. In total, there are 24 regions with significant allele frequency changes in at least one line, spread across the genome. Only one region, the beginning of chromosome 7, was significantly affected in both lines. This lack of correspondence is not entirely unexpected, because the lines have undergone a large number of independent fixation events, which makes it unlikely that the same regions are concurrently under selection after 40 generations of divergent selection. Figure 5 shows the results for chromosome 1. The complete results for all chromosomes are provided in Figure S3.

#### Simulations

A complicating factor when attempting to identify regions under selection, especially with small effective population sizes, is to discriminate between the effects of selection and drift. Because the full population history of these lines is known, we could use simulations to evaluate how selection and drift were expected to affect the genome. Previous studies to identify QTLs [7], [16], [17] indicate that selection has been strong on many loci in the genome. Using the estimated effects of the QTLs to calculate the selection coefficient (s) [18], [19] yields values of s in the range 0.19–0.93 (Table S3). The simulations show that selection on these loci was sufficiently strong to lead to a high probability of fixation after only 10–15 generations for the loci with larger effects, and well before generation 40 for many other loci (Tables S4 and S5). After 40 generations, the loci with the largest selection coefficients (i.e., those representing the effects of significant QTL for the selected trait) always reach fixation for the selected allele during the simulations with additive alleles. This is illustrated in Figure S4A, S4B, S4C, where selection is applied to the loci Growth4 (selection coefficient for males, sM = 0.56, and selection coefficient for females, sF = 0.34), Growth6 (sM = 0.93, sF = 0.56) and Growth9 (sM = 0.79, sF = 0.48) in the high line. Even for the QTL with the smallest effect, Growth12 (sM = 0.31, sF = 0.19), fixation occurred in 85% of the replicates at generation 40 (Figure S4D). Using a selection coefficient half the size of that of the smallest QTL (i.e., sM = 0.15, sF = 0.10) and otherwise the same parameters gives fixation in 45% of the replicates. Keep in mind that these values are for fixation within a single line; they should be squared to obtain the probability of concurrent fixation in both lines. The effective population size, Ne, for the selected lines estimated from the number of parents each generation is ~35 (see Table S6 for details). Calculations of Ne from the actual pedigrees up until generation 48 show higher values (44.5 for the high line and 49.3 for the low line) [20]. This demonstrates that the breeding scheme to limit inbreeding has been successful. Using Ne = 35, Ne·s for the previously identified QTL with the smallest effect is 35 × 0.19 ≈ 6.6, which is greater than 1, implying that selection is the predominant force at this locus [21]. The simulations support this, as the selected allele is always the one that becomes fixed, even for the QTL with the smallest effect.
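For intuition, a heavily simplified, single-locus Wright–Fisher sketch of this contrast is shown below. The study's actual simulations were individual-based, with two linked loci, sex-specific selection coefficients and the experiment's mating structure (see Materials and Methods); this sketch only reproduces the qualitative point that an allele with s in the estimated range fixes far faster than a neutral one at Ne ≈ 35. The parameter values (p0 = 4/7, h = 0.5, Ne = 35) follow the paper; the function itself is illustrative, not the published code.

```r
# Simplified single-locus Wright-Fisher sketch (not the published code).
# Fitnesses follow the paper's model: AA = 1, Aa = 1 - h*s, aa = 1 - s.
simulate_line <- function(p0 = 4/7, s = 0.5, h = 0.5, Ne = 35, gens = 50) {
  p <- p0
  for (g in seq_len(gens)) {
    w_bar <- p^2 + 2 * p * (1 - p) * (1 - h * s) + (1 - p)^2 * (1 - s)
    # Deterministic change from selection ...
    p_sel <- (p^2 + p * (1 - p) * (1 - h * s)) / w_bar
    # ... followed by binomial drift over 2*Ne gene copies.
    p <- rbinom(1, 2 * Ne, p_sel) / (2 * Ne)
    if (p == 0 || p == 1) return(g)  # generation of fixation or loss
  }
  NA  # still segregating after `gens` generations
}

set.seed(1)
sel_gens   <- replicate(1000, simulate_line(s = 0.5))
drift_gens <- replicate(1000, simulate_line(s = 0))
mean(!is.na(sel_gens))           # ~1: the selected locus essentially always fixes
median(sel_gens, na.rm = TRUE)   # typically on the order of 10 generations
mean(!is.na(drift_gens))         # far fewer neutral loci fix within 50 generations
```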
It should, however, be noted that the simulations use effects estimated for statistically significant QTL for the selected trait in a line-cross experiment. As these might include multiple genes affecting the trait, and as there will be a large number of additional loci with smaller effects on the trait, there will also be a large number of loci for which a balance between selection and drift will have determined which allele was fixed at the end of the experiment. Our results do, however, show that the population size has been sufficiently large to prevent genetic drift from overriding the effect of selection for the loci with the largest s-values in the selected lines. The simulations also show that for a locus with no selection (i.e., where there is only genetic drift), fixation in one of the lines occurs in only 10–20% of the replicates when the allele frequencies are intermediate in the base population (3/7 and 4/7), and in approximately 50% of the replicates when the initial frequencies are more uneven (1/7 and 6/7) (Figure S5). The probability of observing fixation of one of the alleles in one line, or of the same allele in both lines, is thus rather high, which is what we observe in the data. Approximately 30% of the SNPs were fixed in one line and not in the other, while another 45% were fixed for the same allele. It should, however, be noted that the group of markers displaying fixation for the same allele in both lines contains both SNPs that have drifted to fixation and SNPs that were monomorphic in the common base population. The simulations showed that the probability of fixation of one allele in one line and the other allele in the other line by drift alone is very low. If the initial allele frequencies in the base generation are 3/7 and 4/7 (the base population is a mixture of 7 lines), the probability of fixation for different alleles is 2 × (fixation probability for A) × (fixation probability for a) = 2 × 0.038 × 0.094 ≈ 0.007, i.e., about 0.7%; for 2/7 and 5/7 it is 0.4%, and for A = 1/7 and a = 6/7 it is 0.2%. The corresponding numbers for fixation of the same allele are 1%, 6% and 27%, respectively. If we assume a uniform distribution of initial frequencies, the expected proportion of loci fixed for the same allele in the two lines would be 11%, and the proportion fixed for different alleles in the two lines 0.44%. Since an unknown, but likely substantial, fraction of the SNPs were fixed in the base population, this value cannot easily be compared to the observed data. However, we can compare the observed fixation rate between generations 40 and 50 with the corresponding value from the simulations. In the simulations, the ratio of fixations of the same allele to fixations of different alleles is 3.98, again assuming a uniform allele-frequency distribution (an assumption that closely matches the true distribution of segregating SNPs in the data [data not shown]), whereas the observed ratio is 2.12. This indicates that about 50% of the fixations for different alleles are due to selection rather than drift. Given the decreased selection response in the low line during this period, it is likely that this figure is lower than the average for the entire selection process. We can also look at the raw number of expected fixations for different alleles to estimate the proportion of SNPs fixed by drift.
In the worst-case scenario, where all 56,000 SNPs segregated at intermediate frequencies in the original population (we used 3/7 and 4/7, as the founder population was a mixture of 7 partially inbred lines), at least 60% of the observed fixations for different alleles at 40 generations would be due to selection. If we instead assume a uniform distribution of allele frequencies in the base population, the proportion of markers fixed for alternative alleles due to selection would be 70%. These two alternative ways of separating the effects of drift from selection are in reasonably good agreement, and indicate that the proportion of fixed SNPs due to selection is in the range of 50% to 70%.

#### Heterozygosity in the two lines

The observed mean heterozygosity, Ho, was calculated at all autosomal loci in each line at both time points. Ho at 40 generations was 0.146 and 0.156 in the high and low lines, respectively. After 50 generations, Ho had decreased to 0.130 and 0.142. This decrease in heterozygosity was significantly (p = 0.0003) larger in the high line, and because the population structure is the same in both lines, it is logical that this excess loss is primarily a function of selection. We also observed a greater loss of genetic variance in the high line during the last generations of selection, when the response had weakened in the low line. All this is consistent with the greater response to selection in the high line during those ten generations of the selection experiment. Selection, however, continues in the low line, and thus the difference in heterozygosity loss only provides a minimum estimate of the effect of selection.

#### Expected number of loci determining the trait

Several theoretical methods exist for estimating the number of genetic factors (loci) that determine a complex trait in an experimental intercross between divergent lines [22]–[25]. The procedure of Otto and Jones [25], which takes as input the difference in mean between the parental lines and the effects of known QTL to predict the distribution of remaining additive effects, was used to estimate the number of loci affecting body weight in the intercross. When employing the most recent estimates of QTL effects in the lines [17], this method predicted that the selected trait - body weight at 56 days of age - was determined by 121 loci (Table 1). This is consistent with our results from comparisons of allele frequencies between the two lines, indicating that the selected trait is determined by a large number of loci. These estimates are, however, only an indication of the true number of loci. It is nevertheless interesting to note that all the data indicate that the number of loci involved is more likely to be large (on the order of 100s) than small.

#### Number of loci under concurrent selection

The genome-wide QTL profile from the scan for loci affecting body weight at 56 days of age in an F2 intercross between the selected lines [7] reveals about 30 discrete peaks where there is a significant (nominal p<0.05) additive genetic effect. We expect the estimated genetic effects of these loci, even though they do not reach the experiment-wide significance threshold, to have a distribution that resembles that of the genetic effects of the true loci that determine the line difference.
The observed distribution is approximately exponential (Figure S6), and as a consequence, the relative differences in genetic effects between the ordered loci are more or less constant. The s-values for the loci do not depend on the absolute size of the genetic effects; they are determined by the distribution of genetic effects across the segregating loci, by where in the ordered distribution a locus lies, and by how many loci contribute to the trait. When the distribution of genetic effects is exponential, there is a gradient in the strength of selection on individual loci. The locus with the largest effect will be under more intense selection than the locus with the second-largest effect, and the difference in selection intensity is proportional to the relative difference in their genetic effects. Thus, even though all loci that affect the selected trait will technically be under selection at all times, there will always be a subset of loci under more pronounced selection in the population. In our simulations, we show that the loci with the largest effects reach fixation in approximately 10–15 generations in this population. Fixation of these loci will affect the s-values of other loci via at least two mechanisms. Firstly, fixation of the strongest loci will increase the relative importance of all other loci. This is because (for additive genes) the selection differential scales with the allelic effect in standard deviations. As major genes are fixed, the genetic variance decreases and, as a consequence, so does the standard deviation, which results in an increase in the strength of selection. In the selection experiment, the standard deviations for 8-week weights of males from generations 20, 40 and 50 were 111, 139, and 179 g. The increase in standard deviation makes sense, as we are seeing large phenotypic changes; decreasing coefficients of variation do, however, indicate a decrease in the genetic variance due to selection. Respective values for the low-weight (LW) line, where there is a plateau at the phenotypic level, were 63, 54, and 60 g. How the relative strength of selection on the loci changes will depend on how their allelic effects scale: does weight increase by a constant amount over time, or does it scale with the increasing mean body weight of the population? This is not known, and cannot be estimated, but it is reasonable to expect a scaling with the mean, and if so, the relative strength of selection on these loci will increase over time. Secondly, earlier studies have shown that extensive capacitating epistasis is important in this population [16; Besnier, Pettersson and Carlborg, in preparation]. Due to genetic interactions, the genetic effects of some loci will increase with the changes in genetic background caused by selection. In addition, new mutations that occur during the selection process might create entirely new selected alleles with a larger selective advantage. In either case, it is likely that the current selection profile across the genome differs from what it was at the onset of selection. When studying the effect of 10 generations of selection (from S40 to S50), we observe strong sweep signals at approximately 10 loci, which seems reasonable given the expected distribution of genetic effects.

#### Clusters of fixation

Using a clustering criterion that required a maximum of 1 Mb between consecutive fixed SNPs, there were 116 clusters of at least two SNPs; these included 96.1% of the 998 SNPs fixed for different alleles and covered 10.2% of the genome.
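A minimal sketch of this clustering rule is shown below; `pos` is a hypothetical sorted vector of base-pair positions (on one chromosome) of SNPs fixed for alternative alleles, standing in for the real data.

```r
# Sketch: group fixed SNPs into clusters, allowing at most `max_gap` bp
# between consecutive fixed SNPs and keeping clusters with at least
# `min_snps` SNPs (1 Mb / 2 SNPs reproduces the criterion in the text).
find_clusters <- function(pos, max_gap = 1e6, min_snps = 2) {
  cluster_id <- cumsum(c(TRUE, diff(pos) > max_gap))  # new id at each big gap
  sizes <- ave(pos, cluster_id, FUN = length)         # SNPs per cluster
  split(pos[sizes >= min_snps], cluster_id[sizes >= min_snps])
}

# Toy example: two clusters survive; the singleton at 8 Mb is dropped.
pos <- c(1.0e6, 1.4e6, 1.9e6, 8.0e6, 12.0e6, 12.3e6, 12.9e6, 13.2e6, 13.8e6)
find_clusters(pos)
```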
Such a concentration of fixed SNPs into clusters indicates a highly non-random spatial distribution, which is not what we expect to observe if drift is responsible for a majority of the fixations. Using a more stringent criterion of at least 5 SNPs per cluster, there were 65 clusters, including 82.3% of the SNPs and covering 8.6% of the genome (Figure 6). In generation 50, there were 1746 SNPs fixed for different alleles, in 163 clusters of at least 2 SNPs or, using the more stringent criterion, 102 clusters with at least 5 SNPs. The number of clusters and the proportion of the genome covered are relatively stable to variation in the required number of SNPs per cluster and the distance between markers (Table S7). In both generations 40 and 50, more than half of the clusters with at least 5 SNPs were longer than 1 Mb, and about a quarter were larger than 2 Mb (Table S8). The results for clusters with at least 2 SNPs are shown in Table S9. The sizes in Mb and cM of the 23 clusters longer than 2 Mb at generation 50 can be seen in Table 2. The largest physical cluster was 5.4 Mb long and located on chromosome 2. The largest cluster with respect to recombination distance was 23.3 cM and located on chromosome 24. Nine of the largest clusters overlapped with previously identified QTLs. Depending on the criteria used for clustering, we thus observe between 102 and 163 clusters fixed for alternative alleles in the two lines at generation 50. Irrespective of the criteria used, these clusters contain more than 85% of the SNPs fixed for alternative alleles in the lines. Based on the calculations above, we expect between 50% and 70% of the SNPs that are fixed for alternative alleles to be due to selection. If we conservatively assume that the fixed SNPs are distributed randomly inside and outside of clusters, we would then expect between 51 and 114 of the observed clusters to be fixed due to selection. This observation fits well with the expectation of 121 major factors contributing to the selection response based on the quantitative genetic theory presented above. As can be seen in Table 2, the sizes of the 23 largest clusters, in terms of recombination distance, range between 5.0 and 23.3 cM. Since the probability that a region escapes recombination decreases exponentially with the number of generations, these regions were most likely fixed rapidly. As expected from population genetics theory (see e.g. [21]), our simulations show that fixation in a single line takes considerably longer for a neutral locus than for a locus with s-values similar to those in our data. For example, in 1,000 simulated replicates, the first fixation of a neutral locus occurred after 12 generations, and it took 35 generations before fixation was reached in 10% of the replicates. This should be compared with the 4 generations it took to reach the first fixation, and the 9 generations it took for 10% of the replicates to be fixed, for the locus with the largest effect (Table 3). The probability that a region of 5 cM will remain unaltered by recombination during the sweep to fixation in this population is 0.078 in 3 generations, 0.014 in 5 generations and 2.1×10−4 in 10 generations for allele frequencies of 1/7 and 6/7, and 6.0×10−3 in 3 generations, 2.0×10−4 in 5 generations and 3.8×10−8 in 10 generations for allele frequencies of 3/7 and 4/7. This example illustrates how rapidly the probability of unaltered haplotypes decreases with an increasing number of generations to fixation.
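These survival probabilities follow from the formula given in Materials and Methods, P = ((1−r) + r(p^2 + q^2))^(2Ng), with r obtained from the cluster's cM length via Haldane's map function. A small sketch that recomputes them under Ne = 35; it matches the quoted values up to rounding of r:

```r
# Probability that a cluster of the given genetic length escapes informative
# recombination during a sweep lasting `gens` generations (Methods formula).
# Recombinations in haplotype homozygotes (probability p^2 + q^2) are
# non-informative and do not break the cluster.
cluster_survival <- function(cM, p, Ne = 35, gens) {
  r <- 0.5 * (1 - exp(-2 * cM / 100))  # Haldane's map function (cM -> r)
  q <- 1 - p
  ((1 - r) + r * (p^2 + q^2))^(2 * Ne * gens)
}

cluster_survival(cM = 5, p = 1/7, gens = c(3, 5, 10))
# approx. 0.085, 0.017, 2.7e-4 -- close to the 0.078 / 0.014 / 2.1e-4 above
cluster_survival(cM = 5, p = 3/7, gens = c(3, 5, 10))
# approx. 7.1e-3, 2.6e-4, 6.8e-8 -- close to the 6.0e-3 / 2.0e-4 / 3.8e-8 above
```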
Our results indicate that it is improbable that 8 regions larger than 10 cM, and an additional 10 regions of 5–10 cM, would have swept through the selected population in the time required for neutral loci to become fixed; selection is a more likely explanation for the fixation of these large clusters. Of the 116 clusters identified after 40 generations of selection, 63% contained at least two consecutive fixed SNPs and could therefore be considered traditional hard sweeps. However, almost two thirds of these had only two consecutive fixed SNPs and would not be detected under more stringent clustering criteria. The largest stretch of consecutive markers fixed for different alleles is located on chromosome 2 and contains 8 SNPs. In generation 50, the clusters with at least 5 SNPs overlapped to a large extent with the clusters that contained at least 2 SNPs in generation 40. There were, however, 17 new clusters (Figure 6), which indicates that there were responses to selection at new loci during the last ten generations. Even though some of these new clusters might be due to drift, a number of them are likely to contain genetic elements that have recently come under effective selection. These could be alleles that were present from the beginning but were not strongly selected, owing to a relatively small effect size compared with other loci, and that have become more important as the scaled phenotypic variance decreased in response to selection [10]; or they could be epistatic loci, the effects of which have increased due to the changed genetic background [16]. Some of the loci may also be new favourable mutations, although the present data do not allow us to estimate how frequent these are. Moreover, all significant QTLs identified in the Virginia lines by Wahlberg et al. [17] contained one or several clusters of fixed SNPs (Figure 7).

### Discussion

Improving our understanding of the dynamic changes in allele frequencies that occur across the genome in response to selection is a challenge in genetics. The selection coefficients of loci will not remain constant throughout the time span of a long-term selection experiment. Loci with the largest effects are most likely to be fixed rapidly, resulting in an increase in the proportion of the total variance contributed by loci with smaller effects. Very little, however, is known about how many loci contribute to a complex trait and how many loci are under the most intense selection, i.e., undergoing the most rapid allele-frequency changes, at a given point in time. Several recent studies indicate that the number of loci contributing to complex traits is considerable (maize [26], the Illinois corn selection lines [27], height in humans [28]). These insights were, however, gained from studies of the association between phenotypes and genotypes, which implicitly means that there will be limits on the power to detect loci due to sample size. Population history and selection for multiple traits also complicate the picture. Here, we study the genomic effects of intense selection on a single complex trait, which facilitates more precise insights into basic genetic regulation and the dynamic changes that occur during selection. Earlier genetic studies of the Virginia lines have shown that more than 20 genome regions (QTL) are involved in the genetic regulation of the trait under selection, body weight [7], [16], [17], as well as of correlated responses including body composition and metabolic traits [29].
Our estimates of the expected number of loci contributing to the trait indicate that many loci remain unidentified. The probability of fixation for alleles with small effects is higher when selection acts on standing genetic variation than on a new mutation, due to the high likelihood of losing a weakly selected new mutation from the gene pool of the population. Thus, we would expect our approach to identify a larger number of loci than the previous QTL-mapping experiments based on these data, because only loci with rather large genetic effects would have reached the detection threshold in those experiments. This is also what we observed. Both the quantitative genetic and the molecular assays used to estimate the number of selected genetic elements agree that there is evidence for between 50 and more than 100 regions in the genome having been under strong selection over the first 50 generations of the selection experiment. This study demonstrates that selection on a complex trait will influence more regions than can be identified even in a comprehensive genetic mapping study, and that the genetic regulation of these traits is complex. Our criterion of requiring fixation for alternative alleles was very stringent, and it is therefore likely that additional regions beyond those reported were actually under selection. This becomes apparent when examining the data from generation 50, where 1746 SNPs were fixed for alternative alleles in our samples, including 17 new clusters of at least 5 fixed SNPs that were formed during the 10 last generations of the selection experiment. Some of these new clusters may have been under selection already earlier, but not strongly enough to reach fixation before 40 generations, while some might be due to new mutations that have occurred recently. The footprints of selection include regions spread throughout the genome, including previously identified QTLs as well as regions hitherto not implicated in body weight in chickens. As the regions of fixation, many of which certainly contain selected regions, are identified with very high resolution (in many cases the clusters cover <1 Mb), this information can be useful for identifying candidate genes and mutations involved in the phenotypic response to selection. Assigning functional effects to the identified regions, however, remains a future challenge. Selection coefficients for the genomic regions (QTL) identified in previous studies of these lines ranged from 0.93 to 0.31 and from 0.56 to 0.19 for high-line males and females, respectively, with very similar values for the low line (Table S3). Even if some of these selection coefficients are overestimates, they are, as a group, very high and illustrate the massive selective pressure on the genome in these lines. The intensity of selection is the most likely explanation for the remarkable differences in allele frequencies observed across the whole genome. Selective sweep analyses are powerful for identifying loci that display directional changes in allele frequencies that correlate with the phenotypic responses to selection. With the advent of more affordable methods for high-density genotyping and genome re-sequencing, this is a cost-effective approach to identifying loci determining complex traits, because small samples from existing, divergent populations can be used [30].
The resolution often allows identification of individual genes and thus provides useful insights into the genes and plausible mechanisms involved in the regulation of the traits for which the studied populations differ. A major drawback of sweep analyses is, however, that they do not provide causal evidence for the involvement of particular genetic polymorphisms in phenotypic expression. The divergent populations studied often differ in multiple traits, and it is not possible to identify which of these traits is affected by the polymorphisms. Furthermore, there are no additional insights into the potential genetic mechanisms involved, i.e., whether genes act independently or through interactions in complex gene networks. This information is, however, provided by, for example, linkage or association studies. It is therefore necessary to realise that selective sweep analyses are not a stand-alone method, but rather an addition to the complete set of tools used for understanding the inheritance of complex traits. An example of how sweep and linkage analyses complement each other is evident in this population. We have earlier used linkage analysis to identify a network of loci that, through strong interactions, have a major influence on body weight at 56 days of age [16]. Subsequently, we replicated the effects and refined their locations in an independent advanced intercross line population (Besnier et al., in preparation). The epistatic network contains four loci on chromosomes 3, 4, 7 and 20, and one or several sweeps in each of these regions clearly overlap with the QTL (Figure 7). Combining this information will be a highly useful strategy for identifying the causal mutations underlying the observed genetic interactions. It is not possible to conclusively rule out drift as the cause of any given fixation event or other observed change in allele frequencies. However, all available results indicate that the large phenotypic difference in body weight between the Virginia lines is the result of directional selection acting on a large number of regions spread across the genome. The number of loci involved in long-term selection response is likely to be in the 100s for a complex trait, and at any point in time selection is likely to act simultaneously on 10s of loci, even in populations of limited size. The identified loci are located with high resolution, which makes them obvious candidate regions for attempts to identify causal mutations. The two lines derive from the same founder population and were subjected to 50 generations of artificial selection that have led to changes in trait expression and genetics that may resemble those observed after 1000s of years of natural selection. What we observed are genome-wide changes that occurred in an accelerated and directed evolutionary process. In a broader perspective, the results provide insights not only into the effects of artificial selection, but also into what may be expected from natural selection when populations adapt to a new environment. This study shows the inherent power and efficiency of combining data from classic long-term selection experiments with modern genomics tools.

### Materials and Methods

#### Birds and genotyping

Genotyping was performed on 20 low- and 20 high-line chickens from generation S40 (the generation of the parents of the F2 cross described in Jacobsson et al. [7]), and 10 low- and 10 high-line chickens from generation S50.
At the later time point, we chose to genotype an additional 39 individuals from the high line because this line still exhibited a good response to selection, whereas the low line appeared to have phenotypically plateaued. The genotyping was performed by the company DNA Landmarks with the 60 K chicken chip produced by Illumina Inc. for the GWMAS Consortium. The animal husbandry for the later generations was the same as described for the previous generations [10]. All procedures involving animals used in this experiment were carried out in accordance with the Virginia Tech Animal Care Committee animal use protocols.

#### Simulations

Individual-based simulations with parameters chosen to mimic the Virginia lines were performed with code written in R [31], in order to evaluate the probability of fixation for selected and neutral loci. The numbers of selected males and females, the calculated proportion selected, and the selection intensity, i, are given in Table S6. For simplicity, the parameters for generations 5–25 of the selection experiment were used to simulate selection during all 50 generations, because the effective population size for these generations was close to the effective population size across all generations (34.55) (Table S6). The number of females per male was thus 48/12 = 4, and the number of offspring per female was six, which gives a population size (6 × 48 = 288) close to the mean population sizes in the selected lines. The selected lines originated from a founder population formed by crossing seven partially inbred (~36%) lines. We assume that the inbred lines were fixed for all loci, i.e., the starting haplotype frequencies were multiples of 1/7. Simulations were performed with two linked loci, A and B, with alleles A/a and B/b, where selection acts on locus A. The fitnesses of genotypes AA, Aa, and aa were modelled as 1, 1−hs, and 1−s, respectively, where s is the selection coefficient and h is used to model dominance. Note that since the selection intensity is different for males and females, there is one selection coefficient for males, sM, and another for females, sF, for each locus. Alleles with additive effects (h = 0.5) were assumed for the simulations in this paper. The selection coefficient, s, for a given QTL was estimated as s = i·2a/σ [18], [19]. The selection coefficients for the 11 QTLs with significant additive effects in Jacobsson et al. [7], in the low and high lines, are given in Table S3. The additive effect, a, and the phenotypic standard deviation, σ, for the QTLs are as described in Jacobsson et al. [7]. Simulations were performed for the QTL with the largest effect (Growth6 on chromosome 4), the QTL with the smallest effect (Growth12 on chromosome 20), and two additional loci (Growth4 and Growth9, on chromosomes 3 and 7, respectively). Fixation in the simulations was defined as all individuals in the simulated population being homozygous for the same allele. This should be kept in mind when comparing with the observed results, where fixation is measured in a genotyped sample from the selected population.

#### Association mapping

Association mapping was performed using the software package PLINK v1.07 [15]. The results in the manuscript are based on asymptotic p-values from the χ2-test (the assoc option in PLINK). As the expected counts in some cells of the χ2-test might be small for some SNPs, we also computed p-values using Fisher's exact test (the fisher option in PLINK) to check that the results did not change for this reason.
A comparison of the results using asymptotic p-values with those using Fisher's exact test reveals that, even though the p-values for individual SNPs differ slightly between the two tests, the overall conclusions do not change.

#### Fixation, heterozygosity, and clusters

Calculations of fixation, observed heterozygosity and clusters were performed in R [31]. The significance of the difference between the high and low lines in the decrease in heterozygosity at each locus between generations 40 and 50 was tested by a two-sided t-test in R (the function t.test). The length of the clusters in cM was calculated using the chromosome-specific cM/Mb ratios given in Table 2 of [17]. The length of the clusters in cM was then transformed to recombination frequency using Haldane's map function. The clusters will contain different alleles in the two lines if no recombination occurred during the fixation process, or if recombination occurred only in homozygous individuals (i.e., non-informative recombinations). The probability of this was calculated as ((1−r) + r(p^2 + q^2))^(2Ng), where r is the recombination frequency between the first and last positions in the cluster, p and q are the haplotype frequencies, N is the effective population size, and g is the number of generations until fixation of the cluster. Allele frequencies of p = 1/7, q = 6/7 and p = 3/7, q = 4/7 were used in the calculations, and 3, 5 and 10 generations were compared.

#### Allele frequency changes

Changes in allele frequencies between generations 40 and 50 were calculated, as well as their averages over blocks of 5 SNPs. The mean allele frequency change in each block is compared to the distribution over all blocks across the genome, and if it lies in the 95th percentile it is identified as a potential locus under selection. Under the assumption that the blocks are independent, the number of outlier blocks per set of 20 blocks is thus approximately Poisson-distributed with mean 1.

#### Quantitative genetic estimation of the number of loci

The total number of loci affecting a trait was estimated using equations 6, n = D/(M − T), and 12, T ≈ (amin·nd − M)/(nd − 1), in [25]. The estimated number of loci is n, D is half the phenotypic difference between the parental lines (here 670.5), M is the average additive effect of the detected loci, T is the detection threshold, amin is the smallest additive effect among the detected loci, and nd is the number of detected loci. Data on additive effects from previously identified QTLs were taken from Table 3 in [17]. The estimation was done for the body weight traits with at least 3 identified QTLs.

### Supporting Information

Figure S1. Results for all chromosomes of an association test for allele frequency differences between generations 40 and 50 in the high line (red) and the low line (blue). The results for individual SNPs are shown as circles. The grey line indicates the Bonferroni-corrected significance level p<0.001, and the dashed grey line p<0.05. For the low line there are significant differences (p<0.05) on chromosomes 12 and 15. (Using Fisher's exact test, the region on chromosome 12 is not significant with Bonferroni correction, but a region on chromosome 11 is significant instead.) For the high line there are significant differences on chromosomes 1 (0.001), 2 (0.001), 3 (0.001), 4 (0.05), 5 (0.001), 6 (0.001), 7 (0.001), 8 (0.001), 9 (0.001), 10 (0.05), 12 (0.05), 14 (0.05), 18 (0.001), 20 (0.05), 21 (0.05), 22 (0.001). Using Fisher's exact test, the regions on chromosomes 4, 12, and 20 are not significant with Bonferroni correction.
doi:10.1371/journal.pgen.1001188.s001 (0.40 MB PDF)

Figure S2. Results for all chromosomes of an association test for allele frequency differences between the high and low lines. The results for individual SNPs are shown as circles, and a sliding-window mean over 20 markers is shown as a red line for generation 40 and as a purple line for generation 50. The dashed lines indicate the maximum χ2 values, which are obtained when a SNP is fixed for different alleles in the high and low lines (80 and 118, respectively). The grey line indicates the Bonferroni-corrected significance level at p<0.001.

doi:10.1371/journal.pgen.1001188.s002 (0.97 MB PDF)

Figure S3. Allele frequency changes between generations 40 and 50 across all chromosomes. Each symbol corresponds to the average frequency change over a block of 5 markers. Coloured symbols indicate blocks that belong to the 95th percentile, compared to all blocks in the entire genome. The blue and red lines indicate the number of outliers present in a window of 20 blocks. The largest changes are in several regions on chromosomes 1, 2 and 3, as well as in regions on chromosomes 7, 9, 11, 12, 18, 20 and 22. Changes on chromosomes 9 and 12 are most prominent in the high line, whereas changes on chromosomes 2, 3 and 11 are mostly in the low line. Chromosome 20 is affected in both lines, but in the high line the changes are located towards the end of the chromosome, while in the low line the first third is changing rapidly. Similarly, chromosome 22 has distinct regions affected in the different lines. On chromosome 18, a region changes rapidly in both lines, although the favoured allele differs for many, but not all, affected markers.

doi:10.1371/journal.pgen.1001188.s003 (0.37 MB PDF)

Figure S4. Simulations in the high line with h = 0.5 and selection coefficients from A) Growth4, B) Growth6, C) Growth9, and D) Growth12, with starting haplotype frequencies of 4003, i.e., 4/7 AB and 3/7 ab. The selection is strong enough to always lead to fixation at locus A, except for Growth12, where fixation is reached in around 85% of replicates. A linked neutral locus often reaches fixation at recombination frequencies below 1–2 cM. The probability of fixation at the linked but unselected locus B is affected by the initial haplotype frequencies: a higher initial frequency leads to a higher probability of fixation.

doi:10.1371/journal.pgen.1001188.s004 (0.04 MB PDF)

Figure S5. Simulations with no selection show that fixation occurs in around 50% of the replicates for initial haplotype frequencies of 1006, i.e., 1/7 AB and 6/7 ab (A), and in only 10–20% of the replicates for initial haplotype frequencies of 3004, i.e., 3/7 AB and 4/7 ab (B), consistent with the drift results reported in the main text.

doi:10.1371/journal.pgen.1001188.s005 (0.03 MB PDF)

Figure S6. Distribution of estimated effects in the original F2 population. Distribution of the additive-effect estimates in the genome scan for QTL affecting body weight at 56 days of age in an F2 intercross between the high and low Virginia lines. The effects are given on a natural log scale and ordered by size. The solid line shows the linear logarithmic trend for the effects in the range 10–30. This illustrates that the relative difference between the ordered genetic effects is close to constant. At both ends of the distribution, the differences between neighbouring effects are greater, which could indicate that these are over- and underestimates of effects due to sampling.
During selection, this distribution indicates that there will always be a smaller set of loci (often 5–10) that is dominant over the rest in mediating the response to selection, given that the relationship between the effects does not change as the larger effects go to fixation.

doi:10.1371/journal.pgen.1001188.s006 (0.03 MB PDF)

Table S1. Fixation in the low and high lines in different generations and for different sample sizes. The number of fixed alleles depends on the sample size, and thus results based on the total number of genotyped individuals are not directly comparable, since the number of sampled individuals is not the same at the two time points. For comparison, fixation was also computed in a random sample of 10 individuals from each line at both time points, and the general trend is very similar regardless of the sample size. The number of SNPs fixed for different alleles increased by 75.0% (whole data set) or 63.8% (10+10 individuals) during the ten generations. The number of SNPs fixed for the same allele increased by 6.7% (whole data set) or 6.6% (10+10 individuals) during the ten generations.

doi:10.1371/journal.pgen.1001188.s007 (0.03 MB PDF)

Table S2. Fixation dynamics during 10 generations, depending on sample size. The numbers indicate the number of SNPs present in the two compared sets; the set for generation 40 is given before the arrow and that for generation 50 after the arrow. Comparison with Table S1 shows that the majority of the alleles fixed in generation 40 are also fixed in generation 50, indicating that the sample size is large enough to accurately identify fixed alleles. Diff = fixed for different alleles in the high and low lines; Same = fixed for the same allele; H not L = fixed in the high but not the low line; L not H = fixed in the low but not the high line.

doi:10.1371/journal.pgen.1001188.s008 (0.03 MB PDF)

Table S3. Selection coefficients for the QTLs in the body-weight-selected lines, calculated using i from generations 5–25 and additive effects for body weight at 56 days from [7] and [17].

doi:10.1371/journal.pgen.1001188.s009 (0.03 MB PDF)

Table S4. Number of generations until fixation for different QTLs. All simulations have starting frequencies of 4/7 AB and 3/7 ab. A single additive QTL is assumed. H denotes the high line and L the low line.

doi:10.1371/journal.pgen.1001188.s010 (0.04 MB PDF)

Table S5. Number of generations until fixation, for different starting frequencies at Growth9 in the high line, assuming additive QTL effects. Our notation for the starting allele frequencies of two loci A and B, with alleles A/a and B/b, is a four-digit code xyzw, where x is the proportion of haplotype AB, y the proportion of haplotype Ab, z the proportion of haplotype aB, and w the proportion of haplotype ab.

doi:10.1371/journal.pgen.1001188.s011 (0.02 MB PDF)

Table S6. Population parameters for the body-weight-selected lines: p is the proportion selected; i is the selection intensity calculated from p. p was calculated separately for males and females by dividing the number selected by the average number of individuals in each generation (n = 268 in the high line and n = 309 in the low line, assuming an equal sex ratio in the offspring). The selection intensities, i, were retrieved from p using the tables on pp. 379–380 of Falconer and Mackay [18]. Since the numbers of males and females selected in each generation are not equal, i differs between the sexes, leading to different values of s for males and females.
The effective population size for each of the three generation intervals was estimated as 4NmNf/(Nm + Nf). The effective population size for generations 1–40, estimated as the harmonic mean, is 40/((4/27.43)+(26/38.40)+(15/44.80)) = 34.55, whereas up to generation 50 it is 50/((4/27.43)+(26/38.4)+(25/44.80)) = 36.21.

doi:10.1371/journal.pgen.1001188.s012 (0.03 MB PDF)

Table S7. Clusters as defined by different maximum distances between, and minimum numbers of, SNPs.

doi:10.1371/journal.pgen.1001188.s013 (0.03 MB PDF)

Table S8. Number of clusters with at least 5 SNPs fixed for different alleles in the two lines in generations 40 and 50, together with their length distribution.

doi:10.1371/journal.pgen.1001188.s014 (0.02 MB PDF)

Table S9. Number of clusters with at least 2 SNPs fixed for different alleles in the two lines in generations 40 and 50, together with their length distribution.

doi:10.1371/journal.pgen.1001188.s015 (0.02 MB PDF)

### Acknowledgments

We thank Leif Andersson and Mattias Jakobsson for providing useful comments on the manuscript. We acknowledge access to the 60 K chicken chip produced by Illumina Inc. for the GWMAS Consortium.

### Author Contributions

Conceived and designed the experiments: AMJ MEP PBS ÖC. Analyzed the data: AMJ MEP ÖC. Contributed reagents/materials/analysis tools: AMJ MEP PBS. Wrote the paper: AMJ MEP PBS ÖC.

### References

1. Maynard Smith J, Haigh J (1974) The hitch-hiking effect of a favourable gene. Genet Res 23: 23–35.
2. Berry AJ, Ajioka JW, Kreitman M (1991) Lack of polymorphism on the Drosophila fourth chromosome resulting from selection. Genetics 129: 1111–1117.
3. Orr HA, Betancourt AJ (2001) Haldane's sieve and adaptation from the standing genetic variation. Genetics 157: 875–884.
4. Przeworski M, Coop G, Wall JD (2005) The signature of positive selection on standing genetic variation. Evolution 59: 2312–2323.
5. Hermisson J, Pennings PS (2005) Soft sweeps: molecular population genetics of adaptation from standing genetic variation. Genetics 169: 2335–2352.
6. Pennings PS, Hermisson J (2006a) Soft sweeps II – molecular population genetics of adaptation from recurrent mutation or migration. Mol Biol Evol 23: 1076–1084.
7. Jacobsson L, Park HB, Wahlberg P, Fredriksson R, Perez-Enciso M, et al. (2005) Many QTLs with minor additive effects are associated with a large difference in growth between two selection lines in chickens. Genet Res 86: 115–125.
8. Hill WG (2005) A century of corn selection. Science 307: 683–684.
9. Hill WG, Bunger L (2004) Inferences on the genetics of quantitative traits from long-term selection in laboratory and domestic animals. Plant Breeding Rev 24: 169–210.
10. Dunnington EA, Siegel PB (1996) Long-term divergent selection for eight-week body weight in White Plymouth Rock chickens. Poult Sci 75: 1168–1179.
11. Sabeti PC, Varilly P, Fry B, Lohmueller J, Hostetter E, et al. (2007) Genome-wide detection and characterization of positive selection in human populations. Nature 449: 913–918.
12. Pennings PS, Hermisson J (2006b) Soft sweeps III: the signature of positive selection from recurrent mutation. PLoS Genetics 2: e186. doi:10.1371/journal.pgen.0020186.
13. Teotónio H, Chelo IM, Bradić M, Rose MR, Long AD (2009) Experimental evolution reveals natural selection on standing genetic variation. Nat Genet 41: 251–257.
14. Raquin A-L, Brabant P, Rhoné B, Balfourier F, Leroy P, et al. (2008) Soft selective sweep near a gene that increases plant height in wheat.
Mol Ecol 17: 741–756. 15. Purcell S, Neale B, Todd-Brown K, Thomas L, Ferreira MAR, et al. (2007) PLINK: a tool set for whole-genome association and population-based linkage analyses. Am J Hum Genet 81: 559–575. 16. Carlborg Ö, Jacobsson L, Åhgren P, Siegel P, Andersson L (2006) Epistasis and the release of genetic variation during long-term selection. Nat Genet 38: 418–420. 17. Wahlberg P, Carlborg Ö, Foglio M, Tordoir X, Syvänen A-C, et al. (2009) Genetic analysis of an F2 intercross between two chicken lines divergently selected for body-weight. BMC Genomics 10: 248. 18. Falconer DS, Mackay TFC (1996) Introduction to Quantitative Genetics. 4th ed. Harlow, Essex, UK: Longmans Green. 19. Kimura M, Crow JF (1978) Effect of overall phenotypic selection on genetic change at individual loci. Proc Natl Acad Sci USA 75: 6168–6171. 20. Marquez GL, Lewis RM, Wiegland EN, Siegel PB (2009) Inbreeding and population structure in lines of chickens divergently selected for high and low 8-week body weight. Poultry Science 88 (E-suppl. 1): 161. 2009 Poultry Science Association Annual Meeting Abstracts. 21. Gillespie JH (1998) Population Genetics: A Concise Guide. Baltimore: Johns Hopkins University Press. 174 p. 22. Castle WE (1921) An improved method of estimating the number of genetic factors concerned in cases of blending inheritance. Proc Natl Acad Sci USA 81: 6904–6907. 23. Wright S (1968) Evolution and the Genetics of Populations: Volume 1, Genetic and Biometric Foundations. Chicago: University of Chicago Press. 469 p. 24. Zeng ZB (1992) Correcting the bias of Wright's estimates of the number of genes affecting a quantitative character—a further improved method. Genetics 131: 987–1001. 25. Otto SP, Jones CD (2000) Detecting the undetected: Estimating the total number of loci underlying a quantitative trait. Genetics 156: 2093–2107. 26. Buckler ES, Holland JB, Bradbury PJ, Acharya CB, Brown PJ, et al. (2009) The genetic architecture of maize flowering time. Science 325: 714–718. 27. Laurie CC, Chasalow SD, LeDeaux JR, McCarroll R, Bush D, et al. (2004) The genetic architecture of response to long-term artificial selection for oil concentration in the maize kernel. Genetics 168: 2141–2155. 28. Weedon MN, Lango H, Lindgren CM, Wallace C, Evans DM, et al. (2008) Genome-wide association analysis identifies 20 loci that influence adult height. Nat Genet 40: 575–583. 29. Park H-B, Jacobsson L, Wahlberg P, Siegel PB, Andersson L (2006) QTL analysis of body composition and metabolic traits in an intercross between chicken lines divergently selected for growth. Physiol Genomics 25: 216–223. 30. Rubin CJ, Zody MC, Eriksson J, Meadows JR, Sherwood E, et al. (2010) Whole-genome resequencing reveals loci under selection during chicken domestication. Nature 464: 587–591. 31. R Development Core Team (2007) R: A language and environment for statistical computing. Vienna: R Foundation for Statistical Computing.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8496705293655396, "perplexity": 1825.9600559163102}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1404776435842.8/warc/CC-MAIN-20140707234035-00049-ip-10-180-212-248.ec2.internal.warc.gz"}
https://wikimili.com/en/Classical_physics
# Classical physics

Classical physics is a group of physics theories that predate modern, more complete, or more widely applicable theories. If a currently accepted theory is considered to be modern, and its introduction represented a major paradigm shift, then the previous theories, or new theories based on the older paradigm, will often be referred to as belonging to the area of "classical physics". As such, the definition of a classical theory depends on context. Classical physical concepts are often used when modern theories are unnecessarily complex for a particular situation. Most commonly, classical physics refers to pre-1900 physics, while modern physics refers to post-1900 physics, which incorporates elements of quantum mechanics and relativity. [1]

## Overview

Classical theory has at least two distinct meanings in physics. In the context of quantum mechanics, classical theory refers to theories of physics that do not use the quantisation paradigm, which includes classical mechanics and relativity. [2] Likewise, classical field theories, such as general relativity and classical electromagnetism, are those that do not use quantum mechanics. [3] In the context of general and special relativity, classical theories are those that obey Galilean relativity. [4] Depending on point of view, the branches of theory sometimes included in classical physics are classical mechanics, classical electrodynamics, classical thermodynamics, special and general relativity, and classical chaos theory and nonlinear dynamics.

## Comparison with modern physics

In contrast to classical physics, "modern physics" is a slightly looser term which may refer to just quantum physics or to 20th- and 21st-century physics in general. Modern physics includes quantum theory and relativity, when applicable. A physical system can be described by classical physics when it satisfies conditions such that the laws of classical physics are approximately valid. In practice, physical objects ranging from those larger than atoms and molecules to objects in the macroscopic and astronomical realm can be well described (understood) with classical mechanics. Beginning at the atomic level and lower, the laws of classical physics break down and generally do not provide a correct description of nature. Electromagnetic fields and forces can be described well by classical electrodynamics at length scales and field strengths large enough that quantum mechanical effects are negligible. Unlike quantum physics, classical physics is generally characterized by the principle of complete determinism, although deterministic interpretations of quantum mechanics do exist. From the point of view of classical physics as being non-relativistic physics, the predictions of general and special relativity are significantly different from those of classical theories, particularly concerning the passage of time, the geometry of space, the motion of bodies in free fall, and the propagation of light. Traditionally, light was reconciled with classical mechanics by assuming the existence of a stationary medium through which light propagated, the luminiferous aether, which was later shown not to exist. Mathematically, classical physics equations are those in which Planck's constant does not appear. According to the correspondence principle and Ehrenfest's theorem, as a system becomes larger or more massive, classical dynamics tends to emerge, with some exceptions, such as superfluidity. This is why we can usually ignore quantum mechanics when dealing with everyday objects, and the classical description will suffice.
However, one of the most vigorous ongoing fields of research in physics is classical-quantum correspondence. This field of research is concerned with the discovery of how the laws of quantum physics give rise to classical physics in the limit of the large scales of the classical level.

## Computer modeling and manual calculation: a modern and classical comparison

Today a computer performs millions of arithmetic operations in seconds to solve a classical differential equation, while Newton (one of the fathers of differential calculus) would take hours to solve the same equation by manual calculation, even if he were the discoverer of that particular equation. Computer modeling is essential for quantum and relativistic physics. Classical physics is considered the limit of quantum mechanics for a large number of particles. On the other hand, classical mechanics is derived from relativistic mechanics. For example, in many formulations of special relativity, a correction factor (v/c)² appears, where v is the velocity of the object and c is the speed of light. For velocities much smaller than that of light, one can neglect the terms of order (v/c)² and higher; for instance, the relativistic kinetic energy (γ − 1)mc² expands as ½mv²(1 + ¾(v/c)² + ...), which reduces to the Newtonian ½mv² when v ≪ c. These formulas then reduce to the standard definitions of Newtonian kinetic energy and momentum. This is as it should be, for special relativity must agree with Newtonian mechanics at low velocities. Computer models must be as faithful to reality as possible. Classical physics would introduce errors, as in the superfluidity case, so in order to produce reliable models of the world one cannot use classical physics alone. It is true that quantum theories consume time and computer resources, and the equations of classical physics could be resorted to for a quick solution, but such a solution would lack reliability. A computer model would use only energy criteria to determine which theory to use, relativity or quantum theory, when attempting to describe the behavior of an object. A physicist would use a classical model to provide an approximation before more exacting models are applied and those calculations proceed. In a computer model, there is no need to use the speed of the object if classical physics is excluded: low-energy objects would be handled by quantum theory and high-energy objects by relativity theory. [5] [6] [7]

## Related Research Articles

Mechanics is the area of physics concerned with the motions of physical objects, more specifically the relationships among force, matter, and motion. Forces applied to objects result in displacements, or changes of an object's position relative to its environment. This branch of physics has its origins in Ancient Greece with the writings of Aristotle and Archimedes. During the early modern period, scientists such as Galileo, Kepler, and Newton laid the foundation for what is now known as classical mechanics. It is a branch of classical physics that deals with particles that are either at rest or are moving with velocities significantly less than the speed of light. It can also be defined as a branch of science which deals with the motion of and forces on bodies not in the quantum realm. The field is today less widely understood in terms of quantum theory.

Quantum gravity (QG) is a field of theoretical physics that seeks to describe gravity according to the principles of quantum mechanics, in regimes where quantum effects cannot be ignored, such as in the vicinity of black holes or similar compact astrophysical objects where the effects of gravity are strong, such as neutron stars.
In physics, the special theory of relativity, or special relativity for short, is a scientific theory regarding the relationship between space and time. In Albert Einstein's original treatment, the theory is based on two postulates: 1. The laws of physics are invariant in all inertial frames of reference. 2. The speed of light in vacuum is the same for all observers, regardless of the motion of the light source or observer.

The theory of relativity usually encompasses two interrelated theories by Albert Einstein: special relativity and general relativity, proposed and published in 1905 and 1915, respectively. Special relativity applies to all physical phenomena in the absence of gravity. General relativity explains the law of gravitation and its relation to other forces of nature. It applies to the cosmological and astrophysical realm, including astronomy.

In physics, the correspondence principle states that the behavior of systems described by the theory of quantum mechanics reproduces classical physics in the limit of large quantum numbers. In other words, it says that for large orbits and for large energies, quantum calculations must agree with classical calculations.

Causality is the relationship between causes and effects. While causality is also a topic studied from the perspective of philosophy, from the perspective of physics it is operationalized so that causes of an event must be in the past light cone of the event and ultimately reducible to fundamental interactions. Similarly, a cause cannot have an effect outside its future light cone.

The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters. The classical limit is used with physical theories that predict non-classical behavior.

The word mass has two meanings in special relativity: invariant mass is an invariant quantity which is the same for all observers in all reference frames, while relativistic mass is dependent on the velocity of the observer. According to the concept of mass–energy equivalence, invariant mass is equivalent to rest energy, while relativistic mass is equivalent to relativistic energy.

In physics, action at a distance is the concept that an object can be moved, changed, or otherwise affected without being physically touched by another object. That is, it is the non-local interaction of objects that are separated in space.

In physics, an effective field theory is a type of approximation, or effective theory, for an underlying physical theory, such as a quantum field theory or a statistical mechanics model. An effective field theory includes the appropriate degrees of freedom to describe physical phenomena occurring at a chosen length scale or energy scale, while ignoring substructure and degrees of freedom at shorter distances. Intuitively, one averages over the behavior of the underlying theory at shorter length scales to derive what is hoped to be a simplified model at longer length scales. Effective field theories typically work best when there is a large separation between the length scale of interest and the length scale of the underlying dynamics. Effective field theories have found use in particle physics, statistical mechanics, condensed matter physics, general relativity, and hydrodynamics. They simplify calculations, and allow treatment of dissipation and radiation effects.
Modern physics is a branch of physics either developed in the early 20th century and onward or greatly influenced by early-20th-century physics. Notable branches of modern physics include quantum mechanics, special relativity and general relativity.

Wojciech Hubert Zurek is a Polish theoretical physicist and a leading authority on quantum theory, especially decoherence and the non-equilibrium dynamics of symmetry breaking and the resulting defect generation.

In physics, relativistic mechanics refers to mechanics compatible with special relativity (SR) and general relativity (GR). It provides a non-quantum mechanical description of a system of particles, or of a fluid, in cases where the velocities of moving objects are comparable to the speed of light c. As a result, classical mechanics is extended correctly to particles traveling at high velocities and energies, and provides a consistent inclusion of electromagnetism with the mechanics of particles. This was not possible in Galilean relativity, where particles and light would be permitted to travel at any speed, including faster than light. The foundations of relativistic mechanics are the postulates of special relativity and general relativity. The unification of SR with quantum mechanics is relativistic quantum mechanics, while that of GR is quantum gravity, an unsolved problem in physics.

Classical mechanics is a physical theory describing the motion of macroscopic objects, from projectiles to parts of machinery, and astronomical objects, such as spacecraft, planets, stars, and galaxies. For objects governed by classical mechanics, if the present state is known, it is possible to predict how it will move in the future (determinism), and how it has moved in the past (reversibility).

Theoretical physics is a branch of physics that employs mathematical models and abstractions of physical objects and systems to rationalize, explain and predict natural phenomena. This is in contrast to experimental physics, which uses experimental tools to probe these phenomena.

Physics is a scientific discipline that seeks to construct and experimentally test theories of the physical universe. These theories vary in their scope and can be organized into several distinct branches.

In physics, a field is a physical quantity, represented by a number or another tensor, that has a value for each point in space and time. For example, on a weather map, the surface temperature is described by assigning a number to each point on the map; the temperature can be considered at a certain point in time or over some interval of time, to study the dynamics of temperature change. A surface wind map, assigning an arrow to each point on a map that describes the wind speed and direction at that point, is an example of a vector field, i.e. a 1-dimensional (rank-1) tensor field. Field theories, mathematical descriptions of how field values change in space and time, are ubiquitous in physics. For instance, the electric field is another rank-1 tensor field, while electrodynamics can be formulated in terms of two interacting vector fields at each point in spacetime, or as a single rank-2 tensor field.
Superfluid vacuum theory (SVT), sometimes known as the BEC vacuum theory, is an approach in theoretical physics and quantum mechanics where the fundamental physical vacuum is viewed as a superfluid or as a Bose–Einstein condensate (BEC).

## References

1. Weidner and Sells, Elementary Modern Physics, Preface p. iii, 1968.
2. Morin, David (2008). Introduction to Classical Mechanics: With Problems and Solutions. New York: Cambridge University Press. ISBN 9780521876223.
3. Barut, Asim O. (1980) [1964]. Electrodynamics and Classical Theory of Fields and Particles. New York: Dover Publications. ISBN 9780486640389.
4. Einstein, Albert (2004) [1920]. Relativity. Translated by Robert W. Lawson. New York: Barnes & Noble. ISBN 9780760759219.
5. Wojciech H. Zurek, "Decoherence, einselection, and the quantum origins of the classical", Reviews of Modern Physics 75, 715 (2003); arXiv:quant-ph/0105127.
6. Wojciech H. Zurek, "Decoherence and the transition from quantum to classical", Physics Today 44, pp. 36–44 (1991).
7. Wojciech H. Zurek, "Decoherence and the Transition from Quantum to Classical—Revisited", Los Alamos Science, Number 27, 2002.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 1, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8362747430801392, "perplexity": 361.4591184953859}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154099.21/warc/CC-MAIN-20210731172305-20210731202305-00034.warc.gz"}
http://www.vindenergi.dtu.dk/english/kalender/Arrangement?id=e0b5d6f6-16ad-4470-933b-de3db8de1000
# PhD Defense Mads Mølgaard Pedersen

Supervisors: Torben J. Larsen, DTU Wind Energy - Helge Aagaard Madsen, DTU Wind Energy - Gunner Chr. Larsen, DTU Wind Energy - Uwe Schmidt Paulsen, DTU Wind Energy

Examiners: Ebba Dellwik, DTU Wind Energy - Vasilis Riziotis, NTNU - Knud Kragh, Siemens Wind Power

Title: Inflow Measurements from Blade-mounted Flow Sensors - Flow Analysis, Application and Aeroelastic Response

Power and load performance of wind turbines are important for the development and continuous expansion of wind energy. The power and loads are highly dependent on the inflow conditions, which can be measured using different types of sensors mounted on nearby met masts, on the nacelle, at the spinner or at the blade. To characterize the incoming turbulent wind flow that results in high and low fatigue loads, information about the temporal and spatial variations within the rotor area is required. This information can be obtained from a blade-mounted flow sensor, e.g. a five-hole pitot tube, which has been used in several research experiments over the last 30 years. From its rotating position at the blade, a blade-mounted flow sensor is exposed to exactly the same inflow conditions as the turbine (including wake effects from upstream turbines). A blade-mounted flow sensor is able to provide valuable information about the instantaneous inflow velocity as well as variations within the rotor plane, and that goes for all wind directions.

The inflow measured by a blade-mounted flow sensor is, however, disturbed by the wind turbine. A method to compensate for this disturbance and estimate the free-stream inflow velocity has therefore been developed and utilised in this project. Applications of measurements from blade-mounted flow sensors have been investigated. It is concluded that a blade-mounted flow sensor provides valuable information about the inflow. This information can be used for control purposes and to investigate the complex relation between the inflow and the power and loads. Furthermore, the measurements can be used to characterise the inflow conditions that yield high loads, and as input for aeroelastic simulations to improve the correlation between measured and simulated loads.

## When

Mon 30 Apr 18, 10:00–13:00

## Where

Danmarks Tekniske Universitet, DTU Risø Campus, B112, H.H. Koch, Frederiksborgvej 399, 4000 Roskilde
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8831828832626343, "perplexity": 3675.9069924801665}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513508.42/warc/CC-MAIN-20181020225938-20181021011438-00073.warc.gz"}
https://www.physicsforums.com/threads/help-conservation-of-mass.68006/
# Homework Help: Help! Conservation Of Mass

1. Mar 20, 2005

### GotTrips

Hi all, I am in desperate, desperate, desperate need of some help. I have this question that I have been working on for hours and have made no progress at all. Here is the question.

Write down the "conservation of mass" equation for dm(r)/dr, where m(r) is the mass inside radius "r". Assume that the pressure inside a star at radius r is given by P(r) = (Pc/R)(R - r), where Pc is the central pressure and R is the star's outer radius. Combine this and the equation of hydrostatic equilibrium to find an expression for m(r). Hence show that m(r) ∝ r^(5/2).
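No replies survive in this capture, so here is a minimal sketch of one route to the stated result (not from the original thread), assuming the pressure profile means P(r) = (Pc/R)(R − r) and the standard Newtonian forms of mass continuity and hydrostatic equilibrium:

```latex
% Mass continuity and hydrostatic equilibrium (assumed standard forms):
\frac{dm}{dr} = 4\pi r^2 \rho(r), \qquad
\frac{dP}{dr} = -\frac{G\,m(r)\,\rho(r)}{r^2}.
% With P(r) = (P_c/R)(R-r) the pressure gradient is the constant -P_c/R, so
\rho(r) = \frac{P_c\, r^2}{G R\, m(r)}.
% Substituting into mass continuity and separating variables:
m\,dm = \frac{4\pi P_c}{G R}\, r^4\, dr
\;\Longrightarrow\;
\frac{m^2}{2} = \frac{4\pi P_c}{5 G R}\, r^5
\;\Longrightarrow\;
m(r) = \sqrt{\frac{8\pi P_c}{5 G R}}\; r^{5/2} \;\propto\; r^{5/2}.
```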
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9434614777565002, "perplexity": 1452.1182636080653}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860041.64/warc/CC-MAIN-20180618031628-20180618051628-00243.warc.gz"}
http://mathhelpforum.com/advanced-statistics/5826-basic-statistics-help-please.html
# Math Help - Basic Statistics. Help please

1. ## Basic Statistics. Help please

Hi, I would appreciate any guidance to do this exercise. I'm a lawyer and I'm quite lost with the subject. The question is: Consider the following probability distribution:

X      P(X)
-1     0.17
 2     0.15
 5     0.40
 8     0.18
11     0.10
       1.00

E(X) = _ _ . _ _ _ _
V(X) = _ _ . _ _ _ _

Mauricio

2. Originally Posted by 10219929
Hi, I would appreciate any guidance to do this exercise. I'm a lawyer and I'm quite lost with the subject. The question is: Consider the following probability distribution:

X      P(X)
-1     0.17
 2     0.15
 5     0.40
 8     0.18
11     0.10
       1.00

E(X) = _ _ . _ _ _ _
V(X) = _ _ . _ _ _ _

Mauricio
E(X) = expected value = summation of X*P(X) --> the same as a weighted average
V(X) = variance = summation of [ X - E(X) ]^2 * P(X)

3. ## Thanks John

For V(X), does that mean that I have to use the summation of all X values, plus the result of the summation for E(X)? Sorry mate, can you give just one example using the numbers? Thanks a lot. Mauricio

4. Originally Posted by 10219929
For V(X), does that mean that I have to use the summation of all X values, plus the result of the summation for E(X)? Sorry mate, can you give just one example using the numbers? Thanks a lot. Mauricio
Yes, for V(X) you will use the result from computing E(X). Here's an example: Expectation
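As a quick numerical check of John's two formulas (a sketch, not part of the original thread; the values come from the table above):

```python
# E(X) and V(X) for the discrete distribution in the thread.
xs = [-1, 2, 5, 8, 11]
ps = [0.17, 0.15, 0.40, 0.18, 0.10]

assert abs(sum(ps) - 1.0) < 1e-9  # the probabilities must sum to 1

# Expected value: weighted average of the outcomes.
E = sum(x * p for x, p in zip(xs, ps))
# Variance: probability-weighted squared deviations from E(X).
V = sum((x - E) ** 2 * p for x, p in zip(xs, ps))

print(f"E(X) = {E:.4f}")  # E(X) = 4.6700
print(f"V(X) = {V:.4f}")  # V(X) = 12.5811
```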
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9693756103515625, "perplexity": 1795.6064204047634}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802765002.8/warc/CC-MAIN-20141217075245-00174-ip-10-231-17-201.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/linear-independence.294968/
# Homework Help: Linear Independence

1. Feb 24, 2009

### sana2476

I attempted the proof but I don't know how to complete it. Let u, v, w be linearly independent vectors and x is in <u,v,w>. Then there are unique a, b, y such that x = au + bv + yw.

2. Feb 24, 2009

### Tom Mattson

Staff Emeritus

Great, let's see what you've done.

3. Feb 24, 2009

### sana2476

I'm having trouble starting it... if you could help me start it, then I can try to carry it from there.

4. Feb 24, 2009

### yyat

Look at the definition of <u,v,w>. What does it mean for x to be in <u,v,w>, spelled out in terms of the definition? For the uniqueness part, start by assuming that you can write x = au + bv + cw and x = du + ev + fw, then prove that a = d, b = e, c = f.

5. Feb 24, 2009

### sana2476

OK... so if I start the proof by saying if x is in <u,v,w> then there exist d, e, f such that x = du + ev + fw, and then if I take the difference, say: (a-d)u + (b-e)v + (c-f)w... would that be the right approach?

6. Feb 24, 2009

### Staff: Mentor

By definition, if x is in Span(u, v, w), then there are scalars a, b, and c such that x = au + bv + cw. (I changed letters on you here.) You want to show that this representation is unique, so one way to do this is to assume the contrary--that the representation is not unique, meaning that there is at least one other way to represent x, say, as du + ev + fw. Work with these two representations, and you should get a contradiction, which means that your assumption that the representation was not unique must have been incorrect, which gets you back to the representation being unique.
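For completeness, a sketch of the final step the replies point toward (not part of the original thread):

```latex
% Suppose x has two representations in terms of u, v, w:
x = au + bv + cw = du + ev + fw .
% Subtracting the two gives
(a-d)\,u + (b-e)\,v + (c-f)\,w = 0 ,
% and linear independence of u, v, w forces every coefficient to vanish:
a - d = b - e = c - f = 0 \;\Longrightarrow\; a = d,\; b = e,\; c = f ,
% so the representation of x is unique.
```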
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9088800549507141, "perplexity": 1097.8901728997641}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828018.77/warc/CC-MAIN-20181216234902-20181217020902-00423.warc.gz"}
http://mathhelpforum.com/statistics/180009-bayes-rule-probability-problem.html
# Math Help - Bayes Rule probability problem.

1. ## Bayes Rule probability problem.

Hi all, I am not sure if this belongs to University Math help. It looks basic enough, but we are doing it in college. I hope I am not intruding in the wrong territory. The problem is as follows:

A certain disease can be detected by a blood test in 95% of those who have it. Unfortunately, the test also has a 0.02 probability of showing that a person has the disease when in fact he or she does not. It has been estimated that 1% of those people who are routinely tested actually have the disease. If the test shows that a certain person has the disease, find the probability that the person actually has it.

This is how I did it: For straightforwardness we will look at the population as being 10000 people. 95% of those who have it will test "yes", i.e. 95% of 1% = 0.95 x 0.01 = 0.0095 of the whole population, so out of 10000 people that would be 95 people. 2% of those who do not have it will also test "yes", i.e. 2% of 99% = 0.02 x 0.99 = 0.0198 of the whole population, so out of 10000 people that would be 198 people. So, the total number of all "yes" tests (positive "yes" and negative "yes") would be 95 + 198 = 293 "yes" tests. Since we know that 1% of the population actually have the disease, i.e. 100 people, we can conclude that the chance of having the disease if the test was positive is 100/293 = 0.3413.

As you can see, my answer came out as 0.3413. Is that correct? The problem is - my lecturer's answer is 0.324.

2. Originally Posted by johammbass
Hi all, I am not sure if this belongs to University Math help. It looks basic enough, but we are doing it in college. I hope I am not intruding in the wrong territory. The problem is as follows:

A certain disease can be detected by a blood test in 95% of those who have it. Unfortunately, the test also has a 0.02 probability of showing that a person has the disease when in fact he or she does not. It has been estimated that 1% of those people who are routinely tested actually have the disease. If the test shows that a certain person has the disease, find the probability that the person actually has it.

This is how I did it: For straightforwardness we will look at the population as being 10000 people. 95% of those who have it will test "yes", i.e. 95% of 1% = 0.95 x 0.01 = 0.0095 of the whole population, so out of 10000 people that would be 95 people. 2% of those who do not have it will also test "yes", i.e. 2% of 99% = 0.02 x 0.99 = 0.0198 of the whole population, so out of 10000 people that would be 198 people. So, the total number of all "yes" tests (positive "yes" and negative "yes") would be 95 + 198 = 293 "yes" tests. Since we know that 1% of the population actually have the disease, i.e. 100 people, we can conclude that the chance of having the disease if the test was positive is 100/293 = 0.3413.

As you can see, my answer came out as 0.3413. Is that correct? The problem is - my lecturer's answer is 0.324.
Well I get $P(D|+)=\frac{P(+|D)P(D)}{P(+|D)P(D)+P(+|D^c)P(D^c)}=0.324232082$.

3. Dear Plato, if you are not very busy, would you be able to show the solution explicitly, because I am not clearly understanding all the terms. Would you be able to tell me where I went wrong with my reasoning? Thank you

4. Originally Posted by johammbass
would you be able to show the solution explicitly
You should understand that this is not a tutorial service. I will however give you the numbers. $+$ means a positive test. $D$ means that an individual actually has the disease. So $P(+|D)=0.95,~P(+|D^c)=0.02,~\&~P(D)=0.01~.$

5.
Hmm, I don't know. I thought the answer would have to be the probability of being actually diseased over the probability of a positive test, but I guess I was wrong.

6. Originally Posted by johammbass
Hmm, I don't know. I thought the answer would have to be the probability of being actually diseased over the probability of a positive test, but I guess I was wrong.
The question asks, "Given that the test is positive, what is the probability that the disease is actually present?" That is $P(D|+)=\frac{P(D~\&~+)}{P(+)}$. One needs to understand conditional probabilities. If you have doubts about that, then that may be your problem.
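A numerical check of the thread's two answers (a sketch, not from the original posts). In the poster's counting argument, the numerator should be the 95 carriers who test positive, not all 100 carriers:

```python
# Bayes' rule for the disease-testing problem.
p_d = 0.01    # P(D): prevalence among those tested
sens = 0.95   # P(+|D): detection probability
fpr = 0.02    # P(+|D^c): false-positive probability

p_pos = sens * p_d + fpr * (1 - p_d)  # total probability of a positive test
print(sens * p_d / p_pos)             # 0.32423... = the lecturer's 0.324

# Population-of-10000 version: 95 true positives, 198 false positives.
print(95 / 293)    # 0.3242  (positive-testing carriers / all positives)
print(100 / 293)   # 0.3413  (the poster's slip: all carriers / all positives)
```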
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 5, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8682804107666016, "perplexity": 377.21732284117024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835872.63/warc/CC-MAIN-20140820021355-00380-ip-10-180-136-8.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-geometry/101501-convergence-comparison.html
# Math Help - Convergence comparison

1. ## Convergence comparison

Suppose that $f_n\to f$ in measure and $|f_n|\le g\in L^1$, for all $n$. Show that $f_n\to f$ in $L^1$, that is, $\lim_n\int_X|f_n-f|d\mu=0$. I have already shown that $\int_Xfd\mu=\lim_n\int_X f_nd\mu$, but I don't see how to use this or anything else to get to the desired result.

2. Look at the Wikipedia article on the dominated convergence theorem: the proof given there of the assertion that you already know contains your desired result.
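For the record, a standard sketch of that argument (not part of the original thread; it assumes the subsequence characterization of convergence in measure):

```latex
% Let a_n := \int_X |f_n - f|\,d\mu and take any subsequence (f_{n_k}).
% Convergence in measure gives a further subsequence f_{n_{k_j}} \to f a.e.,
% and |f_n| \le g implies |f_{n_{k_j}} - f| \le 2g \in L^1 a.e.
% Dominated convergence then yields
\lim_{j\to\infty} \int_X |f_{n_{k_j}} - f|\, d\mu = 0 .
% Every subsequence of (a_n) thus has a further subsequence tending to 0,
% hence a_n \to 0, i.e. f_n \to f in L^1.
```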
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 7, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9781320095062256, "perplexity": 129.6600182556201}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737913406.61/warc/CC-MAIN-20151001221833-00079-ip-10-137-6-227.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/13524-max-min-online-webwork-problem.html
# Thread: Max/Min Online Webwork Problem

1. ## Max/Min Online Webwork Problem

Here is the question: [image attachment]

Thus I went about trying to solve this problem in this manner: [image attachment]

Note that the minimum value is correct, but the system will not accept 10 as an answer for the maximum value.

2. Originally Posted by qbkr21
Here is the question: [image attachment]

Thus I went about trying to solve this problem in this manner: [image attachment]

Note that the minimum value is correct, but the system will not accept 10 as an answer for the maximum value.
Remember that the absolute max is different from a local max. The derivative gives you the local max. The absolute max is the highest point in the interval; IT DOES NOT HAVE TO BE A CRITICAL POINT. Check the endpoints: we get (15, 28585) and (-6, -2222), so the absolute max is 28585.

3. Originally Posted by Jhevon
Remember that the absolute max is different from a local max. The derivative gives you the local max. The absolute max is the highest point in the interval; IT DOES NOT HAVE TO BE A CRITICAL POINT. Check the endpoints: we get (15, 28585) and (-6, -2222), so the absolute max is 28585.
Bingo! You were right, but what did you do to maximize each coordinate? Did you stick the endpoints of the interval that x was between back into f(x)?

4. Originally Posted by qbkr21
Bingo! You were right, but what did you do to maximize each coordinate? Did you stick the endpoints of the interval that x was between back into f(x)?
Yes, I found f(-6) and f(15). If one of those is lower than the y-values of all critical points, then it is the absolute min; if one is higher than all the y-values of the critical points, it is the absolute max. So always remember to check the endpoints for absolute maxima and minima.
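The actual function only appeared in the (now missing) images, so here is the recipe from the replies as a sketch with a hypothetical polynomial on the thread's interval [-6, 15]; the function is made up, the method is not:

```python
# Absolute extrema on a closed interval: evaluate f at the critical points
# AND at the endpoints, then take the largest/smallest value.
import sympy as sp

x = sp.symbols("x")
f = x**4 - 2 * x**3 - 5 * x**2 + 6   # placeholder, NOT the thread's function
a, b = -6, 15                        # the interval from the thread

# Critical points: zeros of f'(x) inside [a, b].
crit = list(sp.solveset(sp.diff(f, x), x, sp.Interval(a, b)))
candidates = crit + [sp.Integer(a), sp.Integer(b)]

values = {c: f.subs(x, c) for c in candidates}
print("absolute min:", min(values.items(), key=lambda kv: kv[1]))
print("absolute max:", max(values.items(), key=lambda kv: kv[1]))  # at x = 15
```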
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8936505317687988, "perplexity": 609.204697042384}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463611560.41/warc/CC-MAIN-20170528200854-20170528220854-00336.warc.gz"}
https://casper.astro.berkeley.edu/astrobaki/index.php?title=Thevenin_Equivalent_Resistance&diff=next&oldid=2115
# Thevenin Equivalent Resistance

## Thévenin's Theorem

Using a Thévenin equivalent circuit to model the behavior of a black-box circuit, from the point of view of the two terminals, A and B

Thévenin's Theorem is a life-saver when you start chaining circuits together. It says that however complex your circuit involving currents, voltages, resistors, capacitors, inductors, etc., it can all be modeled from the point of view of two output or input terminals as a single voltage and a single series impedance (if you haven't seen impedances discussed yet, just read "resistance" where "impedance" is used). This is incredible, because it means that you can completely describe the impact on your circuit of any upstream or downstream electronics just by using these two quantities: the equivalent voltage, and the equivalent impedance. If you just accept this as true (and it is!), then calculating these quantities is easy. First, for two terminals A and B, calculate or measure the voltage between them if you leave them unconnected. This is the Thévenin equivalent voltage, or $V_{th}$. Next, calculate or measure the current that flows between A and B if you connect them with a wire. (Warning: if you are measuring, you might want to put a resistor in series before you blow your fuse!) If you are considering complex impedances, you'll have to measure current as a function of frequency. Using Ohm's Law, you then have your Thévenin equivalent impedance, or $Z_{th}$ (or $R_{th}$ if we are just considering resistance). Done!
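A worked instance of the two-step recipe above, for a hypothetical voltage divider (all component values are made up for illustration):

```python
# Thevenin equivalent seen at the output of a divider: source V in series
# with R1, terminals taken across R2.
V, R1, R2 = 10.0, 1000.0, 2000.0   # volts and ohms (assumed values)

# Step 1: open-circuit voltage between the terminals. With no load current,
# the divider sets the voltage.
V_th = V * R2 / (R1 + R2)          # 6.667 V

# Step 2: short-circuit current, then Ohm's law. Shorting the terminals
# leaves only R1 to limit the current.
I_sc = V / R1
R_th = V_th / I_sc                 # 666.7 ohms = R1*R2/(R1+R2), i.e. R1 || R2

print(V_th, R_th)
```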
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.935874879360199, "perplexity": 724.464271514225}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030336880.89/warc/CC-MAIN-20221001163826-20221001193826-00791.warc.gz"}
https://www.gradesaver.com/textbooks/math/precalculus/precalculus-6th-edition-blitzer/chapter-p-section-p-1-algebraic-expressions-mathematical-models-and-real-numbers-exercise-set-page-17/80
## Precalculus (6th Edition) Blitzer

This problem illustrates the commutative property of multiplication: the order of the terms $7$ and $11\times8$ being multiplied is switched, but the product remains the same.
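A one-line check of the arithmetic behind that statement (the grouping $11\times 8$ is taken from the exercise):

```latex
7 \times (11 \times 8) \;=\; 7 \times 88 \;=\; 616 \;=\; 88 \times 7 \;=\; (11 \times 8) \times 7 .
```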
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8143389225006104, "perplexity": 454.19503916058056}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526506.44/warc/CC-MAIN-20190720091347-20190720113347-00351.warc.gz"}
http://tex.stackexchange.com/questions/150623/move-beamer-header-down
I did a dry run of a presentation I'm giving and I noticed that when in full-screen mode, the projector is cutting off a little bit of all four sides of the slide (appears to be overscan). I've tried getting the projector and laptop to correct for it but have been unsuccessful. As a hack, I'd like to just increase the distance between the header and footer and the edge of the slide. The style file I'm using contains

\ProvidesPackageRCS $Header: /cvsroot/latex-beamer/latex-beamer/themes/theme/compatibility/beamerthemeshadow.sty,v 1.12 2007/01/28 20:48:30 tantau Exp$
\mode<presentation>
\definecolor{BYUblue}{RGB}{0,31,69}
\definecolor{BYUgold}{RGB}{195,163,106}
\usecolortheme[RGB={0,31,69}]{structure} % BYU Blue
\usetheme{Frankfurt}
\setbeamercolor*{frametitle}{bg=BYUblue!50,fg=white!25}
\setbeamercovered{transparent}
\mode<all>

\begin{textblock*}{100mm}(0.95\textwidth,-0.7cm)
\includegraphics[width=1.2cm]{figures/magiccLabLogo}
\end{textblock*}}

% define the footline
\defbeamertemplate*{footline}{infolines theme}
{
  \leavevmode%
  \hbox{%
  \hspace*{2mm}
  \end{beamercolorbox}%
  \hspace*{2mm}
  \end{beamercolorbox}}%
  \hspace*{2mm}
  \end{beamercolorbox}%
  \vskip0pt%
}

\setbeamertemplate{navigation symbols}{} % no nav symbols

I moved the footer up by increasing the dp setting of the beamercolorboxes to 2ex in the footline definition above. Is there an easy way like that to do the same for the header?

This is what I have: [screenshot]

This is what I want, just a little additional space just above the section titles: [screenshot]

- If you are able to provide a minimal working example (MWE) that the community can play with, it could be useful to diagnose the actual problem and how to fix it correctly. – Werner Jan 11 '14 at 5:24

So I've sort of figured it out. The Frankfurt theme uses the smoothbars outer theme. I navigated to the MiKTeX 2.9\tex\latex\beamer\base\themes\outer folder and made a backup copy of the beamerouterthemesmoothbars.sty file. I then edited it so that the \AtBeginDocument command is now

\AtBeginDocument{
  {
    \colorlet{global.bg}{bg}
    \usebeamercolor{frametitle}
    \ifbeamer@sb@subsection
        color(0ex)=(global.bg);%
      }
        color(0ex)=(frametitle.bg);%
        color(1ex)=(frametitle.bg);%
      }
    \else
        color(0ex)=(global.bg);%
        color(8ex)=(section in head/foot.bg)% <- this was the only line I changed. It was 7ex.
      }
        color(0ex)=(frametitle.bg);%
        color(1ex)=(frametitle.bg);%
      }
    \fi
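The \begin{beamercolorbox} lines of the quoted footline did not survive the page capture. For orientation, here is a sketch based on the stock infolines footline with the dp=2ex change described above (widths, fonts and box contents are the beamer defaults, assumed rather than taken from the poster's file), plus a preamble-level way to pad the frame title without editing the installed theme (the \vspace amounts are illustrative):

```latex
% Sketch only: stock infolines footline with dp raised to 2ex.
\defbeamertemplate*{footline}{infolines theme}
{
  \leavevmode%
  \hbox{%
  \begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=2ex,center]{author in head/foot}%
    \usebeamerfont{author in head/foot}\insertshortauthor
  \end{beamercolorbox}%
  \begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=2ex,center]{title in head/foot}%
    \usebeamerfont{title in head/foot}\insertshorttitle
  \end{beamercolorbox}%
  \begin{beamercolorbox}[wd=.333333\paperwidth,ht=2.25ex,dp=2ex,right]{date in head/foot}%
    \usebeamerfont{date in head/foot}\insertframenumber{} / \inserttotalframenumber\hspace*{2ex}%
  \end{beamercolorbox}}%
  \vskip0pt%
}

% Alternative to editing beamerouterthemesmoothbars.sty: add padding around
% the frame title from the preamble (amounts are illustrative).
\addtobeamertemplate{frametitle}{\vspace*{1ex}}{\vspace*{0.5ex}}
```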
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8424948453903198, "perplexity": 1619.7170426015405}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375094662.41/warc/CC-MAIN-20150627031814-00270-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/very-hard-integral-from-int-coulomb-with-3-2-power-others.672208/
# Very Hard Integral (from int Coulomb) with -3/2 power + others

1. Feb 16, 2013

### pr0me7heu2

1. The problem statement, all variables and given/known data

I am trying to directly calculate the electric field (using Coulomb's law) at some arbitrary point P(0,0,z). The charge is evenly distributed over the surface of a sphere (radius R, charge density σ). Here I use θ for the polar angle and p for the azimuthal angle. I will leave out the messy details, but I know by symmetry that only the projection onto the z-axis is relevant. I also determined the angle ψ (that between the separation vector and the z-axis) in terms of z, R, θ, and the separation distance.

2. Relevant equations

E_z = 1/(4πε0) ∫_0^{2π} ∫_0^{π} [σR^2 sinθ (z − R cosθ)] / (R^2 + z^2 − 2Rz cosθ)^(3/2) dθ dp

3. The attempt at a solution

∫ dp → 2π. Removing the constant 2πR^2σ from the integrand:

∫_0^{π} [(z − R cosθ) sinθ] / (R^2 + z^2 − 2Rz cosθ)^(3/2) dθ

Using the u-substitution u = cosθ, du = −sinθ dθ; θ = 0 → u = 1, θ = π → u = −1, and reversing the limits of integration gives (ignoring constants out front):

∫_{−1}^{1} (z − Ru) / (R^2 + z^2 − 2Rzu)^(3/2) du   (#1)

→ according to the solutions manual → this works out to:

z^{−2} [ (z − R)/|z − R| − (−z − R)/|z + R| ]   (#2)

The manual says: [attachment]

Does anyone have any idea how you would use partial fractions to go from (#1) to (#2)?

Last edited: Feb 16, 2013

2. Feb 16, 2013

### haruspex

I don't see how to use partial fractions here either, but how about a substitution t^2 = R^2 + z^2 − 2Rzu?
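Carrying out the substitution haruspex suggests (a sketch, not from the original thread) recovers the manual's form (#2):

```latex
% Let t^2 = R^2 + z^2 - 2Rzu, so u = (R^2+z^2-t^2)/(2Rz), du = -t\,dt/(Rz),
% and z - Ru = (z^2 - R^2 + t^2)/(2z). Then
\int_{-1}^{1} \frac{z-Ru}{(R^2+z^2-2Rzu)^{3/2}}\,du
 = \frac{1}{2Rz^2}\int_{|z-R|}^{\,z+R}\left(\frac{z^2-R^2}{t^2}+1\right)dt
 = \frac{1}{2Rz^2}\left[\,t-\frac{z^2-R^2}{t}\,\right]_{|z-R|}^{\,z+R}.
% The upper limit gives (z+R)-(z-R) = 2R; the lower limit, using
% z^2-R^2 = (z-R)(z+R) and |z-R|^2 = (z-R)^2, gives -2R(z-R)/|z-R|. Hence
\int_{-1}^{1} \frac{z-Ru}{(R^2+z^2-2Rzu)^{3/2}}\,du
 = \frac{1}{z^2}\left(1+\frac{z-R}{|z-R|}\right)
 = \frac{1}{z^2}\left(\frac{z+R}{|z+R|}+\frac{z-R}{|z-R|}\right),
% i.e. 2/z^2 outside the shell (z > R) and 0 inside (z < R).
```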
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.974429190158844, "perplexity": 4564.060676013041}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102967.65/warc/CC-MAIN-20170817053725-20170817073725-00616.warc.gz"}
https://agenda.infn.it/event/16348/timetable/?view=standard_numbered_inline_minutes
# Nuclear Structure and Dynamics - NSD 2019

Europe/Rome

#### Venice, Centro Culturale Don Orione Artigianelli

Zattere Dorsoduro 909/A, Venezia (Italy)

Description

The IV International Conference on Nuclear Structure and Dynamics NSD2019 will be held in Venice at Centro Culturale Don Orione Artigianelli on May 13-17, 2019. The conference is a follow-up of the three previous conferences, held first in 2009 in Dubrovnik (Croatia), continued in 2012 in Opatija (Croatia) and last in 2015 in Portoroz (Slovenia), and belongs to a series of conferences devoted to the most recent experimental and theoretical advances in the field of nuclear structure and reactions. The Conference NSD2019 will maintain this tradition, with the aim to provide a broad discussion forum that will promote exchange of ideas and collaboration among researchers with experimental, theoretical and phenomenological backgrounds. We encourage the attendance of graduate students and postdocs.

Participants

Alberto Stefanini • Alexander Solovyev • Alexis Diaz-Torres • Alina Goldkuhle • Amiram Leviatan • Ana Montaner Pizá • Andrea Gottardo • Andreas Heusler • Andres Illana Sison • Anne Forney • Anu Kankainen • Augusto Macchiavelli • Aurel Bulgac • Aurora Tumino • Barbara Sulignano • Bernard Borderie • Chengjian Lin • Christopher Ricketts • Claes Fahlander • Claus Müller-Gatermann • Clément Delafosse • Costel Petrache • Daniele Mengoni • Danilo Gambacurta • Dario Vretenar • Dariusz Seweryniak • David Jenkins • Deša Jelavić Malenica • Dieter Ackermann • Dmitry Testov • Eda Sahin • Elena Lawrie • Elena Litvinova • Eleonora Teresia Gregor • Enrico Fioretto • Fabiana Gramegna • Filip Kondev • Francesco Recchia • Francisco Barranco • Gaolong Zhang • Giacomo De Angelis • Giorgia Mantovani • Giorgia Pasqualato • Giovanna Montagnoli • Gregory Aguilar • Guojian Yang • Gurjit Kaur • Hans Geissel • Himanshu Kumar Singh • Hiroyuki Sagawa • Horst Lenske • Huanqiao Zhang • Huiming Jia • Irene Zanon • Isao Tanihata • Ishtiaq Ahmed • Ivano Lombardo • Jaime Benito García • Jakub Wiśniewski • Jerry Draayer • Jerzy Dudek • Jesper Jensen • Jesus Casal • Jesus Lubian Rios • Jose Javier Valiente Dobon • Kathrin Wimmer • Kazuyuki Sekizawa • Kobus Lawrie • Kosuke Nomura • Krzysztof Rusek • Liam Gaffney • Luciano Moretto • Lukasz Iskra • Magda Cicerchia • Makito Oi • Makoto Ito • Manoj Kumar Sharma • Manpreet Kaur • Manuela Cavallaro • Marco Cinausero • Marco Mazzocco • Marco Salvatore La Cognata • Marco Siciliano • Maria Grazia Pellegriti • Maria Vittoria Managlia • Mariano Vigilante • Martha Liliana Cortes Sua • Martin Albertsson • Matko Milin • Michael Bentley • Michelangelo Sambataro • Mikhail Itkis • Mirco Del Fabbro • Monika Piersa • Moshe Gai • Mustafa Rajabali • Naomi Marchini • Nicolae Sandulescu • Nobuo Hinohara • Norbert Pietralla • Nunzio Itaco • Paolo Finelli • Pavol Mosat • Pawan Kumar • Peter Butler • Petr Navratil • Petra Colovic • Rituparna Kanungo • Roman Sagaidak • Ronald Fernando Garcia Ruiz • Saba Ansari • Sait Umar • Sakir Ayik • Sara Pirrone • Sergey Vaintraub • Shahariar Sarkar • Shihang Shen • Shiwei Yan • Silvia Leoni • Silvia Monica Lenzi • Silvia Piantelli • Soumya Bagchi • Stanislav Antalic • Stefan Frauendorf • Suzana Szilner • Takahiro Mizusaki • Tamara Niksic • Tea Mijatovic • Thamer Alharbi • Tokuro Fukui • Tommaso Marchi • Tomohiro Oishi • Tomoya Naito • Toshio Suzuki • Tuncay Bayram • Enrico Vigezzi • Volker
Werner • Vyacheslav Saiko • Wilton Catford • Xiaodong Tang • Yulia Parfenova • Yutaka Watanabe

• Sunday, May 12
  • 5:00 PM Registration
  • 6:30 PM Welcome Reception

• Monday, May 13
  • 8:30 AM Registration
  • 1 Speakers: Giacomo De Angelis (LNL), Lorenzo Corradi (LNL)
  • Session I. Convener: Giacomo De Angelis (LNL)

• 2 Coulomb Excitation of Pear-shaped Nuclei

We have carried out measurements, using Miniball, of the $\gamma$-ray de-excitation of $^{222,228}$Ra and $^{222,224,226}$Rn nuclei Coulomb-excited by bombarding $^{60}$Ni and $^{120}$Sn targets. The beams of radioactive ions, having energies between 4.25 and 5.08 MeV/u, were provided by HIE-ISOLDE at CERN. The purpose of these measurements is to determine the intrinsic quadrupole and octupole moments in these nuclei and to look for other cases of permanent octupole deformation in addition to those of $^{224,226}$Ra already reported$^{1,2}$. Another aim of this experiment is to determine the level schemes of $^{224,226}$Rn in order to characterise these isotopes as octupole vibrational or octupole deformed. We present here the preliminary results from these measurements, including the implications for EDM searches.

$^1$ Gaffney L P et al. 2013 Nature $\bf 497$ 199
$^2$ Wollersheim H J et al. 1993 Nuclear Physics A $\bf 556$ 261

Speaker: Prof. Peter Butler (University of Liverpool)

• 3 Recent studies of heavy ion transfer reactions using large solid angle magnetic spectrometers

Transfer reactions produce a wealth of nuclei in a wide energy and angular range and with cross sections spanning several orders of magnitude. Total angle- and energy-integrated cross sections for transfer channels have been investigated with spectrometers in various systems close to the Coulomb barrier. Such ingredients allow one to understand how nucleons are exchanged between projectile and target and how energy and angular momentum are transferred from the relative motion to the intrinsic excitation. The recent results of multinucleon transfer reaction studies with neutron-rich projectiles emphasized that these reactions provide a suitable mechanism to populate neutron-rich heavy nuclei [1,2]. The transfer reactions are among the most important tools to probe nucleon-nucleon correlations in nuclear systems. The pairing interaction induces correlations that are essential in defining the properties of finite quantum many-body systems in their ground and neighboring states. These structure properties may influence in a significant way the evolution of the collision. Recently, pair correlations were probed in heavy ion collisions by performing studies far below the Coulomb barrier with the PRISMA spectrometer for several systems. The microscopic calculations that incorporate nucleon-nucleon correlations reproduce the experimental data well in the whole energy range; in particular, the transfer probability for two neutrons is very well reproduced, in magnitude and slope [3]. The talk will focus on the main outcome of these recent studies, critically addressing the new achievements, the present problems and new challenges, especially in view of forthcoming experiments to be performed with exotic beams at the radioactive beam facilities.

[1] T. Mijatovic et al., Phys. Rev. C 94 (2016) 064616.
[2] F. Galtarossa et al., Phys. Rev. C 97 (2018) 054606.
[3] D. Montanari et al., Phys. Rev. Lett. 113 (2014) 052601.
Speaker: Suzana Szilner (Ruder Boskovic Institute) • 4 Experimental studies of neutron-rich nuclei around N = 126 at the KEK isotope separation system The lifetimes of the waiting-point nuclei at N = 126 of the rapid neutron capture process (r-process) are important parameters for investigating the astrophysical environment of the r-process. However, the difficulty in producing those extremely neutron-rich nuclei makes their experimental study unfeasible. Therefore, theoretical nuclear models play a crucial role in the simulation of r-process nucleosynthesis. Experimental studies of lifetimes, masses and nuclear structure of the neutron-rich nuclei around N = 126 provide significant inputs to those theoretical models to improve their predictability for the waiting-point nuclei. We are developing the KEK Isotope Separation System (KISS) at the RIKEN RIBF facility to produce and separate those neutron-rich nuclei for measurements of beta-gamma spectroscopy, lifetimes and masses [1-2]. Multi-nucleon transfer (MNT) reactions between a Xe-136 beam and a Pt-198 target are employed to produce those nuclei. The MNT reactions were studied at GANIL to investigate their feasibility for producing the neutron-rich nuclei around N = 126, demonstrating their promising potential [3-4]. The KISS consists of an argon-gas-cell-based laser ion source and an isotope separation on-line system to extract a single species of the reaction products. The detector system, composed of a multi-segmented gas counter [5] and high-purity germanium detectors, makes it possible to perform beta-gamma spectroscopy and laser ionization spectroscopy. In this presentation, we will report the present status of KISS, including recent experimental results of nuclear spectroscopy, and the future plan. [1] Y. Hirayama et al., Nucl. Instrum. and Methods B 353 (2015) 4. [2] Y. Hirayama et al., Nucl. Instrum. and Methods B 376 (2016) 52. [3] Y.H. Kim et al., EPJ Web of Conferences 66 (2014) 03044. [4] Y.X. Watanabe et al., Phys. Rev. Lett. 115 (2015) 172503. [5] M. Mukai et al., Nucl. Instrum. and Methods A 884 (2018) 1. Speaker: Dr Yutaka Watanabe (KEK WNSC) • 11:00 AM Coffee break • Session II • 5 Describing low-energy nuclear reactions with wave-packet dynamics The physics of nuclear reactions is crucial for understanding element creation in the Universe, and is therefore at the core of science programmes in new-generation facilities. I will report on novel theoretical developments in describing low-energy fusion dynamics of heavy ions and weakly bound nuclei using the time-dependent wave-packet method. Topical applications of the method include the incomplete fusion of weakly bound nuclei at Coulomb energies [1] and resonances in stellar carbon fusion [2]. Perspectives of the method for identifying resonant behaviour in nuclear collisions will be discussed [3]. [1] M. Boselli and A. Diaz-Torres, Physical Review C 92 (2015) 044610. [2] A. Diaz-Torres and M. Wiescher, Physical Review C 97 (2018) 055802. [3] A. Diaz-Torres and J.A. Tostevin, arXiv: 1809.10517. Speaker: Dr Alexis Diaz-Torres (University of Surrey) • 6 Nuclear spectroscopy with fast beams of rare isotopes The often surprising properties of neutron-rich nuclei have prompted extensive experimental and theoretical studies aimed at identifying the driving forces behind the dramatic changes encountered in the exotic regime.
In-beam nuclear spectroscopy with fast beams and thick reaction targets, where $\gamma$-ray spectroscopy is used to tag the final state, provides information on the single-particle structure as well as on collective degrees of freedom in nuclei that are available for experiments at beam rates of only a few ions/s. This presentation will show how in-beam experiments measure complementary observables that advance our understanding. The interplay of experimental results and theory will be emphasized at the intersection of nuclear structure and reactions in the joint quest of unraveling the driving forces of shell evolution. Speaker: Alexandra Gade (Michigan State University) • 7 Performance and Recent Results with the Advanced GAmma Tracking Array (AGATA) The AGATA array [1] is the European forefront instrument, based on semiconductor germanium detectors, for high-resolution position-sensitive gamma-ray spectroscopy. AGATA is being built in a collaborative effort of more than 40 institutes in 11 countries. The conceptual design of AGATA foresees a 4π array with 60 triple clusters containing 180 encapsulated Ge detectors [2]. Nevertheless, smaller sub-arrays of AGATA have been implemented, first as a proof of concept for a tracking array at INFN-LNL [3] and later to prove the potential of AGATA in different experimental conditions as well as to profit from the scientific possibilities offered by European large-scale facilities. Since 2012 AGATA sub-arrays have been installed at the FAIR/NUSTAR-precursor PRESPEC set-up [4], placed at the focal plane of the FRS Fragment Separator at GSI, where experiments with in-flight highly relativistic exotic beams were performed, and in 2014 at GANIL and SPIRAL, where experiments with high-intensity stable beams and reaccelerated ISOL radioactive beams are expected to be performed till 2020 [5]. In this contribution the AGATA project will be presented, emphasising the capabilities and performance figures relevant for the present and future European facilities. Finally the recent results of the AGATA experimental activity, coupled with different complementary instruments in the mentioned host laboratories, will be reported. [1] The AGATA Collaboration, Nucl. Instrum. Methods Phys. Res., Sect. A 668, 26 (2012). [2] E. Farnea et al., Nucl. Instrum. Methods Phys. Res., Sect. A 621 (2010) 331. [3] A. Gadea et al., Nucl. Instrum. Methods Phys. Res., Sect. A 654, 88 (2011). [4] N. Pietralla et al., EPJ Web of Conferences 66, 02083 (2014) and http://web-docs.gsi.de/~wolle/PreSPEC/ [5] E. Clément et al., Nucl. Instrum. Methods Phys. Res., Sect. A 855 (2017) 1 Speaker: Andres F. Gadea Raga (IFIC CSIC-University of Valencia) • 1:15 PM Lunch • Session III Convener: Wilton Catford (University of Surrey) • 8 An Analysis of the 18g,mF(d,p)19F Reactions in the Rotational Model* In this work we discuss the results of a recent HELIOS [1] measurement of the (d,p) reaction on 18F, from both the ground (1+) and isomeric (5+) states, to the members of the 19F ground-state band [2]. In the rotational model, we consider the structure of 18,19F in terms of Nilsson single-particle orbits originating from the sd spherical levels coupled to a deformed core, and calculate the (d,p) spectroscopic strengths to 19F from both the ground and isomeric states following the framework reviewed in [3]. Our results show good agreement with the experiment and the shell model. [1] A. Wuosmaa, et al. Nucl. Instrum. Methods, A580, 1290 (2007). [2] D. Santiago Gonzalez, et al. Phys. Rev. Lett.
120, 122503 (2018). [3] B. Elbek and P. O. Tjøm, in Advances in Nuclear Physics, M. Baranger and E. Vogt eds. (Springer, Boston, MA, 1969). *This material is based upon work supported by the U.S. DOE, Office of Science, Office of Nuclear Physics, under Contract No. DEAC0205CH11231. Speaker: Augusto Macchiavelli (Lawrence Berkeley National Laboratory) • 9 Halo and unbound light nuclei from ab initio theory In recent years, significant progress has been made in ab initio nuclear structure and reaction calculations based on input from QCD, employing Hamiltonians constructed within chiral effective field theory. One of the modern approaches is the No-Core Shell Model with Continuum (NCSMC) [1,2], capable of describing both bound and scattering states in light nuclei simultaneously. We will present the latest NCSMC calculations of weakly bound states and resonances of the exotic halo nuclei 11Be and 15C and discuss the photo-dissociation of 11Be and the 14C(n,γ)15C capture. We will also present our results for their unbound mirror nuclei 11N and 15F, respectively. We will point out the effects of the continuum on the structure of mirror resonances and highlight the role of chiral NN and 3N interactions. Finally, we will discuss polarization effects in the 3H(d,n)4He fusion [3]. This transfer reaction is relevant for primordial nucleosynthesis and is being explored in large-scale experiments such as NIF and ITER as a possible future energy source. [1] S. Baroni, P. Navratil, and S. Quaglioni, Phys. Rev. Lett. 110, 022505 (2013); Phys. Rev. C 87, 034326 (2013). [2] P. Navratil, S. Quaglioni, G. Hupin, C. Romero-Redondo, A. Calci, Physica Scripta 91, 053002 (2016). [3] G. Hupin, S. Quaglioni, and P. Navratil, Nature Communications (2019) 10:351; https://doi.org/10.1038/s41467-018-08052-6 *Supported by the NSERC Grant No. SAPIN-2016-00033. TRIUMF receives federal funding via a contribution agreement with the National Research Council of Canada. Speaker: Dr Petr Navratil (TRIUMF) • 10 Analysis of excited states in 13C and their cluster structure Accurate studies of 13C spectroscopy have a great impact on the present understanding of the role played by extra neutrons in stabilizing alpha-cluster structures formed in light nuclei. 13C excited states are in fact the simplest systems that can be formed by adding a neutron to a triple-alpha molecular-like structure. Their spectroscopic properties are therefore a fundamental benchmark for theoretical models aiming at describing clustering in light nuclei. To improve our knowledge of 13C structure, we performed a comprehensive R-matrix fit of $\alpha$+9Be elastic and inelastic scattering data in the energy range Ex≈3.5 – 10 MeV at several angles. To carefully determine the partial decay widths of states above the $\alpha$-decay threshold, we also included in the fit procedure 9Be($\alpha$,n0)12C and 9Be($\alpha$,n1)12C cross-section data taken from the literature. This analysis allows us to improve the (poorly known) spectroscopy of excited states in 13C in the Ex≈12-17 MeV region, and tentatively suggests the presence of a large-deformation negative-parity molecular band. Speaker: Dr Ivano Lombardo (INFN - Sezione di Catania) • 11 Neutron Interaction With 7Be at the SARAF: Evidence for Cluster Shell Model p-h States in 8Be and Implications for Big Bang Nucleosynthesis.
The interaction of neutrons with 7Be, measured at the SARAF in Israel with a quasi-Maxwellian neutron beam at 49.5 keV, reveals a strong B(E1; 2- → 2+) ~ 0.04 W.u. decay of the 2- state at 18.91 MeV in 8Be to the alpha-cluster 2+ state at 3.03 MeV [1]. This strong E1 decay leads to a large cross section of the 7Be(n,g_1)*8Be(3.03) reaction at the "BBN window". It implies s-wave dominance of the cross section at the "BBN window", in contrast to previous extrapolations into the "BBN window" from lower energies (the n_TOF measurement [2]) and from higher energies (the Kyoto measurement [3]). In addition, the phenomenological structure of all states below 19.5 MeV in 8Be (including the 2- state at 18.91 MeV) provides good evidence for particle-hole (p-h) states in the newly proposed Cluster Shell Model (CSM) of Della Rocca and Iachello [4]. The states near the neutron and proton thresholds in 8Be show the characteristics of the p-h states predicted by the CSM. The measured B(E1) of the 2- state at 18.91 MeV is in accordance with other measured decays of the p-h CSM states to the well-known cluster ground states and the 2+ state at 3.03 MeV in 8Be. The new CSM of Della Rocca and Iachello [4] will be introduced with emphasis on the similarity between p-h states in 8Be and single-particle states in 9Be. The material presented in this paper is based upon work supported by the U.S.-Israel Bi National Science Foundation, Award No. 2012098, and the U.S. Department of Energy, Office of Science, Nuclear Physics, Award No. DE-FG02-94ER40870. [1] M. Gai, arXiv:1812.09914v1, (2018). [2] M. Barbagallo et al., Phys. Rev. Lett. 117, 152701 (2016). [3] T. Kawabata et al., Phys. Rev. Lett. 118, 052701 (2017). [4] V. Della Rocca and F. Iachello, Nucl. Phys. A 973, 1 (2018). Speaker: Moshe Gai (University of Connecticut) • 4:30 PM Coffee break • Session IV Convener: Fabiana Gramegna (LNL) • 12 Equilibration dynamics in nuclear reactions Low-energy heavy-ion reactions provide us with a rich laboratory to study the equilibration dynamics of strongly interacting many-body systems. In particular, these reactions probe an intriguing interplay between the microscopic single-particle dynamics and collective motion at time scales too short for complete equilibration. In this presentation, we discuss recent microscopic studies of equilibration dynamics in deep-inelastic, quasifission, and fusion reactions. In this context we will discuss the equilibration dynamics and time-scales for various quantities that are connected to experimentally observable entities. These include the study of mass, isospin, and total kinetic energy (TKE) equilibration time-scales. In most of these studies one is essentially dealing with the transport phenomena of isospin-asymmetric systems [1,2]. These investigations provide us the ingredients to model such phenomena and help answer important questions about the nuclear Equation of State (EOS) and its evolution as a function of the neutron-to-proton $N/Z$ ratio [3]. *This work has been supported by the U.S. DOE under Grant No. DE SC0013847 with Vanderbilt University and by the Australian Research Council Grant No. DP160101254. [1] C. Simenel and A. S. Umar, Prog. Part. Nucl. Phys. 103, 19 (2018). [2] K. Godbey, A.S. Umar, and C. Simenel, Phys. Rev. C 95, 011601(R) (2017). [3] A.S. Umar, C. Simenel, and W. Ye, Phys. Rev. C 96, 024625 (2017). Speaker: Prof.
Sait Umar (Vanderbilt University) • 13 Phase transition dynamics in hot nuclei and N/Z influence An abnormal production of events with almost equal-sized fragments was theoretically proposed as a signature of spinodal instabilities responsible for nuclear multifragmentation in the Fermi energy domain. On the other hand, finite-size effects are predicted to strongly reduce this extra production. High-statistics samples of hot quasifusion nuclei, produced in central collisions between Xe and Sn isotopes at 32 and 45 MeV per nucleon incident energies, have been used to definitively establish, through the experimental measurement of charge correlations, the presence of spinodal instabilities. The influence of N/Z was also studied. The nature of the dynamics of the phase transition, i.e. the fragment formation, was the last missing piece of the puzzle concerning the liquid-gas transition in nuclei. Ref. B. Borderie et al., INDRA coll., Phys. Lett. B 782 (2018) 291. Speaker: Dr Bernard BORDERIE (Institut de Physique Nucléaire IN2P3/CNRS) • 14 Isospin influence on the Intermediate Mass Fragments production at low energy S. Pirrone(1), B. Gnoffo(1,2), G. Politi(1,2), E. De Filippo(1), P. Russotto(3), G. Cardella(1), F. Favela(1), E. Geraci(1,2), N. S. Martorana(2,3), A. Pagano(1,2), E.V. Pagano(3), E. Piasecki(4), L. Quattrocchi(1,2), F. Rizzo(2,3), M. Trimarchi(1,5) and A. Trifirò(1,5). (1) INFN, Sezione di Catania - Catania, Italy. (2) Dipartimento di Fisica, Università degli Studi di Catania - Catania, Italy. (3) INFN, Laboratori Nazionali del Sud - Catania, Italy. (4) Heavy Ion Laboratory, University of Warsaw, Warsaw, Poland. (5) Dipartimento di Scienze Matematiche e Informatiche, Scienze Fisiche e Scienze della Terra, Università di Messina - Messina, Italy. The reactions 78Kr + 40Ca and 86Kr + 48Ca at 10 A MeV have been studied in Catania at LNS with the 4π multi-detector CHIMERA. For these systems, we have already analyzed the fusion-evaporation and fission-like processes [1,2,3]. In this work we present a new study concerning the break-up of the Projectile-Like Fragment (PLF) into two fragments, following more violent deep-inelastic collisions. A selection method has been developed in order to discriminate PLF break-up events from those due to other mechanisms that populate the same region of phase space. A preference for aligned PLF break-up, along the direction of the PLF-TLF separation axis with the light fragment emitted in the backward part, has been evidenced, suggesting the presence of some dynamical effects. As the isospin is expected to play a crucial role in the onset of this process, a comparison between the neutron-rich 86Kr + 48Ca and neutron-poor 78Kr + 40Ca systems will be presented. [1] Gnoffo B., Il Nuovo Cimento C, 39 (2016) 275. [2] Pirrone S. et al., Journal of Physics: Conf. Series, 515 (2014) 012018. [3] Politi G. et al., JPS Conf. Proc., 6 (2015) 030082. Speaker: Dr Sara Pirrone (INFN - Sezione di Catania) • 15 Comparative study of four reactions at the onset of pre-equilibrium emission The study of the emitted particles, comparing pre-equilibrium and thermal components, is a useful tool to examine nuclear structure. Possible clustering effects, which may change the expected decay-chain probability, could be highlighted through the competition between different reaction mechanisms. The NUCL-EX collaboration (INFN, Italy) has carried out an extensive research campaign on pre-equilibrium emission of light charged particles from hot nuclei [1].
In this framework, the reactions $^{16}$O+$^{30}$Si, $^{18}$O+$^{28}$Si and $^{19}$F+$^{27}$Al at 7 MeV/u and $^{16}$O+$^{30}$Si at 8 MeV/u have been carried out using the GARFIELD+RCo array [2] at the Legnaro National Laboratories as a first step, where the fast emission mechanisms could be kept under control. After a general introduction on the experimental campaign performed on different systems, which has evidenced anomalies in the $\alpha$-particle emission channel, this contribution will focus on the analysis results obtained in the measurements reported above, showing in an exclusive way the observed effects related to the entrance channels. The experimental results will be compared to model predictions, for which the same filtering and complete event selection have been applied. [1] T. Marchi et al., F. Gramegna et al. - Nuclear Particle Correlations And Cluster Physics - Chapter 20 - pag. 507 (2017) - ISBN 978-981-3209-34-3; L. Morelli et al., Journ. of Phys. G 41 (2014) 075107; L. Morelli et al., Journ. of Phys. G 41 (2014) 075108; D. Fabris et al., PoS (X LASNPA), 2013, p. 061.D; V.L. Kravchuk, et al. EPJ WoCs, 2 (2010) 10006; O. V. Fotina et al., Int. Journ. Mod. Phys. E 19 (2010) 1134. [2] F. Gramegna et al., Proc. of IEEE Nucl. Symposium, 2004, Roma, Italy, 0-7803-8701-5/04/; M. Bruno et al., Eur. Phys. Jour. A 49 (2013) 128. Speaker: Magda Cicerchia (LNL) • Tuesday, May 14 • Session V Convener: Peter Butler (University of Liverpool) • 16 Towards high-resolution in-beam gamma-ray spectroscopy at the RIBF The Radioactive Isotope Beam Factory (RIBF) at RIKEN provides the world's highest-intensity beams for the production of radioactive isotopes by in-flight fragmentation and fission. Stable beams at 345 MeV/u impinge on primary targets, and secondary beams are separated and identified in the BigRIPS fragment separator. In-beam gamma-ray spectroscopy towards the drip lines utilizes the DALI2 array for maximum efficiency. To overcome the limited resolution, we are currently constructing a germanium-based gamma-ray spectrometer composed of the MINIBALL clusters and several Ge tracking detectors from Japan, Europe, and the USA for experimental fast-beam campaigns. The status of the project and the physics program will be presented. Speaker: Kathrin Wimmer (The University of Tokyo) • 17 Decoherence of collective motion in warm nuclei Collective states in cold nuclei (yrast region) are represented by a wave function that assigns coherent phases to the participating nucleons. The degree of coherence decreases with excitation energy above the yrast line because of coupling to the increasingly dense background of quasiparticle excitations. The consequences of this damping mechanism will be discussed with a perspective on applications in nuclear astrophysics and technology. For isoscalar quadrupole vibrational multiplets, the rapid decoherence of the low-spin members will be contrasted with the coherent tidal-wave motion of the yrast members. The rapid decoherence or even absence of the beta vibration will be addressed. The screening of an oblate band in 137Nd from rotational damping by the prolate quasiparticle background will be discussed. The completely incoherent low-energy M1 radiation and the scissors mode of warm nuclei will be addressed. Speaker: Prof.
Stefan Frauendorf (University of Notre Dame) • 18 Nuclear structure studies based on energy density functionals The microscopic self-consistent mean-field (SCMF) framework based on universal energy density functionals provides an accurate global description of nuclear ground states and collective excitations, from relatively light systems to super-heavy nuclei, and from the valley of beta-stability to the particle drip lines. Based on this framework, structure models have been developed that go beyond the mean-field approximation and include collective correlations related to the restoration of broken symmetries and fluctuations of collective variables. In particular this includes i) the generator-coordinate method with projections on particle number, angular momentum and parity, ii) implementations for the solution of the collective Hamiltonian for quadrupole and octupole vibrational and rotational degrees of freedom, and iii) a microscopically determined interacting boson model. These models have become standard tools for nuclear structure calculations, able to describe new data from radioactive-beam facilities and provide microscopic predictions for low-energy nuclear phenomena of both fundamental and practical significance. In this talk some of the recent applications of the SCMF framework will be highlighted: studies of shape evolution and coexistence, quadrupole and octupole shape phase transitions, and an SCMF-based analysis of the dynamics of the spontaneous fission process. Finally, perspectives for future calculations will be discussed. Speaker: Dr Tamara Niksic (Department of Physics, Faculty of Science, University of Zagreb) • 19 Fission dynamics from saddle to scission and beyond Nuclear fission, one of the oldest if not the oldest challenge to theoretical many-body physics in the literature, is still awaiting a fully quantum microscopic description with robust predictive power. Since its experimental discovery in 1939 only a few theoretical results have been firmly established in the quantum theory of fission, while many phenomenological and microscopic models based on untested assumptions have been suggested. The evolution of the compound nucleus from the moment the neutron is absorbed until the saddle is reached was left basically in the dark by theory, and most of the attention was concentrated on the evolution of the nucleus from saddle to scission, where the fission-fragment properties are defined. The main assumption was that this process is slow and moreover adiabatic, an assumption which allowed the separation of the degrees of freedom into collective and intrinsic. Being slow, however, does not imply adiabaticity. In a new time-dependent energy density formalism, free of any restrictions and assumptions, we demonstrate that the fission dynamics from saddle to scission is slow and even overdamped: the intrinsic system gains a lot of entropy, and the energy gained from the collective degrees of freedom is never relinquished. The fission dynamics from saddle to scission is much slower than the adiabatic assumption would imply, the collective flow energy never exceeding 1-2 MeV, while the rest of the difference between the potential energy at the saddle and at the scission point is almost entirely converted into intrinsic energy, or heat. This finding requires a complete retooling of most theoretical and phenomenological approaches, as the introduction of a potential energy surface and an inertia tensor is completely illegitimate, and the role of collective inertia is negligible.
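For orientation, in time-dependent density functional treatments of fission the collective flow energy quoted above is commonly evaluated from the local current density $\mathbf{j}(\mathbf{r})$ and number density $\rho(\mathbf{r})$ (a standard definition, added here for illustration and not taken from the talk itself):

$$E_{\mathrm{flow}} = \frac{m}{2}\int d^3r\,\frac{|\mathbf{j}(\mathbf{r})|^2}{\rho(\mathbf{r})},$$

so "overdamped" motion means that $E_{\mathrm{flow}}$ stays a small fraction (here 1-2 MeV) of the total potential-energy drop from saddle to scission, the remainder appearing as intrinsic excitation (heat).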
Agreement with experiment is surprisingly good, in spite of the fact that no parameters have been fitted, and the results are rather stable against parameter changes. Speaker: Prof. Aurel Bulgac (University of Washington) • 10:40 AM Coffee break • Session VI Convener: Alberto Stefanini (LNL) • 20 Fusion hindrance in light and heavy systems The phenomenon of hindrance in sub-barrier heavy-ion fusion will be introduced, together with several experimental results showing that it is a general phenomenon. It is recognized in many cases from the trend of the logarithmic slope of the excitation function and of the S factor at low energies. The comparison with standard Coupled-Channels calculations provides more quantitative evidence for its existence. Hindrance is observed in light systems, independent of the sign of the fusion Q-value, with different features. In the case of the $^{12}$C+$^{30}$Si system the hindrance effect is small but clearly recognized. Nearby cases show evidence for systematic behaviors. A very recent experiment addressed the lighter case $^{12}$C+$^{24}$Mg, where hindrance shows up clearly because a maximum of the S factor appears already at a relatively high cross section $\sigma$=1.6 mb. The consequences for the dynamics of stellar evolution have to be clarified by further experimental and theoretical work. Possible interpretations of hindrance will be briefly illustrated, including a recent suggestion on the possible influence of Pauli blocking in the fusion dynamics. Indeed, in many heavier systems the hindrance effect has been recognized with different features depending on the various couplings to the inelastic and transfer channels. When transfer channels with positive Q-value are available, their effect is often important at low energies, where it can compete with hindrance. Speaker: Prof. Giovanna Montagnoli (PD) • 21 Fusion in massive stars: Pushing the 12C+12C cross-section to the limits with the STELLA experiment at IPN Orsay The 12C+12C fusion reaction is one of the key reactions governing the evolution of massive stars, as well as being critical to the physics underpinning various explosive astrophysical scenarios [1]. Our understanding of the 12C+12C reaction rate in the Gamow window – the energy range relevant to the different astrophysical scenarios – is presently unclear. This is due to the large number of resonances around the Coulomb barrier, persisting down to the lowest energies measured. In usual circumstances, where the fusion cross-section is smooth, it can be readily extrapolated from the energy range measured in the laboratory down to the Gamow window, but this is not possible for 12C+12C. Jiang et al. have developed a new experimental approach to the study of the 12C+12C reaction which can circumvent issues related to target contamination [2]. They used the Gammasphere array to detect fusion gamma rays in coincidence with evaporated charged particles detected in annular silicon strip detectors [2]. This technique has shown considerable promise in essentially removing experimental background from the measurement [2]. The STELLA experiment has been established at IPN Orsay. An intense 12C beam from the Andromede accelerator is incident on thin self-supporting 12C foils. A target rotation system allows for cooling, supporting μA beam currents. Evaporated charged particles are detected with a dedicated silicon array, while gamma rays are detected in coincidence with an array of 30 LaBr3 detectors [3].
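For reference, the astrophysical S factor referred to in these contributions follows the standard definition (added for orientation, not specific to these experiments): it removes the dominant Coulomb-penetration energy dependence from the fusion cross section,

$$S(E) = E\,\sigma(E)\,e^{2\pi\eta(E)}, \qquad \eta(E) = \frac{Z_1 Z_2 e^2}{\hbar v},$$

with $\eta$ the Sommerfeld parameter and $v$ the relative velocity. Hindrance then shows up as a maximum of $S(E)$ and a steepening of the logarithmic slope $L(E) = \mathrm{d}\ln(E\sigma)/\mathrm{d}E$ at low energies.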
The design and status of STELLA will be presented, along with results on the cross-sections and astrophysical S-factors obtained down into the Gamow window for massive stars. REFERENCES [1] A. Chieffi et al., Astrophys. J 502, 7373 (1998). [2] C.L. Jiang et al., Nucl. Instrum. Meth. A 682, 12 (2012). [3] M. Heine et al., J. Phys. Conf. Ser. 763, 012005 (2016). Speaker: David Jenkins (University of York) • 22 Finite-temperature nuclear response in the relativistic framework Recent developments of the relativistic nuclear field theory in the finite-temperature formalism will be presented. The general non-perturbative framework, which advances the nuclear response theory beyond the one-loop approximation, is formulated in terms of a closed system of non-linear equations for the two-body Green's functions. This provides a direct link to ab initio theories and allows for an assessment of the accuracy of the approach. This framework has been extended recently to the case of finite temperature, for both neutral and charge-exchange channels [1-3]. For this purpose, the time blocking approximation to the time-dependent part of the in-medium nucleon-nucleon interaction amplitude is adopted for the thermal (imaginary-time) Green's function formalism. The method is implemented self-consistently on the basis of Quantum Hadrodynamics and designed to connect the high-energy scale of heavy mesons and the low-energy domain of nuclear-medium polarization effects in a parameter-free way, now also at finite temperature. In this approach we investigate the temperature dependence of nuclear spectra in various channels, such as the monopole, dipole, quadrupole and charge-exchange ones, for even-even medium-heavy nuclei. Special focus is put on the width problem of the giant dipole resonance, the low-energy strength distributions and the influence of temperature on the equation of state. The temperature dependence of the spin-isospin excitations is studied for its potential impact on the astrophysical modeling of supernovae and neutron-star mergers. References [1] E. Litvinova and H. Wibowo, Phys. Rev. Lett. 121, 082501 (2018). [2] H. Wibowo and E. Litvinova, arXiv:1810.01456, submitted to Phys. Rev. C; E. Litvinova and H. Wibowo, arXiv:1812.11751, submitted to Eur. Phys. J. A. [3] E. Litvinova, C. Robin and H. Wibowo, arXiv:1808.07223. Speaker: Prof. Elena Litvinova (Western Michigan University and National Superconducting Cyclotron Laboratory, Michigan State University) • 23 Enhanced monopole and dipole transitions in medium-heavy nuclei induced by alpha cluster structures $\alpha$ cluster structures are well known to appear in excited states of lighter-mass nuclei. According to recent studies, the isoscalar monopole (IS0) and dipole (IS1) excitations are considered to be important probes to identify alpha cluster structure. We have calculated the continuum IS0 and IS1 transitions in the $^{44}$Ti = $\alpha$ + $^{40}$Ca system. We will demonstrate that a prominent enhancement occurs at excitation energies lower than the single-particle excitation energy, due to the development of alpha cluster structures. We have also extended similar calculations to much heavier systems, such as the Te isotopes with an $\alpha$ + Sn structure in the mass range from A=104 to A=110. From our series of calculations, a systematic enhancement of the IS0 and IS1 strengths has been confirmed at excitation energies $E_x\leq$ 15 MeV.
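For reference, the isoscalar monopole and dipole strengths discussed here are usually defined through the standard transition operators (textbook definitions, added for orientation):

$$\hat{O}_{IS0}=\sum_{i=1}^{A} r_i^2, \qquad \hat{O}_{IS1,\mu}=\sum_{i=1}^{A}\left(r_i^3-\frac{5}{3}\langle r^2\rangle\, r_i\right) Y_{1\mu}(\hat{r}_i),$$

where the $\frac{5}{3}\langle r^2\rangle r_i$ term removes the spurious center-of-mass contribution from the IS1 operator. Enhanced low-energy IS0/IS1 strength is then a signature of spatially extended, cluster-like configurations.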
Furthermore, the dissociation strength of $^{135}$Cs into $\alpha$ + $^{131}$I, which is induced by the electric dipole (E1) field, will also be discussed. The $^{135}$Cs nucleus is one of the long-lived fission products (LLFPs) in nuclear waste. From the viewpoint of the alpha cluster structure, there is a possibility that the low-lying E1 transition will be effective for the transmutation of $^{135}$Cs. Speaker: Makoto Ito (Department of Pure and Applied Physics, Kansai University) • 1:15 PM Lunch • Session IX (Parallel Session) Convener: Dieter Ackermann (GANIL) • 24 Time-Dependent Hartree-Fock Theory for Multinucleon Transfer Reactions Heavy-ion multinucleon transfer reactions at around the Coulomb barrier offer a unique opportunity to study a variety of non-equilibrium nuclear dynamics, such as energy dissipation, nucleon transfer, shape evolution, fusion, and so on. Besides the fundamental interest in the underlying reaction mechanism, they possess substantial importance as a means of producing new, neutron-rich heavy nuclei, whose properties are crucial for figuring out the detailed scenario of r-process nucleosynthesis. Aiming at the prediction of optimal reactions for producing yet-unknown neutron-rich unstable nuclei, I have extensively developed and applied methods based on the microscopic framework of time-dependent Hartree-Fock (TDHF) theory. In this talk, I will review our recent works and progress, showing how the theory works in practice, making possible comparisons with available experimental data. Speaker: Dr Kazuyuki Sekizawa (Niigata University) • 25 From neutron-nucleus interactions to (d,p) cross sections Deuteron-induced reactions have a long and fruitful tradition in nuclear physics as an experimental tool for spectroscopy. They have been extensively used to study in detail the single-particle nature of the low-lying spectrum of the nuclear quantum many-body system. Standard reaction theory describing the direct population of sharp bound states has been very successful in extracting detailed structural information from the experimental data, in the form of spins, parities, spectroscopic factors, etc., of the populated bound states. The advent of high-intensity exotic beams has granted experimental access to weakly bound systems with a Fermi energy close to the neutron-emission threshold, where the role of the continuum becomes important. Within this context, new theoretical developments are called for, such as a reaction framework able to account for the population of resonant and non-resonant states of the continuum, adapted to the associated structure description of the target-neutron interaction. Aside from paving the way to the description of (d,p) reactions on exotic loosely bound nuclei in terms of state-of-the-art neutron-target interactions, such a framework can also be used to describe the formation of a compound nucleus in the neutron+target channel. The formalism presented here is thus also an important theoretical ingredient for the use of (d,p) reactions as surrogates for neutron-capture processes. Speaker: Gregory Potel Aguilar (Michigan State University) • 26 Multinucleon transfer reactions and proton transfer channels Transfer reactions have always been of great importance for nuclear structure and reaction mechanism studies. With heavy ions it becomes feasible to transfer several nucleons and a considerable amount of energy and angular momentum from the relative motion to the intrinsic degrees of freedom.
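As a rough orientation (a textbook semiclassical estimate, not a result from this contribution), single-nucleon transfer probabilities at large separations fall off exponentially with the distance of closest approach $D$,

$$P_{tr}(D) \propto e^{-2\kappa D}, \qquad \kappa = \frac{\sqrt{2\mu E_B}}{\hbar},$$

where $E_B$ is the binding energy and $\mu$ the reduced mass of the transferred nucleon. Multinucleon channels are then often compared with powers of the single-nucleon probability, so that deviations in magnitude and slope expose correlation (e.g. pairing) effects.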
So far, proton pickup channels have been identified in both atomic and mass number at energies close to the Coulomb barrier only in a few studies. We will show a comprehensive study of the multinucleon transfer reaction $^{40}$Ar+$^{208}$Pb measured near the Coulomb barrier, employing the PRISMA magnetic spectrometer. By using the most neutron-rich stable isotope, $^{40}$Ar, we could populate, besides neutron pickup and proton stripping channels, also neutron stripping and proton pickup channels. A comparison of cross sections between different systems with the $^{208}$Pb target, with projectiles going from neutron-poor to neutron-rich, as well as between the data and GRAZING calculations, will be shown. The results are relevant for future investigations with radioactive beams, especially considering the SPES project. Multinucleon transfer cross sections have recently been measured for the $^{92}$Mo+$^{54}$Fe reaction, where both proton stripping and pickup channels were populated with similar strength. The excitation function was measured from the Coulomb barrier to far below it, making use of inverse kinematics to detect target recoils at forward angles with PRISMA. We will discuss the yield of the proton transfer channels, whose probability turns out to be stronger than predicted by a simple phenomenological analysis. The measurement followed the successful results recently obtained for the closed-shell $^{96}$Zr+$^{40}$Ca [2] and super-fluid $^{116}$Sn+$^{60}$Ni [3] systems, where the focus was on neutron transfer channels. [1] T. Mijatovic et al., Phys. Rev. C 94, 064616 (2016). [2] L. Corradi et al., Phys. Rev. C 84, 034603 (2011). [3] D. Montanari et al., Phys. Rev. Lett. 113, 052601 (2014). Speaker: Tea Mijatovic (Ruder Boskovic Institute) • 27 Role of charge equilibration in multinucleon transfer in damped collisions of heavy ions Nowadays, the prospect of producing heavy neutron-enriched nuclides encourages scientists to investigate multinucleon transfer (MNT) reactions with heavy ions both theoretically and experimentally [1,2]. This type of reaction occurs at low energies and leads to a variety of binary fragments formed around the projectile and target, with dozens of nucleons transferred between them. Usually, the yields of the MNT products drop exponentially with increasing number of transferred nucleons, but in certain cases their values can still be high enough for the experimental investigation of yet-unknown neutron-enriched nuclei. Special attention is paid to theoretical models of MNT processes able to provide a description of the key features of the collision dynamics and make reasonable predictions for the distributions of reaction fragments. Among such models, Langevin-type approaches allow one to achieve good agreement in a combined description of the energy, angular and mass distributions of the reaction products. Thus, various reactions with spherical and statically deformed nuclei, such as Sm + Sm, Xe + Pb, Gd + W, U + U and U + Cm, have been analyzed within a dynamical Langevin-type approach, providing rather good agreement between calculated and experimental data [3,4]. As the next step, we aimed to analyze the MNT processes in pairs of nuclei with different N/Z ratios. In such combinations the early stage of the nucleus-nucleus collision is characterized by a fast redistribution of neutrons and protons called N/Z equilibration or isospin relaxation.
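As a schematic illustration (a generic one-dimensional form, not the authors' specific implementation), Langevin-type approaches propagate a set of collective coordinates $q$ (elongation, mass and charge asymmetry, ...) under conservative, dissipative and stochastic forces,

$$\mu\,\ddot{q} = -\frac{\partial V(q)}{\partial q} - \gamma\,\dot{q} + \sqrt{2\gamma T}\,\xi(t), \qquad \langle \xi(t)\,\xi(t')\rangle = \delta(t-t'),$$

where $V(q)$ is the potential-energy surface, $\gamma$ a friction coefficient, $T$ the local nuclear temperature, and the fluctuation-dissipation relation fixes the noise strength. Sampling many trajectories event by event then yields the fragment mass, charge, energy and angular distributions.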
This phenomenon significantly influences the collision dynamics and the "neutron-richness" of the fragments, which is visible in the isotopic yields. 1. L. Corradi et al., Nucl. Instr. Meth. B 317, 743 (2013) 2. V.I. Zagrebaev and W. Greiner, Phys. Rev. C 87, 034608 (2013) 3. A.V. Karpov and V.V. Saiko, Phys. Rev. C 96, 024618 (2017) 4. V.V. Saiko and A.V. Karpov, Phys. Rev. C 99, 014613 (2019) Speaker: Vyacheslav Saiko (Flerov Laboratory of Nuclear Reactions, JINR) • Session VII (Parallel Session) Convener: Dr Javier Valiente-Dobon • 28 Building a coherent physics picture around N=50 towards 78Ni The N=50 shell closure above $^{78}$Ni has been the subject of intense experimental efforts. While an initial spectroscopy of $^{78}$Ni itself has been achieved, the rich phenomenology around the neutron shell closure still lacks a comprehensive picture. The parabolic behaviour of the N=50 gap, decreasing from Z=40 to Z=32 and then re-increasing towards Z=30, is not well understood, also in terms of its relation with the appearance of low-lying shape-coexisting states in Se, Ge and Zn isotopes. Similarly, the rapid lowering of the $\nu$s$_{1/2}$ shell, which becomes almost degenerate with the $\nu$d$_{5/2}$ orbital, may play a role in the predicted and observed low-lying E1 strength in $^{83}$Ge. Recent experimental results will be presented, concentrating at first on N=50 core-breaking states and then on evidence of shape coexistence and triaxiality in the region, coming both from in-beam and decay spectroscopy. Results will be discussed in the framework of shell-model, mean-field, and weak-coupling calculations, pointing out the evolution of neutron effective single-particle energies beyond N=50. It will be shown how heavy-meson exchange may provide a common physics picture for these phenomena. The relation to the possible development of a neutron skin beyond N=50, and hence to the appearance of a pygmy dipole resonance, will also be highlighted. Future perspectives at new-generation ISOL facilities will be addressed. Speaker: Andrea Gottardo (LNL) • 29 In-flight and $\beta$-delayed $\gamma$-spectroscopy in the vicinity of $^{78}$Ni with AGATA at GANIL and BEDO at ALTO While the $N=50$ shell-gap evolution towards $^{78}$Ni is presently in the focus of nuclear structure research, experimental information on the neutron effective single-particle energy (ESPE) sequence above the $^{78}$Ni core remains scarce. Direct nucleon exchange reactions are indeed difficult with presently available post-accelerated radioactive ion beams (especially for high orbital-momentum orbitals) in this exotic region. We have studied the evolution of the $\nu g_{7/2}$ ESPE, which is the key to understanding the possible evolution of the spin-orbit splitting due to the action of the proton-neutron interaction terms in the $^{78}$Ni region, by measuring the lifetimes of excited states in order to distinguish between collective and single-particle states. The evolution of the ESPE of this orbital, characterized by a high orbital momentum $\ell=4$, should indeed be particularly sensitive to tensor effects. In continuity with an experiment performed at LNL-Legnaro [1], we performed an experiment at GANIL (Caen, France) with AGATA [2], VAMOS [3] and the Orsay plunger OUPS [4] in order to measure lifetimes of yrast excited states (in particular $7/2_1^+$ states) in several $N=51$ isotones populated by the reaction $^{238}$U($^9$Be,f).
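For orientation, plunger lifetime measurements of this kind rest on the textbook recoil-distance relation (simplified here to a single level without feeding; the real analysis is more involved): at target-stopper distance $x$ the unshifted ($I_u$) and Doppler-shifted ($I_s$) intensities of a transition obey

$$R(x) = \frac{I_u(x)}{I_u(x)+I_s(x)} = e^{-x/(v\tau)},$$

with $v$ the recoil velocity, so the level lifetime $\tau$ follows from the decay of $R(x)$ with distance.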
We particularly focused our study on $^{83}$Ge, the closest $N=51$ odd isotone to $^{79}$Ni for which detailed spectroscopic studies are possible within our experimental conditions. We also performed complementary $\beta$-delayed $\gamma$-spectroscopy of $^{83}$Ge with BEDO [5] at the ALTO ISOL photo-fission facility in Orsay to investigate non-yrast spectroscopy. Results from both experiments and future plans at IGISOL will be presented and discussed. REFERENCES [1] F. Didierjean et al., Phys. Rev. C 96, 044320 (2017) [2] E. Clément et al., NIM A 855 pp. 1-12 (2017) [3] H. Savajols et al., NIM B 204 pp. 146-153 (2003) [4] J. Ljungvall et al., NIM A 679 pp. 61-66 (2012) [5] A. Etile et al., PRC 91, 064317 (2015) Speaker: Dr Clément Delafosse (University of Jyväskylä) • 30 Recent applications of the subtracted second random-phase approximation The Second Random Phase Approximation (SRPA) is a natural extension of the Random Phase Approximation, obtained by introducing more general excitation operators in which two particle-two hole configurations are considered in addition to the one particle-one hole ones. Only in recent years have large-scale SRPA calculations been performed without the usually employed approximations [1,2]. The SRPA model corrected by a subtraction procedure [2], designed to cure double-counting issues and the related instabilities, has recently been implemented and applied in the study of different physical cases. In this talk we report on the most recent results obtained by using this model. In particular, results on the dipole strength and polarizability in 48Ca [3], the enhancement of the effective masses induced by beyond-mean-field correlations [4] and the effect of two particle-two hole configurations on the monopole response [5] will be presented and discussed. [1] D. Gambacurta, M. Grasso, and F. Catara, Phys. Rev. C 84, 034301 (2011). [2] D. Gambacurta, M. Grasso and J. Engel, Phys. Rev. C 81, 054312 (2010); Phys. Rev. C 92, 034303 (2015). [3] D. Gambacurta, M. Grasso, O. Vasseur, Physics Letters B 777, 163-168 (2018). [4] M. Grasso, D. Gambacurta, and O. Vasseur, Phys. Rev. C 98, 051303(R) (2018). [5] D. Gambacurta and M. Grasso, in preparation. Speaker: Danilo Gambacurta (ELI-NP) • 31 Structural investigation of neutron-deficient Pt isotopes: the case of 178Pt Lifetime measurements with the recoil-distance Doppler-shift technique have been performed to determine yrast E2 transition strengths in 178Pt. The experimental data are related to those on neighboring Pt isotopes, especially recent data on 180Pt, and compared to calculations within the interacting boson model and a Hartree-Fock-Bogoliubov approach. These models predict prolate-deformed ground states in Pt isotopes close to the neutron midshell, consistent with the experimental findings. Further, evidence was found that the prolate intruder structure observed in neutron-deficient Hg isotopes, which is lowest in energy in 182Hg, becomes the ground-state configuration in 178Pt and neighboring 180Pt, with nearly identical transition quadrupole moments. The new data on 178Pt are further discussed in the context of the systematics along the Pt isotopic chain, with respect to an asymmetry of the level schemes relative to the neutron midshell that is not expected in collective models. In addition, hints of a sharp shape transition towards a weakly deformed or quasi-vibrational structure in 174,176Pt will be discussed based on existing data. Supported by the Deutsche Forschungsgemeinschaft (DFG) under Contracts No.
FR 3276/1-1 and DE 1516/3-1. Speaker: Christoph Fransen (Institut für Kernphysik, Universität zu Köln) • Session XI (Parallel Session) Convener: Michael Bentley (University of York) • 32 $\beta$ decay of neutron-rich $^{135}$In, $^{134}$In and $^{133}$In nuclei: $\gamma$ emission from neutron-unbound states in $^{134}$Sn and $^{133}$Sn Experimental studies of nuclei far from stability provide guidance for the further development of nuclear models. Simple systems in the proximity of the doubly-magic shell closures are the best cases for testing the predictive power of shell-model calculations. In this context, an understanding of the nuclear structure in the closest proximity of the doubly-magic $^{132}$Sn is essential before making extrapolations of nuclear properties towards more neutron-rich tin isotopes. In this work, the $\beta$ decay of $^{135}$In has been studied for the first time. Excited states in $^{133}$Sn, $^{134}$Sn and $^{135}$Sn were investigated via the $\beta$ decay of $^{133}$In, $^{134}$In and $^{135}$In at the ISOLDE Decay Station. Isomer-selective ionization using RILIS enabled the $\beta$ decays of $^{133g}$In (I$^{\pi}$=9/2$^+$) and $^{133m}$In (I$^{\pi}$=1/2$^-$) to be studied independently for the first time. Thanks to the large spin difference of those two $\beta$-decaying states, it is possible to investigate separately the lower- and higher-spin states in the daughter $^{133}$Sn and thus to probe single-particle transitions relevant in the neutron-rich $^{132}$Sn region. Single-hole states in $^{133}$Sn were identified at energies exceeding the neutron separation energy, up to 3.7 MeV. Due to the centrifugal barrier hindering the neutron from leaving the nucleus, the contribution of the electromagnetic decay of those unbound states was found to be significant. The same phenomenon was observed for a new neutron-unbound state identified in $^{134}$Sn. Preliminary results of the first $\beta$-decay studies of $^{135}$In were obtained. A comprehensive description of the excited states in $^{133}$Sn and $^{134}$Sn was deduced from both the $\beta$ and $\beta$n decay branches of the indium isotopes. Speaker: Monika Piersa (Faculty of Physics, University of Warsaw, PL 02-093 Warsaw, Poland) • 33 Decay spectroscopy of isotopes above fermium (Z > 100) at SHIP The single-particle level structure is essential for the stability and decay properties of the heaviest nuclei. However, the prediction of low-lying single-particle states for the heaviest elements remains a very challenging task (see for example [1-3]). Experimental data are scarce in this region, and any new data serve as an important anchor for theoretical predictions and offer the possibility of predicting stabilized regions of superheavy elements. The application of sensitive $\alpha$-, $\gamma$- and conversion-electron (CE) spectroscopy methods allowed us to investigate the structure of very heavy nuclei (A>250). We performed an extensive program aimed at nuclear structure studies of isotopes above fermium (Z=100) using $\alpha$-CE, $\alpha$-$\gamma$ and CE-$\gamma$ spectroscopy at the velocity filter SHIP at GSI Darmstadt. In these measurements, we obtained improved data for many isotopes, which helped us to extend and improve the single-particle level systematics for the N = 149, 151 and 153 isotones. Besides $\alpha$-decay spectroscopy, we also performed the very first $\beta$-decay studies in this region of the nuclide chart. Our series of measurements at SHIP provided a substantial body of new data.
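For context, a standard empirical yardstick in this field (added for orientation, not a result of these measurements) is the Geiger-Nuttall-type Viola-Seaborg systematics relating $\alpha$-decay half-lives to decay energies,

$$\log_{10} T_{1/2}\,[\mathrm{s}] = \frac{aZ+b}{\sqrt{Q_\alpha}} + cZ + d,$$

with $Z$ the proton number of the parent, $Q_\alpha$ the decay energy, and $a, b, c, d$ fitted constants; deviations from such smooth systematics are one way in which structural effects, such as the isomers discussed below, reveal themselves.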
The most recent results for selected isotopes in the very heavy element region will be presented and discussed within different theoretical frameworks. In particular, the observation of new single- and multi-quasiparticle isomers in $^{255}$Rf [4, 5] and the very first EC-decay data for $^{258}$Db and $^{254}$Md will be discussed. REFERENCES [1] S. Cwiok et al., Nucl. Phys. A 573, 356 (1994). [2] A. Parkhomenko and A. Sobiczewski, Acta Phys. Pol. B35, 2447 (2004). [3] A. Parkhomenko and A. Sobiczewski, Acta Phys. Pol. B36, 3115 (2005). [4] S. Antalic et al., Eur. Phys. J. A 51, 41 (2015). [5] P. Mošať et al., manuscript in preparation (2018). [6] F.P. Hessberger et al., Eur. Phys. J. A 52, 328 (2016). Speaker: Dr Stanislav Antalic (Comenius University in Bratislava, Slovakia) • 34 Gamma and fast-timing spectroscopy of 132Sn from the beta-decay of In isotopes Nuclei with a large N/Z ratio in this region are of great interest to test nuclear models and provide information about single-particle states. During the last two decades there has been a substantial effort directed at gathering information about the region around 132Sn [1-3], the most exotic doubly-magic nucleus presently within reach. 132Sn is itself a very interesting case [4]. The simplest excited levels correspond to particle-hole states where a particle is excited across the energy gap of the closed shell. The identification of the p-h multiplets provides information on the nuclear two-body matrix elements. This isotope has been studied in detail through the β-decay of 132In [5]. Nevertheless, many of the expected p-h multiplet states remained unidentified. We have used fast-timing and γ spectroscopy to investigate 132Sn. The experiment was carried out at ISOLDE, where the excited states of 132Sn were populated in the β-decay of In isomers, produced in a UCx target unit equipped with a neutron converter. The In isomers were ionized using the ISOLDE RILIS, which for the first time allowed isomer-selective ionization of indium. The measurements took place at the new ISOLDE Decay Station, equipped with four clover-type Ge detectors, along with a fast-timing setup consisting of two LaBr3(Ce) detectors and a fast β detector. In this work we report on the excited-state structure of 132Sn, populated in the β-decay of 132In, and also, owing to the RILIS isomer selectivity, separately from the β-n decay of the 133In 1/2- isomer and 9/2+ ground state. We present a preliminary new level scheme, which has been enlarged by 13 new levels and more than 40 new γ transitions. These results are complemented by new lifetime values of excited states. [1] K.L. Jones et al., Nature 465, 454 (2010). [2] J.M. Allmond et al., Phys. Rev. Lett. 112, 172701 (2014). [3] A. Korgul et al., Phys. Rev. Lett. 113, 132502 (2014). [4] D. Rosiak et al., Phys. Rev. Lett. 121, 252501 (2018). [5] B. Fogelberg et al., Phys. Rev. Lett. 73, 2413 (1996). Speaker: Mr Jaime Benito (Grupo de Física Nuclear, Universidad Complutense de Madrid) • 35 Chirality and oblate rotation in nuclei: new achievements and perspectives The breaking of symmetries in quantum systems is one of the key issues in nuclear physics. In particular, spontaneous symmetry breaking in rotating nuclei leads to exotic collective modes, like chiral motion, which is a unique fingerprint of triaxiality in nuclei and has been intensively studied in recent years. We are currently involved in the study of Lanthanide nuclei.
New results have been obtained recently and interpreted as the manifestation of a stable triaxial nuclear shape, presenting various types of collective motion, like tilted-axis and principal-axis rotation, chiral motion, and rotation of nuclei with oblate shape at very high spins. Chiral bands in even-even nuclei, which were thought to be energetically unfavored, unstable against 3D rotation and difficult to observe, have instead been identified very recently in 136Nd. The experimental evidence for such bands will be presented and their theoretical interpretation will be discussed. The experimental evidence for multiple chiral bands in several Lanthanides, as well as the presence of competing collective oblate rotation up to very high spins in Nd nuclei, will also be discussed. Speaker: Prof. Costel Petrache (CSNSM, University Paris Sud and CNRS/IN2P3) • 4:00 PM Coffee break • Session VIII (Parallel Session) Convener: Dariusz Seweryniak (Argonne National Laboratory, USA) • 36 Shell evolution and shape coexistence in the 78Ni region It is well known that nucleons are arranged in specific shells resulting in greater stability, analogous to the electron shells in the atom, and this shell structure was expected to be very robust across the whole nuclear chart. However, with advanced experimental and theoretical work during the last two decades, we have become aware that the shell structure changes when moving far away from stability, an effect related to the large neutron excess and nuclear forces. In other words, the Shell Model described in 1949 by Mayer and Jensen is not valid throughout the nuclear chart, and nuclear forces have to be reconsidered in the nuclear Hamiltonian, which was initially described by a harmonic-oscillator potential and a spin-orbit interaction. The possible consequences expected for neutron-rich nuclei are shell evolution, in which changes in the ordering and location of the single-particle orbits are significant, and shape coexistence, where particle-hole excitations across a major shell and quadrupole correlations are favored due to the inversion of orbitals and reduced shell gaps. In extreme cases, proven in the lighter mass regions, new magic numbers appear while some conventional ones disappear, and intruder correlations change the ground-state deformation, causing the phenomenon called an island of inversion. In the present contribution, these aspects will be discussed in the 78Ni region. Recent experiments performed at the RIKEN radioactive beam facility using different methodologies will be presented. Speaker: Dr Eda Sahin (University of Oslo, Norway) • 37 Transition probabilities in $^{54}$Ti: Evolution of the shell structure of neutron-rich titanium isotopes Previous investigations of neutron-rich titanium isotopes indicate the development of a subshell closure at $N=32$. However, shell-model calculations could not explain this behaviour so far: the excitation energies of the lowest yrast states in these titanium isotopes are reproduced, but not, for example, the trend of the $B(E2;2_1^+\rightarrow 0_\mathrm{gs}^+)$ values as a function of the neutron number. In addition, only little information about $E2$ transition strengths between higher yrast states is known. To measure these, excited states in $^{46-54}$Ti were populated by multinucleon transfer reactions, and level lifetimes were determined with the Recoil-Distance Doppler-shift method.
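For orientation, measured lifetimes are converted into reduced transition probabilities via the standard relation (a textbook conversion, added here for illustration; competing branches and internal conversion must be corrected for in practice):

$$B(E2; J_i \to J_f)\,[e^2\,\mathrm{fm}^4] \approx \frac{1}{1.225\times 10^{9}\; E_\gamma^5\,[\mathrm{MeV}]\;\tau\,[\mathrm{s}]},$$

where $E_\gamma$ is the transition energy and $\tau$ the mean lifetime of the level.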
The experiment was performed at GANIL with the detector system AGATA and the spectrometer VAMOS++ for particle identification, as well as the Cologne Compact Plunger for deep-inelastic reactions. Lifetimes of the $2_1^+$ and $4_1^+$ states, as well as upper and lower limits for the $6_1^+$ and $8_1^+$ states in $^{54}$Ti, respectively, could be determined with the differential decay curve method (DDCM), and the corresponding $B(E2)$ values were calculated. In addition, preliminary lifetime values of excited states of the neighboring nucleus $^{53}$Ti were determined for the first time and will be presented and discussed in the framework of current shell-model calculations. Speaker: Alina Goldkuhle (Institute for Nuclear Physics, University of Cologne) • 38 Collectivity in the vicinity of $^{78}$Ni: Coulomb excitation of neutron-rich Zn at HIE-ISOLDE Nuclei in the vicinity of $^{78}$Ni have recently been in the focus of many experimental and theoretical investigations. In particular, the neutron-rich Zn isotopes, only two protons above the Ni isotopic chain, are ideally suited to study the evolution of the Z = 28 proton shell gap and the stability of the N = 50 neutron shell gap. In the last decade, several experiments were performed to study the collectivity in the even-even Zn isotopes between N = 40 and N = 50 [1-4], but their results are not consistent; consequently, the evolution of nuclear structure in the neutron-rich Zn nuclei is not fully understood. In 2015 the ISOLDE facility finished the first phase of a major upgrade of the energy of post-accelerated exotic beams, bringing it up from 3 MeV/u to 5.5 MeV/u. The increased beam energy strongly enhances the probability of multi-step Coulomb excitation, giving experimental access to new excited states and bringing in-depth information on their structure. The very first HIE-ISOLDE beam experiment in October 2015 and its continuation in 2016 were dedicated to the study of the evolution of the nuclear structure along the zinc isotopic chain. The preliminary results discriminate between the two experimental values of B(E2; 4$^{+} \to$ 2$^{+}$) in $^{74}$Zn, and yield for the first time B(E2; 4$^{+} \to$ 2$^{+}$) values in $^{76,78}$Zn. [1] J. Van de Walle et al., Phys. Rev. Lett. 99, 14501 (2007). [2] J. Van de Walle et al., Phys. Rev. C 79, 014309 (2009). [3] M. Niikura et al., Phys. Rev. C 85, 054321 (2012). [4] C. Louchart et al., Phys. Rev. C 87, 054302 (2013). Speaker: Andres Illana Sison (INFN-LNL) • Session X (Parallel Session) Convener: Prof. Huanqiao Zhang • 39 Short-range (pairing) versus long-range (collective) correlations in multi-particle transfer reactions It is shown that pairing correlations are very important for two-neutron transfer reactions induced by 84 MeV 18O on several targets with low ground-state collectivity (spherical), which proceed through a one-step transfer process. For transitions to low-lying excited states the one-step process also dominates when the final nuclei likewise have low collectivity. On the contrary, if the collectivity of these states is considerable, the two-neutron transfer reaction is dominated by a two-step process through an intermediate partition. We present our results for 12,13C(18O,16O)12,13C [1,2], 16O(18O,16O)18O [3,4], 64Ni(18O,16O)66Ni [5] and 28Si(18O,16O)30Si [6], obtained by analysing the two-neutron transfer angular distributions.
We compare our results with similar results for the 206Pb(18O,16O)208Pb[7] and 7Be(9Be,7Be)9Be[8] reactions, and with the analysis of the quasi-elastic barrier distributions for the 63Cu +18O system [9]. We also show the evidence recently found for the observation of Giant Pairing Vibrations in the 12,13C(18O,16O) 12,13C reactions [10]. Some preliminary results on the effect of pairing correlations in two-proton transfer reactions are also shown. Our ability to describe microscopically multi-nucleon transfer reactions that compete with the double-charge exchange reactions within the NUMEN project [11] will also be discussed. 1. M. Cavallaro, et al., PRC 88, 054601 (2013). 2. D. Carbone, et al., PRC 95, 034603 (2017). 3. M. J. Ermamatov et al., PRC 94, 024610 (2016). 4. M. J. Ermamatov, et al., PRC 96, 044603 (2017). 5. B. Paes, et al., PRC 96, 044612 (2017) 6. E. N. Cardozo et al., PRC 97, 064611 (2018). 7. A. Parmar, et al., NPA 940, 167 (2015). 8. R. Lichtenthäler, et al., submitted to PRC (2018). 9. E. Crema, et al., submitted to PRC (2018). 10. F. Cappuzzello, et al., Nat. Commun. 6, 6743 (2015). 11. F. Cappuzzello, et al., Eur. Phys. J. A 54, 72 (2018). Speaker: Dr Jesus Lubian (Federal Fluminense University) • 40 Structure and reactions of N=7 isotones: parity inversion and transfer cross sections The properties of low-lying states in N=7 isotones have been studied theoretically, going from $^{10}$Li to $^{13}$C. To reproduce in detail the changes of structure in these nuclei going towards the neutron drip line represents a considerable challenge for many-body theories. In particular, this concerns the inversion of parity between the ground and first excited state observed going towards the drip line, which is experimentally well established in $^{11}$Be but is under discussion in the case of the unbound nucleus $^{10}$Li, while the normal sequence is observed in $^{12}$B and $^{13}$C. The effects of many-body renormalization processes are considered in detail, and transfer reactions are calculated, showing that the cross sections observed in recent $^{9}$Li(d,p)$^{10}$Li one–neutron transfer experiments [1,2] are consistent with, or better, require the presence of a virtual 1/2+ state [3]. Furthermore, theoretical cross sections for reactions leading to low-lying resonant states in $^{11}$Be are successfully compared to data [4]. [1] H.B. Jeppesen et al, Phys. Lett. B, 642(2006)449 [2] M. Cavallaro et al, Phys. Rev. Lett. 118 (2017) 012701 [3] F. Barranco, G. Potel, R. A. Broglia, and E. Vigezzi, Phys. Rev. Lett. 119 (2017) 082501 [4] F. Barranco, G. Potel, R. A. Broglia, and E. Vigezzi, arXiv:1812.01761 Speaker: Prof. Francisco Barranco (Sevilla University) • 41 The effect of the positive Q-value neutron transfers on near-barrier heavy-ion fusion In near-barrier fusion reactions with heavy ions, the coupling effect of the positive Q-value neutron transfers (PQNT) is still a complex and unsolved problem. To study this effect, the fusion excitation functions of typical systems, such as 32S+90,94,96Zr, 112,116,120,124Sn, were measured using an electrostatic deflector setup at CIAE. In this talk, the recent experimental results measured at CIAE will be reviewed, with special emphasis on the effect of the positive Q-value neutron stripping channels of 18O+50Cr,58Ni,74Ge.
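Whether a given transfer channel can act in this way is governed in the first place by its ground-state Q-value. A minimal helper for computing it from tabulated mass excesses (a sketch in Python; the mass-excess values are deliberately left as user inputs, to be taken e.g. from the AME evaluation, and the function name is illustrative):

def q_value(delta_proj, delta_targ, delta_eject, delta_resid):
    """Ground-state Q-value (MeV) of a binary reaction a + A -> b + B.
    All arguments are atomic mass excesses in MeV (e.g. from the AME tables);
    a positive result marks a channel that can couple favourably to fusion."""
    return (delta_proj + delta_targ) - (delta_eject + delta_resid)

# Symbolic usage for the two-neutron stripping 18O + 58Ni -> 16O + 60Ni:
# q2n = q_value(D_18O, D_58Ni, D_16O, D_60Ni), with the four mass excesses
# supplied by the user; no values are hard-coded here.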
Additionally, in view of the currently inconsistent experimental data and theoretical analyses, the concept of residual enhancement (RE) [1], which mainly aims at reducing the additional uncertainties, was proposed to extract a reliable quantitative PQNT effect. More details will be given in this talk. Reference [1] H. M. Jia, C. J. Lin, L. Yang et al., Phys. Lett. B 788, 43 (2016). Speaker: Dr Huiming Jia (China Institute of Atomic Energy) • Session XII (Parallel Session) Convener: Enrico Fioretto (LNL) • 42 Recent results from collinear resonance ionization spectroscopy (CRIS) at ISOLDE-CERN The collinear resonance ionization spectroscopy experiment (CRIS) at ISOLDE-CERN has been developed as a sensitive technique to access the electromagnetic properties of exotic nuclei. This technique provides observables that are key for our understanding of the nuclear many-body problem: nuclear spins, electromagnetic moments, and changes in the root-mean-square charge radii. This contribution will present the results from recent experimental campaigns in the vicinity of the so-called doubly magic nuclei: $^{52}$Ca, $^{78}$Ni, $^{100}$Sn and $^{132}$Sn. The relevance of these results, in connection with recent developments in nuclear theory, will be discussed. Speaker: Dr Ronald Garcia Ruiz (CERN) • 43 Fundamental properties of nuclear ground and isomeric states in neutron-deficient indium from laser spectroscopy Hyperfine structure measurements of the neutron-deficient indium ($Z=49$) isotopes, approaching the heaviest self-conjugate doubly-magic nucleus $^{100}\mathrm{Sn}$, have been performed using collinear resonance ionization spectroscopy [1]. These measurements provide an important benchmark in the development of many-body methods, which are now able to predict properties around the $Z=N=50$ shell-closure [2,3]. States in previously measured odd-even In isotopes have shown a remarkably simple single-particle behaviour; whether this trend in the electromagnetic moments continues will give insight into the strength of the shell closure. Isomeric spin assignments in the odd-odd isotopes also help pin down the ordering of the neutron $d_{5/2}$ and $g_{7/2}$ orbits [4,5]. This first experimental determination of ground-state electromagnetic moments and changes in mean-square charge radii of neutron-deficient $^{101-103}\mathrm{In}$ will shed light on the evolution of nuclear structure around $^{100}\mathrm{Sn}$. [1] K.T. Flanagan, et al., Phys. Rev. Lett. 111, 212501 (2013). [2] T.D. Morris et al., Phys. Rev. Lett. 120, 152503 (2018). [3] T. Togashi et al., Phys. Rev. Lett. 121, 062501 (2018). [4] C. Vaman et al., Phys. Rev. Lett. 99, 162501 (2007). [5] D. Seweryniak et al., Phys. Rev. Lett. 99, 022504 (2007). Speaker: Christopher Ricketts (The University of Manchester) • 44 Masses and Beta-Decay Spectroscopy of Neutron-Rich Nuclei: Isomers and Sub-shell Gaps with Large Deformation The structure of deformed, neutron-rich nuclei in the rare-earth region is of significant interest for both the nuclear-structure and astrophysics fields. Although much progress is being made in our understanding of the r-process, a satisfactory explanation for the elemental peak in abundance near A=160 is still elusive. Understanding the origin of this peak may be a key to correctly identifying the astrophysical conditions for the r-process.
Theoretical models of element production depend on the masses and lifetimes of neutron-rich, deformed rare-earth nuclei in this region, where little or no information is available. The available nuclear structure information is also scarce, owing to difficulties in the production of these nuclei. In order to address these issues, an experimental program has been initiated at Argonne National Laboratory using high-purity radioactive beams produced by the CARIBU facility. Mass measurements using the Canadian Penning Trap (CPT) and beta-gamma coincidence studies using the SATURN moving tape system and the X-Array spectrometer, comprising five Ge clover detectors, were carried out. A number of two-quasiparticle isomers were discovered in odd-odd nuclei using the CPT, and in several cases their properties were elucidated by complementary beta-decay studies. Evidence was found for changes in the single-particle structure, which in turn resulted in the formation of a sizable sub-shell gap at N=98 and large deformation. Results from these measurements will be presented, together with predictions based on a deformed shell model that includes effects of pairing and spin-dependent nucleon-nucleon interactions. The newly commissioned beta-decay station at Gammasphere will also be discussed, and results from the first experimental campaign will be presented. This work is supported by the U.S. Department of Energy, Office of Nuclear Physics, under Contract No. DE-AC02-06CH11357. Speaker: Dr Filip Kondev (Argonne National Laboratory) • POSTER SESSION • 7:30 PM Concert • Wednesday, May 15 • Session XIII Convener: Hans Geissel (GSI) • 45 Nuclear structure studies via precision mass measurements The Ion Guide Isotope Separator On-Line (IGISOL) facility in the JYFL Accelerator Laboratory offers versatile possibilities for nuclear structure studies via high-precision mass measurements as well as via decay and laser spectroscopy. In this presentation, I will focus on mass measurements recently performed with the JYFLTRAP Penning trap mass spectrometer. These include, for example, measurements on the neutron-rich rare-earth isotopes close to N=100 as well as on nuclides close to 78Ni. In addition to the ground states, information on long-lived isomeric states has been obtained. Many of the studied nuclides were measured for the first time and therefore provide essential data for nuclear structure far from stability as well as for nuclear astrophysics. Speaker: Anu Kankainen (University of Jyväskylä) • 46 Neutrinoless double-beta decay and realistic shell model I report on the calculation of the nuclear matrix element involved in neutrinoless double-β decay within the framework of the realistic shell model. Starting from a realistic nucleon-nucleon potential, the effective shell-model Hamiltonian and 0νββ-decay operator are derived by way of many-body perturbation theory. The contributions to the effective shell-model operator due to short-range correlations and to Pauli-principle violations are taken into account. Attention will be focussed on some 0νββ-decay candidates with mass ranging from A = 48 up to A = 136. Speaker: Nunzio Itaco (NA) • 47 Reaction spectroscopy of Borromean nuclei at the drip-lines sheds light on the nuclear force and shell evolution Borromean nuclei are unique bound quantum systems with unbound sub-systems that tend to appear in neutron-proton asymmetric isotopes at the edges of the nuclear landscape.
Such weakly bound few-body systems can provide a sensitive testing ground for understanding the nuclear force through their structural properties and interactions. This presentation will describe different techniques of reaction spectroscopy measurements with re-accelerated beams at TRIUMF and in-flight beams at RIBF to explore the ground and excited states of these drip-line nuclei. At the proton drip-line, spectroscopy of $^{20}$Mg from inelastic scattering with a solid D$_2$ target at the IRIS facility at TRIUMF will be discussed. The observation of new states will be presented and compared to new ab initio theory predictions. Reaction spectroscopy also offers the potential to investigate collectivity, which will be discussed in the context of shell evolution. The presentation will show how a strong sensitivity to the nuclear force emerges from proton elastic scattering of $^{10}$C. In the neutron-rich domain, defining the low-Z end of the island of inversion around $N$ = 20 remains an open problem. The presentation will discuss the exploration of the ground-state features of the drip-line nucleus $^{29}$F using intermediate-energy in-flight beams at RIBF. Speaker: Prof. Rituparna Kanungo (Saint Mary's University, TRIUMF) • 48 Isospin Symmetry of the A=46 T=1 triplet studied with AGATA The degree to which isospin symmetry is maintained across an isospin multiplet, and hence the extent to which the isospin quantum number can be considered pure, is a matter of much contemporary interest. Tests of isospin purity have traditionally been undertaken through examination of the behaviour of the Isobaric Multiplet Mass Equation (IMME), with parabolic behaviour of the IMME for the lowest-energy states of a multiplet being considered as strong evidence for isospin purity. For excited states of multiplets, an alternative approach would be to examine electromagnetic transition matrix elements between analogue states, for which isospin selection rules impose specific behaviour as a function of $T_z$. The E2 transition matrix element, in the limit of pure isospin, should be exactly linear in $T_z$ for a T=1 triplet. The measured proton matrix element for the lowest transition, the E2 from the first excited T=1 2+ state to the first T=1 0+ state, can be used as a test of this rule. In this work, we present the results of an experiment to measure this B(E2) strength in the T=1 A=46 triplet. The experiment was performed at GSI, Darmstadt, using the AGATA array in conjunction with the Fragment Separator and the LYCCA array. For two members of the triplet, 46Cr and 46Ti, relativistic Coulomb excitation was used to determine the B(E2), whilst for 46V and 46Ti, lifetimes were measured using a new Doppler-shift technique which we call the stretched-target method. The results are analysed in the context of all available data on B(E2)s for T=1 triplets. The A=46 case we will present represents one of the most precise tests of the linearity rule (matrix element vs. $T_z$) to date. Speaker: Michael Bentley (University of York) • 10:40 AM Coffee break • Session XIV Convener: Dr Enrico Vigezzi (INFN Milano) • 49 Nuclear physics in stellar lifestyles with the Trojan Horse Method Understanding energy production and nucleosynthesis in stars requires a precise knowledge of the nuclear reaction rates at the energies of interest. To overcome the experimental difficulties arising from the small cross sections at those energies and from the presence of the electron screening, the Trojan Horse Method has been introduced.
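For context, a standard definition rather than a result of this contribution: at sub-Coulomb energies the cross section is conventionally factorized as
$$\sigma(E) = \frac{S(E)}{E}\, e^{-2\pi\eta}, \qquad \eta = \frac{Z_1 Z_2 e^2}{\hbar v},$$
where the astrophysical factor $S(E)$ varies smoothly while the Gamow factor $e^{-2\pi\eta}$ suppresses the measurable yield exponentially; it is this suppression, together with the atomic electron screening, that the Trojan Horse Method is designed to circumvent.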
The method represents one of the most powerful tools for experimental nuclear astrophysics because of its ability to measure unscreened low-energy cross sections of reactions between charged particles, and to retrieve information on the electron screening potential when ultra-low-energy direct measurements are available. This is done by selecting the quasi-free (QF) contribution of an appropriate three-body reaction A+a → c+C+s, where a is described in terms of clusters x⊕s. The QF reaction is performed at energies well above the Coulomb barrier, such that the cluster x is brought directly into the nuclear field of A, leaving s as a spectator to the A + x interaction. The THM has been successfully applied to several reactions connected with fundamental astrophysical problems and, recently, to resonant ones involving medium-heavy nuclei, such as 12C, 16O and 18,19F. I will recall the basic ideas of the THM and show some recent results. Speaker: Aurora Tumino (LNS) • 50 Optical Potentials Derived from Nucleon-Nucleon Chiral Potentials at N4LO: Comparison between Phenomenological and Microscopic Optical Potentials Proton elastic scattering is a very important process for understanding nuclear interactions in finite nuclei. Although this process has been extensively studied in recent years, a consistent microscopic description is still under development. We want to study the domain of applicability of microscopic two-body chiral potentials in the construction of an optical potential, derived as the first-order term within the spectator expansion of the multiple scattering theory, adopting the impulse approximation and the optimum factorization approximation. First, we derive a nonrelativistic theoretical optical potential from nucleon-nucleon chiral potentials at fourth (N3LO) and fifth order (N4LO). We check convergence patterns and establish theoretical error bands for pp and np Wolfenstein amplitudes and for the cross sections, analyzing powers, and spin rotations of elastic proton scattering off some light nuclei at an incident proton energy of 200 MeV [1,2]. Second, the cross sections and analyzing powers for elastic proton scattering off calcium, nickel, tin, and lead isotopes are presented for several incident proton energies, exploring the range 156 ≤ E ≤ 333 MeV, where experimental data are available. In addition, we provide theoretical predictions for 56Ni at 400 MeV, which is of interest for the experiments at EXL [3]. Finally, we present some preliminary results for antiproton elastic scattering off nuclei at energies close to 200 MeV [4]. Our results indicate that microscopic optical potentials derived from nucleon-nucleon chiral potentials at N4LO can provide reliable predictions for the cross section and the analyzing power of both stable and exotic nuclei. Bibliography [1] M. Vorabbi, P. Finelli, C. Giusti, Phys. Rev. C93, 034619 (2016) [2] M. Vorabbi, P. Finelli, C. Giusti, Phys. Rev. C96, 044001 (2017) [3] M. Vorabbi, P. Finelli, C. Giusti, Phys. Rev. C98, 064602 (2018) [4] M. Vorabbi, P. Finelli, C. Giusti, submitted to Phys. Rev. Lett. (2019) Speaker: Dr Paolo Finelli (University of Bologna) • 51 Intertwined quantum phase transitions in the Zr isotopes Most of the attention in the study of quantum phase transitions (QPT) in nuclei has been devoted to shape phase transitions in a single configuration (denoted Type I), described by a single Hamiltonian, $\hat{H}(\xi) \!=\! \left( 1\!-\!\xi \right)\hat{H}_{1} + \xi \hat{H}_{2}$, where $\xi$ is the control parameter.
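As an illustration (a textbook IBM parametrization, not necessarily the one adopted in this contribution), the two Hamiltonians are often taken as dynamical-symmetry limits,
$$\hat{H}(\xi) = (1-\xi)\,\epsilon\,\hat{n}_d - \frac{\xi}{N}\,\hat{Q}^{\chi} \cdot \hat{Q}^{\chi},$$
so that driving $\xi$ from 0 to 1 carries the system from the spherical U(5) limit to a deformed SU(3) or O(6) limit, with the shape-phase transition occurring at a critical value of the control parameter.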
A different type of phase transition (denoted Type II) occurs when two (or more) configurations coexist. In this case, the quantum Hamiltonian has a matrix form, $\hat{H} = \begin{pmatrix} \hat{H}_{A}(\xi^A) & \hat{W}(\omega) \\ \hat{W}(\omega) & \hat{H}_{B}(\xi^B) \end{pmatrix}$, where the indices $A$, $B$ denote the two configurations and $\hat{W}$ denotes their coupling. As the control parameters are varied, the separate Hamiltonians $\hat{H}_A$ and $\hat{H}_B$ can undergo shape-phase transitions of Type I, which in turn can result in a crossing of configurations $A$ and $B$. In the present contribution, we focus on the $_{40}$Zr isotopes and find a variety of multiple intertwined phase transitions, both of Type I and Type II [1]. These isotopes have recently been the subject of several experimental investigations [2] and theoretical calculations [3]. By employing the interacting boson model with configuration mixing, we have calculated the spectra and other observables of the entire chain of Zr isotopes, from neutron number 52 to 70. The latter exhibit a complex phase structure with coexisting Type I and Type II QPTs, and ground-state shapes changing from spherical ($^{92-98}$Zr), to X(5)-like ($^{100}$Zr), to axially deformed ($^{102-104}$Zr), and finally to $\gamma$-unstable ($^{106-110}$Zr). This interpretation is corroborated by the evolution along the Zr chain of order parameters and key observables, including B(E2) values, isotope shifts and two-neutron separation energies. [1] N. Gavrielov, A. Leviatan and F. Iachello, submitted (2019). [2] P. Singh et al., Phys. Rev. Lett. 121, 192501 (2018) and references therein. [3] See e.g., T. Togashi et al., Phys. Rev. Lett. 117, 172502 (2016). Speaker: Prof. Amiram Leviatan (The Hebrew University) • 52 Gamma spectroscopy of neutron-rich isotopes in the A = 100 region produced in fission induced by cold neutrons with the new FIPPS array The occurrence of shape coexistence in nuclei with N = 58 and 59 suggests that the evolution of the deformation is a gradual process. Our goal was to study the N = 57 isotope 96Y, where only a few states were known. Additionally, we decided to investigate whether deformed structures are present in the 94Y nucleus, which lies 5 neutrons away from the N = 60 boundary, and in 97Y with 59 neutrons. During the talk, the new result concerning the enhancement of octupole collectivity in the N = 56 isotope 96Zr will also be mentioned [1]. The yttrium isotopes have been produced in the fission of a 235U active target induced by cold neutrons from the reactor at ILL. The level schemes have been established based on multi-fold gamma-ray coincidence relationships measured with the new highly efficient HPGe array FIPPS [2]. For completeness, recent data from the previous fission experiment with the EXILL spectrometer have also been added. During the analysis, over 50 new gamma transitions in the 96Y isotope have been identified [3,4]. Additionally, the analysis revealed that the long-lived 8+ isomer is located 400 keV higher than reported in the NNDC database, which has to be taken into account in reactor antineutrino anomaly calculations [5]. By using the delayed-coincidence method it was possible to identify a few weak transitions above the 201-ns isomeric state, which seem to form a rotational band. In the case of the 94Y isotope, 11 new gamma transitions have been identified [6], while in 97Y, 8 new prompt lines can be observed [4]. Angular-correlation analysis supported by shell-model considerations allowed spin-parity assignments to be proposed for most of the new levels. [1] Ł.W.
Iskra et al., Phys. Lett. B 788, 396 (2019) [2] C. Michelagnoli et al., EPJ 193, 04009 (2018) [3] Ł.W. Iskra et al., Europhys. Lett. 117, 12001 (2017) and ILL annual report [4] Ł.W. Iskra et al., (in preparation) [5] A.A. Sonzogni et al., Phys. Rev. C 91, 011301(R) (2015) [6] Ł.W. Iskra et al., Phys. Scr. 92, 104001 (2017) Speaker: Dr Lukasz Iskra (INFN sezione di Milano) • 53 Chiral three-body force and monopole properties of shell-model Hamiltonian We show how to derive the shell-model effective Hamiltonian employing two- and three-body interactions based on chiral effective field theory. A new way to calculate three-body matrix elements of the chiral interaction with a nonlocal regulator is given. We apply our framework to the p-shell nuclei and perform benchmark calculations to compare our results with those of the ab initio no-core shell model. We report that our results are satisfactory and that the contribution of the three-body force is essential to explain the experimental low-lying spectra of the p-shell nuclei. We discuss the contribution of the three-body force to the effective single-particle energy extracted from the monopole interaction. Next, we investigate the shell evolution in the fp-shell nuclei. We show that the monopole component of the shell-model effective Hamiltonian induced by the three-body force plays an essential role in accounting for the experimental shell evolution. Speaker: Tokuro Fukui (INFN-Napoli) • 1:00 PM Lunch • 2:30 PM Conference Tour • Thursday, May 16 • Session XV Convener: Prof. Krzysztof Rusek (Heavy Ion Laboratory, University of Warsaw, Warsaw, Poland) • 54 Recent results on heavy-ion induced reactions of interest for neutrinoless double beta decay at INFN-LNS Research on neutrinoless double beta decay has crucial implications for particle physics, cosmology and fundamental physics. It is likely the most promising process to access the absolute neutrino mass scale. To determine quantitative information from a possible measurement of 0νββ-decay half-lives, knowledge of the Nuclear Matrix Elements (NME) involved in such transitions is mandatory. The use of heavy-ion induced double charge exchange (DCE) reactions as tools towards the determination of information on the NME is one of the goals of the NUMEN and NURE projects. The basic point is that there are a number of similarities between the two processes, mainly that the initial and final state wave functions are the same and the transition operators are similar, including in both cases a superposition of Fermi, Gamow-Teller and rank-two tensor components. The availability of the MAGNEX magnetic spectrometer for high-resolution measurements of the strongly suppressed DCE reaction channels is essential to obtain high-resolution energy spectra and accurate cross sections at very forward angles, including zero degrees. The measurement of the competing multi-nucleon transfer processes makes it possible to study their contribution and to constrain the theoretical calculations. An experimental campaign is ongoing at INFN-Laboratori Nazionali del Sud (Italy) to explore medium-heavy ion induced reactions on targets of interest for 0νββ decay. Recent results obtained with the (20Ne,20O) DCE reaction and competing channels, measured for the first time using a 20Ne10+ cyclotron beam at 15 AMeV, will be presented at the conference.
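For orientation, the standard factorization (quoted here as background, not from the contribution): if light-neutrino exchange dominates, the 0νββ half-life separates as
$$\left[ T^{0\nu}_{1/2} \right]^{-1} = G^{0\nu} \left| M^{0\nu} \right|^2 \left( \frac{\langle m_{\beta\beta} \rangle}{m_e} \right)^2,$$
with $G^{0\nu}$ a calculable phase-space factor, so that any experimental constraint on the NME $M^{0\nu}$, such as the DCE cross sections discussed above, translates directly into the reach on the effective neutrino mass $\langle m_{\beta\beta} \rangle$.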
Speaker: Manuela Cavallaro (INFN-LNS) • 55 Single- and double-charge exchange excitations of the spin-isospin mode Double charge exchange excitations (DCX) induced by heavy-ion beams at intermediate energies [1],[2] attract a lot of interest in relation to new collective excitations such as double isobaric analog states (DIAS) and the double Gamow-Teller giant resonance (DGTR). This reaction is also closely linked with double beta decay matrix elements. In the 1980s, double charge exchange (DCX) reactions were performed using pion beams, i.e., $(\pi^+, \pi^-)$ and $(\pi^-, \pi^+)$ reactions. Through these experimental studies, the double isobaric analog states (DIAS) and the double giant dipole resonances (DGDR) were identified. However, the DGTR was not found in the pion double charge exchange spectra. A new research program based on a new DCX reaction ($^{12}$C, $^{12}$Be(0$^+_2$)) is planned at the RIKEN RIBF facility with high-intensity heavy-ion beams at the optimal energy of E$_{lab}$ = 250 MeV/u to excite the spin-isospin response [1]. A big advantage of this reaction is that it is a $(2p,2n)$-type DCX reaction, so one can use a neutron-rich target to excite DGT strength. In this talk, I will present a microscopic study of the DGTR within a framework of microscopic Hartree-Fock+BCS (or Bogolyubov) and QRPA. The QRPA results will also be examined with analytic formulas for the excitation energies of the DIAS and the DGT strength, using commutator relations for the double isospin $(t_-)^2$ and spin-isospin $(\sigma t_-)^2$ operators. I will give formulas to estimate the energies of the DIAS and DGT states with separable interactions [3]. [References] [1] M. Takaki, T. Uesaka et al., Proposal for experiment at RCNP, "Search for double Gamow Teller giant resonances in $^{48}$Ti via the heavy-ion double charge exchange reaction" (2015). [2] F. Cappuzzello et al., Journal of Physics: Conference Series 630, 012018 (2015). [3] H. Sagawa and T. Uesaka, Phys. Rev. C94, 064325 (2016) and H. Sagawa, to be published. Speaker: Prof. Hiroyuki Sagawa (RIKEN) • 56 Theory of Heavy Ion Single and Double Charge Exchange Reactions as Probes for Nuclear Beta Decay Heavy ion charge exchange reactions are of manifold interest for nuclear reaction and structure physics. In a recent paper [1] a fully microscopic theory of heavy ion single charge exchange (SCE) reactions was formulated. Here, a new theoretical approach is presented, emphasizing the role of single and double charge exchange reactions for probing nuclear response functions of the same type as encountered in single and double beta decay [2]. In particular, a special class of nuclear double charge exchange (DCE) reactions proceeding as a one-step reaction through a two-body process is shown to involve nuclear matrix elements of the same diagrammatic structure as in $0\nu 2\beta$ decay. These correlated Majorana-DCE (MDCE) reactions are distinct from second-order DCE reactions, which are best characterized as sequential double single charge exchange (DSCE) and thus carry a close resemblance to $2\nu 2\beta$ decay. The results suggest that ion-ion DCE reactions are ideal testing grounds for investigations of double-beta decay nuclear matrix elements, as proposed by the NUMEN project [3]. Nuclear response functions for $\tau_\pm$ excitations and applications to recent single and double charge exchange data measured by the NUMEN collaboration at LNS Catania are discussed.
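Schematically (a generic second-order expression added for orientation), the sequential DSCE amplitude has the familiar perturbative structure
$$T^{\rm DSCE}_{fi} \propto \sum_{\kappa} \frac{\langle f | T^{\rm SCE} | \kappa \rangle \, \langle \kappa | T^{\rm SCE} | i \rangle}{E_i - E_\kappa},$$
with the sum running over intermediate states $\kappa$ of the single-charge-exchange partition, mirroring the intermediate-state sum appearing in $2\nu 2\beta$ matrix elements.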
References: [1] H. Lenske, J. I. Bellone, M. Colonna and J. A. Lay, Phys. Rev. C 98 (2018) 044620 [2] H. Lenske, J. Phys. Conf. Ser. 1056 (2018) 012030. [3] F. Cappuzzello et al., Eur. Phys. J. A 54 (2018) 72 Speaker: Prof. Horst Lenske (JLU Giessen) • 10:40 AM Coffee break • Session XVI Convener: Horst Lenske (Univ. Giessen) • 57 Systematic Search for Tetrahedral and Octahedral Symmetries in Subatomic Physics: Follow-up of the First-Discovery Case In a recent reference [1], group-representation methods have been applied to the nuclear point-group symmetries and combined with realistic mean-field calculation results, together with newly designed methods of experimental analysis. The authors demonstrated that the experimental data on 152Sm existing in the literature are fully compatible with the extremely restrictive group-theory criteria for the simultaneous presence of tetrahedral and octahedral symmetries. We discuss the theoretical predictions related to the systematic presence of these symmetries throughout the periodic table. Interestingly enough, in some nuclei the presence of only one of the two symmetries is predicted, whereas in some others the theory predictions are compatible with the interpretation of a spontaneous octahedral-symmetry breaking by its tetrahedral partner (the tetrahedral symmetry group is a subgroup of the octahedral one). The corresponding theory predictions aim at optimising the proposals for new experiments, which would employ advanced mass-spectrometry methods, ref. [2], in view of the new experimental search criteria of ref. [1]. Since part of the predictions indicates that several exotic nuclei are concerned, we employ parameter-optimisation methods based on the so-called inverse problem theory, ref. [3]. The addressed field of symmetry research offers particularly promising prospects in the domain of exotic-nuclei studies. Indeed, as can be demonstrated, in the exact tetrahedral and/or octahedral symmetry limits the corresponding nuclei emit neither E2 nor E1 radiation, generating isomeric states with lifetimes much longer than those of the related ground states. Bibliography [1] J. Dudek et al., Phys. Rev. C 97, 021302(R) (2018) [2] T. Dickel and Ch. Scheidenberger, private communication [3] I. Dedes, PhD thesis, University of Strasbourg, https://tel.archives-ouvertes.fr/tel-01724641 Speaker: Prof. Jerzy Dudek (IPHC/CNRS Strasbourg, France and UMCS, Lublin, Poland) • 58 A 21st Century View of Nuclear Structure Exploiting exact and special symmetries to unmask simplicity within complexity, which remains the ‘holy grail’ of nuclear physics, will be considered within its historical context and as evolving through 21st Century ‘ab initio’ methods, including emerging results linked to the internal structure of nucleons. Some exemplar results for very light to medium-mass nuclei will be presented, and what these may portend for heavier systems, including species beyond the known lines of stability, will be proffered. Speaker: Jerry Draayer (http://www.phys.lsu.edu/newwebsite/people/draayer.html) • 59 Pairing rotation and pairing energy density functional Pairing correlations produce the odd-even staggering of binding energies. In addition, they introduce a spontaneous breaking of the gauge symmetry. The pairing rotation is the Nambu-Goldstone mode associated with the gauge-symmetry breaking in superconducting nuclei, and is measurable experimentally as a pairing rotational band.
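Schematically (standard background, in direct analogy with spatial rotation), the ground-state energies of even-even nuclei along an isotopic chain then form a rotational band in particle number,
$$E(N) \simeq E(N_0) + \lambda\,(N-N_0) + \frac{(N-N_0)^2}{2\mathcal{J}},$$
where $\lambda$ is the Fermi energy and $\mathcal{J}$ plays the role of the pairing-rotational moment of inertia probed by the binding-energy differences discussed below.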
In our previous work [1], it was shown that the binding-energy differences $\delta_{2n}$, $\delta_{2p}$, and $\delta V_{pn}$ are understood in terms of the moment of inertia of the pairing rotation. Conventionally, a simple form is assumed for the pairing energy density functional because of the lack of observables to constrain the coupling constants. I will show that the moment of inertia of the pairing rotation can be used to constrain the coupling constants of the pairing energy density functional, and discuss an extended form of the pairing energy density functional that includes terms with the kinetic pair density and the spatial derivative of the pair density [2]. I will also show a systematic calculation of the pairing rotational moments of inertia from stable to unstable nuclei employing various pairing functionals by performing the linear response calculation using the finite-amplitude method. [1] N. Hinohara and W. Nazarewicz, Phys. Rev. Lett. 116, 152502 (2016). [2] N. Hinohara, J. Phys. G 45, 024004 (2018). Speaker: Nobuo Hinohara (Center for Computational Sciences, University of Tsukuba) • 60 Skyrme functional with tensor terms from ab initio calculations in neutron-proton drops A new Skyrme functional devised to account well for standard nuclear properties as well as for spin and spin-isospin properties is presented. The main novelty of this work relies on the introduction of tensor terms guided by ab initio relativistic Brueckner-Hartree-Fock calculations of neutron-proton drops. The inclusion of the tensor terms does not decrease the accuracy in describing bulk properties of nuclei: experimental data for some selected spherical nuclei, such as binding energies, charge radii, and spin-orbit splittings, can be well fitted. The new functional is applied to the investigation of various collective excitations such as the Giant Monopole Resonance (GMR), the Isovector Giant Dipole Resonance (IVGDR), the Gamow-Teller Resonance (GTR), and the Spin-Dipole Resonance (SDR). The overall description with the new functional is satisfactory, and the tensor terms are shown to be important particularly for the improvement of the Spin-Dipole Resonance results. Speaker: Shihang Shen (Università degli Studi di Milano, INFN Sezione di Milano) • 1:15 PM Lunch • Session XIX (Parallel Session) Convener: Marco Mazzocco (PD) • 61 Heavy ion fusion reactions in stars Heavy-ion fusion reactions play important roles in a wide variety of stellar burning scenarios. $^{12}$C+$^{12}$C, $^{12}$C+$^{16}$O and $^{16}$O+$^{16}$O are the principal reactions during the advanced burning stages of massive stars. $^{12}$C+$^{12}$C also triggers superbursts and Type Ia supernovae. The heavy-ion fusion reactions of neutron-rich isotopes such as $^{24}$O are the major heating source in the crust of neutron stars. In this talk, I will review the challenges and the recent progress in the study of these heavy-ion fusion reactions at stellar energies. The outlook for the studies of astrophysical heavy-ion fusion reactions will also be presented. Speaker: Xiaodong Tang (Institute of Modern Physics, CAS) • 62 A possible nuclear solution to the 18F deficiency in novae Crucial information on nova nucleosynthesis can potentially be inferred from γ-ray signals powered by 18F decay [1]. Therefore, the reaction network producing and destroying this radioactive isotope has been extensively studied in recent years.
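Each link of such a network enters through its thermally averaged rate; for a Maxwell-Boltzmann plasma at temperature $T$ this takes the standard form (quoted for orientation)
$$\langle \sigma v \rangle = \sqrt{\frac{8}{\pi\mu}}\,(k_B T)^{-3/2} \int_0^\infty \sigma(E)\, E\, e^{-E/k_B T}\, dE,$$
with $\mu$ the reduced mass, so that a revision of the measured $\sigma(E)$ at Gamow-window energies propagates directly into the nucleosynthesis predictions.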
Among those reactions, the 18F(p,α)15O cross-section has been measured by means of several experiments, using direct and indirect methods. The presence of interfering resonances in the energy region of astrophysical interest has been reported by many authors, including in the recent applications of the Trojan Horse Method (THM). The THM is an indirect method using direct reactions to populate 19Ne states of astrophysical importance, with no suppression by the Coulomb and centrifugal barriers. In this work, we evaluate what changes are introduced by the THM data in the 18F(p,α)15O astrophysical factor recommended in a recent R-matrix analysis [2-4], accounting for existing direct and indirect measurements [5]. We will particularly focus on the role of the THM experiment, since it allowed the 0-1 MeV energy range to be covered with experimental data, with no need for extrapolation and with unprecedented accuracy (better than 20%). Then, the updated reaction rate is calculated and the implications of the new results for nova nucleosynthesis are discussed. In particular, while no change in the dynamical properties of the explosion is found, important differences in the chemical composition of the ejected matter are observed, with a net reduction of the mean 18F content by a factor of 2 and a corresponding increase in the detectability distance [4]. [1] J. Josè, Stellar Explosions: Hydrodynamics and Nucleosynthesis (London: Taylor and Francis, 2016) [2] R.G. Pizzone et al., Eur. Phys. J. A 52, 24 (2016) [3] S. Cherubini et al., Phys. Rev. C 92, 015805 (2015) [4] M. La Cognata et al., Astrophys. J. 846, 65 (2017) [5] D.W. Bardayan et al., Phys. Lett. B 751, 311 (2015) [6] R.H. Cyburt et al., Astrophys. J. Suppl. 189, 240 (2010) Speaker: Marco Salvatore La Cognata (LNS) • 63 Forbidden transitions in nuclear weak processes relevant to neutrino detection, nucleosynthesis and evolution of stars Important roles of Gamow-Teller transitions have been studied for electron-capture and $\beta$-decay processes in stellar environments [1, 2] as well as for $\nu$-nucleus reactions [3]. The importance of first-forbidden transitions in the $\beta$-decay rates of N=126 isotones has been shown, and the short half-lives obtained were used to study r-process nucleosynthesis in core-collapse supernova explosions (SNe) and binary neutron-star mergers [4]. Here, we focus more on the roles of forbidden transitions in nuclear weak processes. $\nu$-induced reactions on $^{16}$O, where spin-dipole transitions are dominant, are studied with new shell-model Hamiltonians [5]; SN-$\nu$ detection and the $\nu$-mass-hierarchy dependence of the cross sections [6], as well as the nucleosynthesis of light elements such as $^{11}$B and $^{11}$C in SNe [5], are discussed. Next, we study e-capture processes on $^{20}$Ne, which become important in the late stages of the evolution of O-Ne-Mg cores in stars. The transition to the ground state of $^{20}$F (2$^{+}$) is a second-forbidden transition and is important in certain ranges of densities and temperatures [7]. Electron-capture rates for this transition are evaluated with the multipole expansion method, and compared with a simple evaluation using a constant parametrized strength obtained from the beta-decay experiment [8]. The energy dependence of the second-forbidden transition strength is found to lead to a significant difference in the capture rates with respect to the simple parametrized method. [1] T. Suzuki, H. Toki, and K. Nomoto, ApJ. 817, 163 (2016) [2] K. Mori et al., ApJ. 833, 179 (2016) [3] T. Suzuki et al., Phys. Rev.
C 74, 0407 (2006) [4] T. Suzuki et al., ApJ. 859, 1 (2018) [5] T. Suzuki, S. Chiba, T. Yoshida, K. Takahashi, and H. Umeda, Phys. Rev. C 98, 034613 (2018) [6] K. Nakazato, T. Suzuki, and M. Sakuda, PTEP 2018, 123E02 (2018) [7] G. Martinez-Pinedo et al., Phys. Rev. C 89, 045806 (2014) [8] O. S. Kirsebom et al., arXiv:1805.19149 (2018) Speaker: Prof. Toshio Suzuki (Nihon University) • Session XVII (Parallel Session) Convener: Dr Daniele Mengoni • 64 Coexistence and evolution of shapes: mean-field-based interacting boson model Nuclear shapes and collective excitations have been among the most prominent and most studied themes of nuclear structure physics. Experiments using radioactive-ion beams allow thus-far unknown nuclei to be studied, and also necessitate timely, systematic, and reliable theoretical analyses. The interacting boson model (IBM) has been remarkably successful in the phenomenological description of low-lying states in nuclei. The microscopic foundation of the IBM, i.e., the derivation of the bosonic Hamiltonian from nucleonic degrees of freedom, has been extensively studied in terms of the shell model, but it has been somewhat limited to nearly spherical nuclei. In this presentation I will focus on a comprehensive method of deriving the Hamiltonian of the IBM from energy density functional theory (DFT). We begin with the DFT self-consistent mean-field calculation of the potential energy surface with the relevant shape degrees of freedom. The DFT energy surface is then mapped onto the expectation value of the IBM Hamiltonian in the boson condensate state. This procedure completely determines the strength parameters of the IBM Hamiltonian, which is used to compute the excitation spectra and electromagnetic transition rates. Since the DFT framework allows for a global mean-field description of many nuclear properties over the entire region of the nuclear chart, it has become possible to derive the IBM Hamiltonian for arbitrary nuclei. This has paved the way and opened unprecedented opportunities to study the spectroscopy of heavy exotic nuclei in an accurate, systematic, and computationally feasible way. Interesting applications of the mean-field-based IBM calculations include the shape phase transitions and coexistence in neutron-rich isotopes in the mass A~100 region, the possible intruder states in even-even Cd isotopes, and the spectroscopy of heavy odd-A and odd-odd nuclei, in particular, the influence of odd particles on the nature of shape phase transitions. Speaker: Dr Kosuke Nomura (JAEA) • 65 Shape coexistence in 94Zr studied via Coulomb Excitation The Zr isotopes (Z=40) belong to a mass region where shape coexistence has been proposed. These isotopes exhibit a variety of shapes, going from deformation near mid-open-shell (80Zr), through sphericity near the closed neutron shell (90Zr) and sub-shell (96Zr), and then to a sudden reappearance of deformation at 100Zr. Such a variety of behavior is unprecedented anywhere on the nuclide chart. Shape coexistence has also been suggested by several experimental works; however, direct information on the shapes of ground and excited states is still lacking for these isotopes, since multi-step Coulomb-excitation measurements have not yet been performed on them. 94Zr is particularly interesting because it is thought to be a strong candidate for displaying type-II shell evolution, as recently proposed for the Zr isotopes around N = 56 by state-of-the-art Monte Carlo Shell Model calculations.
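For orientation, a standard element of such analyses (quoted without the angular-momentum coupling coefficients): multi-step Coulomb excitation determines sets of E2 matrix elements whose rotationally invariant combinations, e.g.
$$\langle 0^+_1 | [\hat{Q} \times \hat{Q}]^{(0)} | 0^+_1 \rangle \propto \sum_i \left| \langle 0^+_1 \| \hat{Q} \| 2^+_i \rangle \right|^2,$$
provide model-independent measures of the quadrupole deformation of individual states (the Kumar-Cline sum rules); this is the kind of shape information extracted in the GOSIA analysis described below.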
As such, a dedicated experiment to study collectivity and configuration coexistence in 94Zr by means of low-energy Coulomb excitation was performed at the INFN Legnaro National Laboratory. The GALILEO-SPIDER setup, which in this instance was further augmented with 6 LaBr3:Ce scintillators, has been used. In this talk, I will present the results of the experiment, discussing the information on the shape obtained from the analysis with the GOSIA code. A preliminary comparison with Monte Carlo Shell Model predictions will also be shown. Speaker: Mrs Naomi Marchini (Istituto Nazionale di Fisica Nucleare) • 66 Shape evolution in exotic neutron-rich nuclei around mass 100 The shape of a nucleus is one of its fundamental properties. The nuclei in the neutron-rich region around mass 100 are well known to exhibit rapid shape changes. The simplest estimate of nuclear deformation in even-even nuclei can be obtained from the energy of the $2_1^+$ state. For the Sr (Z = 38) and Zr (Z = 40) isotopes this energy is observed to decrease dramatically at N = 60, while its evolution is much more gradual in the Mo nuclei (Z = 42) [1]. Precise lifetime measurements provide a key ingredient in the systematic study of the evolution of nuclear deformation and the degree of collectivity in this region. Neutron-rich nuclei in the mass region of A = 100-120 were populated through the fusion-fission reaction of a 238U beam at 6.2 MeV/u on a 9Be target. The compound nucleus 247Cm was produced at an excitation energy of ~45 MeV before undergoing fission. The setup used for this study comprised the high-resolution mass spectrometer VAMOS [2] in order to identify the nuclei in Z and A, the Advanced γ-ray Tracking Array AGATA [3] of 35 germanium detectors to perform γ-ray spectroscopy, as well as a plunger mechanism to measure lifetimes down to a few ps using the Recoil Distance Doppler Shift method (RDDS) [4]. In addition, the target was surrounded by 24 Lanthanum Bromide (LaBr3) detectors for a fast-timing measurement of lifetimes longer than 100 ps. In this contribution, we will report on new lifetime results for short-lived states in neutron-rich A~100 nuclei, with an emphasis on the Zr and Mo chains. We will discuss the experimental techniques used to evaluate the lifetimes as well as their interpretation in terms of state-of-the-art nuclear structure models. [1] S. Ansari et al. Phys. Rev. C 96, 054323 [2] M. Rejmund et al. Nuclear Instruments and Methods in Physics Research A 646 (2011) 184–191 [3] S. Akkoyun et al. Nuclear Instruments and Methods in Physics Research A 668 (2012) 26–58 [4] A. Dewald et al. Progress in Particle and Nuclear Physics 67, 3 Speaker: Saba Ansari (CEA Saclay) • 67 Shape transitions between and within Zr isotopes The Zirconium isotopes across the N=56,58 neutron sub-shell closures have been of special interest for years, sparked by the near doubly-magic features of $^{96}$Zr and the subsequent rapid onset of collectivity with a deformed ground-state structure already in $^{100}$Zr. Recent state-of-the-art model approaches [1] not only correctly described this shape phase transition in the Zr isotopic chain, but also the coexistence of non-collective structures and pronounced collectivity, especially in $^{96,98}$Zr. This transition between different structural realizations within an isotope, first established in $^{96}$Zr [2], was attributed to the reordering of the effective valence spaces.
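Level lifetimes and E2 strengths are two sides of the same coin; for a state decaying by a pure E2 transition (standard electromagnetic-decay relation, with the numerical constant rounded),
$$\frac{1}{\tau} = \lambda(E2) \approx 1.22\times 10^{9} \left[ \frac{E_\gamma}{\rm MeV} \right]^5 \left[ \frac{B(E2\downarrow)}{e^2\,{\rm fm}^4} \right]\; {\rm s}^{-1},$$
so the lifetime bounds discussed below translate directly into bounds on the collectivity of the $2^+_1$ state.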
The isotope $^{98}$Zr is located at the transition from spherical to deformed ground-state structures. However, information on the collectivity of this isotope in terms of E2 observables has been notoriously difficult to obtain: it is unstable, and the lifetime of its first excited 2$^+$ state turned out to be out of range for fast-timing techniques in decay spectroscopy, which gave only an upper bound. In this work a new lower bound on this lifetime will be presented, obtained from Coulomb excitation of a radioactive $^{98}$Zr beam [3]. These data have recently been complemented by a recoil-distance lifetime measurement following a two-neutron transfer reaction. The new data will be brought into context with the discussion of the shape-phase transition and the type-II shell evolution in $^{96,98}$Zr. Supported by the German BMBF under Grant No. 05P15RDFN1 and by the DFG within SFB 1245. [1] T. Togashi et al., Phys. Rev. Lett. 117, 172502 (2016). [2] C. Kremer et al., Phys. Rev. Lett. 117, 172503 (2016). [3] W. Witt et al., Phys. Rev. C 98, 041302(R) (2018). • Session XXI (Parallel Session) Convener: Matko Milin (Physics Department, Faculty of Science, University of Zagreb, Zagreb, Croatia) • 68 Fusion Hindrance and Pauli Blocking in 58Ni +64Ni We report here on the measurement of deep sub-barrier fusion cross sections for $^{58}$Ni +$^{64}$Ni. In this system the influence of positive Q-value transfer channels on sub-barrier fusion was evidenced in a famous experiment by Beckerman et al. [1]. Subsequent experiments for the two symmetric systems $^{58}$Ni +$^{58}$Ni and $^{64}$Ni +$^{64}$Ni showed that fusion hindrance is clearly present in both cases. The lowest measured cross section for $^{58}$Ni +$^{64}$Ni, however, was relatively large ($\sim$0.1 mb), so that no hindrance was observed. The present measurements have recently been performed at the XTU Tandem accelerator of LNL, and the excitation function has been extended by two orders of magnitude downward. The case of $^{58}$Ni +$^{64}$Ni is very similar to $^{40}$Ca+$^{96}$Zr [2] because of the flat shape of the two sub-barrier fusion excitation functions, originating from the couplings to several Q$>$0 neutron pick-up channels. $^{40}$Ca+$^{96}$Zr was studied down to very small cross sections (2$\mu$b) and fusion hindrance does not show up, suggesting [3] that this unusual behavior is due to the Q$>$0 transfer couplings, since the valence nucleons can flow freely from one nucleus to the other without being hindered by Pauli blocking [4]. Our experiment indicates that the flat trend of the sub-barrier cross sections for $^{58}$Ni +$^{64}$Ni continues down to the level of $\sim$1$\mu$b and fusion hindrance is not observed. This trend at far sub-barrier energies reinforces the suggestion that the availability of several states following transfer with Q$>$0 effectively counterbalances the effect of Pauli repulsion that, in general, is predicted to reduce the tunneling probability inside the Coulomb barrier. [1] M. Beckerman et al. Phys. Rev. Lett. 45, 1472 (1980) [2] A.M. Stefanini et al., Phys. Lett. B728, 639 (2014) [3] H. Esbensen et al., Phys. Rev. C 89, 044616 (2014) [4] C. Simenel et al., Phys. Rev.
C 95, 031601(R) (2017) Speaker: Alberto Stefanini (LNL) • 69 Fusion probability of massive nuclei in reactions leading to heavy composite nuclear systems The interaction of massive nuclei shows a considerable reduction in fusion cross sections at the Coulomb barrier, according to a comparison of experimental cross sections with the calculated ones obtained using a barrier passing (BP) model. Lowered fusion cross sections are accompanied by a high probability of deep-inelastic and quasi-fission (QF) processes arising on the way to fusion. The detection of evaporation residues (ERs) resulting from compound nucleus (CN) formation is an unambiguous sign of complete fusion, whereas fission events do not pinpoint CN formation, since CN fission strongly interferes with QF events. Theoretical models developed to describe heavy ER cross sections treat them as the product σ_ER = σ_c P_CN W_sv, where σ_c is the capture cross section relating to the formation of a composite nuclear system, P_CN the CN production probability, and W_sv the survivability against fission during the CN decay. Most of the models reproduce the experimental σ_ER quite well, but they give P_CN values that differ from each other by several orders of magnitude. Such a difference implies a corresponding difference in W_sv. Available data on the excitation functions for fission and ERs obtained in projectile-target combinations with very different mass numbers (very asymmetric ones) can be well described in the framework of the BP and statistical model (SM) approximations. These data allow us to choose SM parameters implying that P_CN=1 and σ_c=σ_bp. Thus, fitting the calculated excitation functions to the measured ones, with scaling of the macroscopic fission barriers, one can obtain W_sv. Fusion suppression corresponding to P_CN<1 appears in less asymmetric combinations; it can be derived using the W_sv obtained for very asymmetric combinations leading to the same or a nearby CN, together with σ_c obtained from experiments or from BP model calculations. The work attempts to systematize the data on P_CN derived as described above for projectile-target combinations leading to ERs from Pb to the heaviest nuclei produced in (HI,xn) reactions. Speaker: Dr Roman Sagaidak (Flerov Laboratory of Nuclear Reactions, Joint Institute for Nuclear Research) • 70 Reactions with Exotic Nuclei at Near- and Sub-barrier Energies Reactions with exotic nuclei at near- and sub-barrier energies have become a hot topic of current interest in nuclear physics. In this talk, I would like to present recent results obtained in the nuclear reaction group of CIAE. The first topic is the optical model potentials (OMPs) of exotic nuclear systems. Due to the limitations in intensity and quality of RIBs, it is difficult to extract the OMPs of exotic nuclear systems from elastic scattering. For this reason, a transfer reaction method was proposed and applied to extract the OMPs of the 6He+12C, 64Zn, 209Bi systems via the 11B, 63Cu, 208Pb(7Li,6He) reactions [1]. The threshold anomaly behavior has been observed in the 6He+209Bi system for the first time [2]. The results show that the dispersion relation is not applicable to these exotic nuclear systems. Possible reasons are discussed, but further study is strongly required to uncover the underlying physics. The second topic is the reaction mechanisms of exotic nuclear systems. An important task is to understand the breakup effects as well as their mechanism. To this end, a complete-kinematics measurement method was developed and applied in the 17F+58Ni, 89Y [3], 208Pb and 7Be+208Pb experiments.
The processes of elastic scattering, breakup/transfer, and fusion evaporation have been identified successfully. Preliminary results for 17F+58Ni show that elastic breakup is dominant; moreover, fusion is suppressed above the barrier while enhanced below the barrier. [1] L. Yang, C. J. Lin, H. M. Jia et al., Phys. Rev. C 96, 044615 (2017); Phys. Rev. C 95, 034616 (2017); Phys. Rev. C 89, 044615 (2014); Phys. Rev. C 87, 047601 (2013). [2] L. Yang, C. J. Lin, H. M. Jia et al, Phys. Rev. Lett. 119, 042503 (2017). [3] G. L. Zhang, G. X. Zhang, C. J. Lin et al., Phys. Rev. C 97, 044618 (2018). Speaker: Prof. Chengjian Lin (China Institute of Atomic Energy) • 71 Study of fusion mechanisms induced by weakly bound nuclei The study of fusion reactions induced by weakly bound nuclei at sub-barrier energies is of large interest, especially as regards the breakup and transfer effects in weakly bound nuclei. Due to the low breakup threshold, the fusion reactions induced by weakly bound nuclei are complicated processes including complete fusion and incomplete fusion. Also, transfer processes, including one-neutron stripping followed by the breakup of the projectile, can occur. In all the above reaction channels, the same products can be produced by different mechanisms. So it is fundamental to experimentally discriminate the different reaction channels in order to explore the various reaction mechanisms. In this report we will introduce the study of the suppression factor of complete fusion and show how gamma rays in coincidence with light charged particles can be used to discriminate the different reaction channels. On the basis of the GALILEO array, a high-efficiency gamma-ray spectrometer coupled with the Si-ball EUCLIDES for the detection of charged particles at the Legnaro National Laboratory (LNL) in Italy, the experiments 6Li+89Y and 6Li+209Bi have been performed. It is shown that the different reaction mechanisms can be clearly studied. This facility is well suited to explore the fusion reaction mechanisms induced by weakly bound nuclei. Speaker: Dr Gaolong Zhang (Beihang University) • 4:30 PM Coffee break • Session XVIII (Parallel Session) Convener: Filip Kondev (Argonne National Laboratory, USA) • 72 Neutron Skin Effects in Mirror Energy Differences: The Case of 23Mg-23Na Energy differences between analogue states in the T=1/2 $^{23}$Mg-$^{23}$Na mirror nuclei have been measured along the rotational yrast bands with the EXOGAM + Neutron Wall + DIAMANT setup at GANIL. The nuclei of interest have been populated via the $^{12}$C+$^{16}$O fusion evaporation reaction. This allows us to search for effects arising from isospin-symmetry breaking (ISB) interactions and/or shape changes. Data are interpreted in the shell model framework following the method successfully applied to nuclei in the $f_{7/2}$ shell. It is shown that the introduction of a schematic ISB interaction of the same type as that used in the $f_{7/2}$ shell is needed to reproduce the data. An alternative novel description, applied here for the first time, relies on the use of an effective interaction deduced from a realistic charge-dependent chiral nucleon-nucleon potential. This analysis provides two important results: (i) the mirror energy differences give direct insight into the nuclear skin; (ii) the skin changes along the rotational bands are strongly correlated with the difference between the neutron and proton occupations of the $s_{1/2}$ “halo” orbit.
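For reference, the standard definition used in such studies: the mirror energy differences compare excitation energies of analogue states in the two members of the $T=1/2$ doublet,
$$\mathrm{MED}(J) = E^{*}_{J}\left(T_z = -\tfrac{1}{2}\right) - E^{*}_{J}\left(T_z = +\tfrac{1}{2}\right),$$
which would vanish for isospin-conserving interactions and identical radial wave functions; the measured spin dependence of MED(J) is what carries the ISB and neutron-skin information discussed above.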
Speaker: Francesco Recchia (University and INFN Padova) • 73 Discovery of collective states in the heavy $^{208}$Pb nucleus by complete spectroscopy Complete spectroscopy for a certain nucleus means that, up to a given excitation energy, the spin and parity of each state are determined by experiment and the composition is described by some theoretical model. Among heavy nuclei, the goal of complete spectroscopy is approached only for $^{208}$Pb. Knowledge of nuclear states in $^{208}$Pb has been accumulating since 1899. Since the 1990s the sensitivity of the Munich Q3D magnetic spectrograph [1] has improved, and several hundred levels in $^{208}$Pb up to 8 MeV were found. The shell model describes the majority of nuclear states in $^{208}$Pb with great success [2]. From the very beginning, a few low-lying states were recognized to need other model descriptions. The properties of the 3- yrast state were understood to be peculiar already in the 1950s. Its coupling to 1p-1h configurations revealed a new class of nuclear excitations [3,4]. The description of collective states as tetrahedral rotations and vibrations, invented 80 years ago, was verified by discovering the 2- member of the predicted $2^\pm$ parity doublet in $^{208}$Pb at Ex = 4.1 MeV [5,6]. In 2016 a major step towards complete spectroscopy was reached with the identification of 151 states below 6.2 MeV with spin, parity, and major composition [3]. Now, below 6.2 MeV nearly 160 states are observed, including 5 states predicted but not yet clearly identified [3-6]. The shell model predicts, however, only about 125 states. Sixteen states are described by coupling 1p-1h configurations to the 3- yrast state, four states as pairing vibrations, nine states as tetrahedral rotations and vibrations, and six states await a model description. [1] G. Dollinger and T. Faestermann. Nucl. Phys. News 28:5 (2018) [2] R. Broda et al. PRC 95:064308 (2017) [3] A. Heusler et al. PRC 93:054321 (2016) [4] A. Heusler et al. PRC submitted [5] A. Heusler et al. EPJ A 53:215 (2017) [6] A. Heusler et al. PRC(R) submitted Speaker: Dr Andreas Heusler (Gustav-Kirchhoff-Str. 7/1 69120 Heidelberg, Germany) • Session XX (Parallel Session) Convener: Tommaso Marchi (INFN - LNL) • 74 Proton-neutron pairing and alpha-like quartet correlations in nuclei The common treatment of proton-neutron (pn) pairing in N = Z nuclei relies on Cooper pairs and HFB-type models. However, in these nuclei the pn interaction generates quartet correlations of alpha type which compete with the Cooper pairs. In fact, for any T=0 and T=1 pairing interactions the ground state of N = Z systems is accurately described not by Cooper pairs but in terms of collective quartets [1-8]. Alpha-like quartets are relevant degrees of freedom also for treating more general two-body interactions than pairing [9-11]. From this perspective, I will discuss how quartetting affects the competition between the T=0 and T=1 pn pairing correlations in nuclei, as well as the contribution of pairing to the Wigner energy. 1. N. Sandulescu, D. Negrea, J. Dukelsky, C. W. Johnson, PRC 85, 061303(R) (2012) 2. N. Sandulescu, D. Negrea, C. W. Johnson, PRC 86, 041302(R) (2012) 3. D. Negrea and N. Sandulescu, PRC 90, 024322 (2014) 4. N. Sandulescu, D. Negrea, D. Gambacurta, Phys. Lett. B 751 348 (2015) 5. M. Sambataro and N. Sandulescu, PRC 88, 061303(R) (2013), PRC 93, 054320 (2016) 6. M. Sambataro, N. Sandulescu, C. W. Johnson, Phys. Lett. B 740, 137 (2015) 7. D. Negrea, N. Sandulescu, D. Gambacurta, Prog. Theor. Exp. Phys.
073D05 (2017) 8. D. Negrea, P. Buganu, D. Gambacurta, N. Sandulescu, PRC 98, 064319 (2018) 9. M. Sambataro and N. Sandulescu, Phys. Rev. Lett. 115, 112501 (2015) 10. M. Sambataro and N. Sandulescu, Eur. Phys. J. A 53, 47 (2017) 11. M. Sambataro and N. Sandulescu, Phys. Lett. B 786, 11 (2018) Speaker: Dr Nicolae Sandulescu (National Institute of Physics and Nuclear Engineering, Bucharest, Romania) • 75 Quartet structure of self-conjugate nuclei The treatment of proton-neutron pairing in self-conjugate nuclei in terms of conventional BCS-type approaches has proven to be problematic. We have shown [1-4] that this form of pairing can be very well accounted for in a formalism of $J=0,T=0$ quartets. We have extended the quartet formalism to the treatment of realistic interactions both in the case of even-even [5,6] and odd-odd [7] self-conjugate nuclei. The role of quartets other than $J=0,T=0$ in the description of these systems has been investigated and will be illustrated. The difficulties associated with a microscopic treatment of $N=Z$ nuclei in a formalism of quartets grow rapidly with increasing number of active nucleons. To make this formalism accessible also to large systems, we have recently explored an approach where elementary bosons replace quartets with $J=0,T=0$ and $J=2,T=0$. This boson architecture, which is clearly analogous to that of the Interacting Boson Model in its simplest formulation (IBM-1), has been employed for an analysis of $^{28}$Si [8]. The boson Hamiltonian has been derived with the help of a mapping procedure and the resulting spectrum and $E2$ scheme have been compared with the experimental data. As a peculiarity, the potential energy surface of this nucleus turns out to be that expected at the critical point of the U(5)-$\overline{\rm SU(3)}$ phase transition of the IBM structural diagram. [1] N. Sandulescu, D. Negrea, J. Dukelsky, C. W. Johnson, Phys. Rev. C 85 (2012) 061303(R). [2] M. Sambataro and N. Sandulescu, Phys. Rev. C 88 (2013) 061303(R). [3] M. Sambataro, N. Sandulescu, and C.W. Johnson, Phys. Lett. B 740 (2015) 137. [4] M. Sambataro and N. Sandulescu, Phys. Rev. C 93 (2016) 054320. [5] M. Sambataro and N. Sandulescu, Phys. Rev. Lett. 115 (2015) 112501. [6] M. Sambataro and N. Sandulescu, Eur. Phys. J. A 53 (2017) 47. [7] M. Sambataro and N. Sandulescu, Phys. Lett. B 763 (2016) 151. [8] M. Sambataro and N. Sandulescu, Phys. Lett. B 786 (2018) 11. Speaker: Michelangelo Sambataro (INFN - Sezione di Catania) • 76 Inverse thick target method in order to investigate alpha-clustering in 212Po In order to investigate the 212Po alpha-structure, the inverse-kinematics thick-target method has been used to study elastic and inelastic scattering of 208Pb on a 4He target. A 208Pb beam produced by the Superconducting Cyclotron (CS), INFN-LNS, at the incident energy of 10 MeV/u was sent onto a 4He gas cell. The gas cell acted as target and as beam degrader, completely stopping the beam before it reached the detection system placed at 0° with respect to the beam direction. The recoiling alpha particles were measured at forward angles in the center-of-mass system. The 208Pb stopping power in 4He was measured to correctly determine the excitation energy Ex from the detected alpha energy. In this talk, the experimental technique will be described and the preliminary data analysis of the stopping power and the elastic cross section will be shown. 
Speaker: Maria Grazia Pellegriti (CT) • Session XXII (Parallel Session) Convener: Kathrin Wimmer (The University of Tokyo) • 77 How do we infer shell effects at high excitation energies? Deviations from a smooth trend in the separation energy extracted from atomic masses are typically associated with a sudden onset of deformation or the rise of a magic number. This information is limited to ground and isomeric states. A new way to investigate shell effects at high excitation energies is presented here, inferred from empirical drops in nuclear polarizabilities. Deviations from the effect of giant dipole resonances reveal the presence of shell effects in semi-magic nuclei with neutron magic numbers N = 50, 82 and 126. Similar drops of polarizability in the quasi-continuum of nuclei with, or close to, magic numbers N = 28, 50 and 82, could reflect the continuing influence of shell closures up to the nucleon separation energy. These findings strongly support recent large-scale shell-model calculations in the quasi-continuum region, which describe the origin of the low-energy enhancement of the photon strength function as induced paramagnetism, and assert the generalized Brink-Axel hypothesis as more universal than originally expected. Speaker: Prof. Nico Orce (University of the Western Cape) • 78 Structure of neutron-rich Ge and Se isotopes Indication of triaxiality in $^{78}$Ge has recently been presented from a low-energy sequence of strictly $\Delta J=1$ transitions [1]. Neutron-rich Ge and Se isotopes were studied using the Gammasphere Ge-detector array at ANL. Beams of $^{76}$Ge and $^{82}$Se were incident upon thick $^{238}$U and $^{208}$Pb targets in deep-inelastic reactions. New data in $^{80,82}$Se will be presented to clarify $\beta$-decay studies [2,3], and angular-correlation measurements are used to strengthen spin and parity assignments in some cases. These observations can provide insights into the single-particle and collective properties of these neutron-rich nuclei. NuShellX calculations for the N = 46 and N = 48 Ge and Se isotones will be shown to test the $p_{3/2}f_{5/2}p_{1/2}g_{9/2}$ proton and neutron subspace [4]. Additionally, new insight into the structure of isotonic nuclei will be discussed. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract Nos. DE-AC02-06CH11357 (ANL) and DE-AC02-98CH10886 (BNL), and grant No. DE-FG02-94ER40834 (Maryland). This research used resources of ANL’s ATLAS facility, which is a DOE Office of Science User Facility. [1] A. M. Forney, W. B. Walters, C. J. Chiara, R. V. F. Janssens, A. D. Ayangeakaa, J. Sethi, J. Harker, M. Alcorta, M. P. Carpenter, G. Gürdal, C. R. Hoffman, B. P. Kay, F. G. Kondev, T. Lauritsen, C. J. Lister, E. A. McCutchan, A. M. Rogers, D. Seweryniak, I. Stefanescu, and S. Zhu, submitted (2018). [2] J.V. Kratz, H. Franz, N. Kaffrell, G. Hermann, Nucl. Phys. A 250, 13-37 (1975). [3] H. Gausemel, K. A. Mezilev, B. Fogelberg, P. Hoff, H. Mach, and E. Ramström, Phys. Rev. C 70, 037301 (2004). [4] B.A. Brown and W.D.M. Rae, Nucl. Data Sheets 120, Supplement C, 115-118 (2014). Speaker: Dr Anne Marie Forney (University of Maryland, College Park) • 79 Spectroscopy of low-lying excited states of 50Ar An interesting aspect of nuclear structure is the shell evolution for isotopes with extreme isospin values. Experimental evidence shows the presence of a sub-shell closure at N = 32 for 52Ca, 54Ti and 56Cr. 
Mass measurements on 52,53K suggest that this sub-shell closure is maintained below Z=20. For the case of 48Ar, the low-lying 2+, 4+ and second 2+ states, as well as the B(E2)↑ value, have been accessed using different techniques, and a triaxial character has been suggested. A recent γ-ray spectroscopy measurement of 50Ar reported an energy of the first 2+ state of 1178(18) keV. The satisfactory reproduction of this experimental result by shell-model calculations indicated the conservation of the shell gap at N = 32 for Ar isotopes. In the same measurement, a tentative transition with energy of 1582(38) keV was suggested to correspond to the 4+ → 2+ transition. However, the limited statistics did not allow for a coincidence analysis to obtain any definitive conclusion on the existence of the peak, nor on spin and parity assignments. To further investigate the nature of the N=32 shell gap below Ca, the analysis of different reaction channels populating 50Ar is of high importance. We will report on the preliminary results of the measurements of proton and neutron knockout reactions as well as inelastic scattering populating 50Ar performed at RIKEN within the third SEASTAR campaign. Isotopes of interest were produced by the fragmentation of a 70Zn beam at 345 MeV/u on a Be target and identified with BigRIPS. Selected isotopes were focused onto the liquid-hydrogen target of the MINOS device and gamma rays from the reactions were detected with the DALI2+ array. Outgoing particles were identified using the SAMURAI magnet and related detectors. Preliminary results on the spectroscopy of low-lying levels of 50Ar will be presented and the cross sections to populate the different states from the different reaction channels will be discussed. Speaker: Martha Liliana Cortés (INFN-LNL) • 80 Interplay between quadrupole and pairing correlations close to $^{100}$Sn from lifetime measurements The tin nuclei, representing the longest isotopic chain between two experimentally accessible doubly-magic nuclei, provide a unique opportunity for systematic studies of the evolution of basic nuclear properties when going from very neutron-deficient to very neutron-rich species. A little over a decade ago, they were considered a paradigm of pairing dominance: the excitation energies of the first $2^+$ and $4^+$ states are rather constant along the Sn isotopic chain, and the $B(E2;2^+\to0^+)$ values for isotopes with A>116 present the parabolic behavior expected for the seniority scheme. On the other hand, the $B(E2;2^+\to0^+)$ values measured for neutron-deficient Sn isotopes remain constant with N. Unfortunately, the lack of information on $B(E2;4^+\to2^+)$ strengths in light Sn nuclei, combined with large experimental uncertainties on the $B(E2;2^+\to0^+)$ values, prevents firm conclusions on the shell evolution in the vicinity of the heaviest proton-bound N=Z doubly-magic nucleus $^{100}$Sn. To remedy this, the first lifetime measurement in neutron-deficient tin isotopes was carried out using the Recoil Distance Doppler-Shift method, providing a complementary solution to the previous Coulomb-excitation studies. Thanks to the unusual application of a multi-nucleon transfer reaction, together with the unprecedented capabilities of the powerful AGATA and VAMOS++ spectrometers, the lifetimes of the $2^+$ and $4^+$ states in $^{106,108}$Sn have been directly measured for the very first time. Large-scale shell-model calculations were performed to account for the new experimental results. 
In particular, the comparison of the $B(E2;4^+\to2^+)$ values with the theoretical predictions sheds light on the interplay between quadrupole and pairing forces in the vicinity of $^{100}$Sn. An interpretation has also been proposed for the anomalous $B(E2;4^+\to2^+)/B(E2;2^+\to0^+)$ ratio observed not only for the Sn isotopes, but also in other regions of the nuclear chart. Speaker: Dr Marco Siciliano (Irfu/CEA, Université de Paris-Saclay, France) • 8:00 PM Conference Dinner • Friday, May 17 • Session XXIII Convener: Claes Fahlander (Department of Physics, Lund University) • 81 Revealing microscopic origins of shape coexistence in the Ni isotopic chain Since the 1980s, various mean-field theoretical approaches have indicated neutron-rich nickel isotopes among the best candidates for the appearance of the shape-coexistence phenomenon, including the possibility of finding its most extreme manifestation, i.e. shape isomerism. Shape isomerism arises from the existence of a secondary deformed minimum at large deformation in the nuclear potential energy surface, separated from the primary energy minimum by a high barrier, which results in a significantly hindered gamma transition between the minima. In an experiment performed in Bucharest [1], we have identified a shape-isomer-like structure in the 66Ni nucleus. This is the lightest atomic nucleus exhibiting a photon decay hindered - solely - by a nuclear shape change. Such a rare process, at spin zero, was clearly observed only in actinide nuclei in the 1970s. 66Ni was populated employing a two-neutron transfer reaction induced by an 18O beam on a 64Ni target, at sub-Coulomb-barrier energy. The experimental findings have been well reproduced by Monte Carlo shell-model calculations [1]. Encouraged by the results on 66Ni, we have started a comprehensive gamma-spectroscopy investigation of 62Ni, 64Ni and 65Ni at IFIN-HH (Bucharest), ILL (Grenoble) and IPN Orsay, using different reaction mechanisms to pin down the wave-function composition of selected excited states. We aim at shedding light on the origin of deformation in neutron-rich Ni isotopes, and at possibly locating other examples of shape isomerism in this region. Preliminary results will be presented and compared with Monte Carlo shell-model predictions. Perspectives in the search for shape isomerism in other mass regions will also be discussed, following recent calculations pointing to Pt, Hg and Pb nuclei (with N≈110) and Pd, Cd and Sn (with N≈66) as the best candidates. Such systems could be investigated with radioactive beams from HIE-ISOLDE and SPES. [1] S. Leoni et al., Phys. Rev. Lett. 118, 162502 (2017). Speaker: Silvia Leoni (MI) • 82 Nuclear structure physics with radioactive-ion beams at HIE-ISOLDE HIE-ISOLDE [1] at CERN reached the end of phase 2 in 2018, operating with four cryomodules for the first time and reaching the original design energy of 10 MeV/$u$ for radioactive ion beams. Experiments have been focused on two experimental setups so far, with the Miniball HPGe array [2] taking most of the beam time and the Scattering Experiments Chamber (SEC) concentrating on reactions with light nuclei. The ISOLDE Solenoidal Spectrometer (ISS) [3] was newly commissioned in 2018 for few-nucleon transfer reactions in the magnetic field of a former MRI magnet. In this talk I will present the HIE-ISOLDE project and show the preliminary status of experiments from three years of operation. 
Some of the selected physics cases will be, amongst others, Coulomb excitation at both ends of the Sn isotopic chain and the study of octupole collectivity in both the lanthanides and the actinides. Finally, preliminary results from the first two experiments at ISS will also be discussed, along with plans for the future of the device. References: [1] M. Lindroos, P. Butler, M. Huyse, and K. Riisager, Nucl. Instrum. Meth. B 266, 4687 (2008). [2] N. Warr et al., Eur. Phys. J. A 49, 40 (2013). [3] S. J. Freeman et al., CERN-INTC 031, 099 (2010). Speaker: L. P. Gaffney (ISOLDE, CERN, Geneva, Switzerland) • 83 First high-precision measurement of the low-lying isovector M1 strength in Li-6 at the photon point Since neither hydrogen nor helium nuclei have a particle-bound excited state, Li-6 is the lightest nuclide in the entire nuclear chart for which an excited state decays predominantly by gamma-ray emission. The particle decay of its 0+ state with isospin T=1 at 3563 keV excitation energy is parity-forbidden, and it decays exclusively by a strong isovector M1 transition to the 1+ ground state with isospin T=0. This decay transition represents the M1 analogue to the GT decay of the ground state of He-6, which has recently been measured with spectacular precision [1]. Although the lifetime of the 0+ state of Li-6 has been measured many times since the 1950s, there is a disturbing 3-sigma deviation between the error-weighted mean value of the world data and the measurement which claimed the highest precision. Moreover, the latter [2] was not a measurement at the photon point but an electron-scattering experiment constraining the B(M1, 0+_3563 -> 1+ gs) value from an, in principle, model-dependent extrapolation of electron-scattering data at finite momentum transfers to the photon point. We have re-measured [3] the electromagnetic decay width of the 0+ state of Li-6 with a statistical uncertainty of only 1% using the technique of relative nuclear self-absorption. The data and the technique will be presented and discussed. [1] A. Knecht et al., Phys. Rev. Lett. 108, 122502 (2012). [2] J. Bergstrom et al., Nucl. Phys. A 251, 401 (1975). [3] U. Gayer et al., in preparation. • 84 Quest of octupole deformation in very light Te isotopes Excited states of the $^{31}$S and $^{31}$P mirror nuclei were recently studied using the fusion-evaporation reactions $^{24}$Mg($^{12}$C, 1$\alpha$1n) and $^{24}$Mg($^{12}$C, 1$\alpha$1p), respectively. The 45 MeV beam was delivered by the XTU-Tandem accelerator at LNL Legnaro. The detection system was composed of the GALILEO $\gamma$-ray spectrometer coupled to the 4$\pi$ Si ball EUCLIDES and to the Neutron Wall. Previous studies of A=31 mirror nuclei showed an oscillating behaviour of the Mirror Energy Difference (MED) values for the negative-parity sequence as a function of spin. These oscillations may be explained by including in the wave function excitations to the fp shell, thus taking into account the electromagnetic spin-orbit effect. The description of the MED in sd-shell nuclei for negative-parity and high-spin states involving the electromagnetic spin-orbit term is up to now only qualitative (because it involves interactions in two main shells). Additionally, shell-model calculations performed using the USD residual interaction and the Monte Carlo shell model with the SDPF-M interaction reproduce well the excitation energies and the reduced transition probabilities for positive-parity states up to spin $\frac{13}{2}^{+}$. 
An interesting feature revealed by these calculations is that the yrast negative-parity states show an alternating structure: the $\frac{7}{2}^{-}$, $\frac{11}{2}^{-}$, and $\frac{15}{2}^{-}$ states are described by almost equal contributions of the proton and neutron excitations to the fp shell, whereas the $\frac{9}{2}^{-}$ and $\frac{13}{2}^{-}$ states have only a neutron excitation to the f$_{7/2}$ shell. Because experimental MED values are available up to spin J=$\frac{13}{2}$ for both negative and positive parity, in our experiment we tried to identify high-spin states of $^{31}$S in order to disentangle the theoretical puzzle. The results of our investigations will be presented. Speaker: Dr Dmitry Testov • 10:40 AM Coffee break • Session XXIV • 85 Superallowed alpha decay to doubly magic $^{100}$Sn Alpha decay has been a probe of nuclear structure and clustering in nuclei since the dawn of nuclear physics. However, a microscopic description of alpha-decay rates remains a challenge. During the talk, the recent observation of the superallowed alpha-decay chain $^{108}$Xe-$^{104}$Te to doubly magic $^{100}$Sn [1], using the recoil-decay correlation technique with the Argonne Fragment Mass Analyzer at ATLAS, will be presented. This is an important stepping-stone towards developing a microscopic model of alpha decay since it is only the second case of alpha decay to a doubly magic nucleus, besides the benchmark $^{212}$Po alpha decay to $^{208}$Pb. The decay properties of $^{108}$Xe and $^{104}$Te indicate that in at least one of them the reduced alpha-decay width is a factor of 5 larger than in $^{212}$Po. The enhanced alpha-particle preformation probability could be the result of stronger interactions between protons and neutrons, which occupy the same orbitals in N=Z nuclei. During the talk, the alpha emitters in the $^{100}$Sn region will be compared with their counterparts in the $^{212}$Po region, and with the existing alpha-decay models. Prospects for alpha-decay studies in the $^{100}$Sn region will also be discussed. [1] K. Auranen, D. Seweryniak et al., Phys. Rev. Lett. 121, 182501 (2018) Speaker: Dr Dariusz Seweryniak (Argonne National Laboratory) • 86 Challenging nuclear structure of the heaviest – opportunities at S$^3$ When the liquid-drop fission barrier vanishes in the fermium-rutherfordium region, only stabilization by quantum-mechanical effects allows the existence of the observed heavier species. These in turn provide an ideal laboratory to study the strong nuclear interaction by in-beam methods as well as by decay spectroscopy after separation [1]. Here we focus on the achievements of decay spectroscopy after separation (DSAS) for the deformed nuclei in the region Z=100-112 and N=152-162. They have the potential to provide direct links to the next heavier spherical closed-shell nuclei via the investigation of single-particle levels [2]. Particularly interesting features are meta-stable states due to nuclear deformation, so-called K isomers, which can be used to trace the spherical superheavy nuclei (SHN) and to locate the island of stability [3]. The application of coincidence and correlation methods, employing the detection of $\alpha$s, $\gamma$s, X-rays, conversion electrons and fission fragments, provides powerful tools to separate and study specific decay features, as e.g. in the investigation of the $^{258}$Db decay performed by Heßberger et al. [4]. 
High-intensity accelerators, efficient in-flight separators and spectrometers, and highly efficient detectors with fast electronics are the essential ingredients for the success of the field. The new SPIRAL2 facility and, in particular, the separator-spectrometer setup S$^3$ [5], presently under construction at the accelerator laboratory GANIL in Caen, France, will offer great perspectives for the field [6]. [1] D. Ackermann and Ch. Theisen, Phys. Scripta 92, 083002 (2017). [2] M. Asai et al., Nucl. Phys. A 944, 308 (2015). [3] D. Ackermann, Nucl. Phys. A 944, 376 (2015). [4] F.P. Heßberger et al., Eur. Phys. J. A 52, 328 (2016). [5] F. Dechery et al., Eur. Phys. J. A 51, 66 (2015). [6] D. Ackermann, EPJ Web of Conf. 193, 04013 (2018). Speaker: Dieter Ackermann (GANIL) • 87 IMPACT OF QUASIFISSION ON SHE PRODUCTION The properties of the mass and energy distributions of fissionlike fragments formed in the reactions 48Ca,58Fe + 208Pb; 36S,48Ca,48Ti,64Ni + 238U; 48Ca + 232Th,244Pu,248Cm at energies around the Coulomb barrier have been analyzed to define the systematic trend of compound-nucleus fission and quasifission in cold and hot fusion reactions. The measurements have been carried out at the U400 cyclotron of the FLNR, JINR using the double-arm time-of-flight spectrometer CORSET. The fusion probabilities have been deduced from the analysis of the mass and energy distributions. It was found that for the studied reactions the fusion probability depends exponentially on the mean fissility parameter of the system. For the reactions with actinide nuclei leading to the formation of superheavy elements, the fusion probabilities are several orders of magnitude higher than in the case of cold fusion reactions. Speaker: Mikhail Itkis • 88 Investigation of excited states in very heavy elements The search for new magic numbers beyond 208Pb, understanding the enhanced stability of superheavy nuclei (SHN) and their existence despite the repulsive Coulomb interaction is an active field of research in both theoretical and experimental nuclear physics. Precise structure studies of quasi-particle excitations in deformed actinide and transactinide nuclei are crucial to this understanding. In the last decades exhaustive investigations have been carried out on the decay of deformed nuclei in the transfermium region around 254No. In this contribution, I will first report on the recent results of in-beam spectroscopic studies of the 244Cf (Z=98) nucleus performed at the University of Jyvaskyla using the RITU gas-filled separator, the GREAT spectrometer and the Jurogam germanium array. The ground-state rotational band of the neutron-deficient californium isotope 244Cf was identified for the first time, indicating that the nucleus is deformed. The kinematic and dynamic moments of inertia were deduced from the measured gamma-ray transition energies and are compared to theoretical calculations. I will then present the investigation of the 250No isotope performed at the University of Jyvaskyla using the same set-up. Using a fully equipped focal-plane detector with digital electronics, we were able to give a definitive answer to the puzzling question concerning the decay path of the isomeric state and the ground state of 250No. Those results will be compared to configuration-constrained PES calculations performed for 250No and other heavy nuclei. Finally, I will briefly describe the new focal-plane detection set-up SIRIUS that has been built in the framework of SPIRAL2, coupled with the S3 spectrometer. 
The SIRIUS spectrometer, which has been designed for the identification of fusion-evaporation residues through decay tagging, will provide important information on nuclear deformation and single-particle properties. Speaker: Dr Barbara Sulignano • 89 High-momentum nucleons, tensor blocking, and nuclear shell structure A recent high-energy (p,pd) reaction study [1] has confirmed the existence of high-momentum correlated pairs of nucleons with S=1 and T=0 in the ground state of the 16O nucleus. Such high-momentum correlated pairs affect the structure of the ground and low-lying excited states of nuclei through tensor blocking. A new paradigm of nuclear structure that includes the blocking effects of the tensor interaction is proposed. All of the recently discovered magic numbers (N=6, 14, 16, 32, 34) in neutron-rich nuclei are explained by the blocking effects that occur at specific shell configurations. A large amount of binding energy is gained by high-momentum correlated pairs of nucleons produced by the tensor interaction. Such tensor correlations strongly depend on the configuration space available for exciting 2p-2h states. When an additional neutron occupies a new orbital, a configuration that was available before may be lost, resulting in a sudden loss of the binding energy otherwise gained by the 2p-2h excitation. Such tensor-blocking effects enlarge the energy gaps at all observed new magic numbers. Tensor blocking also consistently explains the peculiar configurations observed in neutron-rich nuclei at the borders of shells. The present study will open a new horizon in nuclear physics, particularly focusing on the high-momentum properties in excitation spectra. 1. S. Terashima et al., Phys. Rev. Lett. 121, 242501 (2018). Speaker: Isao Tanihata (RCNP, Osaka Univ. and School of Physics, Beihang Univ.) • 1:15 PM Lunch • Session XXV Convener: Giacomo De Angelis (LNL) • 90 Shape Coexistence in the Neutron-Deficient 188Hg Isotope Speaker: Irene Zanon (Istituto Nazionale di Fisica Nucleare) • 91 Interesting states in A=10 mass region, populated in 10B + 10B nuclear reactions Speaker: Deša Jelavić Malenica • 92 Study of the neutron-rich region in the vicinity of 208Pb via multinucleon transfer reactions Speaker: Petra Colovic (Ruder Boskovic Institute) • 93 Summary Talk and Closure Speaker: Dario Vretenar (University of Zagreb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8442260026931763, "perplexity": 3553.0368248169643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710953.78/warc/CC-MAIN-20221204004054-20221204034054-00684.warc.gz"}
http://math.stackexchange.com/users/43608/jean-claude-arbaut?tab=summary
Jean-Claude Arbaut Reputation 8,990 Impact ~65k people reached 36 In plain language, what's the significance of a field? 28 How can you derive $\sin(x) = \sin(x+2\pi)$ from the Taylor series for $\sin(x)$? 23 Process to show that $\sqrt 2+\sqrt[3] 3$ is irrational 22 Which polynomials fix the unit circle? 20 Value of $f'(0)$ if $f(x)=\frac{x}{1+\frac{x}{1+\frac{x}{1+\ddots}}}$ ### Reputation (8,990) +50 Evaluation of $\int_{0}^{\frac{\pi}{4}}\left(\cos 2x \right)^{\frac{11}{2}}\cdot \cos xdx$ +10 Process to show that $\sqrt 2+\sqrt[3] 3$ is irrational +25 how to prove that a relation is antisymmetric? +10 A math contest question related to Ramsey numbers ### Questions (3) 21 Scalar product and uniform convergence of polynomials 16 Flaw or not flaw in Excel's RNG? 5 Irreducibility of an infinite sequence of polynomials ### Tags (162) 110 calculus × 39 59 trigonometry × 27 95 algebra-precalculus × 25 54 integration × 19 73 sequences-and-series × 26 50 real-analysis × 17 61 linear-algebra × 17 46 polynomials × 12 61 derivatives × 13 38 discrete-mathematics × 11 ### Accounts (37) Mathematics 8,990 rep Stack Overflow 1,657 rep Mathematica 303 rep Cross Validated 201 rep Area 51 151 rep
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8875299096107483, "perplexity": 4258.6483723312695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246657216.31/warc/CC-MAIN-20150417045737-00284-ip-10-235-10-82.ec2.internal.warc.gz"}
https://socratic.org/questions/a-solid-disk-with-a-radius-of-2-m-and-mass-of-3-kg-is-rotating-on-a-frictionless
# A solid disk with a radius of 2 m and mass of 3 kg is rotating on a frictionless surface. If 18 W of power is used to increase the disk's rate of rotation, what torque is applied when the disk is rotating at 6 Hz? Jan 16, 2018 The torque is $\tau = 0.48\ \mathrm{N\,m}$ #### Explanation: Apply the equation $\text{Power (W)} = \text{torque (N m)} \times \text{angular velocity (rad s}^{-1}\text{)}$ The power is $P = 18\ \mathrm{W}$ The frequency is $f = 6\ \mathrm{Hz}$ The angular velocity is $\omega = 2 \pi f = 2 \times \pi \times 6 = 12\pi\ \mathrm{rad\,s^{-1}}$ Therefore, the torque is $\tau = \frac{P}{\omega} = \frac{18}{12 \pi} \approx 0.48\ \mathrm{N\,m}$
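A quick numerical check of this result — a minimal Python sketch of my own, not part of the original answer:

```python
import math

power = 18.0      # W, supplied to the disk
frequency = 6.0   # Hz, rate of rotation

omega = 2 * math.pi * frequency   # angular velocity in rad/s
torque = power / omega            # tau = P / omega

print(f"omega  = {omega:.2f} rad/s")   # ~37.70 rad/s
print(f"torque = {torque:.2f} N m")    # ~0.48 N m
```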
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 6, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9768665432929993, "perplexity": 1237.9246452140333}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00193.warc.gz"}
https://proceedings.neurips.cc/paper/2015/hash/6974ce5ac660610b44d9b9fed0ff9548-Abstract.html
Tor Lattimore #### Abstract Given a multi-armed bandit problem it may be desirable to achieve a smaller-than-usual worst-case regret for some special actions. I show that the price for such unbalanced worst-case regret guarantees is rather high. Specifically, if an algorithm enjoys a worst-case regret of B with respect to some action, then there must exist another action for which the worst-case regret is at least Ω(nK/B), where n is the horizon and K the number of actions. I also give upper bounds in both the stochastic and adversarial settings showing that this result cannot be improved. For the stochastic case the Pareto regret frontier is characterised exactly up to constant factors.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9498710036277771, "perplexity": 425.99501677448217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991648.10/warc/CC-MAIN-20210511153555-20210511183555-00474.warc.gz"}
http://mathschallenge.net/full/sequence_divisibility
## Sequence Divisibility #### Problem Prove that every term in the infinite sequence 18, 108, 1008, 10008, 100008, ... , is divisible by 18. #### Solution The $n$th term, $u_n$, of the sequence 18, 108, 1008, 10008, ... is given by $u_n = 10^n + 8$. $u_{n+1} = 10^{n+1} + 8 = 10 \cdot 10^n + 8 = (9 + 1) \cdot 10^n + 8 = 9 \cdot 10^n + 10^n + 8 = 9 \cdot 10^n + u_n$ If $u_n = 10^n + 8$ is divisible by 18, then so too will be $u_{n+1}$, as $9 \cdot 10^n$ (nine times an even number) is divisible by 18. We can see that $u_1 = 18$, hence $10^n + 8$ must be divisible by 18 for all $n$. Problem ID: 75 (Apr 2002)     Difficulty: 3 Star
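As a quick sanity check (my addition, not part of the original solution), a few lines of Python confirm the divisibility for the first twenty terms:

```python
# Verify that u_n = 10**n + 8 is divisible by 18 for n = 1..20
for n in range(1, 21):
    u = 10**n + 8
    assert u % 18 == 0, f"failed at n = {n}"
print("10**n + 8 is divisible by 18 for n = 1..20")
```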
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.80301433801651, "perplexity": 1280.6859571558668}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982295424.4/warc/CC-MAIN-20160823195815-00246-ip-10-153-172-175.ec2.internal.warc.gz"}
http://mathhelpforum.com/calculus/214353-differential-calculus-maxima-minima-problem-project.html
Thread: Differential calculus: Maxima and minima problem (project) 1. Differential calculus: Maxima and minima problem (project) DIFFERENTIAL CALCULUS: MAXIMA AND MINIMA 1. DESIGNING A POSTER You are designing a rectangular poster to contain 50 in^2 of printing with a 4-in. margin at the top and bottom and a 2-in. margin at each side. What overall dimensions will minimize the amount of paper used? Can you show me how it looks? -_- And how do you solve this problem using maxima and minima? Thanks in advance! 2. Re: Differential calculus: Maxima and minima problem (project) Let the paper be x inches wide and y inches high. Since you want 4 inch margins at top and bottom, and 2 inch margins at each side, what are the dimensions of the actual printing area? Set that area equal to 50. You want to minimize xy subject to that constraint. 3. Re: Differential calculus: Maxima and minima problem (project) Can u demonstrate to me how to calculate this problem? 4. Re: Differential calculus: Maxima and minima problem (project) Start by drawing a sketch...
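A worked sketch of the calculus (my own illustration, not from the thread, following HallsofIvy's plan): with x the overall width and y the overall height, the printed region is (x − 4) by (y − 8), so the constraint is (x − 4)(y − 8) = 50 and the quantity to minimize is the paper area A = xy.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
# Printed-area constraint: (x - 4)*(y - 8) = 50  =>  y = 8 + 50/(x - 4)
y = 8 + 50 / (x - 4)
A = x * y                                   # total paper area to minimize

critical = sp.solve(sp.diff(A, x), x)       # solve dA/dx = 0
width = [c for c in critical if c > 4][0]   # width must exceed the side margins
height = y.subs(x, width)
print(width, height)                        # 9, 18 -> a 9 in by 18 in poster
```

Doing the derivative by hand, dA/dx = 8 − 200/(x − 4)² = 0 gives (x − 4)² = 25, so x = 9 and y = 18, in agreement with the script.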
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9246814250946045, "perplexity": 1869.8220616074502}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650993.91/warc/CC-MAIN-20180324190917-20180324210917-00718.warc.gz"}
http://math.stackexchange.com/questions/171096/if-two-sets-have-the-same-sum-and-xor-are-they-necessarily-the-same/171102
# If two sets have the same sum and xor are they necessarily the same? Let $A = \{A_1, A_2, A_3, \cdots, A_n\}$ and $B = \{B_1, B_2, B_3,\cdots, B_n\}$. where $A_i\in \mathbb{Z}$ and $B_i\in \mathbb{Z}$. Say, $$S_{1} = A_1 + A_2 + A_3 + \cdots + A_n = \sum_{i=1}^{n}{A_{i}} \\ S_{2} = B_1 + B_2 + B_3 + \cdots + B_n = \sum_{i=1}^{n}{B_{i}}$$ And, $$X_1 = A_1 \oplus A_2 \oplus A_3 \oplus \cdots \oplus A_n = \bigoplus_{i=1}^{n}{A_{i}} \\ X_2 = B_1 \oplus B_2 \oplus B_3 \oplus \cdots \oplus B_n = \bigoplus_{i=1}^{n}{B_{i}}$$ If $S_{1} = S_{2}$ and $X_{1}=X_{2}$, does this imply that $A$ and $B$ contain the same set of integers? - Are you using $I$ to stand for the integers? and A1, etc., to stand for $A_1$, etc.? –  Gerry Myerson Jul 15 '12 at 12:40 @GerryMyerson yes –  abhinav8 Jul 15 '12 at 12:46 No. Counterexample: \begin{align*} A &= \{ 1, 6, 8, 48 \} \\ B &= \{ 3, 4, 24, 32 \} \end{align*} More generally, any sets of integers of the form $$A = \{ 2^{a_k}, 2^{b_k} + 2^{c_k} \}_{k = 1,2,\ldots} \qquad\qquad B = \{ 2^{a_k} + 2^{b_k}, 2^{c_k} \}_{k = 1,2,\ldots}$$ where the sequences $a_k, b_k, c_k$ never repeat and also don't have any elements in common, will be a counterexample. This can be generalised to any sequence of non-overlapping binary vectors, in which there are more vectors with Hamming weight 2 or greater than Hamming weight 1, interpreted as integers in binary notation. No. For example, $\{ 0, 3 \}$ and $\{1 , 2\}$ both have sum and xor $3$.
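A quick brute-force check of both counterexamples (my addition):

```python
from functools import reduce
from operator import xor

def signature(s):
    """Return the (sum, xor) pair of a set of integers."""
    return sum(s), reduce(xor, s)

pairs = [({1, 6, 8, 48}, {3, 4, 24, 32}),
         ({0, 3}, {1, 2})]

for a, b in pairs:
    assert a != b and signature(a) == signature(b)
    print(a, b, signature(a))   # (63, 63) for the first pair, (3, 3) for the second
```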
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9955719709396362, "perplexity": 271.4530475388307}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440645257890.57/warc/CC-MAIN-20150827031417-00118-ip-10-171-96-226.ec2.internal.warc.gz"}
https://physjam.wordpress.com/2013/10/27/spectra-of-atoms/
## Spectra of atoms Why is a sodium lamp yellow? How can we determine the elemental composition of the sun? How does a helium-neon laser work? To some degree all of these questions require knowing the spectra of atoms, which can in theory be calculated by quantum mechanics. However the calculation of these spectra for arbitrary systems from first principles is prohibitively difficult and computationally intensive (which is why techniques such as Density Functional Theory are used). This post will roughly outline how to calculate the spectrum of the smaller atoms by explicitly diagonalising a matrix, whose elements are simple combinatorial quantities. The non-relativistic Hamiltonian in the Born-Oppenheimer approximation of an n-electron atom in SI units is given by $H = \sum_{i=1}^{n} \left( \frac{p_i^2}{2m} - \frac{Ze^2}{4 \pi \epsilon_0} \frac{1}{r_i} + \frac{e^2}{4 \pi \epsilon_0} \sum_{j > i} \frac{1}{r_{ij}} \right)$ where $p_i$ is the momentum of the ith electron, $r_i$ is its distance from the nucleus, $r_{ij}$ is the distance between the ith and jth electrons, m is the mass of an electron, Z is the charge of the nucleus, and e is the charge of an electron. To simplify matters choose units such that $\frac{e^2}{4 \pi \epsilon_0} = 1$, $m = \frac{1}{2}$ and $\hbar = 1$; these will be used in the rest of this article. Then $H = \sum_{i=1}^{n} p_i^2 - \sum_{i=1}^{n} \frac{Z}{r_i} +\sum_{i=1}^{n}\sum_{j>i}^{n} \frac{1}{r_{ij}}$. The terms correspond to the kinetic energy of the electrons, the electron-atom interaction, and the electron-electron interactions respectively. If we neglect the third term we recover the equation of a Hydrogenic atom, which can be solved algebraically. Our approach is to calculate the elements of the Hamiltonian matrix in the Hydrogenic basis. We can then explicitly diagonalise the matrix in this basis; if the electron-electron term is small the matrix will be almost diagonal. I will only cover the case of the bound states; the unbound states do need to be considered at a future point, but at least near the ground state their contribution should be negligible. The Hydrogenic atom can be simultaneously diagonalised in a number of different basis sets (corresponding to different coordinate systems); we need a basis whose symmetry is preserved by the perturbation. Since the perturbed Hamiltonian is spherically symmetric, we choose the basis H, L, Lz where the bound states are characterised by the quantum numbers n, l, m, s ($n \ge 1$, $0 \le l < n$, $|m| \le l$, $s = \pm \frac{1}{2}$, with corresponding eigenvalues $\frac{Z^2}{2n^2}$, $l(l+1)$, $m$). The electronic states of an n-electron atom, neglecting electron-electron interactions, are then the antisymmetric tensor products of these states. We now proceed to calculate the matrix elements of the full Hamiltonian in this basis. The only non-trivial parts of the calculation are the terms $\wedge_{i=1}^{N} \langle n_i, l_i, m_i, s_i | \frac{1}{r_{st}} \wedge_{j=1}^{N} | n_j, l_j, m_j, s_j \rangle$. The spins and the terms i, j not equal to s, t factor through, giving Kronecker deltas. The remaining calculation is $\langle n_1, l_1, m_1, n_2, l_2, m_2 | \frac{1}{r_{ij}} | n'_1, l'_1, m'_1, n'_2, l'_2, m'_2 \rangle$. Since the term commutes with $L_i$ and $L_j$, it is also proportional to $\delta_{l_1}^{l'_1} \delta_{l_2}^{l'_2} \delta_{m_1}^{m'_1} \delta_{m_2}^{m'_2}$. Integrate in spherical coordinates over first $r_1$, then $r_2$, setting the z-axis of the second coordinate system along the vector $r_1$. 
Then $\theta_2$ is the angle between the two electrons at the nucleus, and the integrand is $\frac{1}{r_{12}} = \frac{1}{\sqrt{r_1^2 + r_2^2 - 2 r_1 r_2 \cos(\theta_2)}}$, and consequently the first solid-angle integral is trivial. Thus we just need to evaluate $\int d\Omega Y_{l_2}^{m_2}(\theta, \phi) {Y_{l_2}^{m_2}}^*(\theta, \phi) \int_{0}^{\infty} dr_1 r_1^2 R_{n_1}^{l_1}(r_1) {R_{n'_1}^{l_1}}^*(r_1) \int_{0}^{\infty} dr_2 r_2^2 R_{n_2}^{l_2}(r_2) {R_{n'_2}^{l_2}}^*(r_2) \frac{1}{\sqrt{r_1^2 + r_2^2 - 2 r_1 r_2 \cos(\theta)}}$ (notice that the integral must be invariant under interchange of all 1 labels with 2 labels; in practice we make the choice that makes the integral easiest). We now separate the $r_2$ integral into two regions; where it is less than $r_1$ the $\frac{1}{r_{12}}$ term can be expanded as $\sum_{t=0}^{\infty} \frac{1}{r_1} \left( \frac{r_2}{r_1} \right)^t P_t(\cos(\theta))$ where $P_t$ is a Legendre polynomial, and in the other region we switch $r_1$ with $r_2$. The integral then becomes $\sum_{t=0}^{\infty} \int d\Omega Y^{m_1}_{l_1} (\Omega) Y^{-m_1}_{l_1}(\Omega) \sqrt{\frac{4 \pi}{2t+1}} Y^0_t(\Omega) \int_0^{\infty} dr_1 R_{n_1}^{l_1}(r_1) {R^{l_1}_{n'_1}}^*(r_1) r_1^{2+t} \int_0^\infty dr_2 r_2^{1-t} R_{n_2}^{l_2}(r_2) {R_{n'_2}^{l_2}}^*(r_2)$ plus the integral switching r1 with r2 (after a fiddly change of coordinates). The angular integral is a combinatorial quantity, which can be expressed in terms of the Clebsch-Gordan coefficients. It can be computed using recurrence relations. [In fact there is an explicit combinatorial representation, although in practice it would be quicker to compute it using recurrence.] The inner radial part of the integral can be calculated by expanding the Legendre polynomials as a power series and using the relation $\int_R^\infty e^{- \alpha r} r^k \, dr = \frac{e^{- \alpha R}}{\alpha^{k+1}} \sum_{j=0}^{k} (R \alpha)^j \frac{k!}{j!}$, and the outer part of the integral can then be calculated using this relation again with R=0. This is simply a combinatorial factor that needs to be determined. Thus once we have evaluated these combinatorial quantities, and combined them all to get an expression for the matrix elements of the total Hamiltonian H, we can truncate it to a finite basis and then diagonalise it computationally. It is a very interesting question as to how the truncation affects the eigenvalues.
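To illustrate only the final step — truncating to a finite basis and diagonalising — here is a minimal numpy sketch (my own illustration, not the author's code). The callable `h_element` stands in for matrix elements assembled from the combinatorial quantities above, which is the hard part this post describes; the toy example below just uses a diagonal hydrogenic part plus a small constant off-diagonal term as a placeholder for the electron-electron interaction.

```python
import numpy as np

def diagonalize_truncated(h_element, basis):
    """Build the Hamiltonian matrix in a truncated basis and diagonalise it.

    h_element(a, b) should return the matrix element <a|H|b>;
    basis is a list of state labels (e.g. tuples of quantum numbers).
    """
    n = len(basis)
    H = np.empty((n, n))
    for i, a in enumerate(basis):
        for j, b in enumerate(basis):
            H[i, j] = h_element(a, b)
    # H is real symmetric, so eigh returns real eigenvalues in ascending order
    return np.linalg.eigh(H)

# Toy stand-in for the real matrix elements (hypothetical numbers):
basis = list(range(6))
energies, vectors = diagonalize_truncated(
    lambda a, b: -1.0 / (a + 1) ** 2 if a == b else 0.05, basis)
print(energies)  # how these converge as the basis grows is the open question
```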
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 29, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9887406229972839, "perplexity": 199.99346579770875}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806422.29/warc/CC-MAIN-20171121185236-20171121205236-00434.warc.gz"}
http://mathoverflow.net/feeds/user/10843
User Michael Renardy - MathOverflow, most recent 30 from http://mathoverflow.net 2013-05-24T03:36:31Z http://mathoverflow.net/feeds/user/10843 http://mathoverflow.net/questions/51190/counterexample-markov-process/51192#51192 Answer by Michael Renardy for Counterexample Markov process 2011-01-05T10:07:18Z The porous medium equation provides examples. See, for instance, M. Inoue, A Markov process associated with a porous medium equation, Proc. Japan Acad. 60 (1984), 157-160. http://mathoverflow.net/questions/44326/most-memorable-titles/50775#50775 Answer by Michael Renardy for Most memorable titles 2010-12-30T23:51:55Z "A survey of finite differences of opinion on numerical muddling of the incomprehensible defective confusion equation" by B.P. Leonard http://mathoverflow.net/questions/50120/eigenvalues-of-matrix-product/50125#50125 Answer by Michael Renardy for Eigenvalues of Matrix Product 2010-12-22T03:32:45Z I think D was supposed to have positive entries. If B is positive definite (meaning that the associated quadratic form is positive definite), then so is $D^{1/2}BD^{1/2}$. This matrix is similar to $DB$, hence it has the same eigenvalues. So if $DB$ is symmetric, it is positive definite. I note, however, that a diagonally dominant matrix is not necessarily positive definite, although it has eigenvalues of positive real part. http://mathoverflow.net/questions/47603/other-ways-to-define-naturals/47608#47608 Answer by Michael Renardy for Other ways to define naturals 2010-11-28T20:10:11Z It depends what "express in terms of" means. Are the following allowed? $$S(x)=x+f_1(f_4(x)),$$ $$Pd(x)=x-f_1(f_4(x)).$$ Or perhaps something like: $$S(x)=f_2(f_4(x)+f_1(f_4(x))).$$ http://mathoverflow.net/questions/47314/bounding-a-smooth-function-near-the-endpoint/47321#47321 Answer by Michael Renardy for Bounding a smooth function near the endpoint 2010-11-25T11:51:09Z You need to make the stronger assumption $g(a)=g'(a)=...=g^{(k-1)}(a)=0$. Then your statement is true with $\alpha=k$. You can see this by using the Cauchy-Schwarz inequality in $g^{(k-1)}(x)=\int_a^x g^{(k)}(y)\,dy$ to obtain $|g^{(k-1)}(x)|\le C(x-a)^{1/2}$, and then integrating repeatedly to get $|g(x)|\le C(x-a)^{k-1/2}$. This is essentially optimal, since the function $(x-a)^{k-1/2}/\ln(x-a)$ satisfies all the hypotheses. In particular, you cannot get $\alpha=k+1$. http://mathoverflow.net/questions/46104/a-simple-ordinary-differential-equation/46114#46114 Answer by Michael Renardy for A simple ordinary differential equation 2010-11-15T13:00:32Z You don't need the Cauchy-Kovalevskaya theorem. Just the analytic inverse function theorem. http://mathoverflow.net/questions/50472/sums-of-arctangents Comment by Michael Renardy 2010-12-27T21:21:38Z No, the OEIS stuff does not pan out. The modulus of the next coefficient is 3, not 5. http://mathoverflow.net/questions/50472/sums-of-arctangents Comment by Michael Renardy 2010-12-27T11:51:16Z You can reexpand the Taylor series of the arctan function. I am not sure what the pattern is. 
Up to tenth order, I get $$\arctan(x)=\arctan(1)+\arctan((x-1)/2)-\arctan((x-1)^2/4)+\arctan((x-1)^3/8)$$ $$-\arctan((x-1)^5/32)+\arctan((x-1)^6/64)-\arctan((x-1)^7/128)$$ $$+\arctan((x-1)^9/256)-\arctan(3(x-1)^{10}/1024).$$ So up to this point, we get the coefficient sequence $$1, -1, 1, 0, -1, 1, -1, 0, 2, -3.$$ http://mathoverflow.net/questions/50120/eigenvalues-of-matrix-product/50125#50125 Comment by Michael Renardy 2010-12-22T04:02:50Z The problem as I understood it did not say B was symmetric, only that H was. http://mathoverflow.net/questions/50120/eigenvalues-of-matrix-product Comment by Michael Renardy 2010-12-22T03:52:58Z BD and DB are similar matrices, so they have the same eigenvalues. http://mathoverflow.net/questions/47418/nice-classes-of-non-closable-operators Comment by Michael Renardy 2010-11-26T14:18:53Z Moreover, a closed operator can have empty spectrum. http://mathoverflow.net/questions/46970/proofs-of-the-uncountability-of-the-reals/47021#47021 Comment by Michael Renardy 2010-11-23T00:40:09Z All you need to do is prove that between two rationals is an irrational. A variant of the well known proof that sqrt(2) is irrational should do the trick here. Just exploit the sparsity of squares among "large" integers.
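A numerical spot-check of this expansion (my own addition, reading the coefficients off the comment above — note the ninth term's 1/256 is the stated coefficient 2 over 2^9):

```python
import math

# Coefficients c_k, k = 1..10; the claim is
# arctan(x) ≈ arctan(1) + sum_k arctan(c_k * (x-1)**k / 2**k)
coeffs = [1, -1, 1, 0, -1, 1, -1, 0, 2, -3]

def approx_arctan(x):
    total = math.atan(1)
    for k, c in enumerate(coeffs, start=1):
        total += math.atan(c * (x - 1) ** k / 2 ** k)
    return total

x = 1.1
print(math.atan(x) - approx_arctan(x))  # tiny: agreement through 10th order in (x-1)
```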
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9278416037559509, "perplexity": 1298.1715634839184}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368704134547/warc/CC-MAIN-20130516113534-00067-ip-10-60-113-184.ec2.internal.warc.gz"}
https://en.wikipedia.org/wiki/Pitch_interval
# Pitch interval Augmented second on C. In musical set theory, a pitch interval (PI or ip) is the number of semitones that separates one pitch from another, upward or downward.[1] They are notated as follows:[1] PI(a,b) = b - a For example C4 to D♯4 is 3 semitones: PI(0,3) = 3 - 0 While C4 to D♯5 is 15 semitones: PI(0,15) = 15 - 0 However, under octave equivalence these are the same pitches (D♯4 & D♯5), thus the pitch-interval class (next section) may be used. ## Pitch-interval class Octave and augmented second on C. In musical set theory, a pitch-interval class (PIC, also ordered pitch class interval and directed pitch class interval) is a pitch interval modulo twelve.[2] The PIC is notated and related to the PI thus: PIC(0,15) = PI(0,15) mod 12 = (15 - 0) mod 12 = 15 mod 12 = 3 ## Equations Using integer notation and modulo 12, the ordered pitch interval, ip, may be defined, for any two pitches x and y, as: • $\operatorname{ip}\langle x,y\rangle = y - x$ and: • $\operatorname{ip}\langle y,x\rangle = x - y$ the other way.[3] One can also measure the distance between two pitches without taking into account direction with the unordered pitch interval, similar to the interval of tonal theory. This may be defined as: • $\operatorname{ip}(x,y) = |y - x|$[4] The interval between pitch classes may be measured with ordered and unordered pitch-class intervals. The ordered one, also called directed interval, may be considered the measure upwards, which, since we are dealing with pitch classes, depends on whichever pitch is chosen as 0. Thus the ordered pitch-class interval, i<x, y>, may be defined as: • $\operatorname{i}\langle x,y\rangle = y - x$ (in modulo 12 arithmetic) Ascending intervals are indicated by a positive value, and descending intervals by a negative one.[3]
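To make the definitions concrete, a small Python sketch of my own, following the formulas above:

```python
def pi_ordered(a: int, b: int) -> int:
    """Ordered pitch interval PI(a, b): signed semitone count from a to b."""
    return b - a

def pic(a: int, b: int) -> int:
    """Pitch-interval class: the ordered interval taken modulo 12."""
    return (b - a) % 12

def ip_unordered(a: int, b: int) -> int:
    """Unordered pitch interval: distance regardless of direction."""
    return abs(b - a)

# With C4 = 0, the article's D#5 is 15:
print(pi_ordered(0, 15), pic(0, 15), ip_unordered(0, 15))  # 15 3 15
```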
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8894791007041931, "perplexity": 4503.260697908562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257244.16/warc/CC-MAIN-20190523123835-20190523145835-00204.warc.gz"}
https://web2.0calc.com/questions/a-problem-about-joe-biden
# A Problem about Joe Biden

Joe is studying a bacteria population. There are 20 bacteria present at 3:00 p.m. and the population doubles every 3 minutes. Assuming none of the bacteria die, how many bacteria are present at 3:15 p.m. the same day? (Mar 3, 2021)

#1: Every 3 minutes the population doubles, and there are five 3-minute intervals in 15 minutes, so the population is multiplied by $2^5 = 32$. Thus there are $20 \cdot 32 = \boxed{640}$ bacteria. (Mar 3, 2021)
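A one-line check of the doubling argument (a minimal sketch; the variable names are mine):

```python
population = 20
for _ in range(15 // 3):  # five 3-minute doubling periods in 15 minutes
    population *= 2
print(population)  # 640
```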
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.894039511680603, "perplexity": 1426.663102684698}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948708.2/warc/CC-MAIN-20230327220742-20230328010742-00166.warc.gz"}
https://www.physicsforums.com/threads/accident-analysis.259900/
# Accident Analysis.

1. Sep 28, 2008

### AnNe11

1. The problem statement, all variables and given/known data

A 1400 kg sedan goes through a wide intersection traveling from north to south when it is hit by a 2500 kg SUV traveling from east to west. The two cars become enmeshed due to the impact and slide as one thereafter. On-the-scene measurements show that the coefficient of kinetic friction between the tires of these cars and the pavement is 0.750, and the cars slide to a halt at a point 5.39 m west and 6.55 m south of the impact point.

Part A - How fast was the sedan traveling just before the collision?
Part B - How fast was the SUV traveling just before the collision?

2. Relevant equations

3. The attempt at a solution

2. Sep 28, 2008

### tiny-tim

Welcome to PF! Hi Anne! Show us what you've tried, and where you're stuck, and then we'll know how to help.

Last edited: Sep 28, 2008

3. Oct 23, 2009

### ifyjo

Hi, it looks like it's been a while since this thread was posted in, but I happen to have the same question. I will post my attempts at solving it below. First, if you don't mind, I would like to substitute my own figures for the question instead of the original poster's, since it seems he (or she) has lost interest in the question.

A 1600 kg sedan goes through a wide intersection traveling from north to south when it is hit by a 2000 kg SUV traveling from east to west. The two cars become enmeshed due to the impact and slide as one thereafter. On-the-scene measurements show that the coefficient of kinetic friction between the tires of these cars and the pavement is 0.750, and the cars slide to a halt at a point 5.57 m west and 6.28 m south of the impact point.

I believe the momentum before the collision will be the same after the collision. To find the momentum after the collision, I need the mass of the enmeshed cars (3600 kg, I believe) and their speed. Since I was given the coefficient of kinetic friction and the distance traveled until rest, I think I can calculate the acceleration, then pair it with the distance in the equation $v^2 = v_0^2 + 2ad$, where d is the length of the hypotenuse between 5.57 m and 6.28 m (8.3942 m).

I'm not sure about the acceleration, but my attempt has put the net force equal to the coefficient of friction multiplied by the mass and the acceleration due to gravity (ma = μmg). This yields 7.35 m/s² for the acceleration, giving me 11.1083 m/s as the initial velocity when plugged into the equation in the preceding paragraph. However, I am not sure how to go from there. Any help would be much appreciated, thanks!

4. Oct 23, 2009

### tiny-tim

Welcome to PF! Hi ifyjo! Yes, the deceleration is μg = 7.35 m/s², and the distance is 8.3942 m, so the "initial" velocity is 11.108 m/s in the direction given. Now, you know the direction of the two cars just before the collision, so call their speeds u and w, resolve 11.108 m/s into west and south components, and use conservation of momentum. (Incidentally, since you'll be resolving the "initial" velocity into west and south components anyway, you could have used the same deceleration on each component separately, without bothering about the hypotenuse!)

5. Oct 25, 2009

### ifyjo

Thanks so much for your help tiny-tim. I got it all to work out fine.
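Following tiny-tim's outline, here is a short Python sketch of the full computation (my own code, not from the thread; it assumes g = 9.8 m/s² and uses ifyjo's numbers):

```python
import math

g = 9.8                            # m/s^2 (assumed)
mu = 0.750                         # coefficient of kinetic friction
m_sedan, m_suv = 1600.0, 2000.0    # kg
d_west, d_south = 5.57, 6.28       # m, skid displacement components

a = mu * g                         # deceleration of the enmeshed cars
d = math.hypot(d_west, d_south)    # total skid distance
v = math.sqrt(2 * a * d)           # speed just after impact, from v^2 = 2*a*d

# Resolve the post-impact velocity into west and south components.
v_west = v * d_west / d
v_south = v * d_south / d

# Conservation of momentum: the sedan supplies all southward momentum,
# the SUV all westward momentum.
M = m_sedan + m_suv
u_sedan = M * v_south / m_sedan
u_suv = M * v_west / m_suv
print(f"sedan: {u_sedan:.1f} m/s, SUV: {u_suv:.1f} m/s")
# roughly 18.7 m/s for the sedan and 13.3 m/s for the SUV
```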
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8045576810836792, "perplexity": 1043.4243471199864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171053.19/warc/CC-MAIN-20170219104611-00060-ip-10-171-10-108.ec2.internal.warc.gz"}
https://scholarship.rice.edu/handle/1911/13110/browse?type=author&value=Ugron%2C+Gabor+Imre
• #### Frequency dependence of the sensitivity in continuously equivalent networks (1966)

The purpose of this paper is to examine sensitivity and its frequency dependence in continuously equivalent networks. First, continuously equivalent network theory and different methods of sensitivity calculations are ...
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9669100642204285, "perplexity": 1983.6760788520714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313428.28/warc/CC-MAIN-20190817143039-20190817165039-00309.warc.gz"}
https://www.physicsforums.com/threads/centroid-of-cylindrical-cone.317627/
# Centroid of cylindrical cone

1. Jun 2, 2009

### zandria

1. The problem statement, all variables and given/known data

Determine the centroid of volume for a right circular cone with a base diameter of 100 mm and an altitude of 200 mm.

2. Relevant equations

I know that if my xy-plane is parallel to the base of the cone, then the x and y coordinates of the centroid must be zero, and therefore I only need to find the z coordinate of the centroid. The equation I am using is

$$z_c = \frac{1}{M} \int_{body} z \, dm$$

where M is the total mass and $$dm = \rho \, dV$$

3. The attempt at a solution

I am trying to use cylindrical coordinates but I think my limits of integration are incorrect. I have tried to solve the integral above with the following limits.

$$0<\theta<2\pi$$
$$0<r<50$$
$$0<z<(200-r/4)$$

I'm not sure if the limits for the z coordinate are correct. Am I on the right path?

2. Jun 2, 2009

### LowlyPion

Well, I wouldn't worry with polar coordinates, because you are dealing with basically a stack of disks, aren't you? Each disk has a weight of ρ·πr²·dz. Exploit the fact that r is a function of z, and your integral should be pretty straightforward, shouldn't it?

3. Jun 2, 2009

### zandria

Thank you. I was essentially doing the right thing on my first try before I changed everything, but I made an algebra mistake when trying to use cylindrical coordinates. Thanks for the shortcut ... less room for stupid mistakes.
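For reference, a quick symbolic check of LowlyPion's disk-stack approach (my own sketch using sympy, not part of the thread): slicing the cone into disks of radius r(z) = R(1 − z/h) gives the centroid height directly.

```python
import sympy as sp

z, R, h = sp.symbols('z R h', positive=True)
r = R * (1 - z / h)        # disk radius at height z above the base
dm = sp.pi * r**2          # disk mass per unit height (density cancels out)

z_c = sp.integrate(z * dm, (z, 0, h)) / sp.integrate(dm, (z, 0, h))
print(sp.simplify(z_c))               # h/4
print(sp.simplify(z_c).subs(h, 200))  # 50, i.e. 50 mm above the base
```

So for the 200 mm cone, the centroid sits 50 mm above the base, independent of the base radius.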
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635710120201111, "perplexity": 298.0271616556285}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823072.2/warc/CC-MAIN-20160723071023-00131-ip-10-185-27-174.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/33190/find-s-in-terms-of-p-and-c-for-certain-equations
# Find s in terms of p and c for certain equations $s$, $p$ and $c$ are vectors; I need to find $s$ in terms of the other two for: (1) $| s - c | = 1$ (2) $s = \lambda p$ ( $\lambda$ is a constant ) How can I use the constant $\alpha = (p \cdot c)^2 - p^2 * (c^2 - 1)$? There may be no solution. - This is the second problem of the same kind posted within the last hour or so: math.stackexchange.com/questions/33183/… ; are these part of some more general problem, an assignment? What is the context for these problems? –  Arturo Magidin Apr 15 '11 at 19:51 The problems are part of some past papers I'm working on for a Maths Methods exam. –  Sorin Cioban Apr 15 '11 at 20:23 The question is not entirely clear. I will try to make a clear question from it, using more or less your notation. It may not be the question you intended to ask! Let the vectors $p$ and $c$ be given. Find a constant $\lambda$ such that if $s=\lambda p$, then $|s-c|=1$. Solution: For any vector $v$, $|v|=\sqrt{v\cdot v}$. Since $s=\lambda p$, we want $$(\lambda p -c)\cdot(\lambda p -c)=1$$ Expanding the dot product according to the usual rules, we obtain $$\lambda^2 (p\cdot p) -2\lambda (p\cdot c) +c\cdot c-1=0$$ The above equation is a quadratic equation in $\lambda$ (unless $p$ is the zero-vector). Solve for $\lambda$ using the Quadratic Formula. Note that there may not be a (real) solution. - @user6312: "quadratic equation in $\lambda$ (unless $c$ is the zero-vector)" should be "(unless $p$ is the zero vector)". –  Arturo Magidin Apr 15 '11 at 20:33 @Arturo Magidin: Thank you! I have made the correction. –  André Nicolas Apr 15 '11 at 20:48 Just as in the other problem you posted, since $s=\lambda\cdot p$, if you know $\lambda$ then you know $s$. If $p=0$, then there is a solution if and only if $|c|=1$, in which case the solution is $s=0$. Assume $p\neq 0$. Plugging $s=\lambda p$ into the first equation, you know that $$|\lambda p - c| = 1.$$ Since $|v|^2 = v\cdot v$ for any vector $v$, this gives $$1 = (\lambda p - c)\cdot (\lambda p - c) = \lambda^2 (p\cdot p) - 2\lambda(p\cdot c) + (c\cdot c),$$ or equivalently, that $$(p\cdot p)\lambda^2 - 2(p\cdot c)\lambda + \bigl( (c\cdot c)-1\bigr) = 0.$$ This is a quadratic equation in $\lambda$; if the discriminant $$4(p\cdot c)^2 - 4(p\cdot p)\bigl((c\cdot c)-1\bigr) = 4\alpha$$ (with $\alpha$ the correct version of what you write above; see note at the end) is negative, there are no solutions. If $4\alpha$ is nonnegative, then solving for $\lambda$ gives that $$\lambda = \frac{2(p\cdot c) + \sqrt{4(p\cdot c)^2 - 4(p\cdot p)\bigl((c\cdot c)-1\bigr)}}{2(p\cdot p)}$$ or $$\lambda = \frac{2(p\cdot c) - \sqrt{4(p\cdot c)^2 - 4(p\cdot p)\bigl((c\cdot c) - 1\bigr)}}{2(p\cdot p)}.$$ which yields (up to) two possible solutions for $\lambda$, hence up to two values for $s$. The constant $\alpha$ in your statement is a rather bad attempt at describing (one fourth of) the discriminant. $p^2$ should be $p\cdot p$ and $c^2$ should be $c\cdot c$. Using $$\alpha = (p\cdot c)^2 - (p\cdot p)\bigl((c\cdot c) - 1\bigr)$$ and simplifying, we can write it as: • If $p=0$, then no solutions if $|c|\neq 1$, and $s=0$ is the unique solution if $|c|=1$. 
• If $p\neq 0$ and $\alpha\lt 0$, then no solutions; • If $p\neq 0$ and $\alpha\geq 0$, then the solutions are given by $$\lambda = \frac{(p\cdot c) + \sqrt{\alpha}}{p\cdot p},\qquad\text{and}\qquad \lambda = \frac{(p\cdot c) - \sqrt{\alpha}}{p\cdot p}.$$ - I'm beginning to recognize your posts before I see your name under them -- I saw the question and saw the beginning of your answer and thought, this must be another one of those admirable thorough and patient explanations by Arturo :-) You're really making quite an extraordinary contribution to this site. –  joriki Apr 15 '11 at 23:09 @joriki: Thank you kindly! –  Arturo Magidin Apr 15 '11 at 23:47
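A numerical sanity check of this solution (my own sketch with numpy; the example vectors are arbitrary):

```python
import numpy as np

p = np.array([1.0, 0.0, 0.0])
c = np.array([2.0, 0.5, 0.0])

alpha = np.dot(p, c) ** 2 - np.dot(p, p) * (np.dot(c, c) - 1)
if np.dot(p, p) == 0:
    print("p = 0: a solution s = 0 exists iff |c| = 1")
elif alpha < 0:
    print("no real solution")
else:
    for sign in (+1, -1):
        lam = (np.dot(p, c) + sign * np.sqrt(alpha)) / np.dot(p, p)
        s = lam * p
        print(lam, np.linalg.norm(s - c))  # the norm should print as 1.0
```

For these vectors, alpha = 0.75, both roots of the quadratic are real, and each resulting s satisfies |s − c| = 1.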
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9612745642662048, "perplexity": 225.65034585411976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246653426.7/warc/CC-MAIN-20150417045733-00001-ip-10-235-10-82.ec2.internal.warc.gz"}
https://zbmath.org/?q=an:0674.10025
## Geometric Ramanujan conjecture and Drinfeld reciprocity law. (English) Zbl 0674.10025

Number theory, trace formulas and discrete groups, Symp. in Honor of Atle Selberg, Oslo/Norway 1987, 201-218 (1989).

[For the entire collection see Zbl 0661.00005.]

This paper contains a discussion of the authors’ recent work concerning cuspidal automorphic forms. Let F be a global field of positive characteristic p, $\mathbb{A}$ the ring of F-adèles, $G = GL(r)$, and $\pi$ an irreducible admissible $G(\mathbb{A})$-module. The authors prove (the “Purity theorem”) that if $\pi$ is cuspidal with supercuspidal component $\pi_{\infty}$ then each conjugate of each Hecke eigenvalue $z_i(\pi_v)$, for almost all unramified components $\pi_v$, lies on the unit circle in $\mathbb{C}$. Their proof relies on a new form of the Selberg trace formula for a test function with supercuspidal component. In addition, they give a higher rank analogue of the classical theory of congruence relations that arises from the geometry of certain correspondences. Using methods similar to those in the proof of the Purity theorem they also show how the Drinfel’d explicit reciprocity law follows from a conjecture of Deligne.

Reviewer: S. Kamienny

### MSC:

11F70 Representation-theoretic methods; automorphic representations over local and global fields
11F80 Galois representations
22E55 Representations of Lie and linear algebraic groups over global fields and adèle rings

Zbl 0661.00005
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.950529158115387, "perplexity": 652.923001673799}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500094.26/warc/CC-MAIN-20230204044030-20230204074030-00004.warc.gz"}
https://chorasimilarity.wordpress.com/tag/fan-out-moves/
# I-don’t-always advantage of the chemical concrete machine

… over other computing formalisms, is contained in the following wise words:

Indeed, usually a FAN-OUT gate is something which has a variable as an input and two copies of it as an output. That is why FAN-OUT gates are not available in every model of computation, for example in quantum computing. But if you don’t use variable (names) and there’s nothing circulating through the wires of your computer model, then you can use the FAN-OUT gate with impunity, provided you have something which replaces the FAN-OUT behaviour without its bad sides. Consider graph rewriting systems for your new computer.

This is done in the chemical concrete machine, with the help of DIST enzymes and associated moves (chemical reactions). (“DIST” comes from distributivity.)

In graphic lambda calculus, the parent of the chemical concrete machine, I proved that combinatory logic can be done by using the available local moves and one global move, called GLOBAL FAN-OUT. This global move is the one that most resembles the behaviour of a usual FAN-OUT gate: a graph $A$ connected to the input of a FAN-OUT gate is replaced by two copies of the graph. That’s bad, I think, so in the chemical concrete machine I managed to prove that GLOBAL FAN-OUT can be replaced, as far as graphs (or molecules, in the chemical concrete machine formalism) which represent combinators are concerned, with successions of local DIST moves (and some other local moves). This is possible exactly because there are no variable names. Moreover, there’s something almost biological in the succession of moves: we see how combinators reproduce.

As an illustration, the following is taken from the post Chemical concrete machine, detailed (V). Here are the four “molecules” which represent the combinators B, C, K, W. (Via the red-green vs black-white change of notation, they can be deduced from their expressions in lambda calculus by the algorithm described here.) Let’s see how the “molecule” K behaves when connected to a FAN-OUT gate (green node with one input and two outputs). The “reproduction” of the molecule B is more impressive. In the formalism of the chemical concrete machine, $\delta^{+}$ is a distributivity move (or “enzyme” which facilitates the move in one direction, preferentially), and $\phi^{+}$ is a FAN-IN move (facilitated in one direction).

___________________________

See more about this in the Chemical concrete machine tutorial.

___________________________

This makes me believe that, as long as we don’t reason in terms of states (or any other variables), it is possible to have FAN-OUT gates in quantum computation.
The proof of (b) is here: Finally, here is the proof of (c): ______________ The $\lambda$-TANGLE sector of the graphic lambda calculus is obtained by using the lambda-crossing macros In Theorem 6.3   arXiv:1305.5786 [cs.LO]  I proved that all the oriented Reidemeister moves (with the crossings replaced by the respective macros), with the exception of the moves R2c, R2d, R3a and R3h, can be proved by using the graphic beta move and elimination of loops.  We can improve the theorem in the following way. Theorem.  By using the graphic beta move, elimination of loops, FAN-IN and CO-COMM, we can prove all the 16 oriented Reidemeister moves. Proof. The missing moves R2c, R2d, R3a and R3h are all equivalent (by using the graphic beta move and elimination of loops, see this question/answer at mathoverflow) with the following switching move, which we can prove with FAN-IN and CO-COMM: The proof is done. # Local FAN-IN eliminates global FAN-OUT (I) For being able to build  a chemical concrete machine (see the posts  A chemical concrete machine for lambda calculus  and  Why build a chemical concrete machine, and how?) we have to prove that  universal computation can be attained with only local moves in graphic lambda calculus. Or, the lambda calculus sector of the graphic lambda calculus, which gives universality to graphic lambda calculus, uses the global FAN-OUT move (see theorem 3.1 (d)  arXiv:1305.5786 [cs.LO]. Similarly, we see in proposition 3.2 (d), which describes the way combinatory logic appears in graphic lambda calculus, that again global FAN-OUT is used. I want to describe a way to eliminate the global FAN-OUT move from combinatory logic (as appears in graphic lambda calculus via the algorithm described here ). ________________ There are reasons to dislike global moves in relation to B-type neural networks (see the last post    Pair of synapses, one controlling the other (B-type NN part III) ). Similar concerns can be found in the series of posts which has as the most recent one Dictionary from emergent algebra to graphic lambda calculus (III) . In this first post I shall introduce a local FAN-IN move and two distributivity moves and I shall prove that they eliminate the need for using global FAN-OUT in combinatory logic. In the next post I shall prove that we can eliminate two other moves (so that the total number of moves of graphic lambda calculus stays the same as before) and moreover we can recover from distributivity and local FAN-OUT moves the missing oriented Reidemeister moves from the $\lambda$-TANGLE sector. ________________ Definition. The local FAN-IN move is described in the next figure and it can be applied for any $\varepsilon \not = 1$. Comments: • as you see, in the move appears a dilation gate, what can this has to do with combinatory logic? As I explained previously, the properties of the gates are coming through the moves they are involved in, and not from their name. I could have introduced a new gate, with two inputs and one output, call this new gate “fan-in gate” and use it in the FAN-IN move. However, wait until the next post to see that there are other advantages, besides the economy of gates available, in using a dilation gate as a fan-in. • the FAN-IN move resembles to the packing arrows trick which is used extensively in the neural networks posts.  This suggests to use as a  fan-in gate the green triangle gate and as fan-out gate the red triangle gate. 
This would eliminate the $\Upsilon$ gate from the formalism, but it is not clear to me how this replacement would interfere with the rest of the moves.

• the FAN-IN move resembles the dual of the graphic beta move, but is not the same (recall that until now I have not accepted the dual of the graphic beta move in the list of the moves of graphic lambda calculus, although there are strong reasons to do so), which is needed in the emergent algebra sector in order to make the dictionary work (and is related as well to the goal of not using global FAN-OUT in that sector). This latter move is in fact a distributivity move (see further), but we are free to choose different moves in different sectors of the graphic lambda calculus,

• I know it is surprising that until now there was nothing about evaluation strategies in graphic lambda calculus, the reason being that, because there are no variables, there is nothing to evaluate. However, the situation is not so simple: at some point, for example in the chemical concrete machine or for neural networks, some criterion for choosing the order of moves will be needed. But it is important to notice that replacing global FAN-OUT (which could be seen as a remnant of having variables and evaluating them) by local FAN-IN has nothing to do with evaluation strategies.

________________

Definition: The distributivity moves (related to the application and lambda abstraction gates) are the following:

Comments:

• the first distributivity move is straightforward: an application gate is just doubled and two fan-out moves establish the right connections. We see here why the “mystery move” can be seen as a kind of distributivity move,

• the second distributivity move is where we need a fan-in gate (and where we use a dilation gate instead): because of the orientation of the arrows, after we double the lambda abstraction gates, we need to collect two arrows into one!

________________

Combinatory logic terms appear in graphic lambda calculus as trees made of application gates, whose leaves are the combinators S, K, I (seen as graphs in $GRAPH$). I want to show the following. [UPDATE: made some corrections.]

Theorem. We can replace the global FAN-OUT move with a sequence of local FAN-IN, DIST, CO-COMM and local pruning moves, every time the global FAN-OUT move is applied to a term made of SKI combinators.

Proof. First, remark that a sequence of DIST moves for the application gate allows us to reduce the problem of replacing global FAN-OUT moves for any combinator to the problem of replacing it for S, K, and I. This is because the DIST move for the application gate allows us to FAN-OUT trees of application gates:

Now we have to prove that we can perform global FAN-OUT for the I, K, S combinators. For the combinator I the proof is the following: For the combinator K we shall also use a local pruning move: Finally, the proof for the combinator S is the following: Now we are going to use 3 DIST moves, followed by the switch of arrows explained in Local FAN-IN eliminates GLOBAL FAN-OUT (II), which is applied in the dashed green ellipse in the next figure: And we are done.

UPDATE: On closer inspection, it turns out that we don’t need to do switch (macro) moves. Instead, if we go back to the point we were at before the last figure, we may first use CO-ASSOC and then perform the three FAN-IN moves.

# Fan-out moves: CO-COMM, CO-ASSOC, GLOBAL FAN-OUT, LOCAL FAN-OUT

This is part of the Tutorial: Graphic lambda calculus.
Here we describe the moves directly related to the $\Upsilon$ gate.

• CO-COMM move. This is the local move depicted in the following figure. It means we may permute the outputs of a $\Upsilon$ gate. The name means “co-commutativity”, because the diagram resembles the one for the commutativity property, with the exception of the arrow orientations, which are in opposite directions (hence “co-”).

• CO-ASSOC move. This is a local move, described by the next figure. It has the following effect: by using CO-ASSOC moves, we may move between any two binary trees formed only with $\Upsilon$ gates which have the same number of output leaves. The name means “co-associativity” and the explanation is similar to the previous one, with “commutativity” replaced by “associativity”.

• GLOBAL FAN-OUT. This is a global move, because it involves a modification of an arbitrary number of nodes (gates) and arrows. Precisely, the move acts as in the following picture: if $A$ is a graph in $GRAPH$ then we may replace the graph $A$ connected to a $\Upsilon$ gate by two copies of $A$. GLOBAL FAN-OUT implies CO-COMM (namely, two GLOBAL FAN-OUT moves have the effect of one CO-COMM move). The move CO-COMM is not useless though: we may choose to work in a sector of the graphic lambda calculus which uses CO-COMM but not GLOBAL FAN-OUT. There is a variant of this move which is local.

• LOCAL FAN-OUT. Fix a number $N$ and consider only graphs $A$ which have at most $N$ (nodes + arrows). The $N$ LOCAL FAN-OUT move is the same as the GLOBAL FAN-OUT move, except that it applies only to such graphs $A$. LOCAL FAN-OUT does not imply CO-COMM.

Important remark: The gate $\Upsilon$ is really a fan-out gate only in the sense described by the FAN-OUT moves. In the absence of one of these moves, the gate cannot be described as a “fan-out”. The “fan-out” is in the moves, not in the gate.

_________________________

Return to Tutorial: Graphic lambda calculus
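To illustrate operationally what “the fan-out is in the moves, not in the gate” means, here is a toy Python sketch (entirely my own, not the author's formalism): a graph is a dictionary of nodes, and CO-COMM is a purely local rewrite that swaps the two outputs of an upsilon node.

```python
# Toy encoding: a node is (kind, inputs, outputs).
def co_comm(graph, node_id):
    """CO-COMM: permute the two outputs of an upsilon node, in place."""
    kind, inputs, outputs = graph[node_id]
    assert kind == "upsilon" and len(outputs) == 2
    graph[node_id] = (kind, inputs, (outputs[1], outputs[0]))
    return graph

graph = {
    0: ("upsilon", ("a",), ("b", "c")),  # one fan-out-like node
}
print(co_comm(graph, 0))  # {0: ('upsilon', ('a',), ('c', 'b'))}
```

The gate itself carries no copying semantics; only the rewrite rules applied to it give it fan-out-like behaviour.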
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 23, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8297490477561951, "perplexity": 1263.2356544337308}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195530385.82/warc/CC-MAIN-20190724041048-20190724063048-00256.warc.gz"}
https://deepai.org/publication/communication-distortion-and-randomness-in-metric-voting
# Communication, Distortion, and Randomness in Metric Voting

In distortion-based analysis of social choice rules over metric spaces, one assumes that all voters and candidates are jointly embedded in a common metric space. Voters rank candidates by non-decreasing distance. The mechanism, receiving only this ordinal (comparison) information, should select a candidate approximately minimizing the sum of distances from all voters. It is known that while the Copeland rule and related rules guarantee distortion at most 5, many other standard voting rules, such as Plurality, Veto, or k-approval, have distortion growing unboundedly in the number n of candidates. Plurality, Veto, or k-approval with small k require less communication from the voters than all deterministic social choice rules known to achieve constant distortion. This motivates our study of the tradeoff between the distortion and the amount of communication in deterministic social choice rules. We show that any one-round deterministic voting mechanism in which each voter communicates only the candidates she ranks in a given set of k positions must have distortion at least (2n − k)/k; we give a mechanism achieving an upper bound of O(n/k), which matches the lower bound up to a constant. For more general communication-bounded voting mechanisms, in which each voter communicates b bits of information about her ranking, we show a slightly weaker lower bound of Ω(n/b) on the distortion. For randomized mechanisms, it is known that Random Dictatorship achieves expected distortion strictly smaller than 3, almost matching a lower bound of 3 − 2/n for any randomized mechanism that only receives each voter's top choice. We close this gap, by giving a simple randomized social choice rule which only uses each voter's first choice, and achieves expected distortion 3 − 2/n.
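As a quick illustration of the Random Dictatorship baseline mentioned in the abstract (entirely my own sketch; the one-dimensional instance is made up, not from the paper): its expected cost is the average, over voters, of the cost of each voter's favorite candidate.

```python
# Expected social cost of Random Dictatorship on a line metric (toy instance).
voters = [0.0, 0.0, 0.45, 1.0]
candidates = [0.0, 0.5, 1.0]

def cost(c):
    return sum(abs(c - v) for v in voters)

def favorite(v):
    return min(candidates, key=lambda c: abs(c - v))

expected = sum(cost(favorite(v)) for v in voters) / len(voters)
best = min(cost(c) for c in candidates)
print(expected / best)  # expected distortion on this instance (about 1.21, < 3)
```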
1 Introduction

In voting or social choice, there is a set of alternatives (such as political candidates or courses of action) from which a group (such as a country or an organization) wants to select a winner. (A large and important part of the literature studies the goal of choosing a complete consensus ranking of all candidates; we will not study this alternative goal here, and therefore identify social choice with the selection of a single winner.) Each voter submits a ranking (or preference order) of the candidates, and the mechanism (or social choice rule) chooses a winner based on these submitted rankings. Many different social choice rules have been proposed, and it is an important question how to compare them. One fruitful and long line of work, dating back at least to the correspondence of Borda and Condorcet [25, 26], formulates axioms that a social choice rule “should” satisfy; one can then compare social choice rules by which or how many of these axioms they satisfy [19]. Unfortunately, many results in this area are impossibility results, most notably Arrow’s result for producing a consensus ranking [7] and the Gibbard-Satterthwaite Theorem ruling out truthful voting rules with minimal additional properties [34, 49].

An alternative to the axiomatic approach is to consider social choice as an optimization problem with the goal of selecting the “best” candidate for the population [17, 20, 47, 48]. A natural way to express the notion of “best” is to assume that each voter has a utility (or cost) for each candidate; the mechanism’s goal is to optimize the aggregate (e.g., average or median) utility or cost of all voters. However, as articulated in [18, 4], the social choice rule has to optimize with crucial information missing: a voter can only communicate her ranking according to the utility/cost. (For consistency and clarity, we will always refer to voters using female and candidates using male pronouns.) In other words, the mechanism receives only ordinal information, namely which candidate is preferred over which other candidate, even though it needs to optimize a cardinal objective function. From an optimization perspective, this means that the mechanism should simultaneously optimize over all utility/cost functions that are consistent with the reported rankings, in that they would give rise to the observed rankings. The worst-case ratio (over all cost/utility functions) between the mechanism’s cost/utility and that of the optimum candidate for the specific function is called the mechanism’s distortion. (Formal definitions of all concepts and terms are given in Section 2.)

In applying this general framework, an important question is what class of cost/utility functions to consider. A natural approach was suggested in [4] (see also the expanded/improved journal version [3] and general overview [2]): all candidates and voters are jointly embedded in a metric space, and the cost of voter v for candidate x is their metric distance d(v, x). The assumption that voters rank candidates by non-decreasing distance in a latent space dates back to earlier work on so-called single-peaked preferences [13, 14, 29, 43, 41, 10, 9], though much of the earlier work focuses on the special case when the metric is the line. Using the framework of distortion and metric costs, [4, 3] show a remarkable separation.
While many commonly used voting rules (such as Plurality, Veto, k-approval, Borda count) have either unbounded distortion or distortion linear in the number of candidates, and indeed all score-based rules have distortion growing unboundedly in the number of candidates, uncovered-set rules have distortion at most 5. To describe uncovered-set rules, consider a tournament graph on the candidates which contains the directed edge (x, y) iff at least as many voters prefer x to y as vice versa. The uncovered set of this tournament is the set of all candidates with paths of length at most 2 to all other candidates [44]; an example of such a candidate is the candidate with maximum outdegree, which is selected by the Copeland rule. [3] show that any candidate in the uncovered set has distortion at most 5, and also show a lower bound of 3 on the distortion of every deterministic voting mechanism.

One advantage of some of the mechanisms with large distortion, such as Plurality, Veto, or k-approval with small k, is that they require little communication from the voters. Instead of having to transmit her entire ranking, a voter under Plurality only needs to share her first choice; similarly, a voter under Veto only needs to share her last choice. This observation raises the question of whether high distortion is inherently a consequence of limited communication between voters and the mechanism.

The answer to the preceding question is clearly “No:” there are simple randomized mechanisms achieving constant distortion. Perhaps the simplest is Random Dictatorship: “Return the first choice of a uniformly random voter.” This mechanism is known to have distortion strictly smaller than 3 [5], a smaller distortion than any deterministic mechanism can achieve. However, despite the frequent mathematical appeal and elegance of randomized algorithms and mechanisms, most organizations are leery of using randomization for making important decisions; hence, we consider determinism a very desirable property in the design of voting mechanisms. (A reader taking issue with this statement may want to think about his/her own computer science, mathematics, economics, or operations research department. Even though these are likely among the most savvy organizations in terms of understanding randomization, decision-making procedures practically never involve randomization, except the occasional coin flip to break a tie. And no, the fact that most of your colleagues seem to vote essentially randomly does not count! The reasons for such a preference in most organizations likely include an aversion to variance or low-probability undesirable events; naturally, one can envision guarantees between deterministic and expectation bounds, such as the bounds on the squared distortion in [31].)

Considering the following three properties: (1) low distortion, (2) low communication, (3) determinism, it is known that any two can be achieved simultaneously:

• Random Dictatorship satisfies (1), (2).
• Uncovered-set mechanisms satisfy (1), (3).
• Plurality and many other mechanisms satisfy (2), (3).
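To make the uncovered-set rules discussed above concrete, here is a small self-contained sketch (my own code; the four-candidate tournament is a made-up example) that checks which candidates reach all others by paths of length at most 2:

```python
# Tournament as adjacency sets: beats[a] = set of candidates that a defeats.
beats = {
    "w": {"x"}, "x": {"y"}, "y": {"w", "z"}, "z": {"w", "x"},
}

def uncovered_set(beats):
    out = set()
    for a in beats:
        # Everything a reaches in one step, plus two steps.
        reach = set(beats[a]) | {b for c in beats[a] for b in beats[c]}
        if reach >= set(beats) - {a}:  # reaches every other candidate
            out.add(a)
    return out

print(uncovered_set(beats))  # {'x', 'y', 'z'} for this tournament
```

The Copeland winner (maximum outdegree) is always one member of this set; the constant distortion guarantee holds for any candidate in it.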
We consider a model in which each voter communicates bits of information about her ranking to the mechanism, in a single round.555Analyzing the distortion of multi-round deterministic mechanisms with limited communication is a very interesting direction for future work. Associated with each -bit string is a subset of rankings. The must form a disjoint cover of all possible rankings. If they did not form a cover, some voters might not have any message to send, making the mechanism ill-defined. And if the were not disjoint, then it is not clear how a voter with multiple possible messages would make the (non-deterministic) choice which one to send; in particular, this choice could depend on the actual metric distances, and it might require much more subtle definitions to place meaningful restrictions on a mechanism to not exploit such information. Each voter communicates the (unique) such that her permutation is in . We require that the same set is associated with the string , regardless of the identity of the voter sending the string.666Our results require this assumption. While studying the power of mechanisms that allow different voters to use different encodings of their preferences would be interesting theoretically, voting mechanisms which treat votes differently a priori tend to not be accepted in practice. Under this model, in Section 4, we prove the following lower bound: Every one-round deterministic voting mechanism in which each voter sends only a -bit string to the mechanism has distortion at least . Most mechanisms with limited communication are of a fairly specific form: voters can communicate only their choices in a (small) set of positions of their ranking, typically at the top or bottom of their ballots. (Either giving the candidate for each such position, or specifying them as a set, as in -approval.) For such restricted mechanisms, a simpler proof (in Section 3) gives a lower bound that is stronger by a factor : Any deterministic one-round social choice rule which receives, from each voter, no information about candidates outside positions in her ranking, has distortion at least . The proof of Theorem 1.1 is significantly easier and cleaner than the proof of Theorem 1.1, while still containing some of the key ideas. Therefore, we present the proof of Theorem 1.1 before that of Theorem 1.1. Theorem 1.1 provides a generalization of Theorem 1 of the recent work [31], which proves linear distortion for the special case when consists of the top positions, for constant . In fact, [31] shows these lower bounds on the expected squared distortion of randomized mechanisms; this directly implies the same bounds for deterministic mechanisms. The fact that the lower bound of Theorem 1.1 is stronger than that of Theorem 1.1 by a factor of is discussed in more detail in Section 4. To see it most immediately, consider the case . Because , Theorem 1.1 provides a super-constant lower bound on the distortion. On the other hand, communicating the positions of candidates requires bits, so the lower bound of Theorem 1.1 is vacuous. Closing this gap is an interesting direction for future work, discussed in Section 7. The reason we consider Theorem 1.1 our main contribution is that it helps us pinpoint the source of high distortion. Several recent works have shown lower bounds on the distortion of different specific classes of social choice rules, such as score-based rules [3] or the above-mentioned top- ballots [31]. 
Our result implies that regardless of the intricacy of the mechanism, low communication (within the context studied here) and determinism are enough to force high distortion. Communication as a measure of complexity is fairly natural, as evidenced by the mechanisms typically used in practice for large numbers of alternatives. Communication can also be regarded as a stand-in for cognitive effort imposed on the voters, although admittedly, the computation of a message in a general -bit bounded mechanism may still require the voter to first determine her full ranking of all candidates. The results of Theorems 1.1 and 1.1 are lower bounds, raising the question of how small one can make a mechanism’s distortion when communication is limited. In Section 5, we address this question, proving the following theorem. There is a one-round deterministic social choice rule which, given only each voter’s top candidates (in order), selects a candidate with distortion at most . The deterministic social choice rule of Theorem 1.1 is a generalization of the Copeland rule to such top- ballots. Up to constant factors,777An application of Corollary 5.3 of [38] gives an upper bound of , which, however, is still far from matching the lower bound. the bounds of Theorems 1.1 and 1.1 match. Closing the gap between the upper and lower bound is likely difficult, as even for , the best-known lower bound of 3 does not match the best current upper bound of due to [45]; whether there is a deterministic mechanism with metric distortion 3 is a well-known open question. Notice also that Theorem 1.1 implies that knowing each voter’s ranking for a constant fraction of candidates is sufficient to achieve constant distortion, a fact that may not be a priori obvious. As we discussed earlier, the main focus in this article is on deterministic mechanisms: as discussed earlier, the Random Dictatorship mechanism has distortion strictly smaller than 3, achieving small distortion and low communication simultaneously.888The amount by which it is smaller is of order ; here, is the the number of voters, which we consider “large.” [36] prove a nearly matching lower bound: they show that every randomized social choice rule in which each voter only communicates her top candidates must have distortion at least . However, even for , this leaves a gap between the upper bound of essentially for Random Dictatorship and the lower bound of . Recently, [31] shrunk this gap: they proved that the Random Oligarchy mechanism — which samples three voters and outputs a majority of first-place votes if it exists, and otherwise the choice of a random voter among the three — achieves expected distortion close to , though there still remains a small gap between the upper and lower bounds. As an additional result, in Section 6, we close this remaining gap: There is a simple randomized social choice rule in which each voter only communicates her first-choice candidate, and which achieves distortion at most . Nature of Latent Distances The optimization objective of the mechanism is expressed in terms of latent utilities, or more specifically, distances. 
A subtle question is whether voters “know” their utilities for (or distances to) candidates, or, perhaps more philosophically, whether these utilities/distances are “real.” In general, one attractive feature of the distortion framework is that it completely obviates the need to address this question: when a mechanism achieves low distortion, it optimizes robustly over all possible utility/distance functions consistent with the rankings, and the question of whether voters could actually quantify the utilities in a meaningful way is irrelevant. However, when we focus on the design of mechanisms with low communication, the question should be addressed explicitly, as the answer has a strong impact on the design space for mechanisms. When the mechanism designer has control not only over the aggregation of ballots, but also over the type of information about voter preferences that is elicited, this opens the door to designing mechanisms in which agents explicitly communicate numerical estimates of their utilities for some candidates; in turn, having such information may allow a mechanism to achieve lower distortion (as we will see in related work below). If agents themselves cannot quantify their utilities, then not only is communication of a ranking imposed by the class of typically used mechanisms, but it is inherently the only information about the utilities that agents themselves may have access to.

Which of these two assumptions (or something between the two along a more fine-grained spectrum) is more realistic likely depends on the envisioned application. For example, if software agents vote on a preferred alternative in a mostly economically motivated setting, then it is very reasonable to assume that the agents can compute (good approximations of) their utilities. On the other hand, when human voters choose between political candidates, assuming an ability to quantify a metric distance in some abstract space of political positions is much less realistic. Thus, we believe that for both assumptions, there are important and natural settings in which they are justified, motivating studies of communication-distortion tradeoffs in both types of scenarios.

1.2 Related Work

Communication complexity [39] generally studies the required communication between multiple parties wishing to jointly compute an outcome. Several recent works have studied the communication required specifically for jointly computing particular economic outcomes, or, conversely, to bound the effects of limited communication on such economic outcomes. These include work on auctions and allocations [1, 8, 16, 15, 28], persuasion [30], and general mechanism design [42]. While the high-level concerns are similar across different domains, the specific approaches and techniques do not appear to carry over.

The impact of communication more specifically on social choice rules has been explored before; see, for instance, [18] for an overview. However, most of the focus in past work has been on the number of bits that need to be communicated in order to compute the outcome of a particular social choice rule, rather than on proving lower bounds arising due to limited communication when the social choice rule is not pre-specified. A classic paper in this context is by Conitzer and Sandholm [24]: they study vote elicitation rules, i.e., protocols by which a mechanism can interact with voters to determine the winner under a particular voting rule while not eliciting the full ranking information.
This raises algorithmic questions about whether the information obtained so far uniquely determines a winner, as well as incentive issues, among others, and a large amount of follow-up literature (e.g., [27]) has studied these issues. Relatedly, Conitzer [23] studies how many comparisons need to be elicited from voters to be able to reconstruct their complete ranking, and shows that the number is linear (as opposed to quadratic) when preferences are single-peaked (on the line).

Several very recent papers have explicitly considered the tradeoff between communication and distortion in social choice, both in deterministic and randomized settings. Perhaps most immediately related is recent work by Fain et al. [31]. Their focus is on mechanisms with extremely low communication which achieve low expected squared distortion, a measure somewhere between expected distortion and deterministic distortion. They prove that the Random Referee mechanism, which asks two randomly chosen voters for their top choices, and asks a third voter to choose between these two choices, achieves constant expected squared distortion. Notice that this mechanism elicits different information from different voters. Theorem 1 of [31] shows that this is unavoidable, in that any mechanism that only obtains top-k lists (for constant k), even from all voters, must have linear expected squared distortion, implying the same result for the distortion of deterministic mechanisms. Our k-position lower bound generalizes this result for deterministic mechanisms to non-constant k and to sets of positions other than the top k.

Another very related piece of work is due to Mandal et al. [40], studying the communication-distortion tradeoff in a setting where the voters have utilities (instead of costs) for the candidates, and these utilities are only assumed to be non-negative and normalized, but do not need to satisfy any other properties (such as being derived from a metric). The other major modeling difference between our work and [40] is that they assume that agents compute their message to the mechanism directly from their utility vector, rather than the ranking. In particular, the mechanism can be designed to allow voters to express the strength of their preferences, albeit in possibly coarse form. This allows for a choice of deterministic/randomized algorithms in two places: (1) the voters’ computation of their message, and (2) the mechanism’s aggregation of the messages into a winner. [40] give upper and lower bounds for deterministic and randomized voting rules in this setting. The positive/algorithmic results in [40] are obtained primarily by generalizing an approach of Benadè et al. [11], asking voters to communicate their top few candidates as well as a suitably rounded version of their utility for those nominated candidates. The bounds are improved in some parameter regimes by having the mechanism randomly select a subset of candidates and restricting voters to choose from this subset. While the results of [40] are clearly directly related to our work, they are not immediately comparable. Because the utilities are not derived from metrics, the mechanisms need to deal with much broader classes of inputs, resulting in (generally) weaker upper bounds and stronger lower bounds. On the other hand, the assumption that voters can explicitly quantify their utilities (and hence have them elicited by a mechanism) gives a mechanism more power than in our setting.
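Returning to the Random Referee mechanism of [31] described above, here is a minimal Python sketch of it (my own code; the sampling and tie-handling details are my assumptions, not specified here):

```python
import random

def random_referee(rankings):
    """rankings: dict mapping each voter to her ranked list, best first."""
    v1, v2, referee = random.sample(list(rankings), 3)
    a, b = rankings[v1][0], rankings[v2][0]  # the two nominated top choices
    # The referee picks whichever nominee she herself ranks higher.
    return min((a, b), key=rankings[referee].index)

ballots = {
    "u": ["x", "y", "z"],
    "v": ["y", "x", "z"],
    "w": ["y", "z", "x"],
}
print(random_referee(ballots))
```

Note how the mechanism elicits a full comparison only from the referee, and only top choices from the two nominating voters.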
Another related recent piece of work is on approval-based voting, due to Pierczyński and Skowron [46]. While much of this work focuses on a different notion of distortion (analyzing the fraction of voters who approve of the winning candidate in the sense of being “close enough”), [46] also analyzes the (traditional) distortion of approval-based voting. Under the type of mechanism that they consider, rather than approving a given number of candidates (as in k-approval), voters approve all candidates within a given distance of themselves, i.e., within a ball of given radius around themselves. This approval radius can be voter-specific or uniform across voters. In this context, the main result of [46] is to show specific constant distortion whenever a uniform approval radius ensures that a constant fraction of voters, bounded away from 0 and 1, have the optimum candidate within their approval radius. (In particular, when that fraction lies in a suitable intermediate range, the distortion is at most 3.) It is of course not clear how a mechanism (or the voters) could determine such a radius. Also note that this type of approval-based mechanism does require voters to quantify their distances, rather than just interact with their individual ordinal rankings. Note that our top-k upper bound can be considered somewhat related to this result. It shows that whenever voters communicate their top k candidates, where k is a constant fraction of the number of candidates, there is a mechanism with constant distortion. However, in contrast to the result of [46], not just the identity, but also the ranking of these top k candidates must be communicated; on the other hand, the theorem makes no assumptions about whether the optimum candidate appears in any of these top-k rankings.

Low communication complexity of voter preferences is also the focus of a recent preprint by Bentert and Skowron [12]. They study the more “traditional” goal of implementing given voting rules with low communication [18], but are interested in approximate implementation of these rules. To make approximation meaningful, they focus on score-based rules, which naturally assign each candidate a score (such as Borda Count, Plurality, or MiniMax). Then, the quality of approximation is the ratio between the score of the winner under full information vs. the score of the winner under limited communication. They focus on mechanisms in which each voter is asked to rank a small subset of candidates; this subset is either the voter’s top candidates (a deterministic mechanism) or a random subset of candidates (a randomized mechanism). Given that the goal in [12] is the approximate implementation of specific scoring-based voting rules rather than achieving low distortion, the results are not directly comparable. However, the techniques in Section 3.2 of [12] readily yield a randomized mechanism with constant distortion and very low communication complexity per voter when the number of voters is sufficiently large. By asking each voter to compare a uniformly random pair of candidates (see also [37]), and using the majority of returned votes, with high probability (by Chernoff and Union Bounds), one obtains a tournament graph in which each directed edge (x, y) corresponds to at least a 1/2 − ε fraction of voters preferring x over y. Then, a straightforward modification of the analysis of the distortion of uncovered set rules in [3] (or a simple application of Corollary 5.3 in [38]) gives a distortion close to 5. This rule only requires each voter to compute 1 bit in total.
However, different voters are asked to answer different questions, which is often considered undesirable. Furthermore, the total communication complexity is bits, whereas the Random Dictator mechanism only needs to elicit bits from one voter. The recent work of Bentert and Skowron is somewhat related to earlier work of Filmus and Oren [33]: they are also interested in the question of when top-$k$ ballots from voters are sufficient to obtain the correct candidate. However, [33] study this question under probabilistic models for the ballots, significantly changing the nature of the results.

The metric-based distortion view of social choice has proved to be a very fruitful analysis framework. In fact, it has been extended beyond social choice to other optimization problems in which it is natural to assume that a mechanism only receives ordinal information; see, e.g., [6, 2]. Several modeling assumptions have been proposed that yield lower distortion than the worst-case bounds of [3]. One such assumption is termed decisiveness [5, 36]: it posits that for every voter, there is a sufficiently clear first choice among candidates. When the metric space is sufficiently decisive, significantly stronger upper bounds on the distortion can be proved. An alternative approach was proposed in [21, 22]. The authors assumed that the candidates were "representative," in that they themselves were drawn i.i.d. uniformly from the set of voters. Under this assumption, the authors obtained improved expected distortion bounds for the case of two candidates [21], and constant expected distortion for Borda count and several other position-based scoring rules [22].

As mentioned above, the gap between the upper bound of 5 (achieved, e.g., by the Copeland rule) and the lower bound of 3 has posed an interesting open question for several years now. One initial conjecture of [4] was that the Ranked Pairs mechanism might achieve a distortion of 3. This conjecture was disproved by [35], who showed a lower bound of 5 on the distortion of Ranked Pairs. Very recently, Munagala and Wang [45] have presented a (deterministic) social choice rule with distortion at most $2+\sqrt{5} \approx 4.236$, which is the first piece of progress towards closing the gap. In our and much of the preceding work on metric voting, the focus is on distortion, while ignoring incentive compatibility. (Recall the strong impossibility result of [34, 49].) The connection between strategy proofness and distortion in this type of setting was studied in [32].

2 Preliminaries

2.1 Voters, Candidates, and Social Choice Rules

There are $n$ candidates, which we always denote by lowercase letters at the end of the alphabet. Sets of candidates are denoted by uppercase letters, and is the set of all candidates. The preference order (or ranking) of voter over the candidates is a bijection , mapping positions to the candidate which voter ranks in position . We say that (strictly) prefers to iff . When only the ranking, but not the identity, of a voter is relevant, we will omit the subscript for legibility. The set of all voters is denoted by . (We will not need to reference the number of voters explicitly. In general, we treat the number of voters as "much larger" than the number of candidates, and are only interested in bounds in terms of the number of candidates.) We write for the set of all possible rankings , and for the rankings of all voters, which we call the vote profile.
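To make the representation concrete, the following small sketch (our own, with illustrative names; not notation from the paper) encodes a ranking as a bijection from positions to candidates, as in the definition above.

```python
# A ranking as a bijection position -> candidate; position 0 is the first choice.
ranking = {0: "y", 1: "w", 2: "z", 3: "x"}   # one voter, n = 4 candidates

def prefers(ranking, a, b):
    """The voter strictly prefers a to b iff a occupies an earlier position."""
    position = {cand: pos for pos, cand in ranking.items()}  # invert the bijection
    return position[a] < position[b]

assert prefers(ranking, "y", "x")        # first choice beats last choice
assert not prefers(ranking, "x", "w")
```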
In the traditional full-information view, a social choice rule (we use the terms mechanism or voting mechanism interchangeably) is given the rankings of all voters, i.e., , and produces as output one winning candidate . For most of this article, we are interested only in deterministic social choice rules .

2.2 Communication-bounded mechanisms

Our main contribution is to consider communication-bounded social choice rules. As in the standard model described above, we still only consider deterministic single-round mechanisms, i.e., each voter can only send a single message to the mechanism. However, this message is now also restricted to be at most bits long. This induces sets of rankings; when the mechanism receives a message from voter , all it learns is that . As discussed in the introduction, we assume that the form a disjoint partition of , i.e., they are pairwise disjoint and cover all rankings: . The fact that the number of sets is a power of 2 is not relevant anywhere in our proofs, so we also consider mechanisms with arbitrary numbers of sets.

Definition (communication-bounded social choice rule). An -communication bounded social choice rule consists of pairwise disjoint sets with , and a deterministic mapping .

Communication-bounded social choice rules that are used in practice, such as Plurality, Veto, $k$-approval, and combinations thereof, are of a specific form: there is a set of positions, and voters can communicate the set of candidates they have in positions in , possibly with an ordering, but cannot communicate any additional information about their ranking of candidates in positions outside . For such mechanisms, we will be able to prove stronger lower bounds on the distortion, and with a significantly simpler proof. We define them formally as follows:

Definition ($k$-entry social choice rule). A $k$-entry social choice rule is an -communication bounded social choice rule with the following additional restriction on the sets : there exists a set of at most $k$ positions such that if agree for all positions in , i.e., for all , then if and only if .

2.3 Metric Space and Distortion

The key modeling contribution of the metric-based distortion [4] objective is to assume that all voters and candidates are embedded in a pseudo-metric space . denotes the distance between voter and candidate . Being a pseudo-metric, it satisfies non-negativity and the triangle inequality for all voters and candidates . Given our choice of defining the metric only for pairs consisting of a voter and a candidate, symmetry is not directly relevant. One can naturally extend the pseudo-metric to pairs of candidates or pairs of voters, but those distances will never appear in our mechanisms or proofs. For our upper bounds, we explicitly allow the distance between candidates and voters (and thus also between pairs of candidates or pairs of voters) to be 0; however, for improved flow, we will still refer to as a metric. In our lower-bound constructions, all distances will be strictly positive; that is, we do not exploit the increased generality for negative results.

We say that a vote profile is consistent with the metric , and write , if whenever . That is, is consistent with iff all voters rank candidates by non-decreasing distance from themselves. Notice that in case of ties among distances, i.e., , several vote profiles are consistent with . None of our results depend on any tie breaking assumptions.
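The consistency condition can be made concrete with a short sketch (helper name ours): sorting candidates by distance produces a consistent ranking, and ties are exactly the situations in which several rankings are consistent with the same metric.

```python
def a_consistent_ranking(distances):
    """distances: {candidate: distance} for one voter; returns one consistent ranking."""
    return sorted(distances, key=lambda c: distances[c])

d = {"w": 1.0, "x": 0.5, "y": 0.5, "z": 2.0}
print(a_consistent_ranking(d))  # ['x', 'y', 'w', 'z']; swapping the tied x and y is also consistent
```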
The cost of candidate is , i.e., the sum of distances of to all voters. ([4] also consider the median distance as an optimization objective; here, we only focus on the sum/average objective.) An optimum candidate is any candidate ; in our analysis, it will not matter which candidate is considered "the" optimum candidate in case of ties. The social choice rule is handicapped by not knowing the metric , instead only observing the consistent vote profile (or some limited information about it, when communication is restricted). Due to this handicap, and possibly other suboptimal choices, it will typically choose candidates with higher cost than . The distortion of is the worst-case ratio between the cost of the candidate chosen by , and the optimal candidate (determined with knowledge of the actual distances ). Formally,

$$\rho(f) = \max_{P}\ \sup_{d\,:\,d \sim P}\ \frac{C(f(P))}{C(x^*_d)}.$$

We can think of the distortion in terms of a game between the social choice rule and an adversary. First, the adversary chooses the vote profile . Then, the social choice rule, knowing only (or part of that information, in case of communication restrictions), chooses a winning candidate = . Then, the adversary chooses a metric consistent with that maximizes the ratio between the cost of the candidate chosen by and the optimum candidate for . The goal now is to define a social choice rule — under suitable constraints — that achieves small distortion , and to prove lower bounds on all social choice rules under the given constraints.

3 A Lower Bound for k-Entry Social Choice Rules

In this section, we establish the lower bound of Theorem 1.1, restated here formally.

Theorem. Every one-round deterministic $k$-entry social choice rule has distortion at least $\frac{2n-k}{k}$.

Proof. Let . Because every deterministic social choice rule has distortion at least 3 [4], we only need to consider the case where , i.e., . We will prove the theorem by induction on , with the base case holding because the only such case with is , where the mechanism receives no information about any voter's preferences, and hence has unbounded distortion.

First, we consider the case when . We designate one candidate who is "infinitely" far from all other candidates and voters, and thus ranked last by all voters. The mechanism clearly cannot choose as a winner. This reduces the problem to one of candidates, and a set of positions at which voters specify their ranking. By induction hypothesis, applied to this instance, the distortion is lower-bounded by ; the inequality holds because . For the remainder of the proof, we can assume that , i.e., voters do not specify their least favorite candidate. In this case, we will not need to use the induction hypothesis for .

For each subset of candidates, and each ordering , we say that a voter has type if she puts the candidates from in the positions , in the order given by . That is, has type iff for . There are types of voters. We define a vote profile which has exactly a fraction of voters having type , for each type. Throughout, we will talk about fractions, rather than numbers, of voters, so that the total adds up to 1. Each subset of candidates and each order among those candidates is equally frequent, and in aggregate, the vote profile expresses no preference by the voters for any candidate over any other. Let be the candidate chosen by the social choice rule for this input.
is well-defined as a function of all voters' types, because (1) for each voter , the message sent by is uniquely determined by her ranking of candidates in positions in , and (2) the mechanism's output is a deterministic function of only the messages sent by the voters. We now define a metric space. Let be a very small constant (we will let ), and . Consider a voter of type . We distinguish two cases:

1. In the first case, . Let be any ordering that puts the candidates in in positions in the order , and which additionally has , i.e., candidate is in the last position in 's ranking. Apart from this, is arbitrary. By construction, a voter with ranking has type . We now set the distance between and the candidate to 1, and the distance from to every candidate (for ) to . These distances are consistent with the ranking .

2. In the second case, . Again, let be any permutation that puts the candidates in in positions in the order (ensuring that is consistent with having type ). This time, the position of in is prescribed by , and we let the remaining positions of be arbitrary. Voter has distance exactly from each candidate , including the case when . Again, ranks the candidates in the order given by .

We now verify that these distances satisfy the triangle inequality. Consider voters and candidates . We will show that , by distinguishing two cases for :

1. In the first case, . Then, . Either the distance , in which case the triangle inequality holds obviously, or , in which case our definition ensures that as well. In either case, the triangle inequality holds.

2. In the second case, , so either or , depending on the case of the definition. Because all distances are lower-bounded by , the triangle inequality clearly holds if . In the other case , we have that , which together with again ensures that the triangle inequality holds.

Recall that is selected by the social choice rule under the given rankings. Each voter of type with has cost 1 for candidate , and cost at most for any candidate . Each voter of type with has cost at least for candidate , and cost at most for each candidate . Of the types , exactly have . Thus, the cost of candidate is at least , while the cost of any other candidate is at most . Letting , the distortion approaches

$$1 + \frac{2\Bigl(\frac{n!}{(n-k)!} - k\cdot\frac{(n-1)!}{(n-k)!}\Bigr)}{k\cdot\frac{(n-1)!}{(n-k)!}} \;=\; 1 + \frac{2\,(n-k)}{k} \;=\; \frac{2n-k}{k}. \qquad\blacksquare$$

4 The General Lower Bound

In this section, we prove the more general lower bound of Theorem 1.1. The bound applies to all -communication bounded social choice rules, but is slightly weaker than that of Theorem 3. To gain some insight into general communication-bounded social choice rules, we begin with an easy proposition, independently obtained as Lemma 4.1 in [40]. We include a proof here for completeness, and because it illustrates some of the type of reasoning required for the proof of Theorem 1.1.

Proposition. Assume that there exists a set containing two rankings , with , i.e., there is a which does not uniquely specify the voter's top-ranked candidate. Then, the corresponding social choice rule has unbounded distortion.

Proof. Let . Consider a vote profile in which all voters communicate the message to the mechanism, i.e., state that their ranking is in . If the mechanism chooses as the winner, then the metric will be such that all voters have distance 0 from , and distance 1 from all other candidates, including . (At the cost of a small , which we could then let go to 0, we could avoid ties here; in the limit, we would obtain exactly the same result; see the proof of Theorem 3 for spelled-out details.)
Then, the cost of is 0, while the cost of is 1, giving infinite cost ratio, i.e., distortion. Similarly, if the mechanism does not choose as the winner, then all voters will be at distance 0 from and at distance 1 from all other candidates, including . Again, the cost ratio between the optimum candidate and the winner will be infinite. ∎

Theorem. Let be any one-round -communication bounded social choice rule on candidates. Then, must have distortion at least .

Proof. The high-level idea of the proof is to use induction on the number of candidates, to show that when communication is "sufficiently bounded," any social choice rule must have high distortion. After completing the proof by induction, we would like to apply the result to candidates, and "sufficiently bounded" must then include -communication bounded. Therefore, the relationship between the number of candidates in the induction proof and the bound on communication depends on , and to avoid notational ambiguity, we will use different variable names for the induction. Specifically, we use for the number of candidates within the induction proof, and for the upper bound on communication. Let . We will prove by induction on that every -communication bounded social choice rule on candidates with has distortion at least . The base case is easy: the communication bound is , so the voters cannot communicate any preference. By Proposition 4, the social choice rule has unbounded distortion. For the induction step, we distinguish two cases:

1. In the first case, we assume that for each candidate , at least a fraction of all sets contain a ranking that ranks last, i.e., . Then, we consider a vote profile with voters in which for each , exactly one voter submits . Let be the candidate chosen by . Consider the following metric space: For every voter who submitted such that there is a ranking ranking last, we define the distance between and to be 1, and the distance from all other candidates to be 0. (Again, ties could be broken by using a small without affecting the final result.) For all other voters, the distance to all candidates is $1/2$. Said differently, all candidates are at distance 0 from each other, and at distance 1 from . All voters who could possibly rank last are in the same location as the candidates different from , while all other voters are halfway between and the other candidates. Then, the cost of is at least , while the cost of each other candidate is at most . Thus, the distortion of the mechanism is at least , completing the proof directly.

2. Otherwise, let be a candidate such that at most a fraction of all sets contain a ranking that ranks last. Define to be the number of such sets, and assume w.l.o.g. (by renumbering) that are all the sets which contain at least one ranking with in the last position. By the assumption in this part of the proof, we have that . We will only construct instances in which all voters rank last; thus, no voter communicates any message . No mechanism with finite distortion can select as a winner, by the same argument as in the preceding case. (That is, the metric puts at distance 1 from all voters, and all other candidates at distance 0 from all voters.) As a result, we obtain an instance with candidates, only remaining possible rankings, and — crucially — only remaining sets of rankings. We can therefore apply the induction hypothesis for , and conclude that the mechanism's distortion is at least .
To show that we can apply the inductive claim with in the end, observe that . It remains to show that $\gamma \le \frac{\ln M}{n-2}$. To do so, we rewrite $\gamma = 1 - M^{-1/(n-2)}$ using a Taylor expansion and then apply straightforward bounds:

$$\gamma \;=\; 1 - M^{-1/(n-2)} \;=\; \frac{1}{n-2}\sum_{k=1}^{\infty}\frac{1}{k}\,\Bigl(1-\frac{1}{M}\Bigr)^{k}\prod_{j=1}^{k-1}\Bigl(1-\frac{1}{j\,(n-2)}\Bigr) \;\le\; \frac{1}{n-2}\sum_{k=1}^{\infty}\frac{1}{k}\,\Bigl(1-\frac{1}{M}\Bigr)^{k} \;=\; \frac{\ln M}{n-2}.$$

Substituting this bound for $\gamma$ into the distortion completes the proof. ∎

To compare the bound of Theorem 4 with that of Theorem 3, observe that when voters get to specify the candidates in each of $k$ (given) positions in a ranking, this generates a partition of the set of all rankings into sets: one for each subset and order within that subset. These sets of rankings do in fact form a disjoint cover. For the "interesting" range , we can simply bound , so we get that . This shows that the lower bound of Theorem 4 is weaker than that of Theorem 3 by a factor of $\Theta(\log n)$. Closing this gap is an interesting direction for future work, briefly discussed in Section 7.

5 A Near-Matching Upper Bound

While the results of Theorems 3 and 4 are negative, there are parameter ranges, such as , in which they leave room for non-trivial positive results, in particular, sublinear distortion. In this section, we investigate how well one-round mechanisms can do with limited communication. Our main result is a $k$-entry social choice rule which — up to constants — matches the lower bound of Theorem 3. This shows that the lower bound of Theorem 3 is essentially tight. Not surprisingly, the mechanism is a variation on uncovered set mechanisms, which are the only type of mechanism known to achieve constant distortion even with access to the full vote profile.

In our mechanism, each voter communicates her top $k$ choices. We say that voter prefers over if either: (1) Both and are among her top $k$ choices, and she ranks higher than , or (2) is among her top $k$ choices, and is not. Obviously, the mechanism does not know which of two candidates she prefers if neither candidate is among her top $k$ candidates. As in uncovered set mechanisms like Copeland, we construct a comparison graph among the candidates. Define . For each ordered pair , the graph contains a directed edge if and only if at least an fraction of all voters prefer over . Notice that because , it is possible that contains both and . Similarly, it is possible that for a pair , contains neither nor ; for instance, this will happen if no voter ranks either or among her top $k$ candidates. Let be the set of candidates such that at least a fraction of voters rank among their top $k$ candidates. (We will show in the proof of Lemma 5 that is not empty.) The winner returned by is a candidate in the induced graph with largest outdegree; notice that edges leaving are not counted. has distortion at most .

We begin with a lemma showing the key structural property of the winning candidate . In , for every candidate , there is a directed path of length at most 3 from to .

Proof. Similar to the definition of , let be the set of candidates such that at least a fraction of the voters ranks somewhere among their top $k$ candidates. By the Pigeon Hole Principle, because each voter ranks a fraction of candidates in her top $k$, and , at least one candidate occurs in a fraction of top-$k$ lists. In particular, (and thus ) is non-empty. Each candidate has a directed edge to each candidate . (The opposite edge may be in as well; this is irrelevant.) This is because appears in at least a fraction of top-$k$ lists, while appears in at most a fraction. In particular, at least an fraction of voters rank , but not , in their top-$k$ lists, and thus prefer to .
Now consider the induced graph . For each pair , at least one of the edges or is in . This is because, of the (at least) fraction of voters with in their lists, either at least an fraction rank higher in their lists, or at least an fraction rank lower (or not in their lists). Hence, is a supergraph of a tournament graph. Because has maximum degree in , it also has maximum degree in at least one tournament subgraph of . It is well known (see, e.g., [44, 4]) that the maximum-degree node in a tournament graph is in the uncovered set, i.e., it has a directed path of length at most 2 to every other node. This of course still holds in the supergraphs and . Thus, has a directed path of length 2 in to every candidate . Let be arbitrary. By the preceding two paragraphs, has a directed edge to each , and has a directed path of length at most 2 to . In summary, has a directed path of length at most 3 to each candidate . ∎

Next, we show a lemma upper-bounding the cost ratio of two candidates when has a directed path of length at most 3 to .

Lemma. Let $w, z$ be two candidates such that there is a directed path of length at most $\ell$ edges from $w$ to $z$ in . Then, $C(w) \le \bigl(1 + \frac{3^{\ell}-1}{\alpha}\bigr)\cdot C(z)$.

This lemma can be considered a (somewhat weaker) generalization of a result proved in the proof of Theorem 7 in [4] (see also the discussion in the subsequent remark in [4]). By Lemma 6 of [4], if an fraction of voters prefer over , then . In particular, for , this implies an upper bound of 3 on the cost ratio. If has a directed path of length to , then this bound implies that . (Theorem 7 of [4] uses a more intricate proof to improve the upper bound for length-2 paths from this immediate 9 to 5.) However, since we are interested in a regime where , the exponential dependence on (recall that we have ) would result in bounds that do not match our lower bounds asymptotically. The point of the lemma is to improve upon this exponential dependence. The exponential dependence on $\ell$ is an artifact of our relatively simple proof. Applying Corollary 5.3 from [38] instead would yield an improved bound of or , depending on whether $\ell$ is even or odd.

Proof. Let be a directed path of $\ell$ edges from $w$ to $z$. We distinguish two cases, based on the relative lengths of the distances and , compared to .

1. If there exists a candidate (with ) such that , then let be maximal with this property. All the voters who prefer over , which comprise at least an fraction of all voters, are at distance at least from . By maximality of , all candidates with have . Using the triangle inequality and summing this inequality for all gives us that . Again by triangle inequality, the voters who prefer over are at distance at least from .

2. In the other case, all candidates with have . Again, using the triangle inequality and summing this inequality for all , we can bound . Therefore, by triangle inequality, . At least an fraction of voters prefer over , and their distance to is at least . Because the distance from to is at most , by the triangle inequality, the distance of these voters from is at least .

In both cases, we have thus shown that at least an fraction of voters are at distance at least from . Thus, the cost of is at least . By the triangle inequality,

$$C(w) \;\le\; C(z) + d(w,z) \;\le\; \Bigl(1 + \frac{3^{\ell}-1}{\alpha}\Bigr)\cdot C(z).$$

This completes the proof of the lemma. ∎

Proof of Theorem 5. By the structural lemma, has a path of length at most 3 in to every candidate ; in particular, to the optimum candidate . Thus, by the cost-ratio lemma with , . Substituting and bounding now completes the proof.
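For concreteness, here is a minimal sketch of the mechanism analyzed in this section. It is our own rendering of the verbal description above; in particular, the extraction has elided the paper's threshold constants, so the single parameter `alpha` below is a placeholder standing in for both the edge threshold and the ballot-frequency threshold.

```python
# Sketch of the top-k comparison-graph mechanism (thresholds are placeholders).
def elect(ballots, candidates, alpha):
    """ballots: one list per voter containing her top-k candidates, best first."""
    n_voters = len(ballots)

    def prefers(ballot, x, y):
        # x preferred to y if ranked above y, or listed while y is not.
        if x in ballot and y in ballot:
            return ballot.index(x) < ballot.index(y)
        return x in ballot

    # Comparison graph: edge (x, y) if at least an alpha fraction prefer x to y.
    edges = {(x, y)
             for x in candidates for y in candidates
             if x != y and sum(prefers(b, x, y) for b in ballots) >= alpha * n_voters}

    # Candidates appearing in at least an alpha fraction of the ballots.
    frequent = [x for x in candidates
                if sum(x in b for b in ballots) >= alpha * n_voters]

    # Winner: maximum outdegree in the induced subgraph on the frequent candidates.
    return max(frequent, key=lambda x: sum((x, y) in edges for y in frequent))
```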
6 A Tight Upper Bound for Randomized Algorithms

We have seen that limited communication is a serious handicap for deterministic social choice rules, in that all communication-bounded deterministic social choice rules must have essentially linear distortion. It is well known [5, 36] that this lower bound disappears for randomized social choice rules: for example, the Random Dictatorship mechanism, which elects the first choice of a uniformly random voter, has distortion slightly smaller than 3, even though each voter only communicates her first choice. When each voter can only communicate her first choice, [36] proved a lower bound of $3-\frac{2}{n}$ on the distortion of every randomized mechanism. Fain et al. [31] showed that the Random Oligarchy mechanism has an upper bound on the distortion almost matching this bound. Here, we give a simple randomized mechanism which achieves an expected distortion of exactly $3-\frac{2}{n}$, thereby closing the remaining gap. The mechanism is as follows:

• With probability $\frac{1}{n-1}$, select a candidate using the Proportional to Squares mechanism. That is, for each candidate $x$, let $\nu_x$ be the fraction of voters who rank $x$ first. Select candidate $x$ with probability $\frac{\nu_x^2}{\sum_y \nu_y^2}$.

• With the remaining probability $1-\frac{1}{n-1}$, select a candidate using the Random Dictatorship mechanism. That is, choose a voter uniformly at random, and return her first choice.

Notice that this mechanism selects candidate $x$ with probability exactly $\bigl(1-\frac{1}{n-1}\bigr)\nu_x + \frac{1}{n-1}\cdot\frac{\nu_x^2}{\sum_y \nu_y^2}$. We prove the following theorem:

Theorem. The expected distortion of is at most $3-\frac{2}{n}$.

The proof is straightforward: it consists of a bit of arithmetic and using Lemma 3 of [36], restated here in our notation.

Lemma (Lemma 3 of [36]). Let be the vector of the fractions of voters ranking candidate first, for all . Suppose that for every such first-place vote vector and every candidate , the probability of electing under is at most . Then, the distortion of is at most .

The main technical lemma, proved momentarily, is the following: For all $t$, we have that $f(t) \le 1 - \frac{1}{n}$.

Proof of Theorem 6. Let candidate be the first choice of a $\nu$ fraction of voters. The probability that it is chosen under the mechanism is

$$\Bigl(1-\frac{1}{n-1}\Bigr)\nu + \frac{1}{n-1}\cdot\frac{\nu^{2}}{\sum_{y}\nu_{y}^{2}} \;\le\; \Bigl(1-\frac{1}{n-1}\Bigr)\nu + \frac{1}{n-1}\cdot\frac{\nu^{2}}{\nu^{2} + (n-1)\bigl(\frac{1-\nu}{n-1}\bigr)^{2}} \;=\; \Bigl(1-\frac{1}{n-1}\Bigr)\nu + \frac{\nu^{2}}{(n-1)\,\nu^{2} + (1-\nu)^{2}}.$$

Multiplying with the term $(1-\nu)$, we now have

$$\Bigl(1-\frac{1}{n-1}\Bigr)(1-\nu) + \frac{\nu\,(1-\nu)}{(n-1)\,\nu^{2} + (1-\nu)^{2}}.$$

By the technical lemma, this quantity is bounded by $1-\frac{1}{n}$. Since this bound holds for all and all , we can substitute it into Lemma 3 of [36], and obtain a bound of $3-\frac{2}{n}$ on the distortion, as claimed.

Proof of the technical lemma. We want to upper-bound $f(t)$. First, we have that , and , so the inequality holds at the extreme points. We lower-bound the denominator of the second term. By setting the derivative to 0, we get that the only local extremum is a minimum at $t = \frac{1}{n}$, where the denominator equals $\frac{n-1}{n}$, whereas it equals $1$ at $t=0$ and $n-1$ at $t=1$. Thus, the denominator is at least $\frac{n-1}{n}$. Substituting this lower bound, we can bound

$$f(t) \;\le\; \Bigl(1-\frac{1}{n-1}\Bigr)(1-t) + \frac{n\,t\,(1-t)}{n-1}.$$

A derivative test shows that this expression has a local maximum at $t = \frac{1}{n}$, where its value is $1-\frac{1}{n}$. Thus, we have shown that $f(t) \le 1-\frac{1}{n}$ for all $t$.

7 Conclusions

As we already discussed in the introduction and Section 4, there is a gap of $\Theta(\log n)$ between the lower bound on distortion we achieve for $k$-entry social choice rules and the one we achieve for more general communication-bounded social choice rules. It does not appear that our techniques from Section 4 can be directly generalized to produce bounds matching the ones of Theorem 3. Thus, if the stronger bound holds more generally, a proof will likely require a deeper understanding of the combinatorial structure of partitions of the space of all rankings.
An intriguing alternative is that there may be a mechanism in which voters communicate only bits of information per candidate, but which nonetheless achieves constant distortion. An obstacle to designing such mechanisms is that it is very unclear how a mechanism would make use of information in which it cannot distinguish between several very different rankings.

Throughout this article, we assumed that all voters use the same "encoding" in communicating with the mechanism. For both $k$-entry social choice rules and communication-bounded rules, one could consider relaxing this uniformity, although voting mechanisms which treat voters differently a priori are typically not widely accepted. For $k$-entry social choice rules, our lower-bound proof can be directly adapted to give the same lower bound so long as no voter (or almost no voter) gets to specify which candidate she ranks last. However, the proof does not carry over directly when some, but not all, voters can specify their bottom-ranked candidate, since our technique of "sacrificing" a candidate may come at a higher cost to the adversary. For communication-bounded rules, it is much less clear how to deal with arbitrarily differing encodings. A further generalization would be to let voters choose which encoding to use, or which subset of positions to fill in. Mechanisms allowing such a choice by the voters would have to be considered as "non-deterministic," because there is no longer a unique message for each ranking. This raises the issue of how a voter would determine which of many possible messages to send. In particular, the specific choice of message may encode additional (e.g., cardinal) information about the voter's ranking. It would require some subtlety to define a model to rule out the revelation of a lot of cardinal information, while still allowing voters non-trivial choices.

Here, we only considered single-round mechanisms. It is well-known that in many settings, including in the implementation of social choice rules [18, 50], multiple rounds of communication can lead to significantly (including exponentially) lower overall communication. Indeed, [36, 31] studied randomized multi-round voting mechanisms with the explicit goal of reducing the required communication, while achieving low distortion. In the case of randomized mechanisms, receiving $O(\log n)$ bits of information from each voter is enough to achieve distortion $3-\frac{2}{n}$ (as we showed in Section 6 — it was known previously how to achieve distortion 3), so the room for improving the required communication with multiple rounds is limited. However, for deterministic mechanisms, there is potential for significant improvement, and a natural question is whether one might even achieve constant distortion with only (or ) communication from each voter.

Acknowledgements

The author would like to thank Elliot Anshelevich, Yu Cheng, Shaddin Dughmi, Tyler LaBonte, Jonathan Libgober, and Sigal Oren for useful conversations and pointers.

References

• [1] Noga Alon, Noam Nisan, Ran Raz, and Omri Weinstein. Welfare maximization with limited interaction. In Proc. 56th IEEE Symp. on Foundations of Computer Science, pages 1499–1512, 2015. • [2] Elliot Anshelevich. Ordinal approximation in matching and social choice. ACM SIGecom Exchanges, 15(1):60–64, July 2016. • [3] Elliot Anshelevich, Onkar Bhardwaj, Edith Elkind, John Postl, and Piotr Skowron. Approximating optimal social choice under metric preferences. Artificial Intelligence, 264:27–51, 2018.
• [4] Elliot Anshelevich, Onkar Bhardwaj, and John Postl. Approximating optimal social choice under metric preferences. In Proc. 29th AAAI Conf. on Artificial Intelligence, pages 777–783, 2015. • [5] Elliot Anshelevich and John Postl. Randomized social choice functions under metric preferences. In Proc. 25th Intl. Joint Conf. on Artificial Intelligence, pages 46–59, 2016. • [6] Elliot Anshelevich and Shreyas Sekar. Blind, greedy, and random: Algorithms for matching and clustering using only ordinal information. In Proc. 30th AAAI Conf. on Artificial Intelligence, pages 390–396, 2016. • [7] Kenneth Arrow. Social Choice and Individual Values. Wiley, 1951. • [8] Sepehr Assadi. Combinatorial auctions do need modest interaction. In Proc. 18th ACM Conf. on Economics and Computation, pages 145–162, 2017. • [9] Salvador Barberà. An introduction to strategy-proof social choice functions. Social Choice and Welfare, 18:619–653, 2001. • [10] Salvador Barberà, Faruk Gul, and Ennio Stacchetti. Generalized median voter schemes and committees. Journal of Economic Theory, 61:262–289, 1993. • [11] Gerdus Benadè, Swaprava Nath, Ariel D. Procaccia, and Nisarg Shah. Preference elicitation for participatory budgeting. In Proc. 31st AAAI Conf. on Artificial Intelligence, pages 376–382, 2017. • [12] Matthias Bentert and Piotr Skowron. Comparing election methods where each voter ranks only few candidates. Preprint available on arXiv 1901.10848, 2019. • [13] Duncan Black. On the rationale of group decision making. J. Political Economy, 56:23–34, 1948. • [14] Duncan Black. The Theory of Committees and Elections. Cambridge University Press, 1958. • [15] Liad Blumrosen and Michal Feldman. Implementation with a bounded action space. In Proc. 7th ACM Conf. on Electronic Commerce, pages 62–71, 2006. • [16] Liad Blumrosen, Noam Nisan, and Ilya R. Segal. Auctions with severely bounded communication. Journal of Artificial Intelligence Research, 28:233–266, 2007. • [17] Craig Boutilier, Ioannis Caragiannis, Simi Haber, Tyler Lu, Ariel D. Procaccia, and Or Sheffet. Optimal social choice functions: A utilitarian view. Artificial Intelligence, 227:190–213, 2015. • [18] Craig Boutilier and Jeffrey S. Rosenschein. Incomplete information and communication in voting. In Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors, Handbook of Computational Social Choice, chapter 10, pages 223–257. Cambridge University Press, 2016. • [19] Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors. Handbook of Computational Social Choice. Cambridge University Press, 2016. • [20] Ioannis Caragiannis and Ariel D. Procaccia. Voting almost maximizes social welfare despite limited communication. Artificial Intelligence, 175(9):1655–1671, 2011. • [21] Yu Cheng, Shaddin Dughmi, and David Kempe. Of the people: Voting is more effective with representative candidates. In Proc. 18th ACM Conf. on Economics and Computation, pages 305–322, 2017. • [22] Yu Cheng, Shaddin Dughmi, and David Kempe. On the distortion of voting with multiple representative candidates. In Proc. 32nd AAAI Conf. on Artificial Intelligence, pages 973–980, 2018. • [23] Vincent Conitzer. Eliciting single-peaked preferences using comparison queries. Journal of Artificial Intelligence Research, 35:161–191, 2009. • [24] Vincent Conitzer and Tuomas Sandholm. Vote elicitation: Complexity and strategy-proofness. In Proc. 17th AAAI Conf. on Artificial Intelligence, pages 392–397, 2002. • [25] Jean-Charles de Borda. 
Mémoire sur les élections au scrutin. Histoire de l'Académie Royale des Sciences, Paris, pages 657–665, 1784. • [26] M. J. A. Nicolas de Condorcet. Essai sur l'application de l'analyse à la probabilité des décisions rendues à la pluralité des voix. Imprimerie Royale, Paris, 1785. • [27] Ning Ding and Fangzhen Lin. Voting with partial information: what questions to ask? In Proc. 12th Intl. Conf. on Autonomous Agents and Multiagent Systems, pages 1237–1238, 2013. • [28] Shahar Dobzinski, Noam Nisan, and Sigal Oren. Economic efficiency requires interaction. In Proc. 46th ACM Symp. on Theory of Computing, pages 233–242, 2014. • [29] Anthony Downs. An economic theory of political action in a democracy. The Journal of Political Economy, 65(2):135–150, 1957. • [30] Shaddin Dughmi, David Kempe, and Ruixin Qiang. Persuasion with limited communication. In Proc. 17th ACM Conf. on Economics and Computation, pages 663–680, 2016. • [31] Brandon Fain, Ashish Goel, Kamesh Munagala, and Nina Prabhu. Random dictators with a random referee: Constant sample complexity mechanisms for social choice. In Proc. 33rd AAAI Conf. on Artificial Intelligence, pages 1893–1900, 2019. • [32] Michal Feldman, Amos Fiat, and Iddan Golomb. On voting and facility location. In Proc. 17th ACM Conf. on Economics and Computation, pages 269–286, 2016. • [33] Yuval Filmus and Joel Oren. Efficient voting via the top-$k$ elicitation scheme: a probabilistic approach. In Proc. 15th ACM Conf. on Economics and Computation, pages 295–312, 2014. • [34] Alan F. Gibbard. Manipulation of voting schemes: a general result. Econometrica, 41(4):587–601, 1973. • [35] Ashish Goel, Anilesh Kollagunta Krishnaswamy, and Kamesh Munagala. Metric distortion of social choice rules: Lower bounds and fairness properties. In Proc. 18th ACM Conf. on Economics and Computation, pages 287–304, 2017. • [36] Stephen Gross, Elliot Anshelevich, and Lirong Xia. Vote until two of you agree: Mechanisms with small distortion and sample complexity. In Proc. 31st AAAI Conf. on Artificial Intelligence, pages 544–550, 2017. • [37] Jeremy A. Hansen. The random pairs voting rule: Introduction and evaluation with a large dataset. In Proc. of COMSOC-16, 2016. • [38] David Kempe. An analysis framework for metric voting based on LP duality. In Proc. 34th AAAI Conf. on Artificial Intelligence, 2020. • [39] E. Kushilevitz and Noam Nisan. Communication Complexity. Cambridge University Press, 1997. • [40] Debmalya Mandal, Ariel D. Procaccia, Nisarg Shah, and David P. Woodruff. Efficient and thrifty voting by any means necessary. In Proc. 33rd Advances in Neural Information Processing Systems, 2019. • [41] Samuel Merrill and Bernard Grofman. A unified theory of voting: Directional and proximity spatial models. Cambridge University Press, 1999. • [42] Dilip Mookherjee and Masatoshi Tsumagari. Mechanism design with communication constraints. Journal of Political Economy, 122(5):1094–1129, 2014. • [43] Hervé Moulin. On strategy-proofness and single peakedness. Public Choice, 35:437–455, 1980. • [44] Hervé Moulin. Choosing from a tournament. Social Choice and Welfare, 3(4):271–291, 1986. • [45] Kamesh Munagala and Kangning Wang. Improved metric distortion for deterministic social choice rules. In Proc. 20th ACM Conf. on Economics and Computation, pages 245–262, 2019. • [46] Grzegorz Pierczyński and Piotr Skowron. Approval-based elections and distortion of voting rules. Preprint available on arXiv 1901.06709, 2019. • [47] Ariel D. Procaccia. Can approximation circumvent Gibbard-Satterthwaite?
In Proc. 24th AAAI Conf. on Artificial Intelligence, pages 836–841, 2010. • [48] Ariel D. Procaccia and Jeffrey S. Rosenschein. The distortion of cardinal preferences in voting. In Proc. 10th Intl. Workshop on Cooperative Inform. Agents X, pages 317–331, 2006. • [49] Mark A. Satterthwaite. Strategy-proofness and Arrow’s conditions: Existence and correspondence theorems for voting procedures and social welfare functions. Journal of Economic Theory, 10:187–217, 1975. • [50] Ilya R. Segal. Nash implementation with little communication. Theoretical Economics, 5:51–71, 2010.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9282708764076233, "perplexity": 1070.4138098852975}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585171.16/warc/CC-MAIN-20211017082600-20211017112600-00328.warc.gz"}
http://sepwww.stanford.edu/sep/prof/pvi/zp/paper_html/node23.html
## The meaning of divergence

To prove that one equals zero, take an infinite series such as 1, −1, +1, −1, +1, …, group the terms in two different ways, and add them as follows: grouping in pairs from the start gives (1 − 1) + (1 − 1) + ⋯ = 0 + 0 + ⋯ = 0, while keeping the first term and pairing the rest gives 1 + (−1 + 1) + (−1 + 1) + ⋯ = 1 + 0 + 0 + ⋯ = 1. Of course this does not prove that one equals zero: it proves that care must be taken with infinite series.

Next, take another infinite series in which the terms may be regrouped into any order without fear of paradoxical results. For example, let a pie be divided into halves. Let one of the halves be divided in two, giving two quarters. Then let one of the two quarters be divided into two eighths. Continue likewise. The infinite series is 1/2, 1/4, 1/8, 1/16, …. No matter how the pieces are rearranged, they should all fit back into the pie plate and exactly fill it.

The danger of infinite series is not that they have an infinite number of terms but that they may sum to infinity. Safety is assured if the sum of the absolute values of the terms is finite. Such a series is called "absolutely convergent."
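A quick numerical illustration (ours, not part of the original page): partial sums of the oscillating series never settle down, while those of the absolutely convergent pie series approach the full pie.

```python
terms_a = [(-1) ** k for k in range(10)]        # 1, -1, +1, -1, ...
terms_b = [0.5 ** (k + 1) for k in range(10)]   # 1/2, 1/4, 1/8, ...

def partial_sums(terms):
    total, sums = 0.0, []
    for t in terms:
        total += t
        sums.append(total)
    return sums

print(partial_sums(terms_a))  # oscillates forever: 1, 0, 1, 0, ...
print(partial_sums(terms_b))  # approaches 1: 0.5, 0.75, 0.875, ...
```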
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.962108850479126, "perplexity": 606.0872718910206}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891812855.47/warc/CC-MAIN-20180219231024-20180220011024-00519.warc.gz"}
https://www.physicsforums.com/threads/why-is-the-strength-of-weak-nuclear-force-important.734720/
# Why is the strength of the weak nuclear force important?

1. Jan 25, 2014

### thatoekhant

I am just a student. I read that if the strength of the weak nuclear force were stronger than its current value, this would cause neutrons to be rare. And if the strength of the weak nuclear force were weaker than its current value, this would cause most hydrogen to convert to helium. I can't understand those statements. Why? Please!

2. Jan 25, 2014

### Hawkwind

Well, the weak force is responsible for the beta decays. Thus, a larger coupling constant would result in reduced lifetimes of the decaying particles - e.g. the neutron. The beta decay also plays a role in the fusion process H + H -> He. In the 1st step, a deuterium nucleus will be formed: p + p -> p + n + positron + neutrino. So, a beta+ decay of the proton is involved. A higher coupling constant would increase the rate of the fusion H + H -> He.

Last edited: Jan 25, 2014

3. Jan 26, 2014

### thatoekhant

Thanks. But some say that if neutrons were rare, helium atoms would have been rare during the big bang. But I think even if there had not been sufficient helium and only hydrogen atoms existed, the stars that formed would have converted hydrogen to helium by changing a proton to a neutron, so helium would not be rare anyway. That would make the production of heavier elements possible, because there would be helium atoms produced by stars converting hydrogen to helium. So, I think the rarity of helium atoms during the big bang is not a problem. Is that right? Please!

4. Jan 26, 2014

### llynne

The fact that the weak nuclear force is at a critical point of balance is significant. The events of the big bang (if there was such a thing) would not affect things so much as an ongoing effect of shorter particle lifetimes or reduced stability in atoms. The values of various forces hold atoms in a balanced way. The strong and weak nuclear forces and electromagnetic forces set up repulsive/attractive fields which locate each particle within a certain zone of an atom or molecule. Our understanding of it depends upon careful study of what we can measure of nanoscopic interactions. I think a little knowledge is a bad thing. How can you propose that it wouldn't matter if the weak nuclear force were different? And please, rather than "some say", grab a reference. Tell us who says it; that really helps to explain what you mean.

5. Jan 27, 2014

### thatoekhant

Thanks a lot. I would like to ask some questions. If there were no weak nuclear force, the sun would not burn because there would be no deuterium, and diprotons are extremely unstable. So, the sun would not burn. Is that right? Besides, may I know the lifetime of a diproton, please? And also, may I know whether the mass of the formed diproton is less than that of two H-1 hydrogen atoms or not? Does a proton decay to a neutron every 10 minutes in the sun?

Last edited: Jan 27, 2014

6. Jan 27, 2014

### Staff: Mentor

If there were no weak force, the big bang would have happened completely differently. So differently that I have no idea what our universe would look like. Assuming the weak force would have "vanished" in some way after the big bang: neutrons would be stable and fuse with protons quickly, so deuterium would not be an issue. This could shorten the lifetime of stars, as deuterium fusion is way quicker than the proton-proton reaction. Diprotons are so short-lived that the decay process has to happen "nearly at the same time". I don't think that question makes sense.

7. Feb 7, 2014

### thatoekhant

I would like to ask a question.
As far as I know, the number of protons is greater than that of neutrons in the universe. I have read that this is because the mass of the neutron is slightly greater than that of the proton. Could someone explain to me the relationship between the masses of particles and their present numbers?
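No reply to this last question appears in the thread. For what it's worth, a rough back-of-envelope sketch of the standard textbook answer (ours, not from the thread; the freeze-out temperature below is only approximate): in thermal equilibrium the neutron-to-proton ratio follows a Boltzmann factor, so the slightly heavier neutron ends up rarer once the weak interactions freeze out.

```python
import math

delta_m_c2 = 1.293   # neutron-proton mass-energy difference, MeV
kT_freeze = 0.8      # rough weak-interaction freeze-out temperature, MeV (assumed)

n_over_p = math.exp(-delta_m_c2 / kT_freeze)
print(f"n/p at freeze-out ~ {n_over_p:.2f}")   # about 0.2, i.e. roughly 1 neutron per 5 protons
```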
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9009400010108948, "perplexity": 688.4708926202335}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676592150.47/warc/CC-MAIN-20180721012433-20180721032433-00508.warc.gz"}
http://physics.stackexchange.com/questions/6074/do-we-take-gravity-9-8-m-s%c2%b2-for-all-heights-when-solving-problems-why-or-why
# Do we take gravity = 9.8 m/s² for all heights when solving problems? Why or why not?

Do we take gravity = 9.8 m/s² for all heights when solving problems?

- I take it as 10 - it makes doing the approximation head-doable – Dirk Bruere Jun 19 at 10:06

No, the value $9.8\frac{\mathrm{m}}{\mathrm{s}^2}$ is an approximation that is only valid at or near the Earth's surface. You can go a few miles up or down and it'll still be good enough, but once you get any significant distance away from the surface of Earth, you would need to use a different value for gravitational acceleration. You can calculate the value from Newton's law of gravitation, $F = Gm_1m_2/r^2$, and you'll get $$g = \frac{GM}{r^2} = \frac{3.99\times 10^{14}\ \mathrm{m^3/s^2}}{r^2}$$ where $M$ is the mass of the Earth and $r$ is the distance from the Earth's center to the point for which you are doing the calculation.

- It's strange that you would measure the height in miles when the value for gravity is in meters per second squared. – LDC3 Nov 3 '14 at 2:53
Not really; miles are a common unit for height. And it's trivial to convert when needed. – David Z Nov 3 '14 at 2:57
Maybe you could cite that $g = \frac{GM}{r^2} \approx \frac{4 \cdot 10^{14}}{r^2}$ (error: 0.3%). r in meters, g in m/s² – André Neves Nov 3 '14 at 3:00
Maybe if you are a geologist, but for most scientists, we rarely use anything but SI units. – LDC3 Nov 3 '14 at 3:03
@LDC3 some scientists rarely use anything but SI units, I'm sure, but many branches of science have their own conventional unit systems. In particle physics we use natural units ($c$ and $\hbar$ set to 1), in condensed matter they often use some lattice spacing as a length unit, in cosmology they use megaparsecs or the Hubble radius, and so on. The point is, a qualified scientist is capable of understanding the science regardless of what units are used. – David Z Nov 3 '14 at 4:03

To expand a little on David's point, assume we move from the nominal "surface" where $g$ is $9.8\text{ m}/\text{s}^2$ to another point at radius $r + \Delta r$. How much does the acceleration of gravity change? $$g = \frac{GM}{(r+\Delta r)^2} = \frac{GM}{r^2(1 + \Delta r/r)^2}$$ and as long as $\Delta r$ is small compared to $r$ we can reasonably approximate this as $$g \approx \frac{GM}{r^2}\left(1 - 2\frac{\Delta r}{r}\right) .$$ Well, the radius of the Earth is about $6000 \text{ km}$, so the approximation is good at less than 1% error for around $30\text{ km}$ up or down from the nominal surface, which is all the land and sea floor, and a bit up and down from there. It is also worth noticing that, due to variations in the local mass density of the Earth, the measured value of $g$ even at the surface can vary by several tenths of a percent.

- In fact, the measured variations in $g$ are very useful to geophysicists, oil prospectors, etc. – Ted Bunn Feb 28 '11 at 14:56
I was reading an article on the use of optical lattice clocks today which explained how such clocks allow an even more precise measurement of these changes - useful to evaluate height of water table, prospect for oil and gas, etc. – Floris Nov 3 '14 at 4:18

$g$ becomes $g \approx 9.7 \frac{m}{s^2}$ at a height of about 35 km, so it would be OK to use the value $9.81$ for "down to earth" problems.
The relevant wikipedia article has lots of useful information, like for example the following approximation formula for different heights: $$g_h=g_0\left(\frac{r_e}{r_e+h}\right)^2$$ where $g_h$ is the gravitational acceleration at height $h$ above sea level, $r_e$ is the Earth's mean radius, and $g_0$ is the standard gravity.

- that's not an approximation, it's exact (as long as you assume earth as a point mass...) – Tobias Kienzler Feb 28 '11 at 8:50
@Tobias: It's an approximation in the sense that it treats earth 1) as a point or a perfect sphere 2) not rotating, etc... – Eelvex Feb 28 '11 at 9:51

It might also be worth mentioning that $g$ isn't even constant over the earth's surface at sea level. Depending on the mass distribution and the shape (not perfectly spherical!) of the earth, different parts of the world have different $g$.

Acceleration due to gravity, $g$, is not a universal constant like $G$. It's calculated by the formula mentioned in previous answers. So, for a constant-mass system, $g$ depends only on $r$ (the distance between the center of the earth and the object in the problem). As $r = R + h$ ($R$ is the radius of the earth and $h$ is the height of the object above the surface) and $R$ is constant, $g$ depends mainly on height. The relation: increase the height, and $g$ becomes smaller (as per the formula). The value 9.8 m/s² is valid for an object at the surface of the earth (at sea level). When the height is small (with respect to the radius of the earth), the value is only slightly less than 9.8 m/s², so this variation can be neglected for high school etc. problems. When accuracy is important (for scientific reasons etc.), the value of $g$ can't be taken as 9.8 m/s². Once again, this consideration is valid only for a constant-mass system. Plus, for relativistic systems, the formula isn't valid with a constant space and time scale.

The approximation of 9.81 m/s² is a generalisation. The exact value is most likely different at a specific location, due to the distance from the centre of the earth to the point being evaluated. The reference to "surface of the earth" is also relative, since the earth is known not to be perfectly round due to centrifugal forces making the radius greater at the equator. Also, since the earth is spinning, the same centrifugal effects have a slight influence on the apparent weight of an object at the evaluation point. In metrology laboratories, the exact value for $g$ is displayed for that exact location.

As we go above or below the surface of the earth, the value of $g$ decreases; above the surface, $g$ falls off as the inverse square of the distance from the earth's center.
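As a quick check of the formulas quoted in the answers above, the following snippet (ours) compares the exact inverse-square value with the linearized approximation $g_0(1 - 2\Delta r/r)$ from the earlier answer; the constants are the ones given there.

```python
G_M = 3.99e14     # gravitational parameter GM of Earth, m^3/s^2 (from the answer above)
R_E = 6.371e6     # mean Earth radius, m

def g_exact(h):
    return G_M / (R_E + h) ** 2

def g_linear(h, g0=9.81):
    return g0 * (1 - 2 * h / R_E)

for h in (0.0, 10e3, 35e3, 400e3):   # sea level, airliner, ~stratosphere, ISS orbit
    print(f"h = {h/1e3:6.0f} km: exact {g_exact(h):.2f}, linear {g_linear(h):.2f} m/s^2")
```

At 35 km both values are close to the 9.7 m/s² quoted above, while at ISS altitude the linear approximation begins to drift visibly from the exact value.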
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9457173943519592, "perplexity": 564.4360470014512}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-35/segments/1440644065306.42/warc/CC-MAIN-20150827025425-00290-ip-10-171-96-226.ec2.internal.warc.gz"}
https://stats.stackexchange.com/questions/185037/statistical-quality-control-p-chart
# Statistical Quality Control p-chart

A company manufactures small metal brackets. They are packaged in containers of 1000 brackets each. At the unloading facility, 10 containers have arrived and 36 brackets are selected at random from each container. The fraction non-conforming in each sample is $0.0,\ 0.0,\ 0.0,\ 0.01,\ 0.02,\ 0.02,\ 0.06,\ 0.0,\ 0.0$, and $0.0$.

(i) Do the data from this shipment indicate statistical control?

(ii) What is the minimum sample size that would give a positive lower control limit for this chart?

I tried to attempt this question by using the p-chart to determine statistical quality control.

(i) Here, $n = 36$ and so, $\overline p = \frac{\sum p_i}{10} = \frac{0.2}{10} = 0.02$

The $3\sigma$ control limits for the p-chart are given by: $\overline p \pm 3\sqrt{\frac{\overline p (1 - \overline p)}{n}}$

Hence, we have:

Lower Control Limit (LCL) = $-0.05 \approx 0$ (as the LCL cannot be negative)

Center Line (CL) = $\overline p$ = 0.02

Upper Control Limit (UCL) = 0.09

We see that all the points lie within these control limits. So, we can say that the shipment is in statistical control.

For the (ii) part of the question, we need: $LCL > 0$

$\Rightarrow \overline p > 3\sqrt{\frac{\overline p (1 - \overline p)}{n}}$

Taking $\overline p = 0.02$ and solving the above inequality, I finally got: $n > 441$

So, the minimum sample size should be $442$ for the LCL to be positive.

Can someone please tell me if my approach is correct or not?

As long as the sample size is consistent, you should be using an $np$ chart instead of a $p$ chart. There is the *huge* problem that with 36 samples, the numbers should be $0.00, 0.00, 0.00, 0.03, 0.03, 0.03, 0.06, 0.00, 0.00$ and $0.00$: $\frac{1}{36}=0.02\overline{7}$ and $\frac{2}{36}=0.0\overline{5}$. The values provided in the sample problem are impossible.
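A small sketch (helper names ours) reproducing the computations in the question, both the control limits and the minimum sample size for a strictly positive LCL:

```python
import math

def p_chart_limits(p_bar, n):
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    lcl = max(p_bar - 3 * sigma, 0.0)        # clip a negative LCL to 0
    return lcl, p_bar, p_bar + 3 * sigma     # LCL, CL, UCL

print(p_chart_limits(0.02, 36))              # (0.0, 0.02, ~0.09)

# Positive LCL requires p_bar > 3*sqrt(p_bar*(1-p_bar)/n), i.e. n > 9*(1-p_bar)/p_bar.
p_bar = 0.02
n_min = math.floor(9 * (1 - p_bar) / p_bar) + 1
print(n_min)                                 # 442
```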
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8833328485488892, "perplexity": 836.838218859379}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662124.0/warc/CC-MAIN-20190119034320-20190119060320-00314.warc.gz"}
http://www.kurims.kyoto-u.ac.jp/~arakawa/index-e.html
# Tomoyuki Arakawa E-MAIL: arakawa at kurims.kyoto-u.ac.jp Research Interest: Representation Theory, Vertex Algebras Last Update: 25-Oct-2016 Curriculum Vitae (pdf) I will give a graduate course at MIT for Spring Semester, 2016. The details will be here. RIMS Representation Theory Seminar Workshop on W-algebras, November 28 - December 2, 2016, University of Melbourne. ESI conference on Geometry and Representation Theory, January 16-27, 2017, Vienna Past Conferences Papers 1. (with H. Yamada and H. Yamauchi) Vertex operator algebras associated with Z/kZ-codes, to appear in Springer Proceedings in Mathematics & Statistics, Vol. 191. 2. (with K. Kawasetsu) Quasi-lisse vertex algebras and modular linear differential equations, arXiv:1610.05865 [math.QA]. 3. (with A. Moreau) On the irreducibility of associated varieties of W-algebras, arXiv:1608.03142 [math.RT]. 4. (with V. Futorny, L.-E. Ramirez) Weight representations of admissible affine vertex algebras, arXiv:1605.07580 [math.RT]. 5. Introduction to W-algebras and their representation theory, arXiv:1605.00138 [math.RT]. 6. (with A. Moreau) Sheets and associated varieties of affine vertex algebras, arXiv:1601.05906 [math.RT]. 7. (with T. Creutzig and A. Linshaw) Cosets of Bershadsky-Polyakov algebras and rational $\mathcal{W}$-algebras of type $A$, arXiv:1511.09143 [math.RT]. 8. (with A. Moreau) Joseph ideals and lisse minimal W-algebras, J. Inst. Math. Jussieu, published online. 9. (with W. Wang) Modular affine vertex algebras and baby Wakimoto modules,Proc. Symp. Pure Math.,Volume: 92 (2016),1-16. 10. (with A. Molev) Explicit generators in rectangular affine W-algebras of type A, arXiv:1403.1017 [math.RT], to appear in Lett. Math. Phys. 11. Rationality of W-algebras: principal nilpotent cases, Ann. Math. 182 (2015), 565-604. 12. Rationality of admissible affine vertex algebras in the category O, Duke Math. J, Volume 165, Number 1 (2016), 67-93. 13. Two-sided BGG resolutions of admissible representations, Represent. Theory 18 (2014), 183-222. 14. (with C.-H. Lam and H. Yamada) Zhu's algebra, C_2-algebra and C_2-cofiniteness of parafermion vertex operator algebras, Adv. Math., vol.264 (2014), 261--295. 15. (with T. Kuwabara and F. Malikov) Localization of affine W-algebras, Comm. Math. Phys, April 2015, Volume 335, Issue 1, pp 143-182. 16. W-algebras at the critical level, Contemp. Math., 565, 1--14, 2012. 17. Rationality of Bershadsky-Polyakov vertex algebras, Comm. Math. Phys., October 2013, Volume 323, Issue 2, pp 627-633. 18. Associated varieties of modules over Kac-Moody algebras and $C_2$-cofiniteness of W-algebras, Int. Math. Res. Notices (2015) Vol. 2015 11605--11666. 19. A remark on the $C_2$-cofiniteness condition on vertex algebras, Math. Z. vol. 270, no. 1-2, 559-575, 2012. 20. (with F. Malikov) A vertex algebra attached to the flag manifold and Lie algebra cohomology, AIP Conf. Proc. 1243, pp. 151-164, 2009, arXiv:0911.0922 [math.AG]. 21. (with P. Fiebig) The linkage principle for restricted critical level representations of affine Kac-Moody algebras, Compos. Math., 148, 1787--1810, 2012. 22. (with F. Malikov) A chiral Borel-Weil-Bott theorem, Adv. Math., 229 (2012) 2908-2949. 23. (with P. Fiebig) On the restricted Verma modules at the critical level, Trans. Amer. Math. Soc. 364 (2012), 4683-4712. 24. (with D. Chebotarov and F. Malikov) Algebras of twisted chiral differential operators and affine localization of $g$-modules, Sel. Math. New Ser., vol.17, no. 1, 1-46, 2011. 25. Representation theory of W-algebras, II, Adv. Stud. 
Pure Math. 61(2011), 51--90. 26. Characters of representations of affine Kac-Moody Lie algebras at the critical level, arXiv:0706.1817v2 [math.QA]. 27. Representation Theory of W-Algebras, Invent. Math., Vol. 169 (2007), no. 2, 219--320. 28. A New Proof of the Kac-Kazhdan Conjecture, Int. Math. Res. Not. 2006, Art. ID 27091, 5 pages. 29. Representation Theory of Superconformal Algebras and the Kac-Roan-Wakimoto Conjecture, Duke Math. J., Vol. 130 (2005), No. 3, 435--478. 30. Vanishing of cohomology associated to quantized Drinfeld-Sokolov reduction, Int. Math. Res. Not. 2004, no. 15, 730--767. 31. Drinfeld functor and finite-dimensional representations of Yangian, Comm. Math. Phys. 205 (1999), no. 1, 1--18. 32. (with T. Suzuki) Duality between $sl_n(C)$ and the degenerate affine Hecke algebra, J. Algebra 209 (1998), no. 1, 288--304. 33. (with T. Suzuki and A. Tsuchiya) Degenerate double affine Hecke algebra and conformal field theory. Topological field theory, primitive forms and related topics (Kyoto, 1996), 1--34, Progr. Math., 160, Birkhäuser, 1998. 34. (with T. Nakanishi, K. Oshima and A. Tsuchiya) Spectral decomposition of path space in solvable lattice model. Comm. Math. Phys. 181 (1996), no. 1, 157--182.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8960474133491516, "perplexity": 3505.604933967021}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988722951.82/warc/CC-MAIN-20161020183842-00217-ip-10-171-6-4.ec2.internal.warc.gz"}
https://link.springer.com/article/10.1186%2Fs12862-018-1326-7
BMC Evolutionary Biology, 19:22

# Improved inference of site-specific positive selection under a generalized parametric codon model when there are multinucleotide mutations and multiple nonsynonymous rates

• Katherine A. Dunn • Toby Kenney • Hong Gu • Joseph P. Bielawski

Open Access Methodology article

## Abstract

### Background

An excess of nonsynonymous substitutions, over neutrality, is considered evidence of positive Darwinian selection. Inference for proteins often relies on estimation of the nonsynonymous to synonymous ratio (ω = dN/dS) within a codon model. However, to ease computational difficulties, ω is typically estimated assuming an idealized substitution process where (i) all nonsynonymous substitutions have the same rate (regardless of impact on organism fitness) and (ii) instantaneous double and triple (DT) nucleotide mutations have zero probability (despite evidence that they can occur). It follows that estimates of ω represent an imperfect summary of the intensity of selection, and that tests based on the ω > 1 threshold could be negatively impacted.

### Results

We developed a general-purpose parametric (GPP) modelling framework for codons. This novel approach allows specification of all possible instantaneous codon substitutions, including multiple nonsynonymous rates (MNRs) and instantaneous DT nucleotide changes. Existing codon models are specified as special cases of the GPP model. We use GPP models to implement likelihood ratio tests for ω > 1 that accommodate MNRs and DT mutations. Through both simulation and real data analysis, we find that failure to model MNRs and DT mutations reduces power in some cases and inflates false positives in others. False positives under traditional M2a and M8 models were very sensitive to DT changes. This was exacerbated by the choice of frequency parameterization (GY vs. MG), with rates sometimes > 90% under MG. By including MNRs and DT mutations, accuracy and power were greatly improved under the GPP framework. However, we also find that over-parameterized models can perform less well, and this can contribute to degraded performance of LRTs.

### Conclusions

We suggest GPP models should be used alongside traditional codon models. Further, all codon models should be deployed within an experimental design that includes (i) assessing robustness to model assumptions, and (ii) investigation of non-standard behaviour of MLEs. As the goal of every analysis is to avoid false conclusions, more work is needed on model selection methods that consider both the increase in fit engendered by a model parameter and the degree to which that parameter is affected by un-modelled evolutionary processes.
## Keywords

Codon model · Positive selection · Protein evolution · Multiple nucleotide mutations · Multiple nonsynonymous rates · M-series models · G-series models · Likelihood ratio test · False positives · Model misspecification · Codon frequencies

## Abbreviations

- DT: Double-triple
- GA: Genetic algorithm
- GPP: General purpose parametric model
- GTR: General time reversible model
- GY: Goldman and Yang
- HI: Hydrophobicity index
- HTLV: Human T-lymphotropic virus
- LRT: Likelihood ratio test
- MEP: Mixed empirical and parametric model
- MG: Muse and Gaut
- MLE: Maximum likelihood estimate
- MNR: Multiple nonsynonymous rates
- PCP: Physiochemical-constrained parametric
- REV: Fully reversible model
- SBA: Smoothed bootstrap aggregation
- SNR: Single nonsynonymous rate

## Background

Markovian models of codon evolution have been extensively developed and tested over the last decade, largely due to their value in investigations of functional divergence at the molecular level (see Anisimova and Liberles [1] for a recent review). Unlike an amino acid model, the rate of evolution prior to selection at the level of the protein (i.e., the rate of synonymous codon substitution, or dS) can be readily estimated under a model of codon substitution. Comparing that rate to the rate of evolution after the effect of selection on the protein (i.e., the rate of nonsynonymous codon substitutions, or dN) leads to an easily interpretable index of natural selection pressure. Specifically, the ratio ω = dN/dS is estimated from a dataset and interpreted in terms of purifying selection (ω < 1), neutral evolution (ω = 1), or positive selection (ω > 1). Codon models used in this way can be divided into two very broad groups based on their treatment of how physiochemical properties of amino acids might impact the probability of a nonsynonymous substitution. One group of models assumes a single instantaneous rate for all amino acid exchanges. This leads to a single selective regime (i.e., one ω) for all nonsynonymous substitutions, regardless of how radical or conservative a change in amino acid physiochemical property. We follow Delport et al. [2] in referring to these as single-nonsynonymous rate (SNR) models (see Table 1 for definitions of all the model-related acronyms used in this study). The other group of models attempt to relax the SNR restriction by permitting multiple-nonsynonymous rates (MNR). Interestingly, SNR models are much more widely used in studies of protein functional divergence despite well-known variability in amino acid replacement rates, as inferred from large protein sequence databases [3, 4, 5].
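Because everything that follows turns on classifying single-nucleotide codon changes as synonymous or nonsynonymous (and as transitions or transversions), a small self-contained sketch of that classification may help; this illustration of the standard genetic code is ours, not code from the study:

```python
bases = "TCAG"
# Standard genetic code in TCAG x TCAG x TCAG order ("*" marks stop codons)
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {a + b + c: aas[16 * i + 4 * j + k]
               for i, a in enumerate(bases)
               for j, b in enumerate(bases)
               for k, c in enumerate(bases)}
purines = {"A", "G"}

def single_nucleotide_changes(codon):
    """Yield each one-nucleotide neighbour of a sense codon, labelled
    synonymous/nonsynonymous and transition/transversion; changes to
    stop codons are skipped, as in the codon models discussed here."""
    for pos in range(3):
        for b in bases:
            if b == codon[pos]:
                continue
            nb = codon[:pos] + b + codon[pos + 1:]
            if codon_table[nb] == "*":
                continue
            syn = codon_table[nb] == codon_table[codon]
            ts = (codon[pos] in purines) == (b in purines)
            yield nb, ("synonymous" if syn else "nonsynonymous",
                       "transition" if ts else "transversion")

# Example: the nine sense neighbours of TTT (Phe)
for nb, labels in single_nucleotide_changes("TTT"):
    print(nb, *labels)
```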
Table 1 Descriptions of the model-related acronyms

| Acronym | Description |
| --- | --- |
| DT | Indicates that a model allows simultaneous double (D) and triple (T) nucleotide changes between codons |
| G0 | A GPP codon model employing a single ω parameter |
| G1aX | A GPP codon model with the same discrete mixture of two ω parameters as model M1a; the total number of free parameters in the model is given by X, and varies depending on how DT and exchangeabilities are modeled |
| G2aX | A GPP codon model with the same discrete mixture of three ω parameters as model M2a; the total number of free parameters in the model is given by X, and varies depending on how DT and exchangeabilities are modeled |
| GPP | General-Purpose Parametric (GPP) modelling framework for codons |
| GTR | General Time Reversible (GTR) model for single nucleotide changes |
| GY | The codon modelling framework of Goldman and Yang [26], where the transition probability is proportional to the target codon frequency |
| M0 | A codon model employing a single ω parameter, as implemented in PAML [54] |
| M1a | A codon model employing a constrained discrete mixture of two ω parameters [45] |
| M2a | A codon model employing a constrained discrete mixture of three ω parameters [45] |
| M3 | A codon model employing an unconstrained discrete mixture of k independent ω parameters [6] |
| M8 | A codon model employing a discretized β distribution to model among-site variability in ω [6] |
| MEP | Mixed Empirical and Parametric (MEP) models, which combine empirical estimates of exchangeabilities with so-called mechanistic parameters of codon evolution |
| MG | The codon modelling framework of Muse and Gaut [47], where the transition probability is proportional to the target nucleotide frequency |
| MNR | A class of models allowing Multiple Nonsynonymous Rates (MNR) of exchangeability between codons |
| PCP | Physiochemical-Constrained Parametric (PCP) models, which parameterize the influence of physiochemical constraints on nonsynonymous changeability |
| REV | A fully reversible codon model described by a 61 × 61 matrix Q, where all codon exchangeabilities are independent parameters of the model |
| SNR | A class of models allowing only a Single Nonsynonymous Rate (SNR) of exchangeability between codons |

The primary reason for employing SNR models is computational convenience. In addition to needing only a single ω parameter, substitutions between codons having two or more nucleotide differences are often assigned zero probability. By employing both restrictions, the number of parameters in the codon rate matrix is reduced from thousands to just a few. For example, in addition to ω, a typical formulation might only require parameters for the transition/transversion ratio (κ) and the equilibrium codon frequency of the ith codon (πi). Such simplification facilitates the extension of SNR models to permit variation in selection regimes among sites (e.g., [6, 7]), branches [8], or both (e.g., [9, 10]) while keeping model complexity low enough for single-gene datasets. Simulation studies indicate that extending SNR models in this way substantially increases power to detect adaptive molecular evolution (e.g., [7, 9, 11]), and experimental assessment of the results of SNR models has validated their utility in a wide variety of real datasets (e.g., [12, 13, 14, 15]). One strategy for model improvement is to increase mechanistic realism while avoiding over-parameterization [16]. Thus, modelling variability in amino acid exchangeabilities through MNR codon models should improve inferences about functional divergence [2, 17, 18].
However, given the size and complexity of the codon rate matrix, this is a challenging task and a variety of strategies have been explored. Here, we divide those strategies into three categories: (i) mixed empirical and parametric (MEP) models; (ii) physiochemical-constrained parametric (PCP) models and (iii) general-purpose parametric (GPP) models. Below we provide a brief review of those models implemented for the purpose of making inference about the process of molecular evolution. Note that Schneider et al. [19] were the first to construct a codon model having heterogeneous amino acid exchangeabilities. Because the purpose of their model was to aid the process of alignment it will not be considered further. MEP models combine empirical estimates of exchangeabilities with so-called mechanistic parameters of codon evolution (e.g., ω, κ, and πi). Doron-Faigenboim and Pupko [17] chose to integrate existing empirical amino acid exchangeability matrices with such mechanistic parameters. In this situation, nonsynonymous exchangeabilities between codons are set equal to amino-acid exchangeabilities (189 parameters) previously derived from large sets of amino acid sequences. Kosiol et al. [18] used a massive dataset to estimate the first fully empirical codon model (1830 codon exchangeability parameters) and then combined those with mechanistic parameters for codon evolution. De Maio et al. [20] subsequently reduced that model’s complexity while maintaining comparable performance. The empirical matrices in these studies represent very broad averages of the propensity for amino acid change. Miyazawa [21] and Zoller and Schneider [22] developed different methods to tailor the information contained within an empirical exchangeability matrix to a specific dataset. The advantage of all these MEP approaches is that they separate the DNA level evolutionary process from the effect of selection acting on the protein. However, the ω parameter of MEP models no longer has the same interpretation as other codon models because database-derived exchangeability values reflect a broadly averaged effect of selection, and these influence the data-specific estimates of selection pressure derived from the ω parameter [18, 23]. Building upon the well-known relationship between substitution rates and the physiochemical differences of amino acids (e.g., Clark [24]; Grantham [25]), the PCP models explicitly parameterize the influence of physiochemical constraints on nonsynonymous changeability. Goldman and Yang [26] and Yang, Nielsen and Hasegawa [27] employed explicit mathematical functions to model the relationship between the ω parameter and physiochemical properties, and Yang [28] allowed the influence of the physiochemical property to vary among sites. Sainudiin et al. [29] and Wong et al. [30] implemented models that partition nonsynonymous changes into a small number of categories according to a pre-defined physiochemical property. As the purpose of those models was to test if certain physiochemical properties might be subject to natural selection, their parameterization is focused on comparing the rate of property-altering substitutions to the rate of property-conserving substitutions. Conant and Stadler [31] accounted for multiple amino acid properties by modelling exchangeabilities between nonsynonymous codons as a linear combination of five pre-specified measures of physiochemical property. 
The advantage of these PCP approaches is that they permit investigation of explicit relationships between physiochemical properties and selection pressure while seeming to avoid over parameterization of the codon model. However, the PCP approach requires strong assumptions about the relative importance of different properties, and they are not well suited to assessing the fit of alternative property scales (which are often non-independent). The space of possible physiological constraints is vast, and any given set of constraints neglects the potential importance of unique structural factors. The GPP models are fundamentally different from the MEP and PCP models in two ways: (i) they do not impose empirically estimated exchangeabilities on individual datasets, nor do they require the nonsynonymous substitution rate to depend on a pre-specified physiochemical property, and (ii) they seek to identify the best approximation of a fully-reversible (REV) codon model (a 61 × 61 Q matrix that fully determines the dynamics of the codon substitution process) for a given sequence alignment. The REV codon model is attractive because it is a way of relaxing the unrealistic restriction that all amino acid changes have a single instantaneous rate. The cost, however, is an independent parameter for the rate of exchangeability between every unique pair of amino acids, which is far too parameter-rich for an individual gene. Hence, the analytical objective of the GPP approach is to explain a set of data using as few MNR parameters as possible. Delport et al. [2] developed a promising model search-strategy based on a genetic algorithm (GA). The GA is employed to search for the best assignment of amino acid pairs to a set of exchangeability parameters, where the number of such model parameters is also estimated from the data. Zaheri, Dib and Salamin [32] developed a novel analytical framework whereby the full instantaneous rate matrix for codons (3721 elements) can be estimated from just 19 model parameters. The full codon matrix is obtained by using Kronecker product to combine three 4 × 4 nucleotide matrices specified for each position of the codon. Both approaches appear to capture important aspects of real protein-coding sequence evolution, but via very different strategies. However, the parameters of the 4 × 4 nucleotide matrices employed by Zaheri, Dib and Salamin [32] are not defined with respect to an explicit process of codon evolution, which limits their use for testing of codon-level evolutionary processes. Double and triple (DT) nucleotide substitutions between codons are biologically possible [33, 34, 35] as successive changes on a rapid time scale (e.g., promoted by compensatory pressures [36]), via mechanistic processes such as error-prone polymerase activity [37] or during the process of DNA break repair (e.g., Sakofsky et al. [38]). Although such rates are several orders of magnitude lower than single nucleotide substitutions between codons [39, 40, 41], models that permit DT changes yield significant improvements in their fit to real data, suggesting that they could be an important addition to codon models. Models allowing DT changes between codons include those of Doron-Faigenboim and Pupko [17], Kosiol, Holmes and Goldman [18], De Maio et al. [20], Miyazawa [21], Zoller and Schneider [22], Zaheri, Dib and Salamin [32], Venkat et al. [42] and Jones et al. [43]. De Maio et al. 
[20] suggest that some widely used models for ω heterogeneity could yield high false positive rates when applied to data where both MNRs and DT codon changes occur. The recent study by Venkat et al. [42] found that double changes alone can induce high false positive rates when branch-site codon models are used in branch-specific tests for positive selection. The MNR models of Delport et al. [2] and Zaheri, Dib and Salamin [32], as currently implemented, do not yet allow among-codon heterogeneity in ω. SNR models developed by Jones et al. [43] and Venkat et al. [42] are site-heterogeneous and permit multiple changes between codons, but do not permit MNRs or a general time reversible (GTR) nucleotide model. Because the GTR model has the maximum number of exchangeability (6) and frequency parameters (4) compatible with time-reversibility, it should help avoid the negative effect of model violations for the DNA-level substitution process [7, 44].

Here we introduce a novel pair of GPP models that benefit from (i) permitting DT codon changes, (ii) a full GTR nucleotide model, (iii) MNRs via heterogeneous amino acid exchangeabilities, and (iv) estimation of ω that is not confounded by average amino acid exchangeabilities estimated from a large database of proteins. These new models, referred to as G1a and G2a, use a discrete ω distribution similar to those used in the SNR models M1a and M2a [6, 45]. The ω distributions similar to M1a and M2a were chosen because the likelihood ratio test (LRT) derived from them appears to have reasonable power while maintaining some robustness to model misspecification [46]. These GPP models can be extended further so that the instantaneous rate matrix can take any form up to the REV codon model. We use simulation to evaluate testing for sites under positive selection under several different formulations of models G1a and G2a. We conclude by applying these models to a set of transmembrane proteins from Streptococcus.

## Methods

### SNR codon models M0, M1a, M2a, M3 and M8

Goldman and Yang [26] and Muse and Gaut [47] independently proposed similar formulations for modelling the Markovian substitution process between sense codons. Here we present the core formulation of Goldman and Yang [26], as it was developed into the models that form some of the LRTs investigated within this study. The instantaneous substitution rate between codons i and j (i ≠ j) at a single site within an alignment of protein-coding sequences is defined as:

$$q_{ij}=\begin{cases}0, & \text{if } i \text{ and } j \text{ differ by more than one nucleotide}\\ \pi_j, & \text{if } i \text{ and } j \text{ differ by a synonymous transversion}\\ \kappa\pi_j, & \text{if } i \text{ and } j \text{ differ by a synonymous transition}\\ \omega\pi_j, & \text{if } i \text{ and } j \text{ differ by a nonsynonymous transversion}\\ \omega\kappa\pi_j, & \text{if } i \text{ and } j \text{ differ by a nonsynonymous transition}\end{cases}$$

where the matrix Q specifies a continuous-time, stationary, time-reversible Markov process. Parameters πj, κ and ω specify the stationary frequency of codon j, the transition/transversion rate ratio, and the nonsynonymous/synonymous rate ratio, respectively.
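As an illustration only (a sketch of ours, not the codeml or COLD implementation), the M0 generator can be assembled directly from this definition:

```python
import numpy as np

bases = "TCAG"
# Standard genetic code in TCAG x TCAG x TCAG order ("*" marks stop codons)
aas = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
table = {a + b + c: aas[16 * i + 4 * j + k]
         for i, a in enumerate(bases)
         for j, b in enumerate(bases)
         for k, c in enumerate(bases)}
codons = [c for c in sorted(table) if table[c] != "*"]   # the 61 sense codons
purines = {"A", "G"}

def m0_generator(kappa, omega, pi):
    """Build the 61 x 61 M0 rate matrix Q from kappa, omega and a dict of
    equilibrium codon frequencies pi; double/triple changes get rate zero."""
    Q = np.zeros((61, 61))
    for x, ci in enumerate(codons):
        for y, cj in enumerate(codons):
            diff = [p for p in range(3) if ci[p] != cj[p]]
            if len(diff) != 1:           # skips i == j and all DT changes
                continue
            p = diff[0]
            rate = pi[cj]                # target codon frequency (GY style)
            if (ci[p] in purines) == (cj[p] in purines):
                rate *= kappa            # transition
            if table[ci] != table[cj]:
                rate *= omega            # nonsynonymous
            Q[x, y] = rate
    np.fill_diagonal(Q, -Q.sum(axis=1))  # each row of Q sums to zero
    return Q

pi = {c: 1 / 61 for c in codons}         # uniform frequencies for illustration
Q = m0_generator(kappa=2.0, omega=0.3, pi=pi)
```

In practice Q is further rescaled so that branch lengths are in expected substitutions per codon, and transition probabilities over a branch come from the matrix exponential (e.g., scipy.linalg.expm(Q * t)); both steps are omitted here for brevity.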
Because this formulation models all nonsynonymous changes using a single ω parameter, it is an example of an SNR model. The transition probability matrix P(t) is related to matrix Q by $P(t) = e^{Qt}$, thereby giving the probabilities for state changes over a branch of length t. The likelihood of a codon site for a given phylogenetic tree and branch lengths can then be calculated using the pruning algorithm of Felsenstein [48]. The above formulation is widely referred to as model M0, and it assumes that the intensity of natural selection (as captured by parameter ω) is the same for all sites in the codon sequence alignment. Model M0 was extended to a series of models that permit the ω parameter to vary among sites [6], which includes the models known as M1a, M2a, M3 and M8. Hereafter, the family of codon models derived from M0 that permit the ω parameter to vary among sites will be referred to as "M-series" models. All members of the M-series family are SNR models.

Models M1a and M2a [45] are widely used as the basis of an LRT for positive selection, and for empirical Bayes identification of positively selected sites within a multi-species alignment [49]. These models employ a restricted form of the ω distribution that, although highly idealized, leads to desirable properties for the LRT [11, 46]. Model M1a (a.k.a. nearly neutral) is a discrete mixture of two classes of sites: strictly neutral sites with ω1 = 1, and sites subject to purifying selection with ω0 estimated from the data but constrained to take a value < 1. The mixture weights for these classes of sites (p0 and p1) also are estimated from the data. Model M2a extends model M1a by adding a third class of sites for positive selection (ω+ > 1). As these models are nested, they serve as the basis of an LRT for sites evolving by positive selection.

Model M3 employs an unconstrained discrete distribution for ω [6]. In this model, sites are assumed to belong to k discrete classes, each having a parameter for selection (ωi) and a proportion of sites (pi) within the gene. An LRT of M3 against M0 (a special case of M3 where k = 1 and all sites have just a single ω) constitutes a test for variable selection intensity among sites [11]. In this study we use the LRT of M0 versus M3 (k = 2) to pre-screen the real datasets and thereby ensure each contains signal for among-site variation in the intensity of natural selection.

Model M8 uses a flexible β distribution to permit ω to vary among sites within the interval (0,1) and an extra discrete category that can allow ω+ > 1 [6]. For computational convenience the β distribution is divided into 10 bins. An LRT for positive selection is obtained by comparing a restricted form of M8 (ω+ = 1, fixed) to an alternative form of M8 (ω+ ≥ 1, estimated). In both models the mixture weights for the β distribution (p0) and ω+ (p+) are estimated from the data. This LRT represents a popular alternative to M1a and M2a as a test for sites evolving by positive selection.

### GPP codon models G1a and G2a

We developed GPP codon models that employ the same discrete distributions for ω as employed by M1a and M2a, but without requiring that any other simplifying assumptions be imposed on the data (e.g., SNRs, zero probability for DT changes, and restrictions on the GTR). These models are hereafter referred to as G1a and G2a.
Like M1a, model G1a assumes that data evolve under one of two discrete selective regimes: purifying selection and strict neutral evolution. Model G2a extends this by adding a class of sites evolving under positive selection. The restrictions, as well as the notation, are the same for the ω parameters (ω0 < 1, ω1 = 1, and ω+ > 1) and mixture weights (p0, p1 and p+). G1a and G2a are derived from a simple GPP codon model that includes the current models such as Goldman and Yang [26] and Muse and Gaut [47] as special cases. We refer to the basic form of this model, which has only a single class of sites, as G0.

The GPP model exploits the fact that a time-reversible process is expressible as the product of a matrix of exchangeability parameters (R) and the steady state frequencies (π), and uses a logarithm link function to link the non-zero off-diagonal elements of the 61 × 61 instantaneous codon matrix, Q = Rπ, to a linear model format (see online Additional file 1 for details). We assume R is symmetric, and the instantaneous rates can be written as $q_{ij} = \pi_j r_{ij}$, where πj is the equilibrium frequency of the jth codon, and the parameter rij determines the exchangeability between codons. In G0 the matrix of exchangeability parameters, R, is determined by a set of model parameters, β0, …, βn. For each βk there is a corresponding matrix X(k), and the value of rij for i ≠ j is determined by $\log(r_{ij}) = \sum_k \beta_k (X^{(k)})_{ij}$. The diagonal elements of R are set such that the rows of Q sum to 0. The first model parameter, β0, is a scaling factor set so that the branch lengths can be interpreted as the expected numbers of substitutions per codon site, and the other parameters β1, …, βn are intended to represent different mechanisms of the evolutionary process. This framework allows specification of all possible instantaneous codon substitutions, and any restrictions on the process are special cases of the general model where the instantaneous rate is set to zero (e.g., prohibition of codon substitutions involving DT nucleotide changes is a special case of the general model).

As the familiar SNR codon model M0 [26] is a special case of G0, it serves as a convenient way to illustrate how a GPP model is specified. M0 can be expressed within the GPP framework as follows:

$$q_{ij}=\begin{cases}0, & \text{if } i \text{ and } j \text{ differ by more than one nucleotide}\\ e^{\beta_0}\pi_j, & \text{if } i \text{ and } j \text{ differ by a synonymous transversion}\\ e^{\beta_0}e^{\beta_1}\pi_j, & \text{if } i \text{ and } j \text{ differ by a synonymous transition}\\ e^{\beta_0}e^{\beta_2}\pi_j, & \text{if } i \text{ and } j \text{ differ by a nonsynonymous transversion}\\ e^{\beta_0}e^{\beta_1}e^{\beta_2}\pi_j, & \text{if } i \text{ and } j \text{ differ by a nonsynonymous transition}\end{cases}$$

where $e^{\beta_0}$ is the required matrix scale factor, $e^{\beta_1}$ is equivalent to the transition/transversion rate ratio (κ), and $e^{\beta_2}$ is equivalent to the nonsynonymous/synonymous rate ratio (ω). Transitions are indicated by a matrix X(1) whose entries are 1 for all single nucleotide changes between codons that are transitions (and 0 for all other entries). Nonsynonymous changes are indicated by a matrix X(2) whose entries are 1 for all single nucleotide changes that yield a change in the encoded amino acid (and 0 for all other entries).
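The log link is easy to see numerically. The following toy sketch (our illustration, on an invented 3-state alphabet rather than the 61 sense codons) applies the link to a stack of indicator matrices and recovers M0-style rates:

```python
import numpy as np

def gpp_exchangeabilities(betas, X):
    """Apply the GPP log link entrywise: log r_ij = sum_k beta_k * X[k]_ij."""
    return np.exp(np.tensordot(betas, X, axes=1))

# X[0]: all-ones scale matrix; X[1]: transition indicator;
# X[2]: nonsynonymous indicator (toy values for a 3-state alphabet).
X = np.array([
    np.ones((3, 3)),
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 1], [1, 1, 0]],
], dtype=float)

# e^{beta_1} plays the role of kappa, e^{beta_2} the role of omega
betas = np.array([0.1, np.log(2.0), np.log(0.3)])
R = gpp_exchangeabilities(betas, X)   # symmetric because each X[k] is

pi = np.array([0.5, 0.3, 0.2])
Q = R * pi                            # q_ij = pi_j * r_ij for i != j
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))   # rows of Q sum to zero
```

Adding a further βk simply means stacking another matrix X(k); nothing else in the machinery changes, which is what makes the framework general.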
Note that the requirement that qij = 0 if i and j differ in more than one nucleotide position is explicitly enforced after applying the link function. By removing this requirement and extending X(1) and X(2) to include DT changes, we obtain an extension of G0 that permits multiple nucleotide changes between codons.

Model G0 (like model M0) is an SNR model because the nonsynonymous exchangeabilities are all equal. However, nonsynonymous exchangeabilities need not be constrained in this way. Any number of mechanisms for differences in nonsynonymous exchangeabilities can be added to the model through additional βi parameters. For example, empirical data indicate that differences in hydrophobicity among pairs of amino acids are well known to impact the probability of an amino acid substitution (e.g., Clark [24]; Grantham [25]). Taking hydrophobicity as an example, a matrix of pairwise differences in hydrophobicity between amino acids can be constructed from a given scale (e.g., the HI of Monera et al. [50]), and the nonsynonymous transition rate can then be linked to the exponent of the entries in this matrix via $e^{\beta_{HI}}$, where βHI is a fitted parameter in the model. Any such addition to the model yields a process of codon evolution having MNRs.

Restrictions on the DNA-level process of evolution also can be relaxed. For example, rather than the single parameter for the transition/transversion rate ratio (β1, in the above model), each DNA-level exchangeability can be modelled with a separate parameter (βAC, βCT, βAT, βTG, βCG). This leads to a codon model having a GTR process at the DNA level, which has been recommended when testing for positive selection (e.g., Kosakovsky Pond and Frost [7]). Parameterization of a codon model in terms of β1, …, βn means that process-variation among sites can be modelled with different random effects for different model parameters. In this study we develop GPP models motivated by M1a and M2a by using constrained discrete distributions to model among-site variation in the nonsynonymous rate (β2 in the above model). These models (G1a and G2a) extend M1a and M2a by permitting double and triple changes between codons, a full GTR process at the DNA level, and MNRs via the addition of β1, …, βn terms for different aspects of physiochemical constraints.

### Simulation based assessment of the G-series and M-series models

Simulation is used to evaluate MLE estimation under the new G-series models and the performance of several LRTs for positive selection (e.g., G1a vs G2a). Our overall design is comprised of 32 distinct evolutionary scenarios (Fig. 1), which serve as the basis for four simulation studies focused on different ways in which model-based inference could be impacted. Although the evolutionary details differ between the 32 scenarios, each is comprised of 100 replicate datasets, each having sequences of 300 codons in length. Simulated datasets were generated using methods implemented in version 1.2 of the COLD program "www.mathstat.dal.ca/~tkenney/Cold/".
COLD is an open source software package available for download from the COLD website "www.mathstat.dal.ca/~tkenney/Cold/", and from GitHub "https://github.com/tjk23/COLD". The commands used to generate the sequence data for this study, the relevant Newick tree files, and all multi-sequence alignments that were produced for each of the simulation studies, are available to download from the DRYAD repository for this study [51].

#### Simulation study 1

The purpose of this study is to investigate the impact of DT codon changes on the false positive rate. For this simulation we start with the 5-taxon tree and branch lengths of Wong et al. [45] (Fig. 1a). The generating process for this study is based on a selective regime at the codon level derived from a strictly neutral model of codon evolution (Fig. 1b). In this scenario 50% of the sites are subject to perfect purifying selection (ω = 0) and 50% are subject to neutral evolution (ω = 1). This scenario is often included in simulation studies as a "benchmark case" for LRTs (e.g., Kosakovsky Pond and Frost [7]; Anisimova et al. [11]; Wong et al. [45]; Bao et al. [46]). Here, we extend this benchmark case by adding DT changes between codons, with rates of 0.06 and 0.03, respectively. These are in accordance with the notion that their rates are substantially lower than the rate of single nucleotide substitution between codons [39, 40]. To enhance interpretability, we began by setting all GTR exchangeabilities to 1 and specifying equal nucleotide frequencies. This scenario is referred to as case 1a (Fig. 1b). We then extended this simulation study in two ways. The first extension was to increase the complexity of the nucleotide-level process by adding unequal GTR exchangeabilities and nucleotide frequencies (from [6]). This extension is referred to as case 1b (Fig. 1b). The next extension was designed to investigate the impact of taxon sampling. Each terminal branch of the 5-taxon tree in Fig. 1a was split by the addition of a second lineage, resulting in a 10-taxon tree. The total length of the new tree (sum of the branches) was set equal to that of the 5-taxon tree, but with the tree length re-distributed evenly among all branches (see online Additional file 2). Simulation over the 10-taxon tree was based on the more complex process of case 1b, and is referred to as case 1c. Each dataset was analysed with M1a and M2a, and with variants that permit DT changes (hereafter called G1aDT and G2aDT).

#### Simulation study 2

The purpose of this study is to investigate model performance using much more complex scenarios than the strictly neutral case above. The tree and branch lengths are derived from a set of 17 real β-globin sequences (Fig. 1a), and thus are the same for all scenarios. This tree has been used widely in previous simulation studies (e.g., [6, 11]). This study is comprised of 24 distinct scenarios (Fig. 1c). Each scenario is based on a mixture of sites having three distinct selective regimes. All scenarios have a large fraction of sites (77%) dominated by purifying selection (ω0 = 0.05). A moderate fraction of sites (20%) is assumed to evolve under moderate purifying selection (ω1 = 0.5) or neutrality (ω1 = 1.0). A small fraction of sites (3%) evolve with ω ≥ 1 (ω+ = 1.0, 1.5, 2.0 or 5.0). In addition, we employ heterogeneous GTR exchangeabilities and unequal nucleotide frequencies at the three positions of the codon, as estimated from a set of real β-globin sequences.
Lastly, we cover a range of nonsynonymous rate heterogeneity by specifying hydrophobicity factors ($e^{\beta_{HI}}$) of 1.0, 0.4 or 0.05. The hydrophobicity index of Monera et al. [50] was re-scaled by a factor of 100, so that it takes values in the interval [−1, 1], and the absolute value of the difference between the hydrophobicity of amino acids was computed for all pairs of amino acids. The matrix of these scores (online Additional file 3) was linked to the nonsynonymous substitution rate via a parameter in the GPP generating process (βHI). When βHI = 0, the matrix of hydrophobicity scores has no impact on nonsynonymous rates, yielding an SNR codon model ($e^{\beta_{HI}} = 1$). When $e^{\beta_{HI}} = 0.4$ or $0.05$, the process of codon evolution has MNRs, with $e^{\beta_{HI}} = 0.05$ yielding an extremely biased MNR model. As our primary interest is the effect of MNRs, we do not include DT codon changes in this study. Note that hydrophobicity is used for convenience to induce MNRs here; any property scale can be similarly used within this GPP framework.

Figure 1c indicates the relationship between the different scenarios in this study. Each scenario was analysed with three different pairs of models. The first was the pair of SNR models M1a and M2a. This pair represents an under-fit modelling scenario. The second pair was G1ax and G2ax, which represent GPP models having perfect fit to the generating process. The superscript x represents the number of mechanistic model parameters required for a perfect fit to a given scenario. The third pair of models was G1a13 and G2a13. In addition to the branch lengths, and each model's parameters for the ω distribution, these models have x = 13 additional parameters. The 13 additional parameters account for DT changes (2 parameters), 6 amino acid properties (polarity, volume, hydropathy, isoelectric point, polar requirement & composition), and GTR exchangeabilities (5 free parameters). Models G1a13 and G2a13 are used here to represent an over-fit modelling scenario.

#### Simulation study 3

The purpose of this study is to extend Study 2 by adding simultaneous DT nucleotide changes between codons. To minimize the computational burden, the impact of DT nucleotide changes was explored in a selected subset of six scenarios covered in Simulation Study 2. Specifically, we chose three different distributions for ω (see 2b, 2d, 2g in Fig. 1), and applied two hydrophobicity factors to each one. The hydrophobicity factors ($e^{\beta_{HI}}$) were 1.0 (yielding an SNR model) and 0.05 (yielding a highly variable MNR model). One ω distribution excluded positive selection (77% ω = 0.05 and 23% ω = 1.0). The other two ω distributions included positive selection (77% ω = 0.05; 20% ω = 0.50; 3% ω = 2.0, and 77% ω = 0.05; 20% ω = 1.0; 3% ω = 2.0). As in Study 2, the tree, branch lengths, GTR parameters and codon frequencies were derived from a set of real β-globin sequences. Also like Study 2, we used an under-fit model pair (M1a and M2a), a perfectly fit model pair (G1ax and G2ax), and an over-fit model pair (G1a13 and G2a13).

#### Simulation study 4

The purpose of this study was to investigate the impact of alternative model formulations on false positive rates for the M-series LRTs. Users of M-series models have many choices for how to (i) model the distribution of ω variability among sites, and (ii) parameterize codon frequencies within the model.
A comprehensive assessment of alternative ω distributions is beyond the scope of this study. For this reason we chose to assess the LRT for positive selection that compares M8 (ω+ = 1) with M8 (ω+ > 1), because it is a popular alternative and because M8 is based on a discretized β distribution. There are two fundamentally different approaches to parameterizing codon frequencies. One of them emphasizes the context of the nucleotide change within the complete codon, and employs the equilibrium frequency of the target codon (πj) to model transition probabilities ([26], hereafter denoted GY). The other emphasizes the independence of the process of mutation among sites, and employs the equilibrium frequency of the target nucleotide (j) at a single position (k) averaged over all codons (πjk) to model transition probabilities among codons ([47], hereafter denoted MG). Both approaches employ estimates of four nucleotide frequencies at each position of the codon (denoted F3 × 4), and thus each requires 9 free parameters. Despite having similar instantaneous rate matrices, these two Markov processes have different properties when codon frequencies are uneven (e.g., [52]). To investigate the effect of both kinds of modelling choices (ω distribution and codon frequencies), we applied both frequency parameterizations (πj vs. πjk) to the LRT of M1a vs. M2a and to the LRT of M8 (ω+ = 1) vs. M8 (ω+ > 1). This comparison yields four LRTs per simulation scenario, and because we were interested in false positive rates we applied those four LRTs to all nine null scenarios of Simulation Studies 1–3 (SNR: 1a, 1b, 1c, 2a, 2b & 3a; MNR: 2a, 2b & 3a). In these we covered M-series false positive rates for DT changes, MNRs, and the combination of DT changes and MNRs.

### Real data analyses

We analyzed a set of 24 Streptococcus transmembrane proteins. The data are derived from a previous phylogenomic analysis of Streptococcus genomes [53]. The homologous gene clusters identified in that study were filtered for clusters of transmembrane proteins with ≥4 unique sequences. The sequence alignments for these gene clusters range from 4 to 19 lineages, and include pathogens and their non-pathogenic relatives. The data were then pre-screened with an LRT for among-codon heterogeneity in ω ([6]: M0 vs M3). Three genes had no significant evidence for heterogeneity in ω according to this LRT and were excluded from subsequent analyses. The remaining 21 genes were tested using a pair of models that are (presumably) under-fit with respect to DT changes and MNRs (M1a and M2a), and a pair that can be considered mechanistically over-fit for at least some of their parameters (G1a13 and G2a13). As we do not know the true generating process for these data, we cannot analyze them using a perfectly fit model pair.

### Likelihood calculations and likelihood ratio tests

The values of the model parameters, including branch lengths, were estimated from the data via maximum likelihood. The only exception was the equilibrium frequencies, which were obtained from the empirical codon frequencies within each dataset. The SNR codon models M0, M3, M1a, M2a and M8 were fit to the data as implemented in the codeml program of the PAML package [54]. Fitting the G-series models described above was made possible by an efficient Hessian calculation for phylogenetic likelihood [55], and the GPP modelling framework implemented in version 1.2 of the COLD program "www.mathstat.dal.ca/~tkenney/Cold/". Model M1a differs from M2a only in the parameters of the ω distribution.
As these models are nested, and differ by two free parameters, the log-likelihood statistic (2Δ) should be approximately $\chi^2$ distributed with 2 degrees of freedom. However, the alternative model (M2a) is related to the null model (M1a) by fixing one of its mixture weights on the boundary (p+ = 0). This means that $\chi_2^2$ is not the correct reference distribution for the LRT statistic; however, we use it here because it is expected to be conservative in many scenarios. The GPP models used in this study employ the same ω distributions, and their LRTs are carried out in the same way. The method for calculation of phylogenetic likelihood under a GPP model is fully described in Kenney and Gu [55]. The implementation of the unique Hessian likelihood calculation, and the optimization routines employed to fit the GPP models to sequence data, are distributed via the COLD package as open source software "www.mathstat.dal.ca/~tkenney/Cold/, https://github.com/tjk23/COLD". COLD uses a variety of metrics to monitor convergence, but COLD's main convergence test is whether the expected improvement from the next step is less than 1e-10. To deal with some difficult cases, COLD will also claim convergence if the moving average of either expected or actual improvement is less than 1e-5, and will signal if the program has failed to make progress for a long time. Problematic cases of optimization are indicated when either (i) COLD fails to converge within the maximum number of iterations, or (ii) the likelihood of an alternative model is lower than the null (indicating convergence to a sub-optimal peak). In this study, if either outcome occurred, models were re-run several times with different initial values for the model parameters.

## Results

### Simulation study 1: False positives under a strictly neutral model with DT substitutions

Recent work suggests that the simplified assumptions employed by models M1a and M2a (e.g., prohibiting DT changes between codons) could negatively impact the inference of positive selection in some cases [18, 20]. To further investigate the impact of DT changes we generated data under the strictly neutral model, with rates of 0.06 and 0.03 for DT substitutions, respectively. Previous studies found that the false positive rate under the strictly neutral model (without DT substitution) was just 2% for the M1a vs. M2a LRT [11]. By adding DT substitutions, we found that the false positive rate increased to 49% at α = 0.05. Imposing additional process-heterogeneity at the DNA level (unequal GTR exchangeabilities and nucleotide frequencies) did not increase the false positive rate (rather, it declined to 22%). The analogous LRTs, carried out under GPP models that exactly match the generating process (G1aDT & G2aDT; α = 0.05), were much less sensitive. False positives were approximately 4% under equal GTR exchangeabilities and nucleotide frequencies, and when both GTR exchangeabilities and nucleotide frequencies were unequal. The strictly neutral scenario can be a challenging case for some models because the large fraction of sites on the boundary of positive selection (50% at ω = 1) can make it easy to obtain a false signal for positive selection (ω+ > 1) by chance at some sites. Indeed, for this reason it is often included in simulation studies as a "benchmark case" (e.g., [7, 11, 45, 46]). The M1a vs.
M2a LRT tended to perform well in many previous studies, which did not include DT changes, because the estimates for ω+ under M2a tended to be only a little > 1 and the estimated proportion of such sites (p+) tended to be very low. However, by including DT changes in our simulation scenario, the estimates of ω+ under M2a become upwardly biased in the 5-taxon case (Table 2), which leads to more false positives. To investigate whether the relatively long branches in the 5-taxon case represent a worst-case scenario (a large opportunity for DT changes to occur along a single branch), we doubled the number of taxa without increasing the total tree length (case 1c). While the median estimate of ω+ did get smaller (1.35 in case 1c), the signal for ω+ > 1 remained significant. This is because the estimated value of p+ increased from 0.28 to 0.49 under M2a when taxon sampling was increased from case 1b (complex model and 5-taxon tree) to case 1c (complex model, 10-taxon tree having shorter branch lengths). The effect of this on the LRT of M1a vs. M2a was an increase in the false positive rate from 22 to 48% (Table 2). Thus, the strategy of sampling additional taxa such that longer branches are shortened does not appear to be effective at mitigating the effect of DT misspecification on the LRT of M1a vs. M2a.

Table 2 False positive rates under a strictly neutral evolutionary process with DT nucleotide substitutions between codons

| Simulation | FPR: M1a vs M2a | FPR: G1aDT vs G2aDT | Median MLEs (M2a) | Median MLEs (G2aDT) |
| --- | --- | --- | --- | --- |
| 1a (simple, 5 taxa) | 0.49 | 0.04 | ω+ = 6.08, p+ = 0.37 | ω+ = 1.16, p+ = 0.33 |
| 1b (complex, 5 taxa) | 0.22 | 0.04 | ω+ = 10.9, p+ = 0.28 | ω+ = 1.37, p+ = 0.20 |
| 1c (complex, 10 taxa) | 0.48 | 0.04* | ω+ = 1.35, p+ = 0.49 | ω+ = 1.02, p+ = 0.35 |

One hundred replicates (sequence length = 300 codons) were simulated for each scenario. Simulation 1a is based on a simple model (equal DNA exchangeabilities and equal codon frequencies) evolved over a 5-taxon tree. Simulation 1b is based on a more complex generating process using DNA exchangeabilities and codon frequencies derived from a real dataset. Simulation 1b was extended to the case of a 10-taxon tree. Codon models fitted to simulation 1a assumed equal codon frequencies (fequal), and those fitted to simulation 1b used GY94-style F3 × 4 codon frequencies. The asterisk symbol (*) indicates that the results for simulation 1c under the 10-taxon tree are based on 97 replicates due to convergence problems with some datasets.

Although these results confirm the suggestion that DT changes can impact the M1a vs. M2a LRT, the strictly neutral scenario is a very unrealistic model for real protein coding sequences. Real sequences will have much more variability among sites in ω, and the fraction of strictly neutral sites (i.e., ω = 1), if any, will be much less than 50% (e.g., Yang et al. [6]). Moreover, in the case of real data analysis it is extremely unlikely that a fitted model will be an exact match to the true generating process; thus, the impact of model misspecification on the fitted values of ω+ is unavoidable. For these reasons we explore more realistic evolutionary scenarios in Simulation Studies 2 and 3, and we employ both under-fit and over-fit models to carry out the LRTs.
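All LRTs in these studies use the decision rule described in the Methods (twice the log-likelihood difference referred to χ² with 2 degrees of freedom). A minimal sketch of that computation, with hypothetical log-likelihood values purely for illustration:

```python
from scipy.stats import chi2

def lrt_positive_selection(lnL_null, lnL_alt, df=2, alpha=0.05):
    """LRT decision for nested codon models (e.g., M1a vs M2a, G1a vs G2a).
    The chi^2_df reference distribution is the conventional choice; as noted
    in the Methods it is expected to be conservative, because p+ sits on the
    boundary of the parameter space under the null."""
    stat = 2.0 * (lnL_alt - lnL_null)
    p_value = chi2.sf(stat, df)
    return stat, p_value, p_value < alpha

# Hypothetical log-likelihoods for illustration only
stat, p, reject = lrt_positive_selection(lnL_null=-2451.3, lnL_alt=-2447.9)
print(f"2*delta = {stat:.2f}, p = {p:.4f}, reject null: {reject}")
```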
### Simulation study 2: MNRs and more realistic distributions for ω variability among sites

Here we explore more realistic scenarios by adding (i) greater among-site variability in ω, (ii) a much smaller fraction of strictly neutral sites, (iii) a different GTR process for each position of the codon, and (iv) different levels of MNR evolution (Fig. 1b). We withhold DT changes from this study in order to focus on the effect of MNRs (DT changes will be combined with MNR evolution in Simulation Study 3). MNR evolution is induced by using hydrophobicity to determine the relationship between pairs of amino acids and their substitution probability. In this formulation, an H-score of 1 yields an SNR process, whereas an H-score of 0.05 yields a large MNR effect. Note that we do not mean to imply that hydrophobicity is the primary determinant of protein fitness; rather, we use it here as a simple means of inducing unequal exchangeabilities between amino acids. Although far simpler than real data, this MNR process is sufficient to permit us to explore the impact on parameter estimation and the LRT for positive selection.

Two ω distributions without positive selection (Fig. 1b: scenarios 2a and 2b) were employed as a means to investigate false positive rates. Very similar scenarios have been used before for this purpose [11, 45, 46], but assuming an SNR process. Consistent with the results reported in those previous studies, M1a vs M2a (hereafter LRT-1) has low false positives in the SNR case (Table 3: $e^{\beta_{HI}}$ = 1). Results were similar for an LRT based on a null GPP model that perfectly fits the data (G1ax vs G2ax: hereafter LRT-2), and an LRT based on a null GPP model that was over-parameterized (G1a13 vs G2a13: hereafter LRT-3). False positive rates were at, or below, the specified level for all three LRTs even after adding low MNR and high MNR to the generating evolutionary process (Table 3: $e^{\beta_{HI}}$ = 0.4; $e^{\beta_{HI}}$ = 0.05). The only challenge to inference that we observed was a small tendency for convergence problems when using the over-parameterized models in LRT-3. This is not surprising given that the models for LRT-3 are over-parameterized for both the number of categories in the ω distribution and the amount of MNR. Convergence problems can arise as a consequence of over-parameterization if the likelihood function becomes irregular or discontinuous over the parameter domain [56]. However, the finding that the false positive rate was relatively insensitive to a large MNR effect was surprising given the considerable amount of attention that has been focused on adding MNRs to codon models [17, 18, 20, 21, 22, 32].
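To make the MNR construction concrete, the sketch below builds the pairwise hydrophobicity-difference matrix and the resulting multiplicative factor on nonsynonymous exchangeabilities. The scale values shown are approximate, illustrative entries for a handful of amino acids (the study uses the full re-scaled Monera et al. [50] index):

```python
import numpy as np

# Approximate re-scaled hydrophobicity values in [-1, 1] for a few amino
# acids; illustrative only -- the study uses the full 20-amino-acid scale.
HI = {"F": 1.00, "I": 0.99, "L": 0.97, "G": 0.00, "K": -0.23, "D": -0.55}
aas = sorted(HI)

# Pairwise scores |HI_i - HI_j|, as described in the text
D = np.array([[abs(HI[a] - HI[b]) for b in aas] for a in aas])

def mnr_factor(hi_factor, D):
    """Multiplicative effect on each nonsynonymous exchangeability:
    (e^{beta_HI})^{D_ij}. A factor of 1 (beta_HI = 0) recovers SNR."""
    return hi_factor ** D

print(np.round(mnr_factor(0.05, D), 3))  # high MNR: dissimilar pairs suppressed
print(np.round(mnr_factor(1.0, D), 3))   # SNR: every pair keeps factor 1
```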
Table 3 False positive rates (null scenarios) and true positive rates (alternative scenarios) for three LRTs when the evolutionary process includes both ω variability among sites and MNRs

| Scenario | ω0 | ω1 | ω+ | LRT-1 (SNR) | LRT-2 (SNR) | LRT-3 (SNR) | LRT-1 (low MNR) | LRT-2 (low MNR) | LRT-3 (low MNR) | LRT-1 (high MNR) | LRT-2 (high MNR) | LRT-3 (high MNR) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 2a (null) | 0.05 | 0.5 | 1.0 | 0.00 | 0.01 | 0.01* | 0.00 | 0.01 | 0.00* | 0.00 | 0.00 | 0.00* |
| 2b (null) | 0.05 | 1.0 | 1.0 | 0.01 | 0.04 | 0.03* | 0.00 | 0.03 | 0.00* | 0.00 | 0.03 | 0.00* |
| 2c | 0.05 | 0.5 | 1.5 | 0.03 | 0.36 | 0.44 | 0.01 | 0.24 | 0.20* | 0.00 | 0.09 | 0.00* |
| 2d | 0.05 | 0.5 | 2.0 | 0.52 | 0.82 | 0.85 | 0.05 | 0.65 | 0.61* | 0.00 | 0.45 | 0.14* |
| 2e | 0.05 | 0.5 | 5.0 | 1.00 | 1.00 | 1.00 | 1.00 | 0.99 | 1.00 | 0.14 | 0.99 | 1.00* |
| 2f | 0.05 | 1.0 | 1.5 | 0.06 | 0.10 | 0.08 | 0.00 | 0.14 | 0.05 | 0.00 | 0.14 | 0.01* |
| 2g | 0.05 | 1.0 | 2.0 | 0.33 | 0.46 | 0.37 | 0.00 | 0.46 | 0.24 | 0.00 | 0.31 | 0.09* |
| 2h | 0.05 | 1.0 | 5.0 | 1.00 | 0.99 | 1.00 | 0.98 | 1.00 | 1.00 | 0.09 | 1.00 | 0.99* |

SNR: $e^{\beta_{HI}}$ = 1; low MNR: $e^{\beta_{HI}}$ = 0.4; high MNR: $e^{\beta_{HI}}$ = 0.05. Entries for the null scenarios (2a, 2b) are false positive rates; entries for the alternative scenarios (2c–2h) are true positive rates. LRT-1 compares M1a to M2a (under-fit models). LRT-2 compares G1ax to G2ax (perfect-fit models). LRT-3 compares G1a13 to G2a13 (over-fit models). The asterisk symbol (*) indicates scenarios where either convergence problems or suboptimal peaks were encountered for the models of LRT-3. To overcome these problems, models were re-fit to the same dataset multiple times, each using a different set of initial parameter values. The number of problematic datasets for SNR was 2a = 21 and 2b = 1; for low MNR was 2a = 27, 2b = 16, 2c = 16 and 2f = 10; and for high MNR was 2a = 29, 2b = 20, 2c = 35, 2e = 15, 2f = 15 and 2g = 1. Because using multiple initial values for the problematic datasets was successful, the results above are for all 100 replicates.

We used scenarios 2c through 2h to investigate the power of the same three LRTs over a range of signal for positive selection. LRT-based inference about positive selection should get easier with stronger signal for positive selection, i.e., via a bigger gap between ω1 and ω+, or with increasing ω+. This was the case for all three LRTs (Table 3). Power to reject the null was typically larger when there was a bigger gap between ω1 and ω+ (2c–2e vs. 2f–2h in Table 3) and with increasing values of ω+ (e.g., 2e > 2d > 2c in Table 3). The LRTs based on the GPP models (LRT-2 & LRT-3) tended to have more power than the traditional test (LRT-1); however, all three LRTs performed very well (~ 100%) when the signal was strong enough (2e and 2h in Table 3). Although the true relationship between these models and any real dataset will be unknown, it is almost certainly the case that the real evolutionary process will be more complex. These results are relevant, as they suggest a tendency for over-simplified models to have less power to detect positive selection.

Next we focused on the impact of MNRs on power by conditioning our comparisons on the signal for positive selection (Table 3: weak = 2c, 2f; moderate = 2d, 2g; strong = 2e, 2h). Inducing a low level of MNRs (by setting $e^{\beta_{HI}}$ = 0.4) yielded a reduction in power in all LRTs when the signal for positive selection was not strong. The decline was largest for LRT-1 in scenarios 2d (0.53 → 0.05) and 2g (0.35 → 0.00). The effect was similar for LRT-2 and LRT-3 in the same scenarios, but those tests still retained some power (ranging from 0.26 to 0.69). Power was reduced in scenarios 2c and 2f as well. Inducing a high level of MNRs (by setting $e^{\beta_{HI}}$ = 0.05) increased the effect.
Again, LRT-1 was most affected, as it had substantial reductions in power even in cases where signal for positive selection was strongest (2e and 2h).

The relationship between the strength of positive selection, the degree of MNR variation, and the power of the LRT is complex. The reason that all methods do best when strong signal for positive selection (ω+ = 5) is combined with either SNR or low MNRs is that there are more opportunities for nonsynonymous changes having ω > 1 to occur along a branch and thereby contribute to the empirical site pattern distributions for those scenarios. Alternatively, when there are high MNRs, nonsynonymous changes having ω > 1 occur less frequently, and have less of an influence on the site pattern distribution. For appreciable signal to accumulate in the data, ω+ must be high (≥ 5) when there are high MNRs. Furthermore, fitting models M1a and M2a to such data with high MNRs effectively averages the signal over all amino acid differences, regardless of hydrophobicity, thereby yielding reduced estimates for their ω values. Hence, the power is very low for LRT-1 (unlike LRT-2 and LRT-3) when there are high MNRs because of two related factors: (i) less signal within the site pattern distribution, and (ii) lower expected values for the ω parameters. Of course, the power of all three tests is negatively impacted by reductions in signal for ω > 1, but LRT-2 and LRT-3 were less affected because the GPP models have larger expected values for ω. Taken together, the results of Simulation Study 2 suggest that MNR processes will not necessarily elevate false positive rates; however, true signal for positive selection appears to be harder to detect when a gene has evolved under an MNR process.

### Simulation study 3: Combining DT nucleotide changes between codons with MNRs

This study extends six of the scenarios from Simulation Study 2 by adding simultaneous DT changes between codons. We chose three distributions for ω (one null and two alternative scenarios) and applied both an SNR ($${e}^{\beta_{HI}}$$ = 1) and a highly variable MNR ($${e}^{\beta_{HI}}$$ = 0.05) process to each. The null scenario in this study (case 3a in Table 4) is more complex as compared to the "benchmark" null (case 1a); this null scenario includes unequal GTR exchangeabilities, a more complex mixture of selective regimes (ω distribution) and DT changes. For LRT-1, adding simultaneous DT changes to the more complex SNR case resulted in a false positive rate of 55%. This is consistent with, but larger than, what was observed for LRT-1 in case 1a of Simulation Study 1 (31%). The false positive rates for LRT-2 and LRT-3 (Table 4), which are based on models that allow DT changes, were below the specified significance level of the LRTs (α = 0.05) in the SNR case. Results, however, differed substantially when highly variable MNRs were added to the null scenario. The false positive rate for LRT-1 dropped to zero, whereas it was 6% for LRT-2 (perfect-fit models) and 10% for LRT-3 (over-fit models).
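The false positive and true positive rates reported here are simply the fraction of replicate datasets for which the LRT rejects at α = 0.05. As a reference, here is a minimal sketch of the test itself; the χ² null with 2 degrees of freedom is the usual convention for M1a vs. M2a, while the degrees of freedom for the G-series comparisons are not stated in this excerpt and would depend on the parameter counts (an assumption here).

```python
from scipy.stats import chi2

def lrt(lnL_null, lnL_alt, df=2):
    """Likelihood ratio test: 2*(lnL_alt - lnL_null) vs. a chi-square null."""
    stat = 2.0 * (lnL_alt - lnL_null)
    return stat, chi2.sf(stat, df)  # sf = upper-tail p-value

# Hypothetical log-likelihoods for one fitted dataset (illustration only).
stat, p = lrt(lnL_null=-4321.7, lnL_alt=-4316.2)
print(stat, p)  # reject the null (no positive selection) only if p < 0.05
```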
**Table 4** False positive rates (null scenarios) and true positive rates (alternative scenarios) for three LRTs when the evolutionary process includes DT nucleotide substitutions between codons, ω variability among sites, and MNRs. SNR: $${e}^{\beta_{HI}}$$ = 1; high MNR: $${e}^{\beta_{HI}}$$ = 0.05.

| Scenario | ω0 | ω1 | ω2 | SNR: LRT-1 | SNR: LRT-2 | SNR: LRT-3 | High MNR: LRT-1 | High MNR: LRT-2 | High MNR: LRT-3 |
|---|---|---|---|---|---|---|---|---|---|
| 3a (null; false positives) | 0.05 | 1.0 | 1.0 | 0.55 | 0.02 | 0.03 | 0.0 | 0.06* | 0.10* |
| 3b (alternative; true positives) | 0.05 | 0.5 | 2.0 | 0.95 | 0.87 | 0.92 | 0.01 | 0.44 | 0.26 |
| 3c (alternative; true positives) | 0.05 | 1.0 | 2.0 | 0.99 | 0.47 | 0.46 | 0.0 | 0.27 | 0.18 |

LRT-1 compares M1a to M2a (under-fit models). LRT-2 compares G1ax to G2ax (perfect-fit models). LRT-3 compares G1a13 to G2a13 (over-fit models). The asterisk symbol (*) indicates that the results are based on < 100 replicates due to convergence problems with some datasets when there were high MNRs. For LRT-2, case 3a is based on 99 replicates; for LRT-3, case 3a is based on 91 replicates.

Interestingly, we experienced convergence problems for some datasets evolved under the null scenario with highly variable MNRs. Convergence problems were most frequent for LRT-3, which also had a false positive rate above the specified level of the test. Both phenomena could be related to the over-parameterization of the G2a model of LRT-3. Mingrone et al. [57] recently demonstrated that model M2a employed within LRT-1 could have MLEs with non-standard behaviour in some cases. In their study, instabilities in the parameter estimates arose when the model was over-parameterized relative to low signal for among-site variability in ω. As models of LRT-3 are over-parameterized for both among-site variability in ω and amino acid exchangeability parameters, we may have obtained "irregular estimates" (sensu Mingrone et al. [57]) in case 3a. If there is model irregularity under this setting, then the assumed large sample likelihood theory might not be applicable to LRT-3 in case 3a; this could lead to anti-conservative behaviour (e.g., Mingrone et al. [58]), which is what we observed. It is worth noting that the anti-conservative behaviour of LRT-3 in the high MNR case (10%) was relatively mild in comparison to the anti-conservative behaviour of LRT-1 in the SNR case (55%).

Cases 3b and 3c of this study were used to investigate the combined effect of simultaneous DT nucleotide changes and MNRs on power. As a baseline, power was first assessed for 3b and 3c under the SNR scenario with DT changes. LRT-1 had the highest power in both SNR scenarios. However, since LRT-1 also had a very high false positive rate in SNR case 3a, its power may simply reflect a bias in the direction of the alternative model (M2a) when DT changes are occurring. Such a bias is consistent with the results of Simulation Study 1 and those reported by Kosiol et al. [18] and De Maio et al. [20]. LRT-2 and LRT-3 had reasonable power (Table 4). As expected, power was lower in case 3c, where the gap between ω1 and ω+ was the smallest. The addition of MNRs had a dramatic impact on the power of all three LRTs. LRT-1 had almost no power to detect positive selection. Compared to the SNR scenario, LRT-2 and LRT-3 had reduced power, with LRT-3 exhibiting the larger decrease of the two. Taken together, the results of this simulation study suggest that appropriately parameterized G-series models can yield improvements in power over previous LRTs for complex evolutionary scenarios involving both DT changes and MNRs. However, model complexity requires careful management; a minimal sketch of the multi-start fitting strategy used for the problematic datasets follows below.
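As noted in the footnotes to Tables 3 and 4, the problematic replicates were handled by re-fitting from different initial parameter values. The sketch below illustrates that multi-start strategy; the toy objective function and the bounds are placeholders standing in for a real codon-model likelihood.

```python
import numpy as np
from scipy.optimize import minimize

def fit_with_restarts(neg_log_lik, n_params, n_starts=10, seed=0):
    """Optimize from several random starts and keep the best optimum.

    When a likelihood surface is irregular, a single run can stall on a
    suboptimal peak; re-fitting from new starting points guards against this.
    """
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(0.05, 5.0, size=n_params)   # arbitrary start range
        res = minimize(neg_log_lik, x0, method="L-BFGS-B",
                       bounds=[(1e-4, 50.0)] * n_params)
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy multimodal surface standing in for a codon-model likelihood.
toy = lambda x: float(np.sum((x - 1.3) ** 2) + 0.5 * np.sum(np.sin(5 * x)))
print(fit_with_restarts(toy, n_params=2).x)
```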
LRTs based on too simple a model can lead to excessive false positives in some cases (e.g., LRT-1 in SNR case 3a), whereas naive over-parameterization of the model also has negative consequences (e.g., LRT-3 in MNR cases 3a-3c). In the latter case, failure to meet the regularity conditions otherwise assumed to be in place for likelihood-based inference could have led to MLE instabilities and degraded LRTs. With respect to the problem of meeting regularity conditions, there are several potential solutions for real data. The first is to use nonparametric bootstrapping to screen real data for MLE instabilities (e.g., Baker et al. [15]). However, the computational burden would be very high for complex models such as G2a13, making it impractical for large-scale surveys of genes. The second is to develop a method that penalizes unstable mixture weights for ω in a way that corrects any bias in the LRT [58, 59]; development of such a method is not trivial and is beyond the scope of this paper. The third is to develop and test parameter selection methods suitable for the GPP models. This also poses a computational burden. Ideally, we need a fast method, perhaps based on carefully chosen heuristics, for finding a good balance between model bias and variance. The problem is that model selection methods that rely on MLEs could be compromised in those cases where there has been a breakdown of the usual regularity conditions [57, 58, 59]. New methods for model selection may be warranted.

### Simulation study 4: Performance of alternative formulations of the SNR codon models in the null cases of Simulation Studies 1–3

We investigated whether an alternative form of either the ω distribution, or the parameterization of codon frequencies, could be used within the M-series framework to reduce false positive rates. To investigate the effect of frequency parameterization, we re-analyzed all nine null scenarios with LRT-1 (M1a-M2a) after replacing the F3 × 4 GY frequency parameterization with that of MG (Table 5). The MG parameterization had no effect on false positives in those four cases where the rate had been 0% under GY. In the remaining 5 cases, false positive rates under MG were comparable to, and in some cases much larger than, those under GY. The lowest non-zero false positive rate was associated with a case with no DT changes between codons [SNR only: case 2b], whereas much higher rates were observed in four other cases where DT changes had occurred [SNR + DT: cases 1a-c, 3a]. This result is not unexpected given that the MG parameterization emphasizes the independence of the mutation process between codon positions, and the process of simultaneous DT change employed to simulate those data is a stronger violation of that independence assumption. It was surprising, however, that the effect was so potent as to yield false positive rates > 90% in two cases. More extensive investigation of the relationship between DT processes and the parameterization of codon frequencies is warranted.
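Table 5 (below) contrasts the GY and MG forms. The essential difference is only in which equilibrium frequency multiplies each allowed single-nucleotide change; the following sketch shows the two off-diagonal forms with placeholder frequencies (κ and the full 61 × 61 matrix construction are omitted).

```python
# Placeholder frequencies (illustration only).
PI_CODON = {"TTT": 0.02, "TTC": 0.03}                  # codon frequencies
PI_NUC = {"A": 0.25, "C": 0.30, "G": 0.20, "T": 0.25}  # position-3 nucleotides

def rate_gy(target_codon, omega, nonsyn):
    """GY style: rate proportional to the frequency of the target codon."""
    return (omega if nonsyn else 1.0) * PI_CODON[target_codon]

def rate_mg(target_nuc, omega, nonsyn):
    """MG style: rate proportional to the frequency of the target nucleotide
    at the changed position, treating codon positions independently."""
    return (omega if nonsyn else 1.0) * PI_NUC[target_nuc]

# TTT -> TTC: a synonymous single change at codon position 3.
print(rate_gy("TTC", omega=1.0, nonsyn=False))  # uses pi(TTC)
print(rate_mg("C", omega=1.0, nonsyn=False))    # uses pi(C) at position 3
```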
**Table 5** Sensitivity of false positive rates to the choice of model parameterization under the nine different null scenarios of Simulation Studies 1–3.

| Model | Freq. | 1a (SNR + DT) | 1b (SNR + DT) | 1c (SNR + DT) | 3a (SNR + DT) | 2a (SNR, no DT) | 2b (SNR, no DT) | 2a (high MNR) | 2b (high MNR) | 3a (high MNR, DT) |
|---|---|---|---|---|---|---|---|---|---|---|
| M1a vs. M2a | GY | 0.31 | 0.22 | 0.48 | 0.55 | 0.0 | 0.01 | 0.0 | 0.0 | 0.0 |
| M1a vs. M2a | MG | 0.25 | 0.41 | 0.91 | 0.94 | 0.0 | 0.15 | 0.0 | 0.0 | 0.0 |
| M8(ω+ = 1) vs. M8(ω+ > 1) | GY | 0.47 | 0.24 | 0.58 | 0.56 | 0.0 | 0.02 | 0.0 | 0.0 | 0.0 |
| M8(ω+ = 1) vs. M8(ω+ > 1) | MG | 0.44 | 0.57 | 0.94 | 0.97 | 0.0 | 0.18 | 0.0 | 0.0 | 0.0 |

Scenarios 1a and 1b are based on a 5-taxon tree, 1c on a 10-taxon tree (Additional file 2), and 2a, 2b and 3a on a 17-taxon tree (see Fig. 1). GY denotes the frequency parameterization of Goldman and Yang [26], in which the substitution rate is proportional to the frequency of the target codon. MG denotes the frequency parameterization of Muse and Gaut [47], in which the substitution rate is proportional to the frequency of the target nucleotide. Both require frequency estimates for the four nucleotides at each position of the codon (denoted F3 × 4), and thus each requires 9 free parameters.

To investigate if an alternative form of the ω distribution might help reduce false positive rates within the M-series framework, we re-analyzed all nine null scenarios using a popular alternate LRT that compares M8(ω+ = 1) to M8(ω+ ≥ 1). We applied this LRT under both the MG and GY codon frequency parameterizations (Table 5). False positive rates between the two LRTs were generally similar; under the alternate LRT the same four cases had 0 false positives, with the remaining five cases having comparable false positive rates, although slightly higher for M8(ω+ = 1) vs. M8(ω+ ≥ 1). The same relationship between MG and GY was also observed for the alternate LRT; false positive rates were higher under MG, and exceeded 90% in two of the cases. These results are interesting because M8 is based on a discretized β distribution, with typically 10 categories used for ω. Because this model is far more flexible than the 2 and 3 category ω distributions used in M1a and M2a, it is usually viewed as a superior model. Indeed, as measured by likelihood score, M8 will often fit a real dataset much better than either M1a or M2a (e.g., [6, 53]). Nonetheless, our results suggest that the formulation of M8 that yields more power in some scenarios also yields more sensitivity to misspecification in others. We note that greater robustness of the M1a vs. M2a LRT to model misspecification has been suggested previously (e.g., [46]). Taken together, these results support the view that performance depends on a complex relationship between the parameterization of a model and the nature of the signal within a given dataset, and that model performance measured under idealized conditions may not be safely extrapolated to real data having more complex evolutionary dynamics [43, 60].

### Real data analyses

We applied LRT-1 and LRT-3 to a set of 21 real Streptococcus sequence alignments. LRT-1 is presumed to represent an under-fit scenario, as it is based on codon models (M1a and M2a) that assume an SNR process and which do not permit DT changes. LRT-1 also represents a typical analysis of real data under the M-series modelling approach as implemented in the CODEML program [54]. LRT-3 is presumed to represent an over-fit scenario, as the models (G1a13 and G2a13) employ 6 different amino acid properties as a means to model MNRs, and it seems unlikely that all of these are necessary for a given dataset. LRT-3 is based on the default model complexity for the COLD program, so it is used to represent a typical analysis under the G-series modelling approach.
The real data results (Table 6) are generally consistent with the simulation results; namely, that LRTs based on the G-series models should have more power, but using over-fit models could lead to convergence problems in some datasets. In our real data analysis, LRT-1 was significant for 1 gene, and marginal in another 3, whereas LRT-3 was significant for 3 genes, and there was only a single marginal case. However, convergence problems were encountered with the G-series models for some genes.

**Table 6** Results of applying LRT-1 and LRT-3 to the set of 21 real Streptococcus sequence alignments.

| Gene | NC | NS | TL | LRT-1: M1a vs. M2a (under-fit) | M2a MLEs | LRT-3: G1a13 vs. G2a13 (over-fit) | G2a13 MLEs | 2Δl, M2a vs. G2a13 |
|---|---|---|---|---|---|---|---|---|
| 1 | 892 | 19 | 6.98 | N.S. | ω+ = 1.0, p+ = 0.026 | N.S. | ω+ = 1.02, p+ = 0 | 881.3 |
| 2 | 639 | 16 | 6.37 | N.S. | ω+ = 1.0, p+ = 0.15 | P < 0.0001 | ω+ = 4.9, p+ = 0.028 | 504.2 |
| 3 | 228 | 11 | 3.74 | N.S. | ω+ = 1.0, p+ = 0.046 | N.S. | ω+ = 1.2, p+ = 0 | 152.1 |
| 4 | 577 | 9 | 8.49 | N.S. | ω+ = 1.0, p+ = 0.05 | N.S. | ω+ = 1.18, p+ = 0 | 466.1 |
| 5 | 390 | 9 | 5.16 | N.S. | ω+ = 1.0, p+ = 0.19 | P < 0.0001 | ω+ = 11.7, p+ = 0.03 | 109.6 |
| 6 | 348 | 11 | 4.5 | N.S. | ω+ = 1.0, p+ = 0.04 | N.S. | ω+ = 3.11, p+ = 0 | 113.7 |
| 7 | 184 | 10 | 0.37 | P < 0.0001 | ω+ = 5.29, p+ = 0.24 | P < 0.0001 | ω+ = 4.36, p+ = 0.29 | 71.7 |
| 8 | 169 | 6 | 30 | N.S. | ω+ = 1.0, p+ = 0.001 | N.S. | ω+ = 8.46, p+ = 0.02 | 130.9 |
| 9 | 227 | 10 | 5.46 | N.S. | ω+ = 1.0, p+ = 0.25 | N.S. | ω+ = 20.5, p+ = 0.14 | 50.3 |
| 10†§ | 450 | 10 | 2.2 | N.S. | ω+ = 1.0, p+ = 0.06 | N.S. | ω+ = 1, p+ = 0 | 14.3 |
| 11 | 444 | 7 | 4.6 | N.S. | ω+ = 1.0, p+ = 0.31 | N.S. | ω+ = 1.03, p+ = 0 | 109.7 |
| 12 | 473 | 9 | 0.45 | N.S. | ω+ = 1.0, p+ = 0.21 | N.S. | ω+ = 10.6, p+ = 0.007 | 17.3 |
| 13 | 427 | 8 | 0.05 | 0.10 > P > 0.05 | ω+ = 15.7, p+ = 0.006 | N.S. | ω+ > 99, p+ = 0.02 | 6.2 |
| 14 | 632 | 7 | 0.09 | 0.10 > P > 0.05 | ω+ = 15.3, p+ = 0.016 | N.S. | ω+ = 22.5, p+ = 0.03 | 25.1 |
| 15 | 209 | 7 | 10.3 | N.S. | ω+ = 1.0, p+ = 0.05 | N.S. | ω+ = 1, p+ = 0 | 164.5 |
| 16 | 232 | 6 | 0.43 | 0.10 > P > 0.05 | ω+ = 9.4, p+ = 0.29 | N.S. | ω+ = 2.3, p+ = 0.37 | 49.1 |
| 17 | 661 | 5 | 3.3 | N.S. | ω+ = 1.0, p+ = 0.27 | P = 0.051 | ω+ = 1.0, p+ = 0.33 | 220.6 |
| 18 | 564 | 5 | 7.7 | N.S. | ω+ = 1.0, p+ = 0.5 | N.S. | ω+ = 1.3, p+ = 0 | 171.4 |
| 19 | 261 | 4 | 9.5 | N.S. | ω+ = 1.0, p+ = 0.04 | N.S. | ω+ = 1.0, p+ = 0 | 113.6 |
| 20 | 201 | 4 | 2.2 | N.S. | ω+ = 1.0, p+ = 0.03 | N.S. | ω+ = 17.8, p+ = 0.04 | 40.4 |
| 21 | 166 | 4 | 2.7 | N.S. | ω+ = 2.15, p+ = 0.20 | N.S. | ω+ = 17.8, p+ = 0.017 | 34.69 |

NC is the number of codons in the sequence alignment after removal of sites with ambiguities or indels. NS is the number of gene sequences in the alignment. TL is the total tree length estimated under codon model M0 as the mean number of substitutions per codon. N.S. indicates a non-significant LRT. The dagger symbol (†) indicates a gene for which likelihood optimization under a G-series model did not meet convergence criteria. The section symbol (§) indicates that the MLEs were obtained by removing tip branches having near-zero lengths and re-fitting the model. The gene names, along with the sequence alignments, are provided in the DRYAD repository [51].

The models utilized by LRT-1 and LRT-3 permit an exploration of the impact of model complexity on the inference of positive selection. The one significant result for LRT-1 (gene 7) does not appear to be a false positive due to DT substitutions, as LRT-3 was also significant for that gene. This is in contrast to the three cases of borderline significance for LRT-1 (genes 13, 14 and 16), where LRT-3 was not significant for any of them. Note that these three borderline cases for LRT-1 occurred in the datasets with the lowest tree lengths. In nearly all of the non-significant cases for LRT-1, the MLEs for M2a indicated either ω+ ≈ 1 or p+ ≈ 0.
This is expected for M2a when it does not provide a significant improvement over M1a [11, 45, 58]. There was one case (gene 21) where LRT-1 was not significant and yet both ω+ and p+ were large. Exceptionally large estimates for p+ have been observed for M2a when there is very low signal within the data about the parameters of the ω distribution [57]. This was certainly the case for gene 21, which is the shortest gene in the dataset (166 codons) and is represented by just 4 sequences.

In all but three genes (10, 12 and 13), the G-series models yielded very substantial increases in likelihood over the M-series models (Table 6), suggesting that the additional complexity of the G-series models was in many cases warranted. However, because the G-series models are likely to be over-fit, we will avoid making direct, or mechanistic, interpretations of the MLEs with respect to the MNR process, or the rate of DT change (see Jones et al. [43, 59] for a discussion of the problem of interpreting confounded parameter estimates). Development and validation of parameter selection methods for the G-series models will ultimately permit us to make inferences about such "background" processes. Nonetheless, our simulation studies indicate that the G-series models, via LRT-3, can be used to make inferences about the impact of positive selection within a gene. Consistent with the expectation for greater power (see Table 4), LRT-3 was highly significant for genes 2, 5 and 7, whereas LRT-1 was significant for only one gene (gene 7). In two of those genes the MLEs for ω+ and p+ suggest a small fraction of sites under positive selection (p+ ≤ 0.03). If those genes were truly evolving under an MNR process, then such low signal would be difficult to detect via LRT-1 (see Simulation Studies 2 and 3).

Signs of G-series convergence problems were observed for three genes (10, 15 and 21). Because LRT-1 and LRT-3 were consistent for genes 15 and 21 (both non-significant), we do not think convergence problems negatively affected the LRTs in those two cases. Convergence problems were more severe for gene 10, but were ameliorated by removing terminal taxa with near-zero branch lengths and re-fitting the models to those data. Convergence problems for genes 10, 15 and 21 may be a symptom of over-parameterization of G2a13 for those data, which could have led to an irregular likelihood function. A further complication is that the extent to which non-standard behaviours of the MLEs could emerge seems to depend on the details of the true generating process for each gene [57, 58]. In such cases the optimization algorithm can readily produce unreliable parameter estimates (see Mingrone et al. [57] and Suzuki and Nei [61] for empirical examples). For this reason we view the MLEs for these genes with more caution than those obtained from the other genes.

It is important to note that this is not the first report of convergence problems and non-standard MLE behaviours, or of disagreements among model-based LRTs in the analysis of real data. Furthermore, a wide variety of codon models seem to be implicated in such issues. Perhaps the best understood example is the tax gene of HTLV. This gene is well known for MLEs that suggest 100% of sites are under positive selection despite 87% of sites being invariant across all 20 lineages that comprise the dataset [61]. Subsequent analyses of the tax gene indicate that the implausibly large estimate of sites evolving under positive selection results from violations of statistical regularity conditions [57].
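One practical screen for such non-standard MLE behaviour is a nonparametric bootstrap over alignment columns (the idea behind the SBA method discussed next). The sketch below shows only the resampling step; the statistic is a cheap placeholder for what would really be a full model refit per pseudo-dataset.

```python
import numpy as np

def bootstrap_columns(columns, stat_fn, n_boot=100, seed=0):
    """Resample alignment columns with replacement and recompute a statistic.

    A wide spread (or multimodality) in the bootstrap estimates is a warning
    sign that the MLEs for the original data may be unstable."""
    rng = np.random.default_rng(seed)
    cols = np.asarray(columns)
    n = len(cols)
    return np.array([stat_fn(cols[rng.integers(0, n, size=n)])
                     for _ in range(n_boot)])

# Toy alignment: each string is one column (one character per taxon).
toy_cols = ["AAA", "AAC", "AAA", "CCC", "AAA", "AAG"]
frac_variable = lambda cs: float(np.mean([len(set(c)) > 1 for c in cs]))
print(bootstrap_columns(toy_cols, frac_variable).std())
```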
Another example comes from a large-scale survey of primate nuclear receptor genes for spatial and temporal changes in selection pressure [15]. By using a novel method of non-parametric bootstrapping (SBA: [57]), they identified non-standard MLE behaviour in some nuclear receptor genes but not others [15]. Taking the results of our analysis of 21 real Streptococcus genes together with those other real data analyses highlights the importance of adopting a standard for best practices that includes a set of reliability and robustness analyses. Bielawski et al. [62] proposed an experimental design, and workflow, that includes a suite of quality control, statistical reliability, and model robustness analyses that can be used to identify problematic datasets under the branch-site style of codon models. We propose that such an "experimental design" should be applied to all computational analyses of real data, regardless of the chosen codon-modelling framework.

## Discussion

We have extended previous work [18, 20] by showing that the LRT based on models M1a and M2a can produce incorrect conclusions about positive selection when both (i) nonsynonymous rates depend on the amino acid properties involved and (ii) codon substitutions have occurred via DT changes. We have also shown that LRTs can be constructed which have better performance in such scenarios by incorporating additional parameters into the model. However, incorporating too many parameters into a model creates other difficulties, some of which can result in computational problems and inferior performance. More work on model selection methods is clearly warranted. Nonetheless, the over-parameterized models tended to perform better than the under-parameterized models in our simulations, which suggests that there is a role for the G-series models in analyses of real data. We recommend that G-series models should be deployed within a larger experimental design that includes (i) assessing robustness of results to model assumptions (e.g., Bielawski et al. [62]), and (ii) routine use of the non-parametric bootstrap to assess non-standard behaviour of MLEs (e.g., Mingrone et al. [57]).

Our investigation of M-series models revealed that the choice of ω distribution (M2 vs. M8) had a minor impact, whereas the choice of codon frequency parameterization (GY vs. MG) can have a major impact on false positives when DT changes have occurred. While both GY and MG can yield unacceptably high false positives, rates tended to be higher under MG (sometimes exceeding 90%). False positive rates for both the GY and MG style models can be understood using the origin-fixation model framework [63], which is a framework for reconciling population genetic processes with macro-evolutionary dynamics. Origin-fixation models assume that residence times for polymorphisms are much shorter than the time between population mutation events. This yields a macro-evolutionary process that instantaneously "jumps" from one fixed state to another (i.e., codon i to j) as an embedded Markov chain [63]. Both GY and MG assume that the embedded Markov chain is driven solely by single nucleotide mutations. Thus, both are misspecified if either (i) the true mutation process includes simultaneous double or triple changes, or (ii) such changes do not occur, but the true process violates the "weak mutation" assumptions of the origin-fixation framework. These two scenarios are unidentifiable within real data by single-change codon models, and either violation (i) or (ii) could increase false positives.
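To see where the "weak mutation" assumption enters, here is a minimal sketch of the origin-fixation ingredients: a substitution rate is a mutation rate multiplied by a fixation probability. The Kimura-style diploid fixation probability below is a standard textbook form, not a formula taken from this paper.

```python
import math

def fixation_prob(s, N):
    """Fixation probability of a new mutant (initial frequency 1/(2N)) with
    selection coefficient s in a diploid population of size N (Kimura-style)."""
    if abs(s) < 1e-12:
        return 1.0 / (2 * N)  # neutral limit
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

def substitution_rate(mu, s, N):
    """Origin-fixation rate: (2N copies arising at rate mu) x P(fixation)."""
    return 2 * N * mu * fixation_prob(s, N)

# A mildly deleterious change substitutes far more slowly than a neutral one.
print(substitution_rate(mu=1e-8, s=0.0, N=10_000))
print(substitution_rate(mu=1e-8, s=-1e-4, N=10_000))
```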
Now consider the case of two codons that differ by 2 or 3 nucleotides over a given branch; for a fixed ω value, GY and MG will yield different total probabilities of transition from one end of that branch to the other via a sequence of single nucleotide changes. Thus, when fitting these models to real data, the model that "sees" such a sequence of changes as having a lower probability will need to further increase the rate of nonsynonymous substitution (via an increase in ω) to explain the evolution of those data. It seems that by emphasizing the independence of the mutation process between codon positions, MG requires even larger values of ω to explain rapid evolution between codons that differ by 2 or 3 nucleotides. Models that include parameters for apparent DT changes avoid this effect (e.g., [42, 43] and the G1a and G2a models used here) regardless of whether the process follows phenomenon (i) or (ii) above.

There is some subtlety in the interpretation of the nonsynonymous rate when modelling MNRs based on the physicochemical properties of the amino acids. Such models can be interpreted as asserting that there is some degree of evolutionary pressure against changes involving certain amino acid properties. Using hydrophobicity as an example, a large effect on the substitution rate (e.g., $${e}^{\beta_{HI}}=0.05$$) means that there is strong selective pressure against changes in hydrophobicity. However, within the constraints of selective pressure against changes in hydrophobicity, there may still exist diversifying selection at some sites, independent of the general tendency to preserve hydrophobicity. This means that there can be natural selection for changes in amino acid which do not affect hydrophobicity, and that the selection against changes to hydrophobicity is reduced at these sites. Thus, hydrophobicity manifests as a phenomenological outcome of several processes, with the nonsynonymous rate reflecting the average tendency towards conservation of hydrophobicity over the entire dataset. When G-series models come to be used to investigate the effect of different aspects of physicochemical constraint in real data (polarity, volume, polar requirement, etc.), we recommend using the methods of Jones et al. [43] to assess the amount of phenomenological load carried by the estimates of parameters that imply physicochemical mechanisms of selection.

The models evaluated here are sometimes referred to as "site models", as they permit the average intensity of natural selection to vary only over the sites. There is growing interest in using the so-called "branch-site" and "clade-site" mixture models to investigate adaptive protein evolution (e.g., Yang and Nielsen [8]; Bielawski and Yang [64]; Zhang et al. [65]; Murrell et al. [66]). Such codon models permit the intensity of selection to vary over branches as well as over sites. Venkat et al. [42] recently demonstrated that false positive rates for the branch-site tests can also be exceptionally high when there are double changes between codons. However, it is not yet possible to attribute branch-specific false positives to DT changes in real data, as Jones et al. [43] showed that the DT process and the fundamental process of shifting balance on a fixed fitness landscape are confounded. Both of these non-adaptive processes produce site pattern distributions that are consistent with temporal dynamics in ω, with the amount of phenomenological load on ω depending on a complex relationship between model and data [43, 60].
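Returning to the point about multi-step paths above: the effect can be made concrete with a toy chain. Below, state 0 cannot reach state 2 directly (as with a double-nucleotide difference under a single-change codon model), yet the finite-time transition probability P(0 → 2) is positive because probability flows through the intermediate state; a model that assigns such paths low probability must inflate its rates (e.g., ω) to explain observed multi-nucleotide differences. The 3-state matrix is a placeholder for the 61-state codon process.

```python
import numpy as np
from scipy.linalg import expm

# Toy rate matrix: no direct 0 -> 2 rate (q_02 = 0); rows sum to zero.
Q = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.0, 0.5],
              [0.0, 1.0, -1.0]])

for t in (0.05, 0.5, 2.0):
    P = expm(Q * t)              # transition probabilities over a branch of length t
    print(t, round(P[0, 2], 4))  # positive despite q_02 = 0
```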
While the G-series models can be extended by adding temporal dynamics in ω to those models already having DT changes and MNRs, this will likely intensify problems that arise when statistical regularity conditions have not been met [15, 57, 62]. Hence, further work on G-series models should focus on developing and testing new methods for parameter selection. The translation of the G-series models to real data will be better served by first addressing this important issue.

The issues that we have addressed here (LRT power, LRT accuracy, non-standard MLE behaviour, and convergence problems) reflect different aspects of how the relationship between the model and the data can affect inference, and these issues are relevant to all types of codon models [60]. In this study we have focused on modelling MNRs at the amino acid level, DT changes at the codon level, and the GTR process at the DNA level; however, codon models often make other simplifying assumptions about site independence, reversibility, and homogeneity of the tree topology among sites, to name just a few. While these have been investigated to varying extents (e.g., [67, 68, 69]), the traditional ways in which simulation studies have been designed are unable to reveal problems associated with statistical irregularity [56, 57, 60] or reveal the effects of realistic levels of model misspecification [10, 43, 44, 60, 70, 71]. Future development of all codon models, as well as formal assessment of parameter selection methods, will require simulation under much more true-to-life scenarios (e.g., DT changes and various MNR scenarios) that cover greater, and more realistic, levels of model misspecification. Only through such studies will we be able to appreciate the kinds of inference issues that we are most likely to encounter in real data, and thereby update our notion of best practices accordingly [60].

## Conclusions

We confirm that failure to model MNRs or DT changes can negatively impact the power and false positive rates of LRTs for positive selection. False positives under codon models M2a and M8 can be very sensitive to DT changes. This is exacerbated by the choice of frequency parameterization (GY vs. MG), with rates sometimes > 90% under MG. The MG parameterization emphasizes the independence of the mutation process between codon positions, and this tends to yield larger fitted values for ω when the evolutionary process includes DT changes. We describe a novel modelling framework, GPP, for codons that allows specification of all possible instantaneous codon substitutions, including MNRs and instantaneous DT nucleotide changes. We note that existing codon models can be specified as special cases of the GPP model. LRTs for positive selection implemented under the GPP framework yield substantial improvements in accuracy and power when the true evolutionary process includes MNRs and DT mutations. However, we also find that over-parameterized models can perform less well, and this can contribute to degraded performance of LRTs. For this reason all codon models (GPP and traditional) should be deployed within an experimental design that includes (i) assessing robustness to model assumptions, and (ii) investigation of non-standard behaviour of MLEs. Within such a design, GPP models should be used alongside traditional codon models to analyze real data. Further work is needed on methods for parameter selection, especially with regard to their performance under realistic levels of misspecification.
## Notes

### Acknowledgements

The authors thank Joseph Mingrone for helpful discussions, and for assisting with high-performance computing activities. We thank Christopher T. Jones for helpful discussions about the relationship between DT change and alternative formulations for codon frequencies. The work described here was supported by grants from the Natural Sciences and Engineering Research Council of Canada to JPB (DG298394), TK (DG49452014) and HG (DG253484), the Centre for Comparative Genomics and Evolutionary Bioinformatics, and a postdoctoral fellowship (funded by the Tula Foundation) to KAD.

### Funding

Funding was provided by grants from the Natural Sciences and Engineering Research Council (NSERC) of Canada and the Tula Foundation. NSERC funding was used to support the computational infrastructure utilized by this study. Tula Foundation funds provided salary support for a postdoctoral research fellow. The aforementioned funding bodies played no role in the design of the study; the collection, analysis, and interpretation of data; or the writing of the manuscript.

### Authors' contributions

JPB, TK and HG conceived and designed the study. TK and HG developed the modelling framework. TK coded the methods. TK and KAD tested the methods. KAD, JPB and TK analyzed data. JPB and KAD wrote the manuscript. All authors read and approved the final manuscript.

### Ethics approval and consent to participate

This manuscript does not involve human participants, human data or human tissue; therefore this declaration is not applicable.

### Consent for publication

This manuscript does not contain any individual person's data in any form; therefore this declaration is not applicable.

### Competing interests

The authors declare that they have no competing interests.

### Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Supplementary material

- Additional file 1 (12862_2018_1326_MOESM1_ESM.pdf, 752 kb): The GPP model uses a logarithm link function to link the non-zero off-diagonal elements of the 61 × 61 instantaneous rate matrix to a linear model.
- Additional file 2 (12862_2018_1326_MOESM2_ESM.pdf, 118 kb): The 10-taxon tree used in simulation 1c, obtained by dividing each terminal taxon of the 5-taxon tree (used in 1a and 1b) and then re-distributing the total tree length evenly among all branches.
- Additional file 3 (12862_2018_1326_MOESM3_ESM.pdf, 69 kb): Specification of hydrophobicity factors in the model, and the matrix of hydrophobicity scores between all amino acids.

## References

1. Anisimova M, Liberles D. Detecting and understanding natural selection. In: Cannarozzi GM, Schneider A, editors. Codon evolution: mechanisms and models. Oxford University Press; 2012. p. 73–96.
2. Delport W, Scheffler K, Botha G, Gravenor MB, Muse SV, Pond SLK. CodonTest: modeling amino acid substitution preferences in coding sequences. PLoS Comput Biol. 2010;6(8):e1000885.
3. Dayhoff MO, Eck RV, Park CM. A model of evolutionary change in proteins. In: Dayhoff MO, editor. Atlas of protein sequence and structure. Vol. 5. Washington, D.C.: National Biomedical Research Foundation; 1972. p. 89–99.
4. Jones DT, Taylor WR, Thornton JM. The rapid generation of mutation data matrices from protein sequences. Comput Appl Biosci. 1992;8(3):275–82.
5. Whelan S, Goldman N. A general empirical model of protein evolution derived from multiple protein families using a maximum-likelihood approach. Mol Biol Evol. 2001;18(5):691–9.
6. Yang Z, Nielsen R, Goldman N, Pedersen AMK. Codon-substitution models for heterogeneous selection pressure at amino acid sites. Genetics. 2000;155(1):431–49.
7. Kosakovsky Pond SL, Frost SD. Not so different after all: a comparison of methods for detecting amino acid sites under selection. Mol Biol Evol. 2005;22(5):1208–22.
8. Yang Z, Nielsen R. Codon-substitution models for detecting molecular adaptation at individual sites along specific lineages. Mol Biol Evol. 2002;19(6):908–17.
9. Murrell B, Wertheim JO, Moola S, Weighill T, Scheffler K, Kosakovsky Pond SL. Detecting individual sites subject to episodic diversifying selection. PLoS Genet. 2012;8(7):e1002764.
10. Jones CT, Youssef N, Susko E, Bielawski JP. Shifting balance on a static mutation–selection landscape: a novel scenario of positive selection. Mol Biol Evol. 2016;34(2):391–407.
11. Anisimova M, Bielawski JP, Yang Z. Accuracy and power of the likelihood ratio test in detecting adaptive molecular evolution. Mol Biol Evol. 2001;18(8):1585–92.
12. Bielawski JP, Dunn KA, Sabehi G, Béjà O. Darwinian adaptation of proteorhodopsin to different light intensities in the marine environment. Proc Natl Acad Sci U S A. 2004;101(41):14824–9.
13. Field SF, Bulina MY, Kelmanson IV, Bielawski JP, Matz MV. Adaptive evolution of multicolored fluorescent proteins in reef-building corals. J Mol Evol. 2006;62(3):332–9.
14. Demogines A, Abraham J, Choe H, Farzan M, Sawyer SL. Dual host-virus arms races shape an essential housekeeping protein. PLoS Biol. 2013;11(5):e1001571.
15. Baker JL, Dunn KA, Mingrone J, Wood BA, Karpinski BA, Sherwood CC, et al. Functional divergence of the nuclear receptor NR2C1 as a modulator of pluripotentiality during hominid evolution. Genetics. 2016;203(2):905–22.
16. Liberles DA, Teufel AI, Liu L, Stadler T. On the need for mechanistic models in computational genomics and metagenomics. Genome Biol Evol. 2013;5(10):2008–18.
17. Doron-Faigenboim A, Pupko T. A combined empirical and mechanistic codon model. Mol Biol Evol. 2007;24(2):388–97.
18. Kosiol C, Holmes I, Goldman N. An empirical codon model for protein sequence evolution. Mol Biol Evol. 2007;24(7):1464–79.
19. Schneider A, Cannarozzi GM, Gonnet GH. Empirical codon substitution matrix. BMC Bioinformatics. 2005;6(1):1.
20. De Maio N, Holmes I, Schlötterer C, Kosiol C. Estimating empirical codon hidden Markov models. Mol Biol Evol. 2013;30(3):725–36.
21. Miyazawa S. Selective constraints on amino acids estimated by a mechanistic codon substitution model with multiple nucleotide changes. PLoS One. 2011;6(3):e17244.
22. Zoller S, Schneider A. A new semi-empirical codon substitution model based on principal component analysis of mammalian sequences. Mol Biol Evol. 2011;29(1):271–7.
23. Delport W, Scheffler K, Seoighe C. Models of coding sequence evolution. Brief Bioinformatics. 2008;10(1):97–109.
24. Clarke B. Selective constraints on amino-acid substitutions during the evolution of proteins. Nature. 1970;228(5267):159–60.
25. Grantham R. Amino acid difference formula to help explain protein evolution. Science. 1974;185(4154):862–4.
26. Goldman N, Yang Z. A codon-based model of nucleotide substitution for protein-coding DNA sequences. Mol Biol Evol. 1994;11(5):725–36.
27. Yang Z, Nielsen R, Hasegawa M. Models of amino acid substitution and applications to mitochondrial protein evolution. Mol Biol Evol. 1998;15(12):1600–11.
28. Yang Z. Relating physicochemical properties of amino acids to variable nucleotide substitution patterns among sites. Pac Symp Biocomput. 2000;2000:81–92.
29. Sainudiin R, Wong WSW, Yogeeswaran K, Nasrallah JB, Yang Z, Nielsen R. Detecting site-specific physicochemical selective pressures: applications to the class I HLA of the human major histocompatibility complex and the SRK of the plant sporophytic self-incompatibility system. J Mol Evol. 2005;60(3):315–26.
30. Wong WS, Sainudiin R, Nielsen R. Identification of physicochemical selective pressure on protein encoding nucleotide sequences. BMC Bioinformatics. 2006;7(1):1.
31. Conant GC, Stadler PF. Solvent exposure imparts similar selective pressures across a range of yeast proteins. Mol Biol Evol. 2009;26(5):1155–61.
32. Zaheri M, Dib L, Salamin N. A generalized mechanistic codon model. Mol Biol Evol. 2014;31(9):2528–41.
33. Averof M, Rokas A, Wolfe KH, Sharp PM. Evidence for a high frequency of simultaneous double-nucleotide substitutions. Science. 2000;287(5456):1283–6.
34. Schrider DR, Hourmozdi JN, Hahn MW. Pervasive multinucleotide mutational events in eukaryotes. Curr Biol. 2011;21(12):1051–4.
35. Besenbacher S, Sulem P, Helgason A, Helgason H, Kristjansson H, Jonasdottir, et al. Multi-nucleotide de novo mutations in humans. PLoS Genet. 2016;12(11):e1006315.
36. Bazykin GA, Kondrashov FA, Ogurtsov AY, Sunyaev S, Kondrashov AS. Positive selection at sites of multiple amino acid replacements since rat–mouse divergence. Nature. 2004;429(6991):558.
37. Harris K, Nielsen R. Error-prone polymerase activity causes multinucleotide mutations in humans. Genome Res. 2014;24(9):1445–54.
38. Sakofsky CJ, Roberts SA, Malc E, Mieczkowski PA, Resnick MA, Gordenin DA, et al. Break-induced replication is a source of mutation clusters underlying kataegis. Cell Rep. 2014;7(5):1640–8.
39. Smith NG, Webster MT, Ellegren H. A low rate of simultaneous double-nucleotide mutations in primates. Mol Biol Evol. 2003;20(1):47–53.
40. Whelan S, Goldman N. Estimating the frequency of events that cause multiple-nucleotide changes. Genetics. 2004;167(4):2027–43.
41. Tamuri AU, dos Reis M, Goldstein RA. Estimating the distribution of selection coefficients from phylogenetic data using sitewise mutation-selection models. Genetics. 2012;190(3):1101–15.
42. Venkat A, Hahn MW, Thornton JW. Multinucleotide mutations cause false inferences of lineage-specific positive selection. Nat Ecol Evol. 2018;1:1280–8.
43. Jones CT, Youssef N, Susko E, Bielawski JP. Phenomenological load on model parameters can lead to false biological conclusions. Mol Biol Evol. 2018;35(6):1473–88.
44. Laurin-Lemay S, Philippe H, Rodrigue N. Multiple factors confounding phylogenetic detection of selection on codon usage. Mol Biol Evol. 2018;35(6):1463–72.
45. Wong WS, Yang Z, Goldman N, Nielsen R. Accuracy and power of statistical methods for detecting adaptive evolution in protein coding sequences and for identifying positively selected sites. Genetics. 2004;168(2):1041–51.
46. Bao L, Gu H, Dunn KA, Bielawski JP. Likelihood-based clustering (LiBaC) for codon models, a method for grouping sites according to similarities in the underlying process of evolution. Mol Biol Evol. 2008;25(9):1995–2007.
47. Muse SV, Gaut BS. A likelihood approach for comparing synonymous and nonsynonymous nucleotide substitution rates, with application to the chloroplast genome. Mol Biol Evol. 1994;11(5):715–24.
48. Felsenstein J. Maximum likelihood and minimum-steps methods for estimating evolutionary trees from data on discrete characters. Syst Biol. 1973;22(3):240–9.
49. Nielsen R, Yang Z. Likelihood models for detecting positively selected amino acid sites and applications to the HIV-1 envelope gene. Genetics. 1998;148(3):929–36.
50. Monera OD, Sereda TJ, Zhou NE, Kay CM, Hodges RS. Relationship of sidechain hydrophobicity and α-helical propensity on the stability of the single-stranded amphipathic α-helix. J Pept Sci. 1995;1(5):319–29.
51. Dunn KA, Kenney T, Gu H, Bielawski JP. Data from: Improved inference of site-specific selection under a generalized parametric codon model when there are multinucleotide mutations and multiple nonsynonymous rates. Dryad Digital Repository.
52. Aris-Brosou S, Bielawski JP. Large-scale analyses of synonymous substitution rates can be sensitive to assumptions about the process of mutation. Gene. 2006;378:58–64.
53. Anisimova M, Bielawski JP, Dunn KA, Yang Z. Phylogenomic analysis of natural selection pressure in Streptococcus genomes. BMC Evol Biol. 2007;7(1):154.
54. Yang Z. PAML 4: phylogenetic analysis by maximum likelihood. Mol Biol Evol. 2007;24(8):1586–91.
55. Kenney T, Gu H. Hessian calculation for phylogenetic likelihood based on the pruning algorithm and its applications. Stat Appl Genet Mol Biol. 2012;11(4):14.
56. Gill PE, Murray W, Wright MH. Practical optimization. San Diego: Academic Press; 1981.
57. Mingrone J, Susko E, Bielawski J. Smoothed bootstrap aggregation for assessing selection pressure at amino acid sites. Mol Biol Evol. 2016;33(11):2976–89.
58. Mingrone J, Susko E, Bielawski J. Modified likelihood ratio tests for positive selection. Bioinformatics. 2018 (accepted pending minor revisions).
59. Chen H, Chen J, Kalbfleisch JD. A modified likelihood ratio test for homogeneity in finite mixture models. J R Stat Soc Series B Stat Methodol. 2001;63(1):19–29.
60. Jones CT, Susko E, Bielawski JP. Looking for Darwin in genomic sequences: validity and success depend on the relationship between the model and the data. In: Anisimova M, editor. Evolutionary genomics: statistical and computational methods. New York: Springer (Humana); 2018.
61. Suzuki Y, Nei M. False-positive selection identified by ML-based methods: examples from the Sig1 gene of the diatom Thalassiosira weissflogii and the tax gene of a human T-cell lymphotropic virus. Mol Biol Evol. 2004;21(5):914–21.
62. Bielawski JP, Baker JL, Mingrone J. Inference of episodic changes in natural selection acting on protein coding sequences via CODEML. Curr Protoc Bioinformatics. 2016;54(1):6–15.
63. McCandlish DM, Stoltzfus A. Modeling evolution using the probability of fixation: history and implications. Q Rev Biol. 2014;89(3):225–52.
64. Bielawski JP, Yang Z. A maximum likelihood method for detecting functional divergence at individual codon sites, with application to gene family evolution. J Mol Evol. 2004;59(1):121–32.
65. Zhang J, Nielsen R, Yang Z. Evaluation of an improved branch-site likelihood method for detecting positive selection at the molecular level. Mol Biol Evol. 2005;22(12):2472–9.
66. Murrell B, Weaver S, Smith MD, Wertheim JO, Murrell S, Aylward A, et al. Gene-wide identification of episodic selection. Mol Biol Evol. 2015;32(5):1365–71.
67. Pedersen AK, Wiuf C, Christiansen FB. A codon-based model designed to describe lentiviral evolution. Mol Biol Evol. 1998;15(8):1069–81.
68. Robinson DM, Jones DT, Kishino H, Goldman N, Thorne JL. Protein evolution with dependence among codons due to tertiary structure. Mol Biol Evol. 2003;20(10):1692–704.
69. Wilson DJ, McVean G. Estimating diversifying selection and functional constraint in the presence of recombination. Genetics. 2006;172(3):1411–25.
70. Spielman SJ, Wilke CO. The relationship between dN/dS and scaled selection coefficients. Mol Biol Evol. 2015;32(4):1097–108.
71. Spielman SJ, Wan S, Wilke CO. A comparison of one-rate and two-rate inference frameworks for site-specific dN/dS estimation. Genetics. 2016;204(2):499–511.

## Authors and Affiliations

Katherine A. Dunn (1), Toby Kenney (2), Hong Gu (2), Joseph P. Bielawski (1, 2, 3)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8531404733657837, "perplexity": 2825.347991196945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583879117.74/warc/CC-MAIN-20190123003356-20190123025356-00016.warc.gz"}
http://math.stackexchange.com/questions/27271/what-might-i-use-to-show-that-an-entire-function-with-positive-real-parts-is-con
# What might I use to show that an entire function with positive real part is constant?

So the question asks me to prove that an entire function with positive real part is constant, and I was thinking that this might somehow be related to showing an entire bounded function is constant (Liouville's theorem), but are there any other theorems that might help me prove this fact?

- I'd use that one! There are definitely others, but they would all be considered "overkill" and probably use Liouville's theorem in their proof. – Matt Mar 15 '11 at 21:16
- Jonas Meyer's alternative answer, and Soarer's answer, both explain how to reduce your problem to an application of Liouville's theorem. This is typically how such questions are expected to be solved in a first course in complex analysis (which I'm guessing is where your question comes from). – Matt E Mar 16 '11 at 3:30

It isn't a nonconstant polynomial, by the fundamental theorem of algebra. It doesn't have an essential singularity at infinity, by the Casorati-Weierstrass theorem. What other possibilities are there? Alternatively, if you add $1$, you get a function satisfying $|g(z)|\geq 1$ for all $z$. What can you say about the reciprocal of $g$?

The other three answers are overkill to me. Simply consider $e^{-f}$ if $f$ is your function. Is it bounded?

- I don't think that $1/(1+f)$ is overkill. – Jonas Meyer Mar 16 '11 at 1:37
- my bad, I paid attention to your Casorati-Weierstrass proof :P – Soarer Mar 16 '11 at 4:13

Both of the two other answers are already excellent, but if you really want to bring on the big guns, use Picard's Little Theorem, noting that the omitted half plane contains more than two points.

Have you learned the Riemann Mapping theorem? If so, what can you do with the image of this entire function? Remember, the composition of analytic functions is analytic. Liouville's theorem should then finish the problem off.

- More elementary would be to use an explicit Möbius transformation sending the right half plane to the unit disk. – Dan Petersen Mar 15 '11 at 21:39
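Spelling out Jonas Meyer's second hint as a short sketch:

```latex
Suppose $f$ is entire with $\operatorname{Re} f(z) > 0$ for all $z$, and set
$g(z) = \dfrac{1}{f(z) + 1}$. Since $\operatorname{Re}\big(f(z) + 1\big) > 1$,
we have $|f(z) + 1| \ge \operatorname{Re}\big(f(z) + 1\big) > 1$, so $f + 1$
never vanishes and $g$ is entire with $|g(z)| < 1$. By Liouville's theorem,
$g$ is constant, and hence so is $f$.
```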
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8372684121131897, "perplexity": 254.5081853219857}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738099622.98/warc/CC-MAIN-20151001222139-00020-ip-10-137-6-227.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/notation-for-marking-the-voltage-drop-in-this-picture.789068/
Notation for marking the voltage drop in this picture

1. Dec 24, 2014 — LongApple

1. The problem statement, all variables and given/known data

When we write the V_0 on the right of the diagram as shown below, between which two points is the voltage drop measured? There are two nodes on the top, correct? I am assuming the voltage drop refers to the two rightmost nodes. I've tried to circle the nodes. http://i.imgur.com/kutasHe.png

2. Relevant equations

It's just a notation question.

3. The attempt at a solution

It's not a homework problem.

2. Dec 24, 2014 — Bystander

Good assumption. Boy, what a lousy problem.

3. Dec 24, 2014 — LongApple

What is the rule for the notation in general? Is it common to just mark voltage as height on the paper as opposed to between specific nodes?

4. Dec 24, 2014 — Bystander

Are there standard conventions/notations for circuit diagrams? I'm certain there almost have to be. Are they universally applied? Not in the 50 odd years I've been deciphering them. Check NEMA, IEEE, Giaccaletto, Kaufman & Seidman, who else .....

5. Dec 24, 2014 — Staff: Mentor

Because it's written beside an element, I'd say in general a marked voltage will be the voltage across that element. Yes, in this case it follows that it's the potential difference between the rightmost nodes.

6. Dec 24, 2014 — ehild

And that element is the current generator. So the marked voltage is across it.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9137164354324341, "perplexity": 2188.5541565846843}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823350.23/warc/CC-MAIN-20171019160040-20171019180040-00574.warc.gz"}
https://www.physicsforums.com/threads/basis-for-the-homogeneous-system.398744/
# Homework Help: Basis for the homogeneous system

1. Apr 26, 2010

### thushanthan

1. The problem statement, all variables and given/known data

Find a basis for the solution space of the homogeneous system of linear equations AX = 0.

2. Relevant equations

Let

A = 1 2 3 4 5 6
    6 6 5 4 3 3
    1 2 3 4 5 6

and X = (x, y, z, u, v, w)^T.

3. The attempt at a solution

2. Apr 27, 2010

### HallsofIvy

In LaTeX, your matrix problem is
$$\begin{bmatrix}1 & 2 & 3 & 4 & 5 & 6 \\ 6 & 6 & 6 & 4 & 3 & 3 \\ 1 & 2 & 3 & 4 & 5 & 6 \end{bmatrix}\begin{bmatrix} x \\ y \\ z \\ u \\ v \\ w\end{bmatrix}= \begin{bmatrix}0 \\ 0 \\ 0 \end{bmatrix}$$
That is the same as the three equations x+ 2y+ 3z+ 4u+ 5v+ 6w= 0, 6x+ 6y+ 6z+ 4u+ 3v+ 3w= 0, and x+ 2y+ 3z+ 4u+ 5v+ 6w= 0. Of course, the first and third are exactly the same, so we only have two equations. We can solve those two equations for two of the variables in terms of the other 4. Replace those two in <x, y, z, u, v, w> with their (linear) expressions in the other 4.

For example, suppose the solution were u= 2x- 3y+ 4z- w, v= x+ y- 3z+ 4w. (I just made those up. Solve the two equations yourself.) Then we could write <x, y, z, u, v, w> = <x, y, z, 2x- 3y+ 4z- w, x+ y- 3z+ 4w, w>. Now separate variables: <x, 0, 0, 2x, x, 0> + <0, y, 0, -3y, y, 0> + <0, 0, z, 4z, -3z, 0> + <0, 0, 0, -w, 4w, w>. Finally, take each variable out of its vector: x<1, 0, 0, 2, 1, 0> + y<0, 1, 0, -3, 1, 0> + z<0, 0, 1, 4, -3, 0> + w<0, 0, 0, -1, 4, 1>. Since any vector in the nullspace can be written as a linear combination of those 4 vectors, they form a basis for the null space. (Again, that is NOT the solution to YOUR problem. You will have to solve those two equations for two of the variables yourself.)

3. Apr 27, 2010

### thushanthan

Thank you!! Now I got it.
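For readers who want to check such a computation by machine, here is a short, hypothetical verification (not part of the original thread) using SymPy's exact `nullspace()` routine on the matrix HallsofIvy typeset above:

```python
# Hypothetical check (not from the thread): compute a basis of the null space
# of A with exact rational arithmetic.
from sympy import Matrix

A = Matrix([
    [1, 2, 3, 4, 5, 6],
    [6, 6, 6, 4, 3, 3],
    [1, 2, 3, 4, 5, 6],
])

# nullspace() returns a list of column vectors spanning {X : A*X = 0};
# since rank(A) = 2 and A has 6 columns, the list contains 4 vectors.
basis = A.nullspace()
for v in basis:
    print(v.T)                            # print each basis vector as a row
    assert A * v == Matrix.zeros(3, 1)    # each vector really solves A*X = 0
```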
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8588468432426453, "perplexity": 1759.3683084586125}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376828507.84/warc/CC-MAIN-20181217135323-20181217161323-00520.warc.gz"}
https://jeremycote.me/2017/06/03/why-cant-i-divide-by-sin/
One of the most common misconceptions I see while working with students in secondary school is the notion of an inverse. The idea isn't too complicated, but the reason I see students making mistakes with it is that they are in the process of learning about functions, and it becomes a cognitive burden to think about abstract processes such as inverses and other transformations. However, I firmly believe that giving students the right idea of how these different concepts fit together will help them navigate their classes with ease.

## Example: Trigonometric Functions

I want to start right off with an example that is indicative of why it's important that you understand inverses. Imagine you had the following problem, and you were looking to solve for $x$:

$10 + 3\sin(x) = 11$

There's nothing particularly nasty about this equation. Like all the other ones you see, you need to isolate $x$, so you start by subtracting $10$ from both sides. From here, you then probably say to yourself, "We need to divide by $3$ on each side." So, that's what we do:

$\sin(x) = \frac{1}{3}$

Now, here's where things can get tricky if you're not being careful. Depending on what you've been taught, you will have several reactions to this. Unfortunately, the one that usually happens is, "Let's divide both sides by $\sin$", giving us:

$x = \frac{1/3}{\sin}$

Let's state it right now: this is incorrect. In fact, you can prove it to yourself by trying to enter this value of $x$ into your calculator. It won't work (unless you wrote the $3$ after the $\sin$, which then would give you an answer, albeit an incorrect one). I wouldn't mention this if I hadn't seen it enough before, but I think it captures a misunderstanding of something that's quite a bit more important than simply saying, "You have to use $\sin^{-1}$ to get the answer." What I want to give you is a reason to understand why what I wrote above is wrong, and this is the concept of an inverse.

## What exactly is an inverse?

Like virtually all topics in mathematics, there are many ways to think about inverses. For the purpose of solving equations, I want to present this simple first thought: an inverse "undoes" whatever you've done to an expression. It's similar to an "undo" button that you might use on a computer. This isn't very precise, so let me give you a more mathematical definition:

Given a function $f(x)$, its inverse, denoted $f^{-1}(x)$, is defined by the following: $f^{-1}(f(x))=x$.

This might still seem a little unclear, but with a few examples, you will understand that this isn't a groundbreaking concept.

### Example: Find the inverse of $x^2$

To find the inverse, we need to first identify what kind of function is "acting" on $x$. In this case, the function that is acting on $x$ takes the form $f(t)=t^2$. Note that I've used the variable $t$ in order to make it distinct from $x$. Then, once we substitute $t=x$ into our equation, we get $f(x)=(x)^2$. I also added parentheses around the $x$ in this equation in order to show you that the thing I'm doing to $x$ is squaring it.

So far, so good. We now have to find the inverse, $f^{-1}(x)$. To do this, ask yourself the question: how do I get rid of the "squaring" function that is acting on the $x$ at the moment? Remember, our goal is to make a new function such that when you substitute $(x)^2$ into it, the result you get is $x$. Try it out for yourself. The operation we need to do is take the square root. As such, we define our new inverse function to be $f^{-1}(t)=\sqrt{t}$. What this means is that I have to take the square root of whatever I put into $t$.
For our purposes, we are going to feed it our function $f(x)=(x)^2$, giving us:

$f^{-1}(f(x)) = \sqrt{(x)^2} = x$

In other words, if we had the equation $x^2=4$, we know that solving for $x$ means taking the square root on both sides of the equation, giving us $x=\pm 2$. One way to look at this is to say that you're taking the inverse of the square function, which returns the variable by itself (in this case, $x$).

## Revisiting our example

Let's go back to our trigonometric example from above. To remind you, we were trying to solve:

$\sin(x) = \frac{1}{3}$

At this point, you should be looking at this and thinking, "Okay, there's a function that's acting on $x$ on the left hand side of the equation. In order to solve for $x$, all I need to do is take the inverse of that function." Indeed, this is precisely the purpose of the inverse sine function, denoted $\sin^{-1}(x)$! Its purpose is to "undo" the work that the sine function did, and return the angle you originally fed the function (in this case, $x$). Explicitly, this is how the manipulation goes:

$\sin^{-1}(\sin(x)) = \sin^{-1}\left(\frac{1}{3}\right) \quad \Rightarrow \quad x = \sin^{-1}\left(\frac{1}{3}\right)$

One thing that I want to very clearly express: the $-1$ superscript on the function is not an exponent. Instead, it's just a symbol we use to declare that it's the inverse function, just like our symbol of a generic inverse function for a regular function $f(x)$ is $f^{-1}(x)$.

## Solving equations is just repeatedly applying inverse functions

Once you understand the idea of an inverse function, you start to see that they are everywhere. Indeed, when we solve basically any equation, we are implicitly asking, "How do I undo what the equation has done?" If you look back to the example we first started with, $10+3\sin(x) = 11$, we first applied the inverse of $f(t)=10+t$ on both sides, namely $f^{-1}(t)=t-10$. This corresponded to subtracting $10$ from both sides of the equation. Similarly, we applied another inverse function to divide both sides by $3$. Remember, when you're trying to solve for a certain quantity, you want to undo what the equation has done. By looking at solving equations as repeatedly applying inverses, you won't make the mistake of dividing by $\sin$ ever again.

## Final note

I just wanted to include a final remark about inverses here. I didn't explicitly say it above, but when you're working with algebra (but not special functions like trigonometric ones), you usually have two different kinds of inverses: additive inverses and multiplicative inverses. They aren't complicated at all, but they are slightly different.

An additive inverse means that, if you have a certain term that we'll call $a$, then an additive inverse satisfies the following:

$a + (-a) = 0$

That's simple enough, and it usually means just slapping on a negative sign to your term. Next, we have the multiplicative inverse. If you have a term that we'll call $x$, then the multiplicative inverse satisfies the following:

$x \cdot \frac{1}{x} = 1$

Once again, nothing too complicated. Just be aware that both exist, and that they are both different "kinds" of inverses. You use a specific one depending on what you're trying to solve for.
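To make the contrast concrete, here is a small, hypothetical Python check (not part of the original post) of the correct inverse-function route:

```python
# Hypothetical demo (not from the post): solve 10 + 3*sin(x) = 11 by
# applying inverse functions step by step, never "dividing by sin".
import math

# 10 + 3*sin(x) = 11  ->  sin(x) = 1/3  ->  x = sin^{-1}(1/3)
x = math.asin(1 / 3)
print(x)                        # ~0.3398 radians
print(10 + 3 * math.sin(x))     # 11.0, confirming x solves the equation

# "Dividing by sin" is meaningless: sin is a function acting on x,
# not a factor multiplying x, so only the inverse function undoes it.
```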
{"extraction_info": {"found_math": true, "script_math_tex": 1, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.881505012512207, "perplexity": 189.6821323870916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912203947.59/warc/CC-MAIN-20190325112917-20190325134917-00067.warc.gz"}
https://byjus.com/questions/what-is-intermediate-complex-theory-of-catalysis/
# What Is Intermediate Complex Theory Of Catalysis?

A catalyst is a substance that alters the rate of a reaction. According to the intermediate complex theory, the catalyst first combines with one of the reactants to form an intermediate compound, and this compound is formed with less energy consumption than is needed for the uncatalysed reaction. The intermediate compound, being unstable, combines with the other reactant to form the desired product, and the catalyst is regenerated. Usually, homogeneous catalysis follows the intermediate compound formation theory of catalysis. What actually happens here is that, due to the intermediate complex formation, the activation energy decreases and the rate of reaction accelerates.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8140717148780823, "perplexity": 2005.9597005402034}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057447.52/warc/CC-MAIN-20210923195546-20210923225546-00690.warc.gz"}
http://mathhelpforum.com/calculus/162622-finding-dy-dx-equation-using-both-chain-rule-product-rule.html
# Thread: Finding dy/dx of equation using both chain rule and product rule

1. ## Finding dy/dx of equation using both chain rule and product rule

Hello everyone,

Recently I've been learning about the chain rule and the product rule, which by themselves are fairly straightforward to apply. However, it becomes a little more complex when attempting to differentiate an expression that needs both of them together. So any help in differentiating the following equation using the chain rule and product rule will be greatly appreciated. Here is the equation:

y = (3x + 5)^2 . (2x - 2)

I solved the chain rule part (I think), which is the (3x + 5)^2 segment. Using the chain rule, I got the answer 6(3x + 5), leaving me with the equation y = 6(3x + 5)(2x - 2) to be solved using the product rule. But I am not entirely confident with the steps involved, as I believe it involves factorizing and such.

Thanks!
Nathaniel

2. That's not how it is done... y = 6(3x + 5)(2x - 2) is wrong, because y = (3x + 5)^2 . (2x - 2) [use . if necessary to show a product]

Using the product rule first: let us break it up. Let u = (3x+5)^2 and let v = (2x-2). Then we get:

$\dfrac{du}{dx} = 6(3x + 5) = 18x + 30$

$\dfrac{dv}{dx} = 2$

Then, y = uv, so y' = u dv/dx + v du/dx:

$y'= (3x+5)^2 \cdot 2 + (2x-2) \cdot (18x + 30)$

Now, you simplify:

$y'= 2(3x+5)^2 + (2x-2)(18x + 30)$

You can expand and simplify further if you want.

3. $\displaystyle y = (3x + 5)^2(2x - 2)$

$\displaystyle \frac{dy}{dx} = (3x + 5)^2\,\frac{d}{dx}(2x - 2) + (2x -2)\,\frac{d}{dx}[(3x + 5)^2]$

$\displaystyle = 2(3x + 5)^2 + 6(3x + 5)(2x - 2)$

$\displaystyle = (3x + 5)[2(3x + 5) + 6(2x - 2)]$

$\displaystyle = (3x + 5)(6x + 10 + 12x - 12)$

$\displaystyle = (3x + 5)(18x - 2)$

$\displaystyle = 2(3x + 5)(9x - 1)$.

4. Thank you both so much for the speedy replies, Unknown008 and Prove It. Okay, so I get both methods, but there is just one thing that I think I need clarification on. Prove It, can you please explain the reasoning behind the factorizing? As in, I do not understand why the 6 suddenly moved in front of the (2x - 2), and why the (3x + 5) is out front of what seems to be a factorized equation. Once again, thanks so much for all the help!

EDIT: Actually, I get it now, I think! That factorizing is just like x(2x + 6y), and when you expand you get 2x^2 + 6xy. And subbing the values of x and y, I get: 2(3x + 5)^2 + 6(3x + 5)(2x - 2). I think that is correct reasoning. Please do clarify if I am wrong though.

5. Yes, that's it. Once you get the hang of it, you can differentiate directly to what Prove It posted.

6. Actually, there's just one last thing. The second last line of Prove It's method reads (3x + 5)(18x - 2), and the final answer is 2(3x + 5)(9x - 1). I'm just wondering if I need to use the product rule to achieve that final answer? Thanks again. Nathaniel.

7. The conversion from the second last line to the last line is simply the factorisation (18x - 2) = 2(9x - 1). The product rule is used only on the line where dy/dx first appeared.

8. Very good indeed, Unknown008. So the equation would still be correct if I wrote it as (3x + 5) . 2(9x - 1)? It was just confusing, because the nature of (3x + 5) didn't change with the 2 stuck out front.

9. It's algebraically the same thing. That's why in my post, I told you that you could simplify further if you want. Sometimes, for the problem at hand, it's better to simplify to make the other parts easier. Sometimes, it's just a matter of substituting values of x and/or y; then simplification doesn't necessarily make it easier than before.
10. Excellent. I think my troubles regarding this problem have been solved now. I'll just do some practice with similar problems. This has been a great help, and yet again a wonderful resource. Cheers!
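As a quick, hypothetical check of the thread's result (not posted by anyone in the discussion), SymPy reproduces the factored derivative:

```python
# Hypothetical check (not from the thread): differentiate y = (3x+5)^2 (2x-2)
# and confirm the factored form 2(3x+5)(9x-1) derived above.
from sympy import symbols, diff, factor, simplify

x = symbols('x')
y = (3*x + 5)**2 * (2*x - 2)

dy = diff(y, x)                                     # product + chain rule
print(factor(dy))                                   # 2*(3*x + 5)*(9*x - 1)
assert simplify(dy - 2*(3*x + 5)*(9*x - 1)) == 0    # matches the hand result
```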
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8597288131713867, "perplexity": 543.8962281801486}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549423992.48/warc/CC-MAIN-20170722102800-20170722122800-00069.warc.gz"}
http://mathhelpforum.com/discrete-math/136314-how-prove-distributivity-cardinals.html
# Thread: How to prove distributivity of cardinals?

1. ## How to prove distributivity of cardinals?

I have this kind of problem:

(1): Let k, m, l be cardinals. Prove that k(m+l) = km + kl.

I know that I have to prove that there exists a bijection between K×(L∪M) and (K×L)∪(K×M), where card(K) = k, card(L) = l and card(M) = m, but how do I do it? I'm a little bit lost here, can anyone help?

2. Pick representative sets K, L and M, respectively (all mutually disjoint). You want to show that K×(L∪M) ∼ (K×L)∪(K×M). These are actually equal, since if (a,b) is in the LHS then either b ∈ L or b ∈ M. In the first case, (a,b) ∈ K×L, and in the second, (a,b) ∈ K×M. The converse also applies.
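For finite sets, the equality claimed in the reply can be checked mechanically; the following is a hypothetical sketch (not part of the thread) with arbitrarily chosen small sets:

```python
# Hypothetical sanity check (not from the thread): for finite representative
# sets with L and M disjoint, K x (L u M) equals (K x L) u (K x M) literally.
from itertools import product

K, L, M = {0, 1}, {'a', 'b', 'c'}, {'x', 'y'}      # L and M are disjoint

lhs = set(product(K, L | M))
rhs = set(product(K, L)) | set(product(K, M))

assert lhs == rhs                                  # equal as sets, not just equinumerous
print(len(lhs), '==', len(K) * (len(L) + len(M)))  # 10 == 2 * (3 + 2)
```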
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9876278042793274, "perplexity": 2390.892905184785}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189214.2/warc/CC-MAIN-20170322212949-00024-ip-10-233-31-227.ec2.internal.warc.gz"}
https://rd.springer.com/article/10.1140/epjc/s10052-018-5602-x
The European Physical Journal C, 78:123

# Joule–Thomson expansion of Kerr–AdS black holes

Open Access Regular Article - Theoretical Physics

## Abstract

In this paper, we study the Joule–Thomson expansion of Kerr–AdS black holes in the extended phase space. A Joule–Thomson expansion formula for Kerr–AdS black holes is derived. We investigate both isenthalpic and numerical inversion curves in the TP plane and demonstrate the cooling–heating regions for Kerr–AdS black holes. We also calculate the ratio between the minimum inversion and critical temperatures for Kerr–AdS black holes.

## 1 Introduction

Since the first studies of Bekenstein and Hawking [1, 2, 3, 4, 5, 6], black holes as thermodynamic systems have been an interesting research field in theoretical physics. Black hole thermodynamics provides fundamental relations between theories such as classical general relativity, thermodynamics and quantum mechanics. Black holes as thermodynamic systems have many exciting similarities with conventional thermodynamic systems. These similarities become more obvious and precise for black holes in AdS space. The properties of AdS black hole thermodynamics have been studied since the seminal paper of Hawking and Page [7]. Furthermore, the thermodynamic properties of charged AdS black holes were studied in [8, 9], where it was shown that charged AdS black holes have a van der Waals-like phase transition. Recently, black hole thermodynamics in AdS space has been intensively studied in the extended phase space, where the cosmological constant is considered as the thermodynamic pressure. The extended phase space leads to important results: the Smarr relation is satisfied alongside the first law of black hole thermodynamics in the presence of a variable cosmological constant. It also provides a definition of the thermodynamic volume, which is more sensible than the geometric volume of the black hole. In addition to these similarities with conventional thermodynamic systems, the AdS/CFT correspondence [10] is another important reason for studying AdS black holes. Considering the cosmological constant as the thermodynamic pressure,

\begin{aligned} P=-\frac{\varLambda }{8\pi }, \end{aligned} (1)

and its conjugate quantity as the thermodynamic volume,

\begin{aligned} V=\left( \frac{\partial M}{\partial P}\right) _{S,Q,J}, \end{aligned} (2)

leads us to investigate thermodynamic properties, rich phase structures and other thermodynamic phenomena for AdS black holes in a way similar to conventional thermodynamic systems. Based on this idea, the thermodynamic properties and phase transition of the charged AdS black hole were studied by Kubiznak and Mann [11]. It was shown in this study that the charged AdS black hole phase transition has the same characteristic behaviour as the van der Waals liquid–gas phase transition. They also computed critical exponents and showed that they coincide with those of van der Waals fluids. It was shown in [12] that treating the cosmological constant as pressure requires considering the black hole mass M as the enthalpy H rather than as the internal energy U.
In recent years, the thermodynamic properties and phase transitions of AdS black holes have been widely investigated [13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54].1 The phase transition of AdS black holes in the extended phase space is not restricted to a van der Waals type transition: the reentrant phase transition and the triple point for AdS black holes were also studied in [31, 32, 33, 34]. The compressibility of rotating AdS black holes in four and higher dimensions was studied in [35, 36]. In [37], a general method was used for computing the critical exponents of AdS black holes which have a van der Waals-like phase transition. Furthermore, the heat-engine behaviour of AdS black holes has been studied. For example, in [38] two kinds of heat engines were proposed by Johnson for charged AdS black holes, and heat engines were studied for various black hole solutions in [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49].2 More recently, adiabatic processes [53] and the Rankine cycle [54] have been studied for the charged AdS black holes. In [55], we also studied the well-known Joule–Thomson expansion process for the charged AdS black holes. We obtained the inversion temperature to investigate inversion and isenthalpic curves. We also showed the heating–cooling regions in the TP plane. However, so far, the Joule–Thomson expansion for Kerr–AdS black holes in the extended phase space has never been studied. The main purpose of this study is to investigate the Joule–Thomson expansion for Kerr–AdS black holes.

The paper is arranged as follows. In Sect. 2, we briefly review some thermodynamic properties of Kerr–AdS black holes, which were introduced in [14].3 In Sect. 3, we first derive a Joule–Thomson expansion formula for Kerr–AdS black holes by using the first law and the Smarr formula. Then we obtain the equation relating the inversion pressure $$P_{i}$$ and the entropy S to investigate the inversion curves. We also show that the ratio between the minimum inversion and critical temperatures for Kerr–AdS black holes is the same as that of charged AdS black holes [55]. Finally, we discuss our results in Sect. 4. (Here we use the units $$G_{N}=\hbar =k_{B}=c=1$$.)

## 2 Kerr–AdS black holes

In this section, we briefly review Kerr–AdS black hole thermodynamic properties in the extended phase space. The line element of the Kerr–AdS black hole in four-dimensional AdS space is given by

\begin{aligned} \mathrm{d}s^{2}= & {} -\frac{\varDelta }{\rho ^{2}}\left( \mathrm{d}t-\frac{a\sin ^{2}\theta }{\varXi }\mathrm{d}\phi \right) ^{2}+\frac{\rho ^{2}}{\varDelta }\mathrm{d}r^{2}+\frac{\rho ^{2}}{\varDelta _{\theta }}\mathrm{d}\theta ^{2}\nonumber \\&+\,\frac{\varDelta _{\theta }\sin ^{2}\theta }{\rho ^{2}}\left( a\mathrm{d}t-\frac{r^{2}+a^{2}}{\varXi }\mathrm{d}\phi \right) ^{2}, \end{aligned} (3)

where

\begin{aligned} \varDelta= & {} \frac{(r^{2}+a^{2})(l^{2}+r^{2})}{l^{2}}-2mr, \qquad \varDelta _{\theta }=1-\frac{a^{2}}{l^{2}}\cos ^{2}\theta , \nonumber \\ \rho ^{2}= & {} r^{2}+a^{2}\cos ^{2}\theta , \qquad \varXi =1-\frac{a^{2}}{l^{2}}, \end{aligned} (4)

and l represents the AdS curvature radius. The metric parameters m and a are related to the black hole mass M and the angular momentum J by

\begin{aligned} M=\frac{m}{\varXi ^{2}}, \qquad J=a\frac{m}{\varXi ^{2}}.
\end{aligned} (5)

The mass of a Kerr–AdS black hole in terms of S, J and P [14, 56] is given by

\begin{aligned} M=\frac{1}{2}\sqrt{\frac{\left( S+\frac{8PS^{2}}{3}\right) ^{2}+4\pi ^{2}\left( 1+\frac{8PS}{3}\right) J^{2}}{\pi S}}. \end{aligned} (6)

The first law and the corresponding Smarr relation of the Kerr–AdS black hole are given by

\begin{aligned} dM= & {} TdS+VdP+\varOmega dJ, \end{aligned} (7)

\begin{aligned} \frac{M}{2}= & {} TS-VP+\varOmega J, \end{aligned} (8)

respectively, and the Smarr relation can be derived by a scaling argument [12]. From Eq. (7), one can obtain the thermodynamic quantities. The expression for the temperature is

\begin{aligned} T= & {} \left( \frac{\partial M}{\partial S}\right) _{J,P}=\frac{1}{8\pi M}\left[ \left( 1+\frac{8PS}{3}\right) \left( 1+8PS\right) -\,4\pi ^{2}\left( \frac{J}{S}\right) ^{2}\right] . \end{aligned} (9)

The thermodynamic volume is defined by

\begin{aligned} V=\left( \frac{\partial M}{\partial P}\right) _{S,J}=\frac{2}{3\pi M}\left[ S\left( S+\frac{8PS^{2}}{3}\right) +2\pi ^{2}J^{2}\right] . \end{aligned} (10)

Finally, we obtain the angular velocity as follows:

\begin{aligned} \varOmega =\left( \frac{\partial M}{\partial J}\right) _{S,P}=\frac{\pi J}{MS}\left( 1+\frac{8PS}{3}\right) . \end{aligned} (11)

In this section, we obtained some thermodynamic quantities of Kerr–AdS black holes. In the next section, we will use these quantities to investigate Joule–Thomson expansion effects for Kerr–AdS black holes.

## 3 Joule–Thomson expansion

In this section, we will investigate the Joule–Thomson expansion for Kerr–AdS black holes. The expansion is characterized by the change of temperature with respect to pressure. The enthalpy remains constant during the expansion process. As we know from [12], the black hole mass is identified with the enthalpy in AdS space. Therefore, the black hole mass remains constant during the expansion process. The Joule–Thomson coefficient $$\mu$$, which characterizes the expansion, is given by [57]

\begin{aligned} \mu =\left( \frac{\partial T}{\partial P}\right) _{J,M}. \end{aligned} (12)

The cooling–heating regions can be determined by the sign of Eq. (12). The change of pressure is negative, since the pressure always decreases during the expansion. The temperature may decrease or increase during the process; therefore, the temperature change determines the sign of $$\mu$$. If $$\mu$$ is positive (negative), cooling (heating) occurs. The inversion curve, which is obtained at $$\mu =0$$ for infinitesimal pressure drops, characterizes the expansion process and determines the cooling–heating regions in the TP plane.4

We begin by deriving the Joule–Thomson coefficient formula for Kerr–AdS black holes. First, we differentiate Eq. (8) to obtain

\begin{aligned} dM= & {} 2(TdS+SdT-VdP-PdV+\varOmega dJ+\,Jd\varOmega ). \end{aligned} (13)

Since $$dM=dJ=0$$, Eqs. (7) and (13) can be written as

\begin{aligned}&TdS=-VdP, \end{aligned} (14)

\begin{aligned}&TdS+SdT-VdP-PdV+Jd\varOmega =0, \end{aligned} (15)

respectively. Substituting Eq. (14) into Eq. (15), one obtains
\begin{aligned} -2V{+}S\left( \frac{\partial T}{\partial P}\right) _{M}-P\left( \frac{\partial V}{\partial P}\right) _{M}{+}J\left( \frac{\partial \varOmega }{\partial P}\right) _{M}=0,\quad \end{aligned} (16)

which can be rearranged to give the Joule–Thomson formula as follows:

\begin{aligned} \mu =\left( \frac{\partial T}{\partial P}\right) _{M}=\frac{1}{S}\left[ P\left( \frac{\partial V}{\partial P}\right) _{M}-J\left( \frac{\partial \varOmega }{\partial P}\right) _{M}+2V\right] . \end{aligned} (17)

Here we obtain the Joule–Thomson expansion formula in terms of the Kerr–AdS black hole parameters. At the inversion pressure $$P_{i}$$, $$\mu$$ equals zero, and therefore we obtain $$P_{i}$$ from Eq. (17):

\begin{aligned} P_{i}=\left( \frac{\partial P}{\partial V}\right) _{M}\left[ J\left( \frac{\partial \varOmega }{\partial P}\right) _{M}-2V\right] . \end{aligned} (18)

From Eq. (6), we can obtain the pressure as a function of mass, entropy and angular momentum:

\begin{aligned} P=\frac{3}{8}\left[ \frac{2\sqrt{\pi }\sqrt{\pi ^{3}J^{4}+M^{2}S^{3}}-2\pi ^{2}J^{2}}{S^{3}}-\frac{1}{S}\right] . \end{aligned} (19)

If we combine Eqs. (10), (11) and (19) with Eq. (18), we obtain a relation between the inversion pressure and the entropy as follows:

\begin{aligned}&256P_{i}^{3}S^{7}+256P_{i}^{2}S^{6}+84P_{i}S^{5}+(9-384\pi ^{2}J^{2}P_{i}^{2})S^{4}-336\pi ^{2}J^{2}P_{i}S^{3}-72\pi ^{2}J^{2}S^{2}-72\pi ^{4}J^{4}=0. \end{aligned} (20)

The last equation is useful for determining the inversion curves, but first we investigate the minimum inversion temperature. For $$P_{i}=0$$, Eq. (20) reduces to

\begin{aligned} S^{4}-8\pi ^{2}J^{2}S^{2}-8\pi ^{4}J^{4}=0, \end{aligned} (21)

and we find four roots for this equation. However, only one root is physically meaningful. This root is given by

\begin{aligned} S=\sqrt{2(2+\sqrt{6})}\pi J. \end{aligned} (22)

One can substitute Eq. (22) into Eq. (9) and obtain the minimum inversion temperature,

\begin{aligned} T_{i}^{\mathrm{min}}=\frac{\sqrt{3}}{4(916+374\sqrt{6})^{\frac{1}{4}}\pi \sqrt{J}}. \end{aligned} (23)

For Kerr–AdS black holes, the critical temperature $$T_{\mathrm{c}}$$ is given by [16]

\begin{aligned} T_{\mathrm{c}}= & {} \frac{64k_{1}^{2}k_{2}^{4}+32k_{1}k_{2}^{3}+3k_{2}^{2}-12}{4\pi k_{2}\sqrt{k_{2}(8k_{1}k_{2}+3)(8k_{1}k_{2}^{3}+3k_{2}^{2}+12)}}\frac{1}{\sqrt{J}}\simeq \frac{0.041749}{\sqrt{J}}, \end{aligned} (24)

where

\begin{aligned} k_{1}= & {} \frac{1}{64\left( 103-3\sqrt{87}\right) ^{17/3}}\times \,\bigg (-2^{2/3}\left( 225679003807-24183767608\sqrt{87}\right) \times \,\root 3 \of {103-3 \sqrt{87}}-17\left( 103-3\sqrt{87}\right) ^{2/3}\times \,\left( 484826973\sqrt{87}-5116133497\right) -\,\root 3 \of {2}\left( 68098470527+5855463275 \sqrt{87}\right) \bigg ),\nonumber \\ k_{2}= & {} \frac{2}{3}\left( 2+\root 3 \of {206-6\sqrt{87}}+\root 3 \of {206+6\sqrt{87}}\right) . \end{aligned}

The ratio between the minimum inversion and critical temperatures is given by

\begin{aligned} \frac{T_{i}^{\mathrm{min}}}{T_{\mathrm{c}}}\approx 0.504622, \end{aligned} (25)

which is the same as the value for charged AdS black holes [55]. Solving Eq. (20) analytically may not be possible. Therefore, we use numerical solutions to plot the inversion curves in the TP plane; a minimal numerical sketch of this procedure is given below.
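The following sketch illustrates how such inversion curves can be traced numerically; it is an illustration under our conventions, not the code used for the figures. For each entropy S above the minimum inversion point of Eq. (22), Eq. (20) is solved as a cubic in $$P_{i}$$, and the inversion temperature then follows from Eqs. (6) and (9).

```python
# Minimal sketch (not the paper's own code): trace the inversion curve of
# Eq. (20) for a Kerr-AdS black hole with angular momentum J.
import numpy as np

def inversion_pressure(S, J):
    """Smallest real non-negative root of Eq. (20), viewed as a cubic in P_i."""
    p2 = np.pi**2
    c3 = 256 * S**7
    c2 = 256 * S**6 - 384 * p2 * J**2 * S**4
    c1 = 84 * S**5 - 336 * p2 * J**2 * S**3
    c0 = 9 * S**4 - 72 * p2 * J**2 * S**2 - 72 * np.pi**4 * J**4
    roots = np.roots([c3, c2, c1, c0])
    real = roots[np.isreal(roots)].real
    real = real[real > -1e-9]                 # non-negative roots, with tolerance
    return max(real.min(), 0.0) if real.size else np.nan

def mass(S, J, P):           # Eq. (6)
    return 0.5 * np.sqrt(((S + 8*P*S**2/3)**2
                          + 4*np.pi**2*(1 + 8*P*S/3)*J**2) / (np.pi*S))

def temperature(S, J, P):    # Eq. (9)
    return ((1 + 8*P*S/3)*(1 + 8*P*S)
            - 4*np.pi**2*(J/S)**2) / (8*np.pi*mass(S, J, P))

J = 1.0
S_min = np.sqrt(2*(2 + np.sqrt(6))) * np.pi * J   # Eq. (22), where P_i = 0
for S in np.linspace(S_min, 40*S_min, 6):
    P_i = inversion_pressure(S, J)
    print(f"S = {S:8.2f}   P_i = {P_i:9.5f}   T_i = {temperature(S, J, P_i):.5f}")
```

At $$S = S_{min}$$ the sketch returns $$P_{i}=0$$ and the minimum inversion temperature of Eq. (23); sweeping S upward generates the lower inversion curve of the type shown in Fig. 1.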
In Fig. 1, we plot the inversion curves for various values of the angular momentum. In contrast to van der Waals fluids, it can be seen from Fig. 1 that the inversion curves are not closed and there is only one inversion curve. We found similar behaviour for the charged AdS black holes in our previous work [55]. In Fig. 2, we plot isenthalpic (constant mass) and inversion curves for various values of the angular momentum in the TP plane. If the entropy obtained from Eq. (6) is substituted into Eq. (9), we obtain the constant mass curves in the TP plane. As can be seen from Fig. 2, the inversion curves divide the plane into two regions. The region above the inversion curves corresponds to the cooling region, while the region under the inversion curves corresponds to the heating region. Indeed, the heating and cooling regions are already determined by the sign of the slope of the isenthalpic curves: the sign of the slope is positive in the cooling region and negative in the heating region. On the other hand, cooling (heating) does not happen on the inversion curve, which plays the role of a boundary between the two regions.

## 4 Conclusions

In this study, we investigated the Joule–Thomson expansion for Kerr–AdS black holes in the extended phase space. The Kerr–AdS black hole Joule–Thomson formula was derived by using the first law of black hole thermodynamics and the Smarr relation. We plotted isenthalpic and inversion curves in the TP plane. In order to plot the inversion curves, we solved Eq. (20) numerically. Moreover, we obtained the minimum inversion temperature $$T_{i}^{\mathrm{min}}$$ and calculated the ratio between the inversion and critical temperatures for Kerr–AdS black holes. Similar results were reported by us for the charged AdS black holes in [55]. For example, there is only a lower inversion curve for Kerr–AdS and charged AdS black holes. Therefore, we only consider a minimum inversion temperature $$T_{i}^{\mathrm{min}}$$ at $$P_{i}=0$$. The cooling regions are not closed for either system. The ratios between the minimum inversion temperatures and the critical temperatures are nearly the same for the two black hole solutions. The ratio may deviate from 0.5 for other black hole solutions, and the same ratio may be obtained for other black hole solutions in different limiting cases. Furthermore, we restricted the study to a four-dimensional solution; therefore, the ratio may depend on the dimension of space-time.

In order to compare the charged AdS/Kerr–AdS black holes with van der Waals fluids, we present schematic inversion curves for van der Waals fluids and the charged AdS/Kerr–AdS black holes in Fig. 3. In contrast to the charged AdS and Kerr–AdS black holes, there are upper and lower inversion curves for van der Waals fluids [55]. Therefore, the cooling region is closed, and we consider both the minimum inversion temperature $$T_{i}^{\mathrm{min}}$$ and the maximum inversion temperature $$T_{i}^{\mathrm{max}}$$ for this system. While cooling always occurs above the inversion curves for both black hole solutions, cooling only occurs in the region surrounded by the upper and lower inversion curves for van der Waals fluids.

## Footnotes

1. See [50, 51, 52] and the references therein for various black hole solutions.

2. See [52] and the references therein.

3. Indeed, the Kerr–Newman–AdS black hole thermodynamic functions are introduced in [14], but one can easily obtain the Kerr–AdS black hole thermodynamic functions when the electric charge Q goes to zero.

4. There are two approaches to the Joule–Thomson expansion process. The differential and integral versions correspond to infinitesimal and finite pressure drops, respectively.
In this paper, we considered the differential version of the Joule–Thomson expansion for Kerr–AdS black holes. See [57].

## Notes

### Acknowledgements

We would like to thank the anonymous referees for their helpful and constructive comments.

## References

1. J.D. Bekenstein, Lett. Nuovo Cimento 4, 737 (1972)
2. J.D. Bekenstein, Phys. Rev. D 7, 2333 (1973)
3. J.M. Bardeen, B. Carter, S.W. Hawking, Commun. Math. Phys. 31, 161 (1973)
4. J.D. Bekenstein, Phys. Rev. D 9, 3292 (1974)
5. S.W. Hawking, Nature 248, 30 (1974)
6. S.W. Hawking, Commun. Math. Phys. 43, 199 (1975)
7. S.W. Hawking, D.N. Page, Commun. Math. Phys. 87, 577 (1983)
8. A. Chamblin, R. Emparan, C.V. Johnson, R.C. Myers, Phys. Rev. D 60, 064018 (1999)
9. A. Chamblin, R. Emparan, C.V. Johnson, R.C. Myers, Phys. Rev. D 60, 104026 (1999)
10. J.M. Maldacena, Int. J. Theor. Phys. 38, 1113 (1999)
11. D. Kubiznak, R.B. Mann, J. High Energy Phys. 07, 033 (2012)
12. D. Kastor, S. Ray, J. Traschen, Class. Quantum Gravity 26, 195011 (2009)
13. B.P. Dolan, Class. Quantum Gravity 28, 235017 (2011)
14. B.P. Dolan, arXiv:1209.1272 (2012)
15. J.X. Mo, W.B. Liu, Phys. Lett. B 727, 336 (2013)
16. S.W. Wei, P. Cheng, Y.X. Liu, Phys. Rev. D 93, 084015 (2016)
17. S. Gunasekaran, R.B. Mann, D. Kubiznak, J. High Energy Phys. 11, 110 (2012)
18. E. Spallucci, A. Smailagic, Phys. Lett. B 723, 436 (2013)
19. A. Belhaj, M. Chabab, H.E. Moumni, L. Medari, M.B. Sedra, Chin. Phys. Lett. 30, 090402 (2013)
20. R.G. Cai, L.M. Cao, L. Li, R.Q. Yang, J. High Energy Phys. 9, 005 (2013)
21. R. Zhao, H.H. Zhao, M.S. Ma, L.C. Zhang, Eur. Phys. J. C 73, 2645 (2013)
22. M.S. Ma, F. Liu, R. Zhao, Class. Quantum Gravity 73, 095001 (2014)
23. A. Belhaj, M. Chabab, H.E. Moumni, K. Masmar, M.B. Sedra, Int. J. Geom. Methods Mod. Phys. 12, 1550017 (2015)
24. S. Dutta, A. Jain, R. Soni, J. High Energy Phys. 12, 60 (2013)
25. G.Q. Li, Phys. Lett. B 735, 256 (2014)
26. J. Liang, C.B. Sun, H.T. Feng, Europhys. Lett. 113, 30008 (2016)
27. S.H. Hendi, M.H. Vahidinia, Phys. Rev. D 88, 084045 (2013)
28. S.H. Hendi, S. Panahiyan, B.E. Panah, J. High Energy Phys. 01, 129 (2016)
29. J. Sadeghi, H. Farahani, Int. J. Theor. Phys. 53, 3683 (2014)
30. D. Momeni, M. Faizal, K. Myrzakulov, R. Myrzakulov, Phys. Lett. B 765, 154 (2017)
31. N. Altamirano, D. Kubiznak, R.B. Mann, Z. Sherkatghanad, Class. Quantum Gravity 31, 042001 (2014)
32. A.M. Frassino, D. Kubiznak, R.B. Mann, F. Simovic, J. High Energy Phys. 09, 80 (2014)
33. R.A. Hennigar, R.B. Mann, Entropy 17, 8056 (2015)
34. S.W. Wei, Y.X. Liu, Phys. Rev. D 90, 044057 (2014)
35. B.P. Dolan, Phys. Rev. D 84, 127503 (2011)
36. B.P. Dolan, Class. Quantum Gravity 31, 035022 (2014)
37. B.R. Majhi, S. Samanta, Phys. Lett. B 773, 203 (2017)
38. C.V. Johnson, Class. Quantum Gravity 31, 205002 (2014)
39. C.V. Johnson, Class. Quantum Gravity 33, 135001 (2016)
40. C.V. Johnson, Class. Quantum Gravity 33, 215009 (2016)
41. A. Belhaj, M. Chabab, H.E. Moumni, K. Masmar, M.B. Sedra, A. Segui, J. High Energy Phys. 05, 149 (2015)
42. E. Caceres, P.H. Nguyen, J.F. Pedraza, J. High Energy Phys. 1509, 184 (2015)
43.
44. M.R. Setare, H. Adami, Gen. Relativ. Gravity 47, 132 (2015)
45. C.V. Johnson, Entropy 18, 120 (2016)
46. C. Bhamidipati, P.K. Yerra, Eur. Phys. J. C 77, 534 (2017)
47. H. Liu, X.H. Meng, Eur. Phys. J. C 77, 556 (2017)
48. J.X. Mo, F. Liang, G.Q. Li, J. High Energy Phys. 03, 10 (2017)
49. M. Zhang, W.B. Liu, Int. J. Theor. Phys. 55, 5136 (2016)
50. B.P. Dolan, Mod. Phys. Lett. A 30, 1540002 (2015)
51. N. Altamirano, D. Kubiznak, R.B. Mann, Z. Sherkatghanad, Galaxies 2, 89 (2014)
52. D. Kubiznak, R.B. Mann, M. Teo, Class. Quantum Gravity 34, 063001 (2017)
53. S. Lan, W. Liu, arXiv:1701.04662 (2017)
54. S.W. Wei, Y.X. Liu, arXiv:1708.08176 (2017)
55. Ö. Ökcü, E. Aydıner, Eur. Phys. J. C 77, 24 (2017)
56. M.M. Caldarelli, G. Cognola, D. Klemm, Class. Quantum Gravity 17, 399 (2000)
57. B.Z. Maytal, A. Shavit, Cryogenics 37, 33 (1997)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.948244035243988, "perplexity": 3280.6640355674826}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125936981.24/warc/CC-MAIN-20180419150012-20180419170012-00126.warc.gz"}
http://potto.org/fluidMech/static2.php
# Chapter Static (continue)

## 4.6 Buoyancy and Stability

Fig. 4.34 Schematic of Immersed Cylinder.

One of the oldest known scientific investigations in fluid mechanics, carried out by Archimedes, relates to buoyancy and arose from a question of money. Archimedes' principle is related to questions of density and volume. While Archimedes did not know much about integrals, he was able to capture the essence. Here, because this material is presented in a different era, more advanced mathematics will be used. While the question of stability was not scientifically examined in the past, the structure of floating vessels (more than 150 years ago) shows some understanding of it.

The total force the liquid exerts on a body is considered as a buoyancy issue. To understand this issue, consider a cubical and a cylindrical body immersed in liquid and centered at a depth $h_0$, as shown in Figure 4.34. The force required to hold the body in place is obtained by integrating the pressure over the surface of the square and cylindrical bodies. The forces on the square-geometry body are made only of vertical forces, because the forces on the two sides cancel each other. However, in the vertical direction, the pressures on the two surfaces are different. On the upper surface the pressure is $\rho \, g \, (h_0-a/2)$. On the lower surface the pressure is $\rho \, g \, (h_0+a/2)$. The force due to the liquid pressure (with $b$ the width and $ll$ the depth into the page) is
\begin{align} F = \rho \, g \, \left( (h_0-a/2) - (h_0+a/2) \right)\,ll\, b = - \rho\, g\, a\, b\, ll = -\rho \, g \, V \label{static:eq:qPforce} \end{align}
In this case the $ll$ represents the depth (into the page). Rearranging equation \eqref{static:eq:qPforce} gives
\begin{align} \dfrac{F} {V} = \rho \, g \label{static:eq:qPforcePerell} \end{align}
The force on the immersed body is equal to the weight of the displaced liquid. This analysis can be generalized by noticing two things. First, all the horizontal forces cancel: for any body with a projected area that has two sides, the forces on those sides cancel each other. Another way to look at this point is by approximation: for any two rectangular bodies, the horizontal forces cancel each other, and even if these bodies are in contact with each other, the imaginary pressures make it so that they cancel. Second, any shape is made of many small rectangles, and the force on every rectangle is the weight of its displaced volume. Thus, the total force is the sum over all the small rectangles, which is the weight of the total displaced volume.

Fig. 4.35 The floating forces on Immersed Cylinder.

As an illustration of this concept, consider the cylindrical shape in Figure 4.34.
The force per unit area (see Figure 4.35) is
\begin{align} dF = \overbrace{\rho\,g\, \left(h_0 - r\, \sin \theta\right)}^{P} \overbrace{\sin \theta\, r\, d\theta }^{dA_{vertical}} \label{static:eq:cylinderElement} \end{align}
The total force is the integral of equation \eqref{static:eq:cylinderElement}:
\begin{align} F = \int_{0}^{2\pi} {\rho\,g\,\left(h_0 - r\, \sin \theta\right)} {\,r\, d\theta\, \sin \theta} \label{static:eq:cylinderElementIa} \end{align}
Rearranging equation \eqref{static:eq:cylinderElement} transforms it to
\begin{align} F = r\,g\,\rho\int_{0}^{2\pi} { \left(h_0 - r\, \sin \theta\right)} {\,\sin \theta\, d\theta } \label{static:eq:cylinderElementI} \end{align}
The solution of equation \eqref{static:eq:cylinderElementI} is
\begin{align} F = -\pi \,{r}^{2}\,\rho\,g \label{static:eq:cylinderElementISolution} \end{align}
The negative sign indicates that the force acts upwards. The horizontal force, meanwhile, vanishes:
\begin{align} F_h = r\,g\,\rho\int_{0}^{2\,\pi} { \left(h_0 - r\, \sin \theta\right)} \cos\theta\,d\theta = 0 \label{static:eq:cylinderhorizontalForce} \end{align}

# Example 4.19

To what depth will a long log with radius $r$, length $ll$ and density $\rho_w$ sink in a liquid with density $\rho_l$? Assume that $\rho_l>\rho_w$. You can give the answer as either the angle or the depth.

Fig. 4.36 Schematic of a thin wall floating body.

Typical examples used to explain buoyancy are a thin-walled vessel put upside down into liquid and the speed of floating bodies. Since there are no better examples, these examples are a must.

# Example 4.20

A cylindrical body, shown in Figure 4.36, is floating in liquid with density $\rho_{l}$. The body was inserted into the liquid in such a way that the air remained in it. Express the maximum wall thickness, $t$, as a function of the density of the wall, $\rho_s$, the liquid density, $\rho_{l}$, and the surrounding air temperature, $T_1$, for the body to float. In the case where the thickness is half the maximum, calculate the pressure inside the container. The container diameter is $w$. Assume that the wall thickness is small compared with the other dimensions ($t \ll w$ and $t \ll h$).

# Solution

The air mass in the container is
$m_{air} = \overbrace{\pi\,w^2\,h}^{V} \overbrace{\dfrac{P_{atmos}}{R\,T}} ^{\rho_{air}}$
The mass of the container is
$m_{container} = \left(\overbrace{\pi\,w^2 + 2\,\pi\,w\,h}^{A}\right) \,t \,\rho_{s}$
The amount of liquid that enters the cavity is such that the air pressure in the cavity equals the pressure at the interface (in the cavity). Note that for the maximum thickness, the height $h_1$ has to be zero. Thus, the pressure at the interface can be written as
$P_{in} = \rho_{l}\,g\,h_{in} + P_{atmos}$
On the other hand, the pressure at the interface from the air point of view (ideal gas model) should be
$P_{in} = \dfrac{m_{air}\,R\,T_1} {\underbrace{ h_{in}\,\pi\,w^2}_{V}}$
Since the air mass didn't change and it is known, it can be inserted into the above equation.
$\rho_{l}\,g\,h_{in}+ P_{atmos} = P_{in} = \dfrac{{\left(\pi\,w^2\,h\right) \overbrace{\dfrac{P_{atmos}}{R\,T_1}}^{\rho} }\,R\,T_1} { h_{in}\,\pi\,w^2}$
The last equation can be simplified into
$\rho_{l}\,g\,h_{in} + P_{atmos} = \dfrac{h \, {P_{atmos}} } {h_{in}}$
And the solutions for $h_{in}$ are
$h_{in}= - \dfrac{P_{atmos} +\sqrt{4\,g\,h\,P_{atmos}\,\rho_l+{P_{atmos}}^{2}} }{2\,g\,\rho_l}$
and
$h_{in} = \dfrac {\sqrt{4\,g\,h\,P_{atmos}\,\rho_l+{P_{atmos}}^{2}}-P_{atmos}} {2\,g\,\rho_l}$
The solution must be positive, so the last solution is the only physical one.

Advance Material

The solution demonstrates that when $h \longrightarrow 0$ then $h_{in} \longrightarrow 0$. When the gravity approaches zero (micro gravity), the solution can be expanded into
\begin{align*} h_{in}= h -\dfrac{{h}^{2}\,\rho_l\,g}{P_{atmos}} +\dfrac{2\,{h}^{3}\,{\rho_l}^{2}\,{g}^{2}}{{P_{atmos}}^{2}} -\dfrac{5\,{h}^{4}\,{\rho_l}^{3}\,{g}^{3}}{{P_{atmos}}^{3}} +\cdots \end{align*}
This "strange" result shows that bodies don't float in the normal sense: as gravity vanishes, the gas column approaches the full height $h$ and almost no liquid enters. When the floating takes place under near-vacuum conditions, the height can instead be expanded into
\begin{align*} h_{in}=\sqrt{\dfrac{h\,P_{atmos}} {{g\,\rho_l}}} +\dfrac{P_{atmos}}{2\,g\,\rho_l} + \cdots \end{align*}
which shows that a large quantity of liquid enters the container, as is expected.

Archimedes' theorem states that the force balance occurs when the weight of the displaced liquid (of the same volume) is the same as the weight of the container plus the air. Thus,
\begin{align*} \overbrace{\pi\, w^2\, (h-h_{in}) \,\rho_l\,g}^{\text{net displaced water}}= \overbrace{\left(\pi\,w^2 + 2\,\pi\,w\,h\right)\,t\,\rho_{s}\,g} ^{\text{container}} + \overbrace{{\pi\,w^2\,h}\, \left(\dfrac{P_{atmos}}{R\,T_1} \right) \,g} ^{\text{air}} \end{align*}
If the air mass is neglected, the maximum thickness is
\begin{align*} t_{max} = \dfrac{ 2\,g\,h\,w\,\rho_l+P_{atmos}\,w -w\,\sqrt{4\,gh\,P_{atmos}\,\rho_l+{P_{atmos}}^{2}} } {\left( 2\,g\,w+4\,g\,h\right) \,\rho_s} \end{align*}
The condition for the maximum thickness to have a physical value is
\begin{align*} 2\,g\,h\,\rho_l+P_{atmos} \ge \sqrt{4\,gh\,P_{atmos}\,\rho_l+{P_{atmos}}^{2}} \end{align*}
which is always satisfied, since squaring both sides reduces it to $4\,g^2\,h^2\,{\rho_l}^2 \ge 0$. The full solution, keeping the air mass, is
\begin{align*} t_{max} = \dfrac{\left( 2\,g\,h\,w\,\rho_l+P_{atmos}\,w -w\,\sqrt{4\,gh\,P_{atmos}\,\rho_l+{P_{atmos}}^{2}}\right)R\,T_1 - 2\,g\,h\,P_{atmos}\,w} {\left( 2\,g\,w+4\,g\,h\right) \,R\,\rho_s\,T_1} \end{align*}
In this analysis, the air temperature in the container immediately after insertion into the liquid has a different value from the final temperature. It is reasonable, as a first approximation, to assume that the process is adiabatic and isentropic. Thus, the temperature in the cavity immediately after the insertion is
\begin{align*} \dfrac{T_i}{T_f} = \left( \dfrac{P_i}{P_f} \right)^{\frac{k-1}{k}} \end{align*}
where $k$ is the specific heat ratio of the air. The final temperature and pressure were calculated previously. The equation of state is
\begin{align*} P_i = \dfrac{m_{air}\,R\,T_i}{V_i} \end{align*}
The new unknown must provide an additional equation, which is
\begin{align*} V_i = \pi\,w^2\,h_{i} \end{align*}

#### Thickness Below The Maximum

For the half thickness $t= \dfrac{t_{max}}{2}$ the general solution for any given thickness below the maximum is presented.
The pressure at the interface (after a long time) is
\begin{align*} \rho_l \,g \, h_{in} +P_{atmos} = \dfrac{\pi\,w^2\,h \dfrac{P_{atmos}}{R\,T_1} R\, T_1} {\left(h_{in}+h_1\right)\,\pi\,w^2} \end{align*}
which can be simplified to
\begin{align*} \rho_l \, g\,h_{in} + P_{atmos} = \dfrac{h\,P_{atmos}}{h_{in}+h_1} \end{align*}
The second equation is Archimedes' equation, which is
\begin{align*} \pi\,w^2\left(h-h_{in} -h_1\right)\rho_l\,g = \left( \pi\,w^2 +2\,\pi\,w\,h\right)\,t\,\rho_s\,g +\pi\,w^2\,h\,\left( \dfrac{P_{atmos}}{R\,T_1}\right)\,g \end{align*}

End Advance Material

# Example 4.21

Calculate the minimum density an infinitely long equilateral triangle (three equal sides) has to have so that its sharp end is in the water.

# Example 4.22

A body is pushed into the liquid to a distance $h_0$ and left at rest. Calculate the acceleration and the time for the body to reach the surface. The body's density is $\alpha\, \rho_{l}$, where $\alpha$ is the ratio of the body density to the liquid density and $0 < \alpha < 1$. Is the body volume important?

# Solution

The net force is
\begin{align*} F = \overbrace{V\,g\,\rho_l}^{\text{liquid weight}} - \overbrace{V\,g\,\alpha\,\rho_l}^{\text{body weight}} = V\,g\, \rho_l \,( 1 -\alpha) \end{align*}
But on the other side the internal force is
\begin{align*} F = m\,a = \overbrace{V\,\alpha \rho_l}^{m}\, a \end{align*}
Thus, the acceleration is
\begin{align*} a = g \left( \dfrac{1-\alpha}{\alpha}\right) \end{align*}
If the object starts at rest (no movement), the time to travel the distance $h$ follows from $h=\frac{1}{2}\,a\,t^2$:
\begin{align*} t = \sqrt{\dfrac{2\,h \alpha}{g(1-\alpha)}} \end{align*}
If the object is very light ($\alpha \longrightarrow 0$) then
\begin{align*} t_{min} = \sqrt{\dfrac{2\,h\,\alpha}{g}} +\dfrac{\sqrt{2\,g\,h}\;{\alpha}^{\frac{3}{2}}} {2\,g} +\dfrac{3\,\sqrt{2\,g\,h}\,{\alpha}^{\frac{5}{2}}}{8\,g} +\dfrac{5\,\sqrt{2\,g\,h}\,{\alpha}^{\frac{7}{2}}}{16\,g} +\cdots \end{align*}
From the above equation, it can be observed that only the density ratio is important. This idea can lead to an experiment in "large gravity", because the acceleration can be magnified well beyond that of free fall.

# Example 4.23

In some situations, it is desired to replace the force on a body of a certain shape by an equivalent force on a "standard" shape. Consider the force that acts on a half sphere. Find the equivalent cylinder of the same diameter that experiences the same force.

# Solution

The force acting on the half sphere can be found by integrating the forces around the sphere. The element force is
$dF = (\rho_L - \rho_S) \, g\, \overbrace{r\, \cos\phi\, \cos\theta}^{h} \overbrace{\cos\theta\,\cos\phi\, \overbrace{r^2\,d\theta\,d\phi}^{dA} }^{dA_x}$
The total force is then
$F_x = \int_0^{\pi} \int_0^{\pi} (\rho_L - \rho_S) \, g\, {\cos^2\phi \cos^2\theta} \, {r^3\,d\theta\,d\phi}$
The result of the integration is the force on the sphere:
$F_s = \dfrac{{\pi}^{2}\, (\rho_L - \rho_S)\,g\, r^3 }{4}$
The force on the equivalent cylinder is
$F_c = \pi\,r^2 \, (\rho_L - \rho_S)\,g\,h$
These forces have to be equivalent, and thus
$\dfrac{{\pi}^{\cancel{2}}\, \cancel{(\rho_L-\rho_S)}\,\cancel{g}\,r^{\cancelto{1}{3}}}{4} = \cancel{\pi}\,\cancel{r^2} \, \cancel{(\rho_L - \rho_S)}\,\cancel{g}\,h$
Thus, the height is
$\dfrac{h}{r} = \dfrac{\pi}{4}$
A quick numerical check of this result is given below.
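The following is a hypothetical numerical check (not part of the book) of the half-sphere integration and the resulting $h/r = \pi/4$:

```python
# Hypothetical check (not from the book): integrate cos^2(phi)*cos^2(theta)
# over [0, pi] x [0, pi] and recover h/r = pi/4 for the equivalent cylinder.
import numpy as np
from scipy.integrate import dblquad

r = 2.0                       # sphere radius; (rho_L - rho_S)*g cancels out

integral, _ = dblquad(lambda theta, phi: np.cos(phi)**2 * np.cos(theta)**2,
                      0.0, np.pi, 0.0, np.pi)
F_sphere = r**3 * integral            # force / ((rho_L - rho_S) * g)
h = F_sphere / (np.pi * r**2)         # equivalent cylinder height

print(F_sphere, np.pi**2 * r**3 / 4)  # numerically equal
print(h / r, np.pi / 4)               # both ~0.78540
```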
# Example 4.24

In the introduction to this section, it was assumed that the fluid above the liquid is a gas of inconsequential density. Suppose that the layer above is another liquid with a somewhat lighter density. A body with density between those of the two liquids, $\rho_l < \rho_s < \rho_h$, is floating between the two liquids. Develop the relationship between the densities of the liquids and the solid and the location of the solid cube. There are also situations where the density is a function of the depth. What will be the location of the solid body if the liquid density varies with depth?

# Solution

In the discussion of this section, it was shown that the net force is the body volume times the density of the liquid. In the same vein, the body can be separated into two parts: one in the first liquid and one in the second liquid. In this case there are two different liquid densities. The net force down is the weight of the body, $\rho_c\, h\, A$, where $h$ is the height of the body and $A$ is its cross section. This force is balanced, according to the above explanation, by the two liquids as
\begin{align*} \rho_c\, \cancel{h\, A} = \cancel{A \,h}\,\left( \alpha\,\rho_l + (1-\alpha) \rho_h \right) \end{align*}
where $\alpha$ is the fraction of the body that is in the lighter liquid. After rearrangement this becomes
\begin{align*} \alpha = \dfrac{ \rho_c - \rho_h}{\rho_l - \rho_h} \end{align*}
The second part deals with the case where the density varies parabolically. The density as a function of the coordinate $x$ along $h$, starting at the point where the density is $\rho_h$, is
\begin{align*} \rho (x) = \rho_h - \left( \dfrac{x}{h} \right)^2 \left( \rho_h - \rho_l \right) \end{align*}
Thus equilibrium will be achieved ($A$ is canceled on both sides) when
\begin{align*} \rho_c\, h = \int_{x_1}^{x_1+h} \left[ \rho_h - \left( \dfrac{x}{h} \right)^2 \left(\rho_h-\rho_l\right) \right]dx \end{align*}
After the integration, the equation transforms into
\begin{align*} \rho_c\, h = \dfrac{\left( 3\,\rho_l-3\,\rho_h\right) \,{x_1}^{2}+ \left( 3\,h\,\rho_l-3\,h\,\rho_h\right) \,x_1+{h}^{2}\,\rho_l+2\,{h}^{2}\,\rho_h} {3\,h} \end{align*}
This is a quadratic equation for $x_1$, and the location of the lower (physical) point of the body is its relevant root,
\begin{align*} x_1 = \dfrac{3\,h\,\left(\rho_h-\rho_l\right)-\sqrt{3}\,h\,\sqrt{\left(\rho_l-\rho_h\right)\left(12\,\rho_c-\rho_l-11\,\rho_h\right)}}{6\,\left(\rho_l-\rho_h\right)} \end{align*}
For a linear density profile, the analogous balance yields
\begin{align*} x_1=\dfrac{h\,\left(\rho_l+\rho_h-2\,\rho_c\right)}{2\,\left(\rho_h-\rho_l\right)} \end{align*}
In many cases in reality, the variation occurs in a zone small compared to the size of the body. Then the calculations can be carried out under the assumption of a sharp change. However, if the body is small compared to the zone of variation, the variation has to be accounted for.

# Example 4.25

A hollow sphere is made of steel ($\rho_s/\rho_w \cong 7.8$) with wall thickness $t$. What is the thickness if the sphere is neutrally buoyant? Assume that the radius of the sphere is $R$. For a thickness below this critical value, develop an equation for the depth of the sphere.
# Solution

For neutral buoyancy, the weight of the displaced water (the full sphere volume) has to be equal to the weight of the steel shell:
\begin{align} \label{sphere:gov} \rho_w\,\cancel{g} \, \dfrac{4\,\pi\, R^3}{3} = \rho_s \,\cancel{g} \, \left( \dfrac{4\,\pi\, R^3}{3} - \dfrac{4\,\pi\, \left(R-t\right)^3}{3} \right) \end{align}
After simplification, equation \eqref{sphere:gov} becomes
\begin{align} \label{sphere:govR} \dfrac{\rho_w\,R^3 }{\rho_s} = 3\,t\,{R}^{2}-3\,{t}^{2}\,R+{t}^{3} \end{align}
Equation \eqref{sphere:govR} is a third order polynomial equation whose solutions (see the appendix) are
\begin{align*} t_1&=&\left( -\dfrac{\sqrt{3}\,i}{2}-\dfrac{1}{2}\right) \,{\left( {\dfrac{\rho_w}{\rho_s}R}^{3}- {R}^{3}\right) }^{\dfrac{1}{3}}+R \\ t_2&=&\left( \dfrac{\sqrt{3}\,i}{2}-\dfrac{1}{2}\right) \,{\left( {\dfrac{\rho_w}{\rho_s} R}^{3}- {R}^{3}\right) }^{\dfrac{1}{3}}+R\\ t_3&=& R\,\left( \sqrt[3]{ \dfrac{\rho_w}{\rho_s} - 1 } + 1 \right) \end{align*}
The first two solutions are imaginary and thus not valid for the physical world. The last solution is the one that was needed; with $\rho_s/\rho_w \cong 7.8$ it gives $t_3 \approx 0.045\,R$, a thin wall, as expected.

The depth at which the sphere floats depends on the ratio $t/R$, by an analysis similar to the above. For a given ratio $t/R$, the weight of water displaced by the sphere has to be the same as the sphere's weight. The volume of a sphere cap (segment) is given by
\begin{align} \label{sphere:capV} V_{cap} = \dfrac{\pi\,h^2\,(3R-h)}{3} \end{align}
where $h$ is the sphere height above the water. The volume in the water is
\begin{align} \label{sphere:waterV} V_{water} = \dfrac{4\,\pi\, R^3}{3} - \dfrac{\pi\,h^2\,(3R-h)}{3} = \dfrac{\pi\,\left( 4\,R^3 -3\,R\,h^2 + h^3 \right) }{3} \end{align}
where $V_{water}$ denotes the volume of the sphere in the water. Thus Archimedes' law reads
\begin{align} \label{sphere:archimedes1} \dfrac{\rho_w\,\pi\,\left( 4\,R^3 -3\,R\,h^2 + h^3 \right) }{3} = \dfrac{\rho_s\,4\,\pi\,\left( 3\,t\,{R}^{2}-3\,{t}^{2}\,R+{t}^{3} \right)}{3} \end{align}
or
\begin{align} \label{sphere:archimedes} 4\,R^3 -3\,R\,h^2 + h^3 = 4\,\dfrac{\rho_s}{\rho_w} \left( 3\,t\,{R}^{2}-3\,{t}^{2}\,R+{t}^{3} \right) \end{align}
The solution of \eqref{sphere:archimedes} is
\begin{multline} \label{sphere:solArc} h = \left( \dfrac{\sqrt{-fR\,\left( 4\,{R}^{3}-fR\right) }}{2}-\dfrac{fR-2\,{R}^{3}}{2}\right)^{\dfrac{1}{3}} \\ + \dfrac{{R}^{2}} {{\left( \dfrac{\sqrt{-fR\,\left( 4\,{R}^{3}-fR\right) }}{2}-\dfrac{fR-2\,{R}^{3}}{2}\right)}^{\dfrac{1}{3}}} \end{multline}
where $-fR = 4\,R^3- 4\,\dfrac{\rho_s}{\rho_w}\,(3\,t\,R^2-3\,t^2\,R+t^3)$. There are two more solutions which contain imaginary components; these solutions are rejected.
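The neutral-buoyancy thickness can be checked with a short, hypothetical numerical sketch (not part of the book) for $\rho_s/\rho_w = 7.8$:

```python
# Hypothetical check (not from the book): wall thickness for neutral buoyancy
# of a hollow steel sphere, from (rho_w/rho_s) R^3 = 3tR^2 - 3t^2 R + t^3.
import numpy as np

R = 1.0
ratio = 1.0 / 7.8                  # rho_w / rho_s

# Closed form, a rearrangement of t_3 above: t = R*(1 - (1 - ratio)**(1/3))
t_closed = R * (1.0 - (1.0 - ratio) ** (1.0 / 3.0))

# Same answer from the cubic t^3 - 3R t^2 + 3R^2 t - ratio*R^3 = 0
roots = np.roots([1.0, -3.0 * R, 3.0 * R**2, -ratio * R**3])
t_real = roots[np.isreal(roots)].real
t = t_real[(t_real > 0) & (t_real < R)][0]

print(t_closed, t)                 # both ~0.0447 R
shell = R**3 - (R - t)**3          # shell volume / (4 pi / 3)
print(7.8 * shell, R**3)           # shell weight equals displaced water weight
```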
# Example 4.26
One of the common questions in buoyancy concerns a body with a variable cross section and a fixed load. For example, a wooden wedge with a fixed weight/load. The general question is at what depth the object (i.e., the wedge) will float. For simplicity, assume that the body is made of a solid material.

# Solution
It is assumed that the submerged volume can be written as a function of the depth, as was shown in the previous example for the sphere. Here it is assumed that this relationship can be written as
\begin{align}
\label{FixVariableW:d-V}
V_w = f(d,\mbox{other geometrical parameters})
\end{align}
The Archimedes balance on the body is
\begin{align}
\label{FixVariableW:archimedes1}
\rho_{ll}\, V_{a}= \rho_{w}\, V_{w}
\end{align}
where $V_a$ is the body volume and $\rho_{ll}$ its (average, load-included) density. Thus the depth is
\begin{align}
\label{FixVariableW:archimedes}
d = f^{-1} \left( \dfrac{\rho_{ll}\, V_{a}}{ \rho_{w}} \right)
\end{align}

# Example 4.27
In Example 4.26 a general solution was provided. Find the inverse function, $f^{-1}$, for a cone with a $30^{\circ}$ half angle floating tip down.

# Solution
First, the function has to be built for $d$ (the depth):
\begin{align}
\label{woodenCone:gov}
V_{w} = \dfrac{\pi\,d\,\left(\dfrac{d}{\sqrt{3}} \right)^2}{3} = \dfrac{\pi\,d^3}{9}
\end{align}
Equating the displaced weight $\rho_w V_w$ with the body weight $\rho_{ll} V_a$ and inverting gives the depth:
\begin{align}
\label{woodenCone:d}
d = \sqrt[3]{\dfrac{9\, \rho_{ll}\,V_a}{\pi\,\rho_w} }
\end{align}
A one-line numerical check of this result is sketched below.
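The check below picks an illustrative total mass (an assumption, not from the text), computes the depth, and verifies that the displaced water mass matches.

```python
import numpy as np

rho_w = 1000.0            # water density [kg/m^3]
m_body = 2.5              # assumed total mass of cone plus load [kg]

# rho_ll * V_a in the text is simply the total floating mass:
d = (9.0 * m_body / (np.pi * rho_w)) ** (1.0 / 3.0)

# Verify: displaced water mass at this depth equals the body mass
V_w = np.pi * d**3 / 9.0
print(d, rho_w * V_w)     # second value reproduces m_body
```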
Fig. 4.37 Schematic of floating bodies.

### 4.6.1 Stability
Figure 4.37 shows a body made of a hollow balloon and a heavy sphere connected by a thin, light rod. This arrangement has its mass centroid close to the middle of the sphere, while the buoyant center is below the middle of the balloon. If this arrangement is inserted into liquid and floats, the balloon will be on the top and the sphere on the bottom. Tilting the body by a small angle from its resting position creates a shift in the force directions (examine Figure 4.37b). These forces create a moment which returns the body to the resting (original) position. When the body is at the position shown in Figure 4.37c, the body is unstable: any tilt from the original position creates a moment that continues to move the body away from its original position. This analysis doesn't violate the second law of thermodynamics; moving a body away from an unstable position is, in essence, a release of potential energy.

Fig. 4.38 Schematic of a floating cube.

A wooden cube (made of pine, for example) is inserted into water, and part of the block floats above the water line. The cube's mass (gravity) centroid is in the middle of the cube; however, the buoyant center is the middle of the volume under the water (see Figure 4.38). This situation is similar to Figure 4.37c. However, any experiment with this wooden cube shows that it is locally stable: a small tilt of the cube results in a return to the original position, while tilting by more than $\pi/4$ results in a flip into the next stable position. The cube is stable in six positions (every cube has six faces), and in any of these six positions the body is in a situation like Figure 4.37c. The reason for this local stability of the cube is that the other positions are less stable. Drawing the stability (more on this criterion later) as a function of the rotation angle shows a sinusoidal function with four peaks per full rotation.

Fig. 4.39 Stability analysis of floating body.

So, body stability must be based on the difference between the body's local positions rather than on "absolute" stability. That is, the body is "stable" at some points more than at others in their vicinity. These points arise from the buoyant force analysis. When the body is tilted by a small angle, $\beta$, the center of the immersed part of the body moves to a new location, B', as shown in Figure 4.39. The center of mass (gravity) stays at the old location since the body itself did not change. The stability of the body is divided into three categories. If the new immersed volume creates a new center in such a way that the couple of forces (gravity and buoyancy) tries to return the body, the original state is referred to as stable, and vice versa. The third state is when the couple of forces has zero moment; this is referred to as neutrally stable.

The body shown in Figure 4.39, when given a tilted position, moves to a new buoyant center, B'. This deviation of the buoyant center from the old location, B, has to be calculated. The analysis is based on the difference of the displaced liquid: the right green area (volume) in Figure 4.39 is displaced by the same area (really the volume) on the left, since the weight of the body didn't change, so the total immersed section is constant. For a small angle, $\beta$, the moment is calculated as the integration of the small forces shown in Figure 4.39 as $\Delta F$. The displacement of the buoyant center can be calculated by examining the moment these forces create. The body weight creates an opposite moment to balance the moment of the displaced liquid volume:
\begin{align}
\overline{BB'}\, W = \mathbf{M}
\label{static:eq:momentBouyant}
\end{align}
where $\mathbf{M}$ is the moment created by the displaced areas (volumes), $\overline{BB'}$ is the distance between point B and point B', and $W$ refers to the weight of the body. It can be noticed that the distance $\overline{BB'}$ is an approximation for small angles (neglecting the vertical component). So the perpendicular distance, $\overline{BB'}$, should be
\begin{align}
\overline{BB'} = \dfrac{ \mathbf{M}}{W}
\label{static:eq:momentBouyantD}
\end{align}
The moment $\mathbf{M}$ can be calculated as
\begin{align}
\mathbf{M} = \int_{A} \overbrace{g\,\rho_l\,\underbrace{x\,\beta\,dA}_{dV}}^{\delta F}\,x = g \,\rho_l\, \beta \int_{A} x^2 dA
\label{static:eq:staticMoment}
\end{align}
The integral on the right side of equation \eqref{static:eq:staticMoment} is referred to as the area moment of inertia and was discussed in Chapter 3. The distance, $\overline{BB'}$, can be written from equation \eqref{static:eq:staticMoment} as
\begin{align}
\overline{BB'} = \dfrac{\cancel{g}\,\rho_l\,\beta\, I_{xx} } {\cancel{g}\,\rho_{s}\, V_{body} }
\label{static:eq:tiltdeX}
\end{align}
The point where the buoyancy force direction intersects the center line of the cross section is referred to as the metacentric point, M. The location of the metacentric point can be obtained from the geometry as
\begin{align}
\overline{BM} = \dfrac{\overline{BB'}} {\sin \beta}
\label{static:eq:metacentricP}
\end{align}
Combining equation \eqref{static:eq:tiltdeX} with \eqref{static:eq:metacentricP} yields
\begin{align}
\overline{BM} = \dfrac{\cancel{g}\, \rho_l\, \beta\, I_{xx}} {\cancel{g}\,\rho_{s}\,\sin\beta\,V_{body}} = \dfrac{ \rho_l\,I_{xx}}{\rho_{s}\,V_{body}}
\label{static:eq:GMIntermidiateA}
\end{align}
since for a small angle ($\beta \sim 0$)
\begin{align}
\lim_{\beta \rightarrow 0} \dfrac{\sin \beta}{ \beta} \sim 1
\label{static:eq:lhopitalRule}
\end{align}
It is remarkable that the result is independent of the angle. Looking at Figure 4.39, the geometrical quantities can be related as
\begin{align}
\overline{GM} = \overbrace{\dfrac{\rho_l\,I_{xx}}{\rho_{s} V_{body}}} ^{\overline{BM}} - \overline{BG}
\label{static:eq:GMIntermidiate}
\end{align}
A small computational helper based on this relation is sketched below.
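The sketch below implements equation \eqref{static:eq:GMIntermidiate} as a reusable function and checks it on a rectangular barge. The barge dimensions and density ratio are assumptions for illustration; the $\overline{BG}$ expression is the one derived in Example 4.29 below.

```python
def metacentric_height(rho_l, rho_s, I_xx, V_body, BG):
    """GM = BM - BG, with BM = rho_l * I_xx / (rho_s * V_body).

    Positive GM -> stable, negative -> unstable, zero -> neutral.
    """
    BM = rho_l * I_xx / (rho_s * V_body)
    return BM - BG

# Illustrative rectangular block: width a, length L, height h,
# density ratio alpha = rho_s / rho_l (only the ratio matters).
a, L, h, alpha = 4.0, 12.0, 2.0, 0.5
I_xx = L * a**3 / 12.0            # waterline area moment of inertia
V = a * h * L                     # body volume
BG = h / 2.0 * (1.0 - alpha)      # see Example 4.29 below
print(metacentric_height(1.0, alpha, I_xx, V, BG))   # > 0: stable
```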
# Example 4.28
A solid cone floats in a heavier liquid (that is, $\rho_l/\rho_c > 1$). The ratio of the cone density to the liquid density is $\alpha$. For a very light cone, $\rho_{c}/\rho_{l} \sim 0$, the cone floats with almost zero depth; at this condition the cone is unstable. For the middle range, $1 > \rho_{c}/\rho_{l} > 0$, there could be a range where the cone is stable. The half angle of the cone is $\theta$. Analyze this situation.

# Solution
For a cone floating tip down, the submerged volume is $\dfrac{\pi\,d\,r^2}{3}$, where $d$ is the submerged depth and
\begin{align}
\label{coneStability:d-r}
r = d\,\tan\theta
\end{align}
is the radius at the liquid line. Measured from the tip, the center of gravity lies at $3D/4$ and the buoyant center at $3d/4$, where $D$ is the total height, so the distance $\overline{BG}$ depends on $d$ as
\begin{align}
\label{coneStability:BG}
\overline{BG} = \dfrac{3\,D}{4} - \dfrac{3\,d}{4}
\end{align}
The moment of inertia of the liquid-line cross section is that of a circle (see the tables in Chapter 3), $I_{xx} = \pi r^4/4$. Since the displaced weight equals the body weight, $\rho_l I_{xx}/(\rho_c V_{body}) = I_{xx}/V_{displaced}$, and
\begin{align}
\label{coneStability:GMini}
\overline{GM} = \dfrac{\overbrace{\dfrac{\pi\,\left( d\,\tan\theta\right)^4}{4}} ^{I_{xx}}}{\underbrace{\dfrac{\pi\,d\,\left( d\,\tan\theta\right)^2}{3} }_{V_{displaced}} } - \overbrace{\dfrac{3}{4}\left(D - d \right)}^{\overline{BG}} = \dfrac{3\,d\,\tan^2\theta}{4} - \dfrac{3\left(D-d\right)}{4}
\end{align}
The relationship between $D$ and $d$ is determined by the density ratio (equal weights of body and displaced volume):
\begin{align}
\label{coneStability:d-D}
\rho_l\,d^3 = \rho_c\, D^3 \Longrightarrow D = d \sqrt[3]{\dfrac{\rho_l}{\rho_c}}
\end{align}
Substituting equation \eqref{coneStability:d-D} into \eqref{coneStability:GMini} yields
\begin{align}
\label{coneStability:sol}
\overline{GM} = \dfrac{3\,d}{4}\left[ \tan^2\theta - \left( \sqrt[3]{\dfrac{\rho_l}{\rho_c}} - 1 \right)\right]
\end{align}
so the cone is stable when $\tan^2\theta \ge \sqrt[3]{\rho_l/\rho_c} - 1$. For a very light cone ($\rho_c/\rho_l \rightarrow 0$) the right side grows without bound and the cone is unstable at any angle, as anticipated; as $\rho_c \rightarrow \rho_l$ the condition is satisfied by any cone.

Fig. 4.40 Cubic body dimensions for stability analysis.

To understand these principles, consider the following examples.

# Example 4.29
A solid block of wood of uniform density, $\rho_s = \alpha\,\rho_{l}$ where ($0\le\alpha\le1$), is floating in a liquid. Construct a graph that shows the relationship of $\overline{GM}$ as a function of the ratio of height to width. Show that the block's length, $L$, is insignificant for this analysis.

# Solution
Equation \eqref{static:eq:GMIntermidiate} requires that several quantities be expressed. The moment of inertia for a block is given in the tables of Chapter 3 and is $I_{xx}= \dfrac{La^3}{12}$, where $L$ is the length into the page. The distance $\overline{BG}$ is obtained from Archimedes' theorem and can be expressed as
\begin{align*}
W = \rho_s \,\overbrace{a\,h\,L}^{V} = \rho_l \,\overbrace{a\,h_1\,L}^{\text{immersed volume} } \Longrightarrow h_1 = \dfrac{\rho_s}{\rho_l} h
\end{align*}
Fig. 4.41 Stability of a cubic body of infinite length.
Thus, the distance $\overline{BG}$ is (see Figure 4.38)
\begin{align*}
\overline{BG} = \dfrac{h}{2} - \overbrace{\dfrac{\rho_s}{\rho_l}\, h}^{h_1}\,\dfrac{1}{2} = \dfrac{h}{2} \left(1 - \dfrac{\rho_s}{\rho_l} \right)
\label{static:eq:BGbar}
\end{align*}
\begin{align*}
GM = \dfrac{\cancel{g}\,\rho_l\, \overbrace{\dfrac{\cancel{L}\,a^3}{12}}^{I_{xx}} } {\cancel{g}\,\rho_s\,\underbrace{a\,h\,\cancel{L}}_V} - \dfrac{h}{2} \left(1 - \dfrac{\rho_s}{\rho_l} \right)
\end{align*}
Simplifying the above equation provides
\begin{align*}
\dfrac{\overline{GM}}{h} = \dfrac{1}{12\,\alpha} \left(\dfrac{a}{h}\right)^2 - \dfrac{1}{2} \left( 1 - \alpha \right)
\end{align*}
Notice that $\overline{GM}/{h}$ isn't a function of the length, $L$. A short numerical evaluation of this expression is sketched below.
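The dimensionless expression just derived can be evaluated directly; the density ratio below is an assumption for illustration. The last line anticipates the neutral-stability condition derived next.

```python
import numpy as np

def gm_over_h(a_over_h, alpha):
    """Dimensionless metacentric height of the floating block (Ex. 4.29)."""
    return a_over_h**2 / (12.0 * alpha) - 0.5 * (1.0 - alpha)

alpha = 0.6                              # assumed rho_s / rho_l
for r in np.linspace(0.5, 2.0, 4):
    print(f"a/h = {r:.2f}  GM/h = {gm_over_h(r, alpha):+.4f}")

# Neutral stability where GM = 0  ->  a/h = sqrt(6 alpha (1 - alpha))
print("neutral a/h:", np.sqrt(6.0 * alpha * (1.0 - alpha)))   # 1.2 here
```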
This equation leads to the condition for the maximum height above which the body is no longer stable:
\begin{align*}
\dfrac{a}{h} \ge \sqrt {{6\,(1-\alpha)\alpha}}
\label{static:eq:stabilityCritieriaCubic}
\end{align*}
Fig. 4.42 The maximum height ratio as a function of the density ratio.

One of the interesting points of the above analysis is that there is a ratio of height to body width above which the body is no longer stable. The equivalent of equation \eqref{static:eq:stabilityCritieriaCubic} can be expressed for a cylindrical shape. For a cylinder (circle) the moment of inertia is $I_{xx} = \pi\,b^4/64$, with $b$ the diameter. The distance $\overline{BG}$ is the same as for the square shape (cubic) (see \eqref{static:eq:BGbar} above). Thus, the equation is
\begin{align*}
\dfrac{\overline{GM}}{h} = \dfrac{1}{16\,\alpha} \left(\dfrac{b}{h}\right)^2 - \dfrac{1}{2} \left( 1 - \alpha \right)
\end{align*}
and the condition for the maximum height for stability is
\begin{align*}
\dfrac{b}{h} \ge \sqrt{{8\,(1-\alpha)\,\alpha}}
\end{align*}
This kind of analysis can be carried out for different shapes, and the results for these two shapes are shown in Figure 4.42. It can be noticed that the square body is more stable than the circular body shape.

#### Principal Main Axes
Any body has an infinite number of axes around which the moment of inertia can be calculated, and for each of these axes there is a different moment of inertia. With the exception of the circular shape, every geometrical shape has an axis for which the product of inertia vanishes. This axis is where the main rotation of the body will occur. Some analyses of floating bodies are done by decomposing the rotation about an arbitrary axis into rotations about the two principal axes. For stability analysis, it is enough to check whether the body is stable about the axis with the smallest moment of inertia. For example, a square-shaped body has a larger moment of inertia around the diagonal. The difference between the previous calculation and the moment of inertia around the diagonal is
$\Delta I_{xx} = \overbrace{\dfrac{\sqrt{2}\,a\left( \dfrac{\sqrt{3}\,a}{2}\right)^3 }{6}}^{I\;\text{diagonal axis}} \;- \overbrace{\dfrac{a^4}{12}}^{\text{"normal" axis}} \sim 0.07\,{a}^{4}$
which shows that if the body is stable about the main axes, it must be stable about the "diagonal" axis. Thus, this problem is reduced to finding the stability about the principal axes.

#### Unstable Bodies
What happens when one increases the height ratio above the maximum height ratio? The body will flip onto its side and turn to the next stable point (angle). This is not a hypothetical question, but a practical one. It happens when a ship is overloaded with containers above the maximum height. In commercial ships, the fuel is stored at the bottom of the ship and thus the mass center (point $G$) changes during the voyage. So a ship that was stable (positive $\overline{GM}$) when leaving the initial port might become unstable (negative $\overline{GM}$) before reaching the destination port.

Fig. 4.43 Stability of two triangles put together.
# Example 4.30
One way to make a ship hydrodynamic is to make the body as narrow as possible. Suppose that two opposite triangles (a prism) are attached to each other to create a long "ship"; see Figure 4.43. If $a/h \longrightarrow 0$ the body will be unstable, while if $a/h \longrightarrow \infty$ the body is very stable. What is the minimum ratio of $a/h$ that keeps the body stable at half of the volume in liquid (water)? Assume that the density ratio is $\rho_l / \rho_s = \bar{\rho}$.

# Solution
The limiting case is where $\overline{GM} = 0$. To find this ratio, the terms in equation \eqref{static:eq:GMIntermidiate} have to be found. The volume of the body is
\begin{align*}
V = 2\;\left( \dfrac {a^2 \, h} {2} \right) = a^2 \, h
\end{align*}
The moment of inertia is that of a triangle (see the explanation in example \eqref{mech:ex:triangleIxx}):
\begin{align*}
I_{xx} = \dfrac{a\,h^3}{2}
\end{align*}
Accounting for the actual prism geometry, the volume is
\begin{align*}
V_{body} = a^2 \; \sqrt{h^2 - \dfrac{a^2}{4} } = a^2\,h \; \sqrt{1 - \dfrac{1}{4}\, \dfrac{a^2}{h^2} }
\end{align*}
The point $\mathbf{B}$ is a function of the density ratio of the solid and liquid. Denote the liquid density as $\rho_l$ and the solid density as $\rho_s$. The point $\mathbf{B}$ can be expressed as
\begin{align*}
B = \dfrac {a\, \rho_s} {2\, \rho_l}
\end{align*}
and thus the distance $\overline{BG}$ is
\begin{align*}
\overline{BG} = \dfrac{a}{2} \left( 1 - \dfrac{\rho_s}{\rho_l} \right)
\end{align*}
The limiting condition requires that $\overline{GM} = 0$, so that
\begin{align*}
\dfrac{\rho_l\,I_{xx}}{\rho_{s} V_{body}} = \overline{BG}
\end{align*}
Or explicitly,
\begin{align*}
\dfrac{\rho_l \,\dfrac{a\,h^3}{2}} { \rho_s \,a^2\,h \; \sqrt{1 - \dfrac{1}{4}\, \dfrac{a^2}{h^2} } } = \dfrac{a}{2} \left( 1 - \dfrac{\rho_s}{\rho_l} \right)
\end{align*}
After rearrangement, and using the definitions $\xi = h/a$ and $\bar{\rho} = \rho_l/\rho_s$, this results in
\begin{align*}
\dfrac{\bar{\rho} \,\xi^2 }{ \sqrt{1 - \dfrac{\xi^2}{4} } } = 1 - \dfrac{1}{\bar{\rho}}
\end{align*}
The solution is obtained by squaring both sides and defining a new variable $x=\xi^2$. After this manipulation, selecting the positive root, stability requires
\begin{align*}
x < \dfrac{\sqrt{\dfrac{ \sqrt{64\,{\bar{\rho}}^{4}-64\,{\bar{\rho}}^{3}+{\bar{\rho}}^{2}-2\,\bar{\rho}+1} }{\bar{\rho}}+\dfrac{1}{\bar{\rho}}-1}}{2\,\sqrt{2}\,\bar{\rho}}
\end{align*}
The limiting ratio can also be obtained numerically, as sketched below.
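The transcendental condition just above can be root-found directly, which sidesteps the algebra. The density ratio is an assumption for illustration, and the equation is used exactly as written in the text (note that the square root requires $\xi < 2$).

```python
import numpy as np
from scipy.optimize import brentq

rho_bar = 2.0   # assumed density ratio rho_l / rho_s

def f(xi):
    """Limiting condition of Example 4.30 with xi = h/a."""
    return rho_bar * xi**2 / np.sqrt(1.0 - xi**2 / 4.0) - (1.0 - 1.0/rho_bar)

xi_limit = brentq(f, 1e-6, 1.999)
print("limiting h/a =", xi_limit, " -> minimum a/h =", 1.0/xi_limit)
```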
### 4.6.1.1 Stability of Body with Shifting Mass Centroid
Fig. 4.44 The effects of liquid movement on the $\overline{GM}$.

Ships and other floating bodies carry liquid, or carry a load which changes the mass location during tilting of the floating body. For example, consider a ship that carries wheat grains where the cargo is not properly secured to the ship. The movement of the load (grains, furniture, and/or liquid) does not occur at the same speed as the body itself or the displaced outside liquid. Sometimes the reaction of the load is slow enough that, for stability analysis, it can be ignored. Exact analysis requires taking these shifting mass speeds into account. However, here the extreme case, where the load reacts at the same speed as the tilting of the ship/floating body, is examined. For practical purposes, it is used as a limit for the stability analysis, and there are situations where the real case approaches this extreme. These situations involve liquids with low viscosity (like water or alcohol) and ships with low natural frequency (more on the frequency of ships later). Moreover, in this analysis the dynamics are ignored and only the statics is examined (see Figure 4.44).

A body is loaded with liquid "B" and is floating in a liquid "A", as shown in Figure 4.44. When the body is given a tilt, the body displaces the liquid on the outside; at the same time, the liquid inside changes its mass centroid. The moment created by the inside displaced liquid is
\begin{align}
M_{in} = g\, {\rho_l}_B\, \beta\, {I_{xx}}_B
\label{static:eq:momentL}
\end{align}
Note that ${I_{xx}}_B$ isn't the same as the moment of inertia of the outside liquid interface. The change in the mass centroid of the inside liquid "B" then is
\begin{align}
\overline{G_{1}G_{1}'} = \dfrac{\cancel{g}\, \cancel{{\rho_l}_B}\, \beta\, {I_{xx}}_B} {\underbrace{\cancel{g}\,V_B\,\cancel{{\rho_l}_B}} _{\text{inside liquid weight }}} = \dfrac{\beta\,{I_{xx}}_B}{V_B}
\label{static:eq:GG'}
\end{align}
Equation \eqref{static:eq:GG'} shows that, for a given tilt angle, $\overline{G_1G_1'}$ is only a function of the geometry. This quantity is similar for all liquid tanks on the floating body. The total shift of the vessel's mass centroid is then calculated similarly to center-of-area calculations:
\begin{align}
\cancel{g}\,m_{total}\, \overline{GG'} = \cancelto{0}{{g}\,m_{body}} + \cancel{g}\, m_f \overline{G_1G_1'}
\label{static:eq:GGtotal}
\end{align}
For more than one tank, this can be written as
\begin{align}
\overline{GG'} = \dfrac{1}{m_{total}} \sum_{i=1}^n {\rho_l}_i\, V_i\, \overline{G_iG_i'} = \dfrac{\beta}{m_{total}} \sum_{i=1}^n {\rho_l}_i\,{{I_{xx}}_b}_i
\label{static:eq:totalGG}
\end{align}
A new point can be defined as $G_c$: the intersection of the center line with the vertical line from $G'$,
\begin{align}
\overline{G\,G_c} = \dfrac{\overline{GG'}} {\sin\beta}
\label{static:eq:GGc}
\end{align}
The distance $\overline{GM}$ that was used before is replaced as the criterion for stability by $\overline{G_c\,M}$, which (with $\beta/\sin\beta \rightarrow 1$ for small angles) is expressed as
\begin{align}
\overline{G_c\,M} = {\dfrac{\rho_A\,{I_{xx}}_A}{\rho_{s} V_{body}}} -\overline{BG} - \dfrac{{\rho_l}_B\,{I_{xx}}_B}{m_{total}}
\label{static:eq:GcM}
\end{align}
If there is more than one tank partially filled with liquid, the general formula is
\begin{align}
\overline{G_c\,M} = {\dfrac{\rho_A\,{I_{xx}}_A}{\rho_{s} V_{body}}} -\overline{BG} - \dfrac{1}{m_{total}} \sum_{i=1}^{n} {\rho_l}_i\,{{I_{xx}}_b}_i
\label{static:eq:GcMg}
\end{align}
One way to reduce the effect of the moving mass center due to liquid is to substitute a single tank with several tanks: the moment of inertia of two combined tanks is smaller than the moment of inertia of a single tank, and increasing the number of tanks reduces the moment of inertia further. The engineer can also design the tanks in such a way that the moment of inertia is operationally changed. This control of the stability, $\overline{GM}$, can be achieved by having some tanks spanning the entire body combined with tanks spanning only parts of the body. Movement of the liquid (mostly the fuel and water) thus provides a way to control the stability, $GM$, of the ship. A small free-surface-correction calculator is sketched below.
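The sketch below implements the corrected form of the $\overline{G_c M}$ expression above. Since the text's own equation is garbled, this is an interpretation, and all numbers are illustrative assumptions.

```python
def corrected_gm(rho_A, I_xx_A, rho_s, V_body, BG, m_total, tanks):
    """Effective metacentric height with a free-surface correction.

    tanks: list of (rho_liquid, I_xx) pairs for partially filled tanks.
    """
    BM = rho_A * I_xx_A / (rho_s * V_body)
    free_surface = sum(rho_i * I_i for rho_i, I_i in tanks) / m_total
    return BM - BG - free_surface

# Illustrative barge with two partially filled tanks (assumed numbers):
print(corrected_gm(rho_A=1025.0, I_xx_A=8.0e3, rho_s=600.0, V_body=900.0,
                   BG=1.2, m_total=600.0 * 900.0,
                   tanks=[(850.0, 40.0), (1000.0, 25.0)]))
```

Note how each partially filled tank subtracts a term proportional to its own waterline moment of inertia, which is why splitting one wide tank into several narrow ones improves stability.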
### Metacentric Height, $\overline{GM}$, Measurement
Fig. 4.45 Measurement of GM of floating body.

The metacentric height can be measured by finding the change in the angle when a known weight is moved on the floating body. Moving the weight $T$ a distance $d$ creates the moment
\begin{align}
M_{weight} = T\,d
\label{static:eq:Td}
\end{align}
This moment is balanced by
\begin{align}
M_{righting} = W_{total} \overline{GM}_{new} \,\theta
\label{static:eq:TdR}
\end{align}
where $W_{total}$ is the total weight of the floating body, including the measuring weight. The angle $\theta$ is measured as the difference in the orientation of the floating body. The metacentric height is
\begin{align}
\overline{GM}_{new} = \dfrac{T\,d}{W_{total} \,\theta}
\label{static:eq:GMmessured}
\end{align}
If the change in $\overline{GM}$ can be neglected, equation \eqref{static:eq:GMmessured} provides the solution. The calculation of $\overline{GM}$ can be improved by taking into account the effect of the measuring weight. The change in the height of $G$ is (working with masses, since the $g$'s cancel)
\begin{align}
\cancel{g}\, m_{total}\, G_{new} = \cancel{g}\, m_{ship}\, G_{actual} + \cancel{g}\,T\,h
\label{static:eq:deltaGMR}
\end{align}
Combining equation \eqref{static:eq:deltaGMR} with equation \eqref{static:eq:GMmessured} results in
\begin{align}
\overline{GM}_{actual} = \overline{GM}_{new}\, \dfrac{m_{total}}{m_{ship}} - h \, \dfrac{T}{m_{ship}}
\label{static:eq:GMactual}
\end{align}
The weight of the ship is obtained from its draft (the depth to which it sits in the water). A small calculator for this inclining experiment is sketched below.
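This check evaluates equation \eqref{static:eq:GMmessured} for an assumed inclining experiment; the weight, shift distance, displacement, and heel angle are all illustrative numbers, not values from the text.

```python
import numpy as np

def gm_from_inclining(T, d, W_total, theta):
    """GM_new = T * d / (W_total * theta).

    T and W_total as weights [N], d in meters, theta in radians (small).
    """
    return T * d / (W_total * theta)

# Assumed experiment: a 10 kN weight moved 5 m heels a 20 MN ship by 0.4 deg
theta = np.radians(0.4)
print(gm_from_inclining(T=10e3, d=5.0, W_total=20e6, theta=theta))  # ~0.36 m
```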
### 4.6.1.3 Stability of Submerged Bodies
The analysis of submerged bodies differs from the stability analysis of a body lying between two fluid layers with different densities. When the body is submerged in a single fluid layer, the buoyant centroid does not move as the body tilts. Thus, the mass centroid must be below the buoyant centroid in order to have a stable condition. However, all fluids have density variation to some degree. In cases where the density changes significantly, it must be taken into account. An example of such a case is an object floating in a solar pond, where the upper layer is made of water with lower salinity than the bottom layer (a change of up to 20% of the density). When the floating object is immersed into two layers, the stability analysis must take into account the changes of the displaced liquids of the two layers. The calculations for such cases are a bit more complicated but are based on similar principles. Generally, this density change helps to increase the stability of floating bodies. This analysis is out of the scope of this book (for now).

### 4.6.1.4 Stability of Non-Symmetrical or "Strange" Bodies
Fig. 4.46 Calculations of $\overline{GM}$ for abrupt shape body.

While most floating bodies are symmetrical or semi-symmetrical, there are situations where the body has a "strange" and/or unsymmetrical shape. Consider first a strange body that has an abrupt step change, as shown in Figure 4.46. The body weight doesn't change during the rotation, so the green area on the left and the green area on the right are the same (see Figure 4.46). There are two situations that can occur: after the tilting, either the upper part of the body is entirely above the liquid, or part of it is submerged under the water. The mathematical condition for the border is $b=3\,a$. For the case of $b< 3\,a$, the calculation of the moment of inertia is similar to the previous case. The moment created by the change in the displaced liquid (area) acts in the same fashion as before; the center of the moment needs to be found. This point is the intersection of the liquid line with the brown middle line, and the moment of inertia should be calculated around this axis. For the case where $b > 3\,a$, some part of the upper body is under the liquid, and the amount of area under the liquid depends on the tilting angle. The intersection point of the liquid line with the lower body has to be calculated, and the moment of inertia is calculated around this point (note that the body "ends" at the end of the upper body). However, the moment returning the body is then larger than what was calculated, and such bodies tend to be more stable (also for other reasons).

### 4.6.1.5 Neutral frequency of Floating Bodies
This case is similar to a pendulum (or a mass attached to a spring). The governing equation for the pendulum is
\begin{align}
\ell \,\ddot{\beta} + g\,\beta = 0
\label{static:eq:govPendulum}
\end{align}
where $\ell$ is the length of the rod (or the line/wire) connecting the mass with the rotation point. Thus, the frequency of the pendulum is $\dfrac{1}{2\,\pi}\sqrt{\dfrac{g}{\ell}}$, measured in $Hz$, and the period of the cycle is $2\,\pi\,\sqrt{\ell/g}$. A similar situation exists in the case of floating bodies. The basic balance differential equation is
\begin{align}
\overbrace{I\ddot{\beta}}^{rotation} + \overbrace{g\,V\,\rho_s\,\overline{GM}\,\beta}^{restoring\;moment}=0
\label{static:eq:govFloat}
\end{align}
In the same fashion, the frequency of the floating body is
\begin{align}
\dfrac{1}{2\,\pi} \sqrt{\dfrac {g\,V\,\rho_s\,\overline{GM}}{I_{body}}}
\label{static:eq:floatFreq}
\end{align}
and the period time is
\begin{align}
2\,\pi \sqrt{\dfrac{I_{body}} {g\,V\,\rho_s\,\overline{GM}}}
\label{static:eq:periodFreq}
\end{align}
In general, the larger $\overline{GM}$ is, the more stable the floating body is, and an increase in $\overline{GM}$ increases the frequency of the floating body. If the floating body is used to transport humans and/or other creatures or sensitive cargo, the $\overline{GM}$ should be reduced so that the traveling will be smoother. A short numerical sketch of the period formula follows.
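The period formula can be evaluated directly. Note that $g$ is included here, which is the dimensionally consistent form as corrected above; the barge properties are assumed numbers for illustration.

```python
import numpy as np

def roll_period(I_body, V, rho_s, GM, g=9.81):
    """Small-angle rolling period: T = 2*pi*sqrt(I / (g*V*rho_s*GM))."""
    return 2.0 * np.pi * np.sqrt(I_body / (g * V * rho_s * GM))

# Illustrative small barge (assumptions):
V, rho_s, GM = 900.0, 600.0, 0.8            # m^3, kg/m^3, m
I_body = 2.0e6                               # kg m^2 about the roll axis
print(roll_period(I_body, V, rho_s, GM))     # ~4 s
```

A larger GM shortens the period (a "stiff" ship with a snappy, uncomfortable roll), which is why passenger vessels deliberately keep GM moderate.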
### 4.6.2 Surface Tension
Surface tension is one of the more mathematically complex topics and is related to many phenomena like boiling, coating, etc. In this section, only simplified topics like the constant-value case will be discussed. One of the early studies of surface tension/pressure was done by Torricelli. In this study he suggested the construction of the early barometer: a tube sealed on one side is filled with a liquid and turned upside down into the liquid container. The main effect is the pressure difference between the two surfaces (in the tube and outside the tube). However, the surface tension affects the height, and this effect is large for very small diameters.

# Example 4.31
Using the interaction of the molecules shown in Figure ?, describe the existence of surface tension. Explain why this description is erroneous.

# Solution
The upper layer of molecules has an unbalanced force towards the liquid phase. Newton's law states that when there is an unbalanced force, the body should accelerate. However, in this case, the liquid is not in motion. Thus, the common explanation is wrong.

Fig. 4.47 A heavy needle is floating on a liquid.

# Example 4.32
A needle is made of steel and is heavier than water and many other liquids. However, the surface tension between the needle and the liquid holds the needle above the liquid. Above a certain diameter, the needle can no longer be held by the liquid. Calculate the maximum diameter of a needle that can be inserted into a liquid without drowning.

# Solution
Under Construction

## 4.7 Rayleigh–Taylor Instability
Rayleigh–Taylor instability (or RT instability) is named after Lord Rayleigh and G. I. Taylor. It arises in situations where a heavy liquid layer is placed over a lighter fluid layer. This situation has engineering implications in several industries. For example, in die casting, liquid metal is injected into a cavity filled with air. In poor designs or other situations, some air is not evacuated and stays in small cavities on the edges of the shape to be cast. Thus, a situation can arise where the liquid metal is above the air but cannot penetrate into the cavity because of instability.

This instability deals with a dense, heavy fluid that is placed above a lighter fluid in a gravity field perpendicular to the interface. Examples of such systems are dense water over oil (liquid–liquid), or water over air (gas–liquid). The original Rayleigh paper deals with the dynamics and density variations. For example, density variations according to the bulk modulus (see Section 4.3.3.2) are always stable, but they are unstable if the density is in the reversed order. Suppose that the liquid density is an arbitrary function of the height; this distortion can be a result of heavy fluid above lighter liquid. The analysis asks what happens when a small amount of liquid from the upper layer enters the lower layer: will this intrusion grow, or will it return to its original condition? The surface tension is the opposing mechanism that returns the liquid to its original place. This analysis refers to the case of an infinite or very large surface. The simplified case is that of two different uniform densities, for example a heavy fluid with density $\rho_L$ above a lower fluid with lower density $\rho_G$. For a perfectly straight interface, the heavy fluid will stay above the lighter fluid. If the surface is disturbed, some of the heavy liquid moves down. This disturbance can grow or return to its original situation. This condition is determined by competing forces: the surface tension and the buoyancy forces. The fluid above the depression is in equilibrium with the surrounding pressure since the material extends to infinity. Thus, the force acting to pull the upper fluid down is the buoyancy force of the fluid in the depression.

Fig. 4.48 Description of depression to explain the Rayleigh–Taylor instability.

The depression returns to its original position if the surface forces are large enough; in that case, the situation is considered stable. On the other hand, if the surface forces (surface tension) are not sufficient, the situation is unstable and the heavy liquid enters the light fluid zone and vice versa. As usual, the case is neutrally stable when the forces are equal. Any continuous function can be expanded in a series of cosines; thus, an example of a cosine function will be examined, and the conditions required of this function will be required of all the others. The disturbance is of the form
\begin{align}
h = -h_{max} \cos \dfrac{2\,\pi\,x} {L}
\label{static:eq:disturbance}
\end{align}
where $h_{max}$ is the maximum depression and $L$ is the characteristic length of the depression. The depression has a different radius of curvature as a function of the distance from the center of the depression, $x$. The weakest point is at $x=0$ because, for symmetry reasons, the surface tension there does not act against gravity, as shown in Figure 4.48. Thus, if the center point of the depression can "hold" the intrusive fluid, then the whole system is stable. The radius of curvature is expressed by equation \eqref{intro:eq:radius}. The first derivative of $h$ vanishes at $x=0$ (the derivative of a cosine is a sine, which is zero there), so the surface is locally flat.
Thus, equation \eqref{intro:eq:radius} can be approximated as
\begin{align}
\dfrac{1}{R} = \dfrac{d^2h}{dx^2}
\label{static:eq:radiusAppx}
\end{align}
For the disturbance of equation \eqref{static:eq:disturbance}, the curvature at the center of the depression is, in magnitude,
\begin{align}
\dfrac{1}{R} =\dfrac{4\,\pi^2\,h_{max}}{L^2}
\label{static:eq:sinR}
\end{align}
According to equation \eqref{intro:eq:STbcylinder}, the pressure difference (the pressure jump) due to the surface tension at this point must be
\begin{align}
P_H - P_L = \dfrac{4\,h_{max}\,\sigma\,\pi^2}{L^2}
\label{static:eq:deltaP}
\end{align}
The pressure difference due to gravity at the edge of the disturbance is
\begin{align}
P_H - P_L = g\,\left( \rho_H-\rho_L \right) h_{max}
\label{static:eq:detlaPe}
\end{align}
where $\rho_H$ and $\rho_L$ denote the heavy and light fluid densities. Comparing equations \eqref{static:eq:deltaP} and \eqref{static:eq:detlaPe} shows that the situation is stable if
\begin{align}
\dfrac{4\,\sigma\,\pi^2}{L^2} > g \,\left( \rho_H-\rho_L \right)
\label{static:eq:noEq}
\end{align}
It should be noted that $h_{max}$ is irrelevant for this analysis, as it cancels. The neutral stability point is at
\begin{align}
L_c = \sqrt {\dfrac{4\,\pi^2 \sigma}{g\left( \rho_H-\rho_L \right)} }
\label{phase:eq:notEq}
\end{align}
A quick numerical evaluation of this critical length is sketched below.
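Evaluating $L_c$ for water over air (approximate, commonly tabulated property values) gives a length on the order of centimeters, which is consistent with everyday observation of dripping films.

```python
import numpy as np

def critical_length(sigma, rho_heavy, rho_light, g=9.81):
    """Neutral-stability wavelength: L_c = sqrt(4*pi^2*sigma / (g*drho))."""
    return np.sqrt(4.0 * np.pi**2 * sigma / (g * (rho_heavy - rho_light)))

# Water over air at room temperature (approximate properties):
print(critical_length(sigma=0.0728, rho_heavy=1000.0, rho_light=1.2))
# ~0.017 m: disturbances shorter than this are suppressed by surface tension
```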
An alternative approach to analyzing this instability is suggested here. Consider the situation described in Figure 4.49. If all the heavy liquid "attempts" to move straight down, the lighter liquid will "prevent" it. The lighter liquid needs to move up at the same time, but in a different place: the heavier liquid moves down on one side and the lighter liquid moves up at another location. In this process the heavier liquid "enters" the lighter liquid at one point and creates a depression, as shown in Figure 4.49.

Fig. 4.49 Description of depression to explain the instability.

To analyze this, consider the two control volumes bounded by the blue lines in Figure 4.49. The first control volume is a cylinder with radius $r$, and the second is the depression below it. The "extra" lines of the depression should be ignored; they are not part of the control volume. The horizontal forces around the control volume cancel each other. At the top, the force is the atmospheric pressure times the area. At the cylinder bottom, the force is $\rho\,g\,h\times A$. This acts against the gravity force, which makes the cylinder be in equilibrium with its surroundings if the pressure at the bottom is indeed $\rho\,g\,h$. For the depression, the force at the top is the same as the force at the bottom of the cylinder. At the bottom, the force is the integral around the depression. It can be approximated as a flat cylinder that has a depth of $r\,\pi/4$ (see the explanation in Example 4.23). This value is exact if the shape is a perfect half sphere; in reality, the error is not significant. Additionally, when the depression occurs, the liquid level drops a bit and the lighter liquid fills the missing portion. Thus, the force at the bottom is
\begin{align}
F_{bottom} \sim \pi\,r^2 \left[ \left( \dfrac{\pi\,r}{4} + h \right) \,\left( \rho_L - \rho_G\right) \,g + P_{atmos} \right]
\label{static:eq:bottomDepression1}
\end{align}
The net force is then
\begin{align}
F_{bottom} \sim \pi\,r^2 \left(\dfrac{\pi\,r}{4} \right) \,\left( \rho_L - \rho_G\right) \,g
\label{static:eq:bottomDepression}
\end{align}
The force that holds this column is the surface tension. As shown in Figure 4.49, the total force is
\begin{align}
F_{\sigma} = 2\,\pi \, r \,\sigma\, \cos\theta
\label{static:eq:sigmaF}
\end{align}
The force balance on the depression is then
\begin{align}
2\,\pi \, r \,\sigma \cos\theta \sim \pi\,r^2 \left(\dfrac{\pi\,r}{4} \right) \,\left( \rho_L - \rho_G\right) \,g
\label{static:eq:balance}
\end{align}
The radius is obtained from
\begin{align}
r \sim \sqrt{ \dfrac { 2\,\pi\,\sigma\cos\theta}{ \,\left( \rho_L - \rho_G\right) \,g }}
\label{static:eq:radiusMinTubeExI}
\end{align}
The surface tension term is largest when the angle $\theta=\pi/2$. At that case, the radius is
\begin{align}
r \sim \sqrt{ \dfrac { 2\,\pi\,\sigma}{ \,\left( \rho_L - \rho_G\right) \,g }}
\label{static:eq:radiusMinTubeEx}
\end{align}
Fig. 4.50 The cross section of the interface. The purple color represents the maximum heavy liquid raising area. The yellow color represents the maximum lighter liquid that "goes down."

The maximum possible radius of the depression depends on the geometry of the container. For the cylindrical geometry, the maximum depression radius is about half of the container radius (see Figure 4.50). This radius is limited because the lighter liquid has to enter the heavier liquid zone at the same time. Since the "exchange" volumes of these two processes are the same, the specific radius is limited. Thus, it can be written that the minimum radius is
\begin{align}
{r_{min}}_{tube} = 2\,\sqrt{\dfrac{2\,\pi\,\sigma}{ g\,{\left(\rho_L-\rho_G\right)} }}
\label{static:eq:solutionD2}
\end{align}
The actual radius will be much larger. The heavier liquid can stay on top of the lighter liquid without being turned upside down when the radius is smaller than that of equation \eqref{static:eq:solutionD2}. This analysis introduces a new dimensionless number that will be discussed at greater length in the Dimensionless chapter. In equation \eqref{static:eq:solutionD2} the angle was assumed to be 90 degrees; however, this angle can never actually be obtained. The actual value of this angle is about $\pi/4$ to $\pi/3$, and only in extreme cases does the angle exceed this value (considering dynamics). In Figure 4.50, it was shown that the depression and the raised area are the same. The actual area of the depression is only a fraction of the interfacial cross section and is a function of the container geometry; for example, the depression is larger for a square cross section. These two effects should be inserted into equation 4.168 by introducing an experimental coefficient.

# Example 4.33
Estimate the minimum radius of a tube for inserting liquid aluminum at a temperature of 600 $[K]$. Assume that the surface tension is $400[mN/m]$. The density of the aluminum is $2400\; kg/m^3$.

# Solution
The depression radius is assumed to be significantly smaller than the tube, and thus equation \eqref{static:eq:solutionD2} can be used. The density of the air is negligible compared to the aluminum density, as can be seen from the temperature.
$r \sim \sqrt{\dfrac{8\,\pi\,\overbrace{0.4}^{\sigma}}{ 2400\times 9.81}}$
The minimum radius is $r \sim 0.02 [m]$, which demonstrates that the assumption $h \gg r$ was appropriate. A one-line verification is sketched below.
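This check reproduces the number of Example 4.33 with the property values stated there, neglecting the air density as the text does.

```python
import numpy as np

def min_tube_radius(sigma, rho_heavy, rho_light=0.0, g=9.81):
    """Eq. (solutionD2): r_min = 2 * sqrt(2*pi*sigma / (g * drho))."""
    return 2.0 * np.sqrt(2.0 * np.pi * sigma / (g * (rho_heavy - rho_light)))

# Liquid aluminum: sigma = 0.4 N/m, rho = 2400 kg/m^3 (from the example)
print(min_tube_radius(sigma=0.4, rho_heavy=2400.0))   # ~0.021 m
```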
Fig. 4.51 Three liquids layers under rotation with various critical situations.

#### Open Question by April 15, 2010
The best solution of the following question will win 18 U.S. dollars, and your name will be associated with the solution in this book.

# Example 4.34
A canister shown in Figure 4.51 has three layers of different fluids with different densities. Assume that the fluids do not mix. The canister is rotated with angular velocity $\omega$. Describe the interface of the fluids, considering all the limiting cases. Is there any difference if the fluids are compressible? Where are the maximum pressure points? For the case where the fluids are compressible, the canister's top center is connected to another tank with pressure equal to that of the canister before the rotation (the connection point). What happens after the canister starts to rotate? Calculate the volume that will enter or leave, for known geometries of the fluids. Use the ideal gas model. You can assume that the process is isothermal. Is there any difference if the process is isentropic? If so, what is the difference?

## 4.8 Qualitative questions
These qualitative questions are for advanced students and for those who would like to prepare themselves for preliminary examinations (Ph.D. examinations).

# Additional Questions
1. The atmosphere has different thicknesses in different locations. Will the atmosphere be thicker at the equator or at the north pole? Explain your reasoning for the difference. How would you estimate the difference between the two locations?
2. The author's daughter (8 years old) stated that fluid mechanics makes no sense. For example, she points out that warm air rises and therefore the warm spot in a house is the top floor (which is correct in a 4-story home). So why is there snow on high mountains? It must be that the temperature is below the freezing point on the top of the mountain (see, for example, Mount Kilimanjaro, Tanzania). How would you explain this situation? Hint: you should explain this phenomenon using only concepts that were developed in this chapter.
3. The surface of the ocean has a spherical shape. The stability analysis discussed in this chapter was based on the assumption that the surface is straight. How, in your opinion, does the surface curvature affect the stability analysis?
4. If the gravity changes due to the surface curvature, what is the effect on the stability?
5. A car is accelerated (increasing velocity) up an inclined surface. Draw the constant pressure lines. What will the constant pressure lines be if the car is driven downwards?
6. A symmetrical cylinder filled with liquid is rotating around its center. What are the directions of the forces acting on the cylinder? What are the directions of the forces if the cylinder is not symmetrical?
7. A body with a constant cross-sectional area is floating in a liquid. The body is pushed down from its equilibrium state into the liquid by a distance $\ell$. Assume that the body is not totally immersed in the liquid. What is the simple harmonic frequency of the body? Assume the body mass is $m$ and its volume is $V$. Additionally, assume that the only body motion is purely vertical, and neglect the added mass and liquid resistance.