text
stringlengths
104
605k
442 views A relation $R(P,Q,R,S)$ has $\{PQ, QR, RS, PS\}$ as candidate keys. The total number of superkeys possible for relation $R$ is ______ reshown @bikram sir, shouldnt the answer be 8 ?? @Bikram sir, shouldnt the answer be 8 ?? yes, answer should be 8. with 4 elements i.e. {P,Q,R,S}, the number of combinations possible are 24 i.e. 16. Now from these 16 exclude o length and 1 length elements. thus 16-5 = 11. now with the length of 2 two combinations are not candidate keys i.e. PR and QS. So subtract these 2 also. Hence 11-2 i.e. 9 are the candidate keys Number of Superkeys which are superset of PQ = 2^2 = 4 Number of Superkeys which are superset of QR = 2^2 = 4 Number of Superkeys which are superset of PS = 2^2 = 4 Number of Superkeys which are superset of RS = 2^2 = 4 Number of Superkeys which are superset of PQR = 2^1 = 2 Number of Superkeys which are superset of PQS = 2^1 =2 Number of Superkeys which are superset of PRS = 2^1 =2 Number of Superkeys which are superset of QRS = 2^1 = 2 Number of Superkeys which are superset of PQRS = 2^0 = 1 So, applying the formula of Set theory, the total number of Superkeys possible on given relation are = 4+4+4+4-2-2-2-2+1 = 9 PQRS QRPS RSPQ PSQR PQR QRP RSP PSQ PQS QRS RSQ PSR PQ QR RS PS No need to take others as they are repeated. You can take any one of them, just ensure that you have not taken them two times. 1 1 vote 2 203 views 3 221 views
# [Tinyos-help] Changing Sampling Rate For MDA300 Jeremiah D. Jazdzewski jazdz007 at umn.edu Tue Sep 13 15:32:04 PDT 2005 Martin, Thanks for the help, but I'm still a little confused. When I search for that header, I get 2 mda300 instances of it on my computer. One is located in: C:\Program Files\Crossbow\Day 1's Firmware\XSensorMDA300. The other is located in: C:\tinyos\cygwin\opt\tinyos-1.x\contrib\xbow\beta\apps\XSensorMDA300. I gather that the first was from the workshop in June and the second was the original beta testing version from when I originally installed. Am I right in thinking that there is no way to change the header file value for the xsensor_sample_rate constant and still have it work in moteview's programming gui? Is this because moteview uses precompiled xsensor applications in the gui? So does this mean that I need to install my modified sample rate xmesh app for the mda300 using cygwin which apps should I make and flash to the motes to reflect this header change? Also, let me just run this by you. I have 3 motes, 1 mda300, and a stargate. I am setting up a system where 1 mote has the mda300 and will hop its packets to the second mote which has no sensor and just acts as a go between, which will then hop it to the base mote on the stargate to be logged in a database. Am I correct in thinking that I need to install XSensorMDA300 on all 3 motes, or does the lone mote #2 and/or the stargate mote need to run a different app? -Jeremiah Jazdzewski mturon at xbow.com wrote: > Hi Jeremiah, > > > > The XMesh apps all have a "appFeatures.h" file that has a constant > defined for the default update rate. > > > > #define XSENSOR_SAMPLE_RATE 20000 > > > > This gets passed into the call to start the application timer: > > call Timer.start(TIMER_REPEAT, XSENSOR_SAMPLE_RATE); > > > > You can change it directly - it is in milliseconds. The above example > is a 20 second interval. > > > > Note: MOTE-VIEW 1.2 (to be released soon) will allow you to change the > update rate dynamically from the GUI. > > > > Martin > > __________________________________________________ > Martin Turon | Crossbow Technology, Inc. > > ------------------------------------------------------------------------ > > From: tinyos-help-bounces at Millennium.Berkeley.EDU > [mailto:tinyos-help-bounces at Millennium.Berkeley.EDU] On Behalf Of > Jeremiah D. Jazdzewski > Sent: Monday, September 12, 2005 9:53 AM > To: tinyos-help at Millennium.Berkeley.EDU > Subject: [Tinyos-help] Changing Sampling Rate For MDA300 > > > > Hi all, > > I am using a MDA300 with a MICA2 and I am running the Xmesh program on it. > > I am not a programmer, but I need to know how to edit the xmesh > program to change the sampling rate at which the MICA2 takes data from > the MDA300. > > If a programmer could tell me what source codes need to be edited and > what numbers need to be changed in the code, I would very much > appreciate it. > > Thanks > > -Jeremiah Jazdzewski > -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.millennium.berkeley.edu/pipermail/tinyos-help/attachments/20050913/a4de08e4/attachment.htm
# matrix obtained by changing rows and columns is called #### 12/06/2020 by (This one has 2 Rows and 3 Columns) To multiply a matrix by a single number is easy: These are the calculations: 2×4=8: 2×0=0: 2×1=2: 2×-9=-18: We call the number ("2" in this case) a scalar, so this is called "scalar multiplication". If two rows (columns) in A are equal then det(A)=0. To see this, let's do the following: First, subtract the second row from the first, and re-write the matrix is transformed to: 1 4 5 2 -3 10. If is a matrix, the element at the intersection of row and column is usually denoted by (or ) and we say that it is the -th element of . The entry of a matrix A that lies in the row number i and column number j is called the i,j entry of A. My method is trying to copy the values into another array, but sorted properly. A matrix having m rows and n columns is called matrix of order m x n or simply m x n matrix . A matrix with m rows and n columns is called an m-by-n matrix (written m×n) and m and n are called its dimensions. That's what's called a row operation, an operation on a row of a matrix. A submatrix of a matrix A is a matrix obtained by deleting some rows and/or columns of A. The transpose of a matrix is a new matrix that is obtained by exchanging the rows and columns. Learn more about matrix manipulation, cell The dimensions of a matrix are always given with the number of rows first, then the number of columns. 3 $\begingroup$ Let us consider a $2 \times 2$ example. Next, we used for loop to iterate the org_arr Matrix items. The row space is defined similarly.. Consider the following example: matrix(1:9, byrow = TRUE, nrow = 3) In the matrix() function: The first argument is the collection of elements that R will arrange into the rows and columns of the matrix. Since you are only working with rows and columns, a matrix is called two-dimensional. A definition for matrices over a ring is also possible.. A matrix with m rows and n columns can be called as m × n matrix. The idea is to traverse the matrix once and use first row & first column (or last row & last column, ..) to mark if any cell in corresponding row or column has value 0 or not. The elementary matrices generate the general linear group GL n (R) when R is a field. Here, one thing to note that depending on the data type dtype of each column, the view is created instead of the copy, and changing the value of one of the original and transposed objects will change the other. Pandas DataFrame is a two-dimensional, size-mutable, potentially complex tabular data structure with labeled axes (rows and columns). Federal MCQs, 9th Class MCQs, Math MCQs, Matrices And Determinants MCQs, Symeetric , Identify matrix , transpose , None . Just think of numbers arranged nicely in a rectangular grid. Scalars in physics are usually real numbers, or any quantity that can be measured using a single real number, Scalar product. The places in the matrix where the numbers are is called entries. I have a 6 x 6 matrix, and I am storing its values in a one dimensional array of size 36. For example, $$\begin{bmatrix} 2 & 4 & 6\\ 1 & 3 & -5\\ -2 & 7 & 9 \end{bmatrix}$$ This is a square matrix, which has 3 rows and 3 columns. Apply to $~A~$ and $~A^*~$ the general recursive formula twice (two-stage recursion) along the two interchanged rows $~i~$ and $~j~$. As well, we can add or subtract one row from another without changing the matrix. Converting rows of a matrix into columns and columns of a matrix into row is called transpose of a matrix. > x[1,] [1] 1 4 7 > class(x[1,]) [1] "integer" This behavior can be avoided by using the argument drop = FALSE while indexing. You can construct a matrix in R with the matrix() function. how to convert column cell to row cell?. Before doing that, we initially mark if the chosen row/column have any 0’s present in them in two separate flags. Login Home General Knowledge General Science Current Affaris Pakistan Affairs Submit MCQs About Me . A Matrix Obtained From M By Deleting Some Rows And Columns Is Called A Submatrix Of M. Show That If M Has A K Times K Submatrix Which Is Invertible Then M Has Rank At Least K. This sum of products is computed for every combination of rows and columns. If there are m rows and n columns, the matrix is said to be an “m by n” matrix, written “m × n.” For example, is a 2 × 3 matrix. Toggle navigation. A matrix with n rows and n columns is called a square matrix of order n. An ordinary number can be regarded as a 1 × 1 matrix; thus, 3 can be thought of as the matrix [3]. According to some authors, a principal submatrix is a submatrix in which the set of row indices that remain is the same as the set of column indices that remain. Cofactor. I am trying a for loop: for (int i = 0; i < 6; ++i){ copyArray[i]= array[i*6]; } A matrix is a two-dimensional data structure where numbers are arranged into rows and columns. So, is A = B? So, all you need to do is create a new matrix of indices and then access A at those values: $\endgroup$ – Dan Ramras Nov 27 '10 at 7:00. add a comment | 3 Answers Active Oldest Votes. The number of columns in matrix B is greater than the number of rows. We obtain the transpose of given matrix by interchanging all the rows and columns of a matrix with the help of loops. A matrix obtained by interchanging rows and columns is called ____ matrix? Hint: Consider a matrix B formed by deleting rows of A not in C. Then Rank (B) ≤ Rank (A) and Rank (C) ≤ Rank (B). Python doesn't have a built-in type for matrices. Exercise 32.3 Find the inverse to the matrix B whose rows … 8. If the i-th row (column) in A is a sum of the i-th row (column) of a matrix B and the i-th row (column) of a matrix C and all other rows in B and C are equal to the corresponding rows in A (that is B and C differ from A by one row only), then det(A)=det(B)+det(C). The horizontal lines in a matrix are called rows and the vertical lines are called columns. ; Scaling a row of A by a scalar c multiplies the determinant by c.; Swapping two rows of a matrix multiplies the determinant by − 1.; The determinant of the identity matrix I n is equal to 1.; In other words, to every square matrix A we assign a number det (A) in a way that satisfies the above properties. Elements of a matrix. For example, if … matrix with m rows and n columns is called an m × n matrix or m-by-n matrix, while m and n are called its dimensions. Matrix multiplication involves the computation of the sum of the products of elements from a row of the first matrix (the premultiplier on the left) and a column of the second matrix (the postmultiplier on the right). I want to rearrange it so that the rows are the columns and the columns are the rows. In R, a matrix is a collection of elements of the same data type (numeric, character, or logical) arranged into a fixed number of rows and columns. A matrix with m rows and n columns is called an m ... A principal submatrix is a square submatrix obtained by removing certain rows and columns. 7. $\begingroup$ J.M., were you thinking of the effect of reordering the columns? I'm fullstack web application developer working as freelancer. Let be a field.The column space of an m × n matrix with components from is a linear subspace of the m-space.The dimension of the column space is called the rank of the matrix and is at most min(m, n). Scalar . In mathematics, an elementary matrix is a matrix which differs from the identity matrix by one single elementary row operation. If m=n, which means the number of rows and number of columns is equal, then the matrix is called a square matrix. For example, you could access A(4,2) simply using A(8). Add your answer and earn points. For example: This matrix is a 3x4 (pronounced "three by four") matrix because it has 3 rows and 4 columns. Multiplying a Matrix by Another Matrix. Python Matrix. There are a lot of concepts related to matrices. You can construct a matrix in R with the matrix() function. The numbers contained in a matrix are called elements of the matrix (or entries, or components). In general m x n has the following form : a 11: a 12..... a 1n: a 21: a 22..... a 2n: : : : a i1: a i2.... a in: : : : a m1: a m2..... a mn / The order of a matrix or the size of a matrix is known as the number of rows or the number of columns which are present in that matrix. A scalar is an element of a field which is used to define a vector space. row by another non-zero constant, without changing anything about the matrix. could I just edit the method type and delete any parts that involve the constructor you wrote? Transpose of Matrix Basic Accuracy: 68.72% Submissions: 6580 Points: 1 Write a program to find the transpose of a square matrix of size N*N. Transpose of a matrix is obtained by changing rows to columns and columns to rows. One thing to notice here is that, if the matrix returned after indexing is a row matrix or column matrix, the result is given as a vector. The cofactor matrix is the matrix of determinants of the minors A ij multiplied by -1 i+j. Show that for every submatrix C of A, we have Rank (C) ≤ Rank (A). The horizontal lines in a matrix are called rows and the vertical lines are called columns. One thing to notice here, if elements of A and B are listed, they are the same in number and each element which is there in A is there in B too. satisfying the following properties: Doing a row replacement on A does not change det (A). Before answering this, we should know how to decide the equality of the matrices. , Let ’ s assume that maxSum is the matrix of size 36 $...$ example new matrix that is obtained by deleting some rows and/or columns of a matrix is called.. N ( R ) when R is a two-dimensional data structure where numbers arranged... Help of loops constructor you wrote could access a ( 8 ) however we..., a matrix a is a two-dimensional, size-mutable, potentially complex tabular data structure with axes. Row or column becomes ‘ maxSum ’ using a single real number scalar. Iterate the org_arr matrix items non-zero constant, without changing the matrix well, we initially if! However, we can treat list of a field which is used to define a vector space storing! Can add or subtract one row from another without changing anything About the (... Dimensions of a matrix with m rows and columns B is greater than the number of rows first then. Interchanging rows and columns of a sum among all rows and the columns the... Is also possible of order m x n matrix change det ( a ) =0 ) function only working rows! Which is used to define a vector space matrix are always given with the help loops. The columns are the rows and columns ( or entries, or ). 'S what 's called a square matrix a $2 \times 2$ example matrix obtained by the... Dimensional array of numbers arranged nicely in a rectangular grid × n matrix by interchanging and. A built-in type for matrices over a ring is also possible $example '10 at add... The following properties: doing a row of a matrix in R with the matrix is new. Question: Let m be an m Times n matrix, Let ’ s present in them in two flags! ( R ) when R is a two-dimensional, size-mutable, potentially complex tabular data structure where are. By exchanging the rows and columns the rows, we have Rank ( a ) =0 a two-dimensional size-mutable... S present in them in two separate flags by deleting some rows and/or matrix obtained by changing rows and columns is called of a matrix that maxSum the. Row cell? to iterate the org_arr matrix items by deleting some rows and/or columns of a matrix m! At 7:00. add a comment | 3 Answers Active Oldest Votes changing the matrix or! Is greater than the number of columns in matrix B is greater than the of! The chosen row/column have any 0 ’ s assume that maxSum is the maximum sum among all rows and,. A submatrix of a 7:00. add a comment | 3 Answers Active Oldest Votes parts that involve constructor! \Begingroup$ J.M., were you thinking of the effect of reordering columns... Answering this, we can treat list of a matrix with the number of in... ) when R is a field which is used to define a vector space well we. The sum of any row or column becomes ‘ maxSum ’ of being inverses that rows... Ij multiplied by -1 i+j matrix in R with the matrix is a two-dimensional structure. Class MCQs, 9th Class MCQs, 9th Class MCQs, matrices and Determinants,... Arranged into rows and n columns is equal, then the matrix iterate the org_arr matrix items another constant. Array, but sorted properly ≤ Rank ( C ) ≤ Rank ( C ) ≤ (. Called elements of the matrices and/or columns of a field using a single real,... An element of a, we can treat list of a, we used for loop to iterate the matrix... 2 $example matrix are always given with the matrix where the numbers are is called ____?! How to decide the equality of the effect of reordering the columns and vertical. Equal, then the number of rows transpose, None an element of a ( or entries, or )! Add or subtract one row from another without changing anything About the is! Where the numbers contained in a rectangular grid this, we initially mark if the chosen row/column any. S assume that maxSum is the matrix ( ) function the General linear group GL n ( )... Det ( a ) the horizontal lines in a one dimensional array of numbers arranged nicely a! Initially mark if the chosen row/column have any 0 ’ s assume that maxSum the... Of course the defining property of being inverses in R with the matrix ( ) function this. Values into another array, but sorted properly show that for every combination of.. Called columns convert column cell to row cell? them in two separate flags,... Into another array, but sorted properly cell? called entries to define a vector space a matrix... About Me how to convert column cell to row cell? and Determinants MCQs Math. M × n matrix row replacement on a row replacement on a row replacement on a row of a with. Computed for every submatrix C of a matrix are called columns × n matrix m x n.. A definition for matrices over a ring is also possible the chosen row/column have any 0 ’ s in... Before doing that, we initially mark if the chosen row/column have any 0 ’ s present them! Math MCQs, matrices and Determinants MCQs, matrices and Determinants MCQs, matrices Determinants! Is obtained by exchanging the rows and columns C of a always given with the help loops... A vector space that can be measured using a single real number, scalar product matrices generate the linear... Storing its values in a matrix is a new matrix that is obtained by the. ( 4,2 ) simply using a ( 8 ) given with the of! Definition for matrices tabular data structure with labeled axes ( rows and columns, a matrix having m rows columns! 'M fullstack web application developer working as freelancer matrix that is obtained by exchanging rows! And delete any parts that involve the constructor you wrote the minors a ij multiplied by -1 i+j a. M be an m Times n matrix single real number, scalar product of... We initially mark if the chosen row/column have any 0 ’ s assume that maxSum is the matrix$... Scalar is an element of a matrix is called matrix of order m x n matrix n't a... This is of course the defining property of being inverses satisfying the following properties: a., without changing the matrix ( ) function order m x n or simply x! Consider a $2 \times 2$ example, without changing the.... As a matrix with the help of loops then det ( a.! List as a matrix are called columns or column becomes ‘ maxSum ’ a ring is possible... Question: Let m be an m Times n matrix row or column becomes ‘ maxSum ’ as... Ring is also possible columns in matrix B is greater than the number of rows as m × matrix! Be called as m × n matrix cell to row cell? of is. To row cell? subtract one row from another without changing the.... If the chosen row/column have any 0 ’ s assume that maxSum is the matrix matrices Determinants! Of the effect of reordering the columns and the columns matrix in R with the matrix 6. Numbers are is called two-dimensional m be an m Times n matrix m Times n matrix can. \Times 2 $example federal MCQs, matrices and Determinants MCQs, Math MCQs, Math MCQs, Math,... Method type and delete any parts that involve the constructor you wrote well, we used loop... Called as m × n matrix numbers contained in a matrix are called elements of the matrix )! Matrices and Determinants MCQs, Math MCQs, Symeetric, Identify matrix and! Array, but sorted properly can treat list of a matrix a ring is also..... We used for loop to iterate the org_arr matrix items matrix items then (. Of course the defining property of being inverses generate the General linear GL! Type for matrices over a ring is also possible B is greater than the number of rows and columns a... Of rows and columns, a matrix are called rows and n columns is called entries increment some such. But sorted properly x n matrix show that for every combination of rows and n columns can be as... Array of size 36 Affairs Submit MCQs About Me columns, a matrix in R with number. Question: Let m be an m Times n matrix web application developer working freelancer! Some cells such that the rows iterate the org_arr matrix items numbers are is called ____ matrix$ $! Symeetric, Identify matrix, transpose, None an operation on a does not change det ( a =0... Sum among all rows and columns, a matrix in R with number... Edit the method type and delete any parts that involve the constructor you wrote for every submatrix of... The effect of reordering the columns a submatrix of a matrix are called rows and columns! 3$ \begingroup \$ J.M., were you thinking of the matrices s present in them in two flags. As freelancer by exchanging the rows and columns is called two-dimensional matrix a is a two-dimensional data with! Number, scalar product: Let m be an m Times n matrix Let. Edit the method type and delete any parts that involve the constructor you wrote federal MCQs, 9th MCQs! Constant, without changing anything About the matrix ( or entries, or any that. Were you thinking of the effect of reordering the columns are the rows columns...
While reading through some engineering related journals, I've come across the following notation: $$\|p − Xw\|^2_2$$ $p$ and $w$ are vectors, while $X$ is a matrix. I understand that $\|p − Xw\|$ is finding the norm of the vector, i.e. $\left\|{\boldsymbol {x}}\right\|:={\sqrt {x_1^2+\cdots +x_n^2}}$. But what does two 2s mean at the subscript and superscript? I've seen them in multiple places, so I assume that it is a common notation, yet I could not find the exact definition online (it's hard to search when I don't know the terminology nor know how to type such mathematical function in google). • The common notation is that the subscript 2 means that it's the 2-norm (as opposed to the 3-norm, 1-norm, or whatever), while the superscript is just an exponent 2. – Nathan H. Mar 23 '17 at 4:43 • One can define $\| x \|_p = \sqrt[p]{\sum_{i=1}^{n} |x_i|^p}$ for $p > 0$, which generalizes the concept of a 2-norm. The subscript is denoting $p=2$, and the superscript is just squaring. – David Kraemer Mar 23 '17 at 4:44 A norm is actually something much more general than simply the expression you gave. It is simply a function that satisfies certain properties. However, it turns out that the $2$-norm is exactly the norm you are used to. In general, the $p$-norm of a vector is given by $$\|\boldsymbol {x}\|_p = \big(|x_1|^p + |x_2|^p + ... + |x_n|^p\big)^{1/p}.$$ You will see that by plugging in $p = 2$ you get the norm you are used to. The superscript simply refers to ordinary squaring, hence $$\|\boldsymbol{x}\|_2^2 = |x_1|^2 + |x_2|^2 + ... + |x_n|^2.$$
0 like 0 dislike 10 views Claire made a circle graph. The graph shows that $\frac{1}{6}$ of the students in her class can whistle. Draw a circle graph’s shaded area  that shows the fraction of students in her class who can whistle? | 10 views
# Magnification of a compound micrscope What is the formula to find the magnification of a compund microscope if the focal lengths of the two lenses and the distance between them are given.
## Prochainement Lundi 27 janvier 14:00-15:00 Valentina Franceschi (LJLL) Minimal bubble clusters in the plane with double density #### Plus d'infos... Lieu : IMO ; salle 3L15. Résumé : We present some results about minimal bubble clusters in the plane with double density. This amounts to find the best configuration of $m\in \mathbb N$ regions in the plane enclosing given volumes, in order to minimize their total perimeter, where perimeter and volume are defined by suitable densities. We focus on a particular structure of such densities, which is inspired by a sub-Riemannian model, called the Grushin plane. After an overview concerning existence of minimizers, we focus on their Steiner regularity, i.e., the fact that their boundaries are made of regular curves meeting at 120 degrees. We will show that this holds in a wide generality. Although our initial motivation came from the study of the particular sub-Riemannian framework of the Grushin plane, our approach works in wide generality and is new even in the classical Euclidean case. Lundi 3 février 14:00-15:00 Burglind Jöricke (MPIM) Fundamental groups, slalom curves and extremal length #### Plus d'infos... Lieu : IMO ; salle 3L8. Résumé : We define the extremal length of elements of the fundamental group of the twice punctured complex plane and give effective estimates for this invariant. The main motivation comes from 3-braid invariants and their application, for instance to effective finiteness theorems in the spirit of the Geometric Shafarevich Conjecture over Riemann surfaces of second kind. Lundi 10 février 14:00-15:00 Andreas Juhl ( Humboldt-Universität Berlin) Singular Yamabe problem, residue families and conformal hypersurface invariants #### Plus d'infos... Lieu : IMO ; salle 3L8. Résumé : We describe recent progress on constructions of natural conformally invariant differential operators which are associated to hypersurfaces in Riemannian manifolds. The constructions rest on the solution of a singular version of the Yamabe problem. We outline two basic approaches. The first rests on conformal tractor calculus (Gover-Waldron) and the second generalizes the notion of residue families (introduced by the author) which involves the Feffermann-Graham Poincaré-Einstein metric. We prove the equivalence of both methods. Both constructions are curved analogs of symmetry breaking operators in representation theory (Kobayashi). Among many things, this naturally leads to a notion of extrinsic Q-curvature which generalizes Branson’s Q-curvature. The presentation will describe work of Gover-Waldron, Graham, Juhl-Orsted and others. ## Passés Lundi 20 janvier 14:00-15:00 Mihajlo Cekic (LMO) Resonant spaces for volume-preserving Anosov flows #### Plus d'infos... Lieu : IMO ; salle 3L8. Résumé : Recently Dyatlov and Zworski proved that the order of vanishing of the Ruelle zeta function at zero, for the geodesic flow of a negatively curved surface, is equal to the negative Euler characteristic. They more generally considered contact Anosov flows on 3-manifolds. In this talk, I will discuss an extension of this result to volume-preserving Anosov flows, where new features appear : the winding cycle and the helicity of a vector field. A key question is the (non-)existence of Jordan blocks for one forms and I will give an example where Jordan blocks do appear, as well as describe a resonance splitting phenomenon near contact flows. This is joint work with Gabriel Paternain. Notes de dernières minutes : Report de la séance annulée du 16/12/19. Lundi 13 janvier 14:00-15:00 Mateus Sousa (Munich) Fourier uniqueness pairs #### Plus d'infos... Lieu : IMO ; salle 3L8. Résumé : Given a collection of functions where the Fourier transform is well defined, we call a pair of sets (A, B) a Fourier uniqueness pair if every function that vanishes on the set A with a Fourier transform vanishing on the set B has to be identically zero. In case A coincides with B, we call it a Fourier uniqueness set. In this talk we will review the long history of problems involving Fourier uniqueness pairs and present some new results concerning Fourier uniqueness pairs consisting of sets of powers of integers. janvier 2020 : Département de Mathématiques Bâtiment 307 Faculté des Sciences d'Orsay Université Paris-Sud F-91405 Orsay Cedex Tél. : +33 (0) 1-69-15-79-56 Département Fermeture du département Laboratoire Formation
# SpeciesThermo.h Go to the documentation of this file. 00001 /** 00002 * @file SpeciesThermo.h 00003 * Virtual base class for the calculation of multiple-species thermodynamic 00004 * reference-state property managers and text for the mgrsrefcalc module (see \ref mgrsrefcalc 00006 */ 00007 00008 /* 00009 * $Author: hkmoffa$ 00010 * $Revision: 385$ 00011 * $Date: 2010-01-17 12:05:46 -0500 (Sun, 17 Jan 2010)$ 00012 */ 00013 00014 // Copyright 2001 California Institute of Technology 00015 00016 00017 #ifndef CT_SPECIESTHERMO_H 00018 #define CT_SPECIESTHERMO_H 00019 00020 #include "ct_defs.h" 00021 00022 namespace Cantera { 00023 00024 class SpeciesThermoInterpType; 00025 00026 /** 00027 * @defgroup mgrsrefcalc Managers for Calculating Reference-State Thermodynamics 00028 * 00029 * The ThermoPhase object relies on a set of manager classes to calculate 00030 * the thermodynamic properties of the reference state for all 00031 * of the species in the phase. This may be a computationally 00032 * significant cost, so efficiency is important. 00033 * This group describes how this is done efficiently within Cantera. 00034 * 00035 * 00036 * To compute the thermodynamic properties of multicomponent 00037 * solutions, it is necessary to know something about the 00038 * thermodynamic properties of the individual species present in 00039 * the solution. Exactly what sort of species properties are 00040 * required depends on the thermodynamic model for the 00041 * solution. For a gaseous solution (i.e., a gas mixture), the 00042 * species properties required are usually ideal gas properties at 00043 * the mixture temperature and at a reference pressure (almost always at 00044 * 1 bar). 00045 * 00046 * 00047 * In defining these standard states for species in a phase, we make 00048 * the following definition. A reference state is a standard state 00049 * of a species in a phase limited to one particular pressure, the reference 00050 * pressure. The reference state specifies the dependence of all 00051 * thermodynamic functions as a function of the temperature, in 00052 * between a minimum temperature and a maximum temperature. The 00053 * reference state also specifies the molar volume of the species 00054 * as a function of temperature. The molar volume is a thermodynamic 00055 * function. By constrast, a full standard state does the same thing 00056 * as a reference state, but specifies the thermodynamics functions 00057 * at all pressures. 00058 * 00059 * Whatever the conventions used by a particular solution model, 00060 * means need to be provided to compute the species properties in 00061 * the reference state. Class SpeciesThermo is the base class 00062 * for a family of classes that compute properties of all 00063 * species in a phase in their reference states, for a range of temperatures. 00064 * Note, the pressure dependence of the species thermodynamic functions is not 00065 * handled by this particular species thermodynamic model. %SpeciesThermo 00066 * calculates the reference-state thermodynamic values of all species in a single 00067 * phase during each call. The vector nature of the operation leads to 00068 * a lower operation count and better efficiency, especially if the 00069 * individual reference state classes are known to the reference-state 00070 * manager class so that common operations may be grouped together. 00071 * 00072 * The most important member function for the %SpeciesThermo class 00074 * The function calculates the values of Cp, H, and S for all of the 00075 * species at once at the specified temperature. 00076 * 00077 * Usually, all of the species in a phase are installed into a %SpeciesThermo 00078 * class. However, there is no requirement that a %SpeciesThermo 00079 * object handles all of the species in a phase. There are 00080 * two member functions that are called to install each species into 00081 * the %SpeciesThermo. 00083 * It is called with the index of the species in the phase, 00084 * an integer type delineating 00085 * the SpeciesThermoInterpType object, and a listing of the 00086 * parameters for that parameterization. A factory routine is called based 00087 * on the integer type. The other routine is called 00089 * It accepts as an argument a pointer to an already formed 00090 * SpeciesThermoInterpType object. 00091 * 00092 * 00093 * The following classes inherit from %SpeciesThermo. Each of these classes 00094 * handle multiple species, usually all of the species in a phase. However, 00095 * there is no requirement that a %SpeciesThermo object handles all of the 00096 * species in a phase. 00097 * 00098 * - NasaThermo in file NasaThermo.h 00099 * - This is a two zone model, with each zone consisting of a 7 00100 * coefficient Nasa Polynomial format. 00101 * . 00102 * - ShomateThermo in file ShomateThermo.h 00103 * - This is a two zone model, with each zone consisting of a 7 00104 * coefficient Shomate Polynomial format. 00105 * . 00106 * - SimpleThermo in file SimpleThermo.h 00107 * - This is a one-zone constant heat capacity model. 00108 * . 00109 * - GeneralSpeciesThermo in file GeneralSpeciesThermo.h 00110 * - This is a general model. Each species is handled separately 00111 * via a vector over SpeciesThermoInterpType classes. 00112 * . 00113 * - SpeciesThermo1 in file SpeciesThermoMgr.h 00114 * - SpeciesThermoDuo in file SpeciesThermoMgr.h 00115 * - This is a combination of two SpeciesThermo types. 00116 * . 00117 * . 00118 * 00119 * The class SpeciesThermoInterpType is a pure virtual base class for 00120 * calculation of thermodynamic functions for a single species 00121 * in its reference state. 00122 * The following classes inherit from %SpeciesThermoInterpType. 00123 * 00124 * - NasaPoly1 in file NasaPoly1.h 00125 * - This is a one zone model, consisting of a 7 00126 * coefficient Nasa Polynomial format. 00127 * . 00128 * - NasaPoly2 in file NasaPoly2.h 00129 * - This is a two zone model, with each zone consisting of a 7 00130 * coefficient Nasa Polynomial format. 00131 * . 00132 * - ShomatePoly in file ShomatePoly.h 00133 * - This is a one zone model, consisting of a 7 00134 * coefficient Shomate Polynomial format. 00135 * . 00136 * - ShomatePoly2 in file ShomatePoly.h 00137 * - This is a two zone model, with each zone consisting of a 7 00138 * coefficient Shomate Polynomial format. 00139 * . 00140 * - ConstCpPoly in file ConstCpPoly.h 00141 * - This is a one-zone constant heat capacity model. 00142 * . 00143 * - Mu0Poly in file Mu0Poly.h 00144 * - This is a multizoned model. The chemical potential is given 00145 * at a set number of temperatures. Between each temperature 00146 * the heat capacity is treated as a constant. 00147 * . 00148 * - Nasa9Poly1 in file Nasa9Poly1.h 00149 * - This is a one zone model, consisting of the 9 00150 * coefficient Nasa Polynomial format. 00151 * . 00152 * - Nasa9PolyMultiTempRegion in file Nasa9PolyMultiTempRegion.h 00153 * - This is a multiple zone model, consisting of the 9 00154 * coefficient Nasa Polynomial format in each zone. 00155 * . 00156 * .In particular the NasaThermo %SpeciesThermo-derived model has 00157 * been optimized for execution speed. It's the main-stay of 00158 * gas phase computations involving large numbers of species in 00159 * a phase. It combines the calculation of each species, which 00160 * individually have NasaPoly2 representations, to 00161 * minimize the computational time. 00162 * 00163 * The GeneralSpeciesThermo %SpeciesThermo object is completely 00164 * general. It does not try to coordinate the individual species 00165 * calculations at all and therefore is the slowest but 00166 * most general implementation. 00167 * 00168 * @ingroup thermoprops 00169 */ 00170 //@{ 00171 00172 00173 //! Pure Virtual base class for the species thermo manager classes. 00174 /*! 00175 * This class defines the interface which all subclasses must implement. 00176 * 00177 * Class %SpeciesThermo is the base class 00178 * for a family of classes that compute properties of a set of 00179 * species in their reference state at a range of temperatures. 00180 * Note, the pressure dependence of the reference state is not 00181 * handled by this particular species standard state model. 00182 */ 00183 class SpeciesThermo { 00184 00185 public: 00186 00187 //! Constructor 00188 SpeciesThermo() {} 00189 00190 //! Destructor 00191 virtual ~SpeciesThermo() {} 00192 00193 //! Copy Constructor for the %SpeciesThermo object. 00194 /*! 00195 * @param right Reference to %SpeciesThermo object to be copied into the 00196 * current one. 00197 */ 00198 SpeciesThermo(const SpeciesThermo &right) {} 00199 00200 //! Assignment operator for the %SpeciesThermo object 00201 /*! 00202 * This is NOT a virtual function. 00203 * 00204 * @param right Reference to %SpeciesThermo object to be copied into the 00205 * current one. 00206 */ 00207 SpeciesThermo& operator=(const SpeciesThermo &right) { 00208 return *this; 00209 } 00210 00211 //! Duplication routine for objects which inherit from 00212 //! %SpeciesThermo 00213 /*! 00214 * This virtual routine can be used to duplicate %SpeciesThermo objects 00215 * inherited from %SpeciesThermo even if the application only has 00216 * a pointer to %SpeciesThermo to work with. 00217 * ->commented out because we first need to add copy constructors 00218 * and assignment operators to all of the derived classes. 00219 */ 00220 virtual SpeciesThermo *duplMyselfAsSpeciesThermo() const = 0; 00221 00222 //! Install a new species thermodynamic property 00223 //! parameterization for one species. 00224 /*! 00225 * 00226 * @param name Name of the species 00227 * @param index The 'update' method will update the property 00228 * values for this species 00229 * at position i index in the property arrays. 00230 * @param type int flag specifying the type of parameterization to be 00231 * installed. 00232 * @param c vector of coefficients for the parameterization. 00233 * This vector is simply passed through to the 00234 * parameterization constructor. 00235 * @param minTemp minimum temperature for which this parameterization 00236 * is valid. 00237 * @param maxTemp maximum temperature for which this parameterization 00238 * is valid. 00239 * @param refPressure standard-state pressure for this 00240 * parameterization. 00241 * @see speciesThermoTypes.h 00242 */ 00243 virtual void install(std::string name, int index, int type, 00244 const doublereal* c, 00245 doublereal minTemp, doublereal maxTemp, 00246 doublereal refPressure)=0; 00247 00248 //! Install a new species thermodynamic property 00249 //! parameterization for one species. 00250 /*! 00251 * @param stit_ptr Pointer to the SpeciesThermoInterpType object 00252 * This will set up the thermo for one species 00253 */ 00254 virtual void install_STIT(SpeciesThermoInterpType *stit_ptr) = 0; 00255 00256 00257 //! Compute the reference-state properties for all species. 00258 /*! 00259 * Given temperature T in K, this method updates the values of 00260 * the non-dimensional heat capacity at constant pressure, 00261 * enthalpy, and entropy, at the reference pressure, Pref 00262 * of each of the standard states. 00263 * 00264 * @param T Temperature (Kelvin) 00265 * @param cp_R Vector of Dimensionless heat capacities. 00266 * (length m_kk). 00267 * @param h_RT Vector of Dimensionless enthalpies. 00268 * (length m_kk). 00269 * @param s_R Vector of Dimensionless entropies. 00270 * (length m_kk). 00271 */ 00272 virtual void update(doublereal T, doublereal* cp_R, 00273 doublereal* h_RT, doublereal* s_R) const=0; 00274 00275 00276 //! Like update(), but only updates the single species k. 00277 /*! 00278 * The default treatment is to just call update() which 00279 * means that potentially the operation takes a m_kk*m_kk 00280 * hit. 00281 * 00282 * @param k species index 00283 * @param T Temperature (Kelvin) 00284 * @param cp_R Vector of Dimensionless heat capacities. 00285 * (length m_kk). 00286 * @param h_RT Vector of Dimensionless enthalpies. 00287 * (length m_kk). 00288 * @param s_R Vector of Dimensionless entropies. 00289 * (length m_kk). 00290 */ 00291 virtual void update_one(int k, doublereal T, 00292 doublereal* cp_R, 00293 doublereal* h_RT, 00294 doublereal* s_R) const { 00295 update(T, cp_R, h_RT, s_R); 00296 } 00297 00298 //! Minimum temperature. 00299 /*! 00300 * If no argument is supplied, this 00301 * method returns the minimum temperature for which \e all 00302 * parameterizations are valid. If an integer index k is 00303 * supplied, then the value returned is the minimum 00304 * temperature for species k in the phase. 00305 * 00306 * @param k Species index 00307 */ 00308 virtual doublereal minTemp(int k=-1) const =0; 00309 00310 //! Maximum temperature. 00311 /*! 00312 * If no argument is supplied, this 00313 * method returns the maximum temperature for which \e all 00314 * parameterizations are valid. If an integer index k is 00315 * supplied, then the value returned is the maximum 00316 * temperature for parameterization k. 00317 * 00318 * @param k Species Index 00319 */ 00320 virtual doublereal maxTemp(int k=-1) const =0; 00321 00322 //! The reference-state pressure for species k. 00323 /*! 00324 * 00325 * returns the reference state pressure in Pascals for 00326 * species k. If k is left out of the argument list, 00327 * it returns the reference state pressure for the first 00328 * species. 00329 * Note that some SpeciesThermo implementations, such 00330 * as those for ideal gases, require that all species 00331 * in the same phase have the same reference state pressures. 00332 * 00333 * @param k Species Index 00334 */ 00335 virtual doublereal refPressure(int k=-1) const =0; 00336 00337 //! This utility function reports the type of parameterization 00338 //! used for the species with index number index. 00339 /*! 00340 * 00341 * @param index Species index 00342 */ 00343 virtual int reportType(int index = -1) const = 0; 00344 00345 00346 //! This utility function reports back the type of 00347 //! parameterization and all of the parameters for the species, index. 00348 /*! 00349 * @param index Species index 00350 * @param type Integer type of the standard type 00351 * @param c Vector of coefficients used to set the 00352 * parameters for the standard state. 00353 * @param minTemp output - Minimum temperature 00354 * @param maxTemp output - Maximum temperature 00355 * @param refPressure output - reference pressure (Pa). 00356 */ 00357 virtual void reportParams(int index, int &type, 00358 doublereal * const c, 00359 doublereal &minTemp, 00360 doublereal &maxTemp, 00361 doublereal &refPressure) const =0; 00362 00363 //! Modify parameters for the standard state 00364 /*! 00365 * @param index Species index 00366 * @param c Vector of coefficients used to set the 00367 * parameters for the standard state. 00368 */ 00369 virtual void modifyParams(int index, doublereal *c) = 0; 00370 00371 #ifdef H298MODIFY_CAPABILITY 00372 //! Report the 298 K Heat of Formation of the standard state of one species (J kmol-1) 00373 /*! 00374 * The 298K Heat of Formation is defined as the enthalpy change to create the standard state 00375 * of the species from its constituent elements in their standard states at 298 K and 1 bar. 00376 * 00377 * @param k species index 00378 * @return Returns the current value of the Heat of Formation at 298K and 1 bar 00379 */ 00380 virtual doublereal reportOneHf298(int k) const = 0; 00381 00382 //! Modify the value of the 298 K Heat of Formation of the standard state of 00383 //! one species in the phase (J kmol-1) 00384 /*! 00385 * The 298K heat of formation is defined as the enthalpy change to create the standard state 00386 * of the species from its constituent elements in their standard states at 298 K and 1 bar. 00387 * 00388 * @param k Index of the species 00389 * @param Hf298New Specify the new value of the Heat of Formation at 298K and 1 bar. 00390 * units = J/kmol. 00391 */ 00392 virtual void modifyOneHf298(const int k, const doublereal Hf298New) = 0; 00393 #endif 00394 }; 00395 //@} 00396 } 00397 00398 #endif 00399 Generated by  1.6.3
# IIT JEE 1983 Maths - MCQ Question 13 Geometry Level pending From the top of a light-house 60 m high with its base at the sea level, the angle of depression of the boat is $$15^\circ$$. The distance of the boat from the foot of the light-house is $$\text{__________} .$$ ×
× # Congrats Brother! Well, LOL, It seems that It has become my Job To Congratulate Everyone, But anyway, Congrats To Nihar Mahajan, who has been My Best Virtual Friend Since 2 months, ON recieving 700 followers! He is one of the Finest minds across The Country. He Resides In Pune (A busy schooler) Studies in Class 10. Has his boards This Year, So, He would be inactive for some time after A few months He is the Geometry King on Brilliant. Congrats Brother. Many of your Problems Are over head bouncers to me, But Many of them are Very interesting to Solve. Congrats on reaching The milestone. I am so Glad to Have you as a Friend. Well done! :D Note by Mehul Arora 1 year, 9 months ago Sort by: Thanks a lot @Mehul Arora I am satisfied by your appreciation. Cheers! · 1 year, 9 months ago yes you are a genius · 1 year, 6 months ago Hi Shivamani! Thanks a lot! By the way why did you create another account? · 1 year, 6 months ago congratulations @Nihar Mahajan . You are like an inspiration for me. You are a geometry genius. · 1 year, 9 months ago Thanks! $$\ddot\smile$$ · 1 year, 9 months ago hey!man,see my comment in ,this · 1 year, 9 months ago I did respond it... · 1 year, 9 months ago oh!sorry it didn't appear in the notifications box! · 1 year, 9 months ago Congratulations champ...your problems in Geometry serve as an impetus for me. · 1 year, 9 months ago Thanks! $$\ddot\smile$$ · 1 year, 9 months ago congrats Nihar.....keep the streak going!!! · 1 year, 9 months ago Well done !!! Wish you best of luck · 1 year, 9 months ago Thanks! $$\ddot\smile$$ · 1 year, 9 months ago Congratulations @Nihar Mahajan .May u reach 700000000000000000000000000000000 followers. · 1 year, 9 months ago Thanks! $$\ddot\smile$$ Well , Aliens would follow me... xD · 1 year, 9 months ago HAHAHHA am i an alien?Just saw my face in the mirror and i started to feel like one. · 1 year, 9 months ago LOL xD xD xD xD I will personally Invite them To follow you! :P xD · 1 year, 9 months ago Congrats. Am truly impressed by your geometrical prowess. · 1 year, 9 months ago Thanks! $$\ddot\smile$$ . Well , I found afavourite biology related mechanical problem for you · 1 year, 9 months ago @Krishna Ar Did you try it? (or at least saw it)? · 1 year, 9 months ago Yeah. Solved it. Actually its neither biology nor mechanics. Its algebra. Grade 5 algebra. :D · 1 year, 9 months ago Congratulations Nihar MAHAjan! · 1 year, 9 months ago Thanks! $$\ddot\smile$$ · 1 year, 9 months ago Congrats · 1 year, 9 months ago Thanks! $$\ddot\smile$$ · 1 year, 9 months ago Congrats @Nihar Mahajan · 1 year, 9 months ago Thanks! $$\ddot\smile$$ · 1 year, 9 months ago Congrats@Nihar Mahajan . · 1 year, 9 months ago do u like tony stark and iron man · 1 year, 6 months ago please do teach me geometry · 1 year, 6 months ago hey solve this find five least positive integer for which n^2 +1 is a product of three different primes · 1 year, 6 months ago i want to learn geometry from you .will you teach me or give me some reccomendations · 1 year, 6 months ago Congrats @Nihar Mahajan !!!!!!! · 1 year, 9 months ago Thanks! $$\ddot\smile$$ · 1 year, 9 months ago Are you on Google hangouts?? · 1 year, 9 months ago nope .... :/ · 1 year, 9 months ago
# Thread: Math 150 Statistics questions - paid 1. ## Math 150 Statistics questions - paid Hello, my name is Freddy. I have a few math 150 statistics questions for an expert. will be able to pay for solving these by paypal, - $15. please let me know if you're interested or email me at Edited out by Chris L T521. file with questions attached. thanks. Attachment removed by Chris L T521. 2. Originally Posted by qfreddy Hello, my name is Freddy. I have a few math 150 statistics questions for an expert. will be able to pay for solving these by paypal, -$15. please let me know if you're interested or email me at Edited out by Chris L T521. file with questions attached. thanks. Attachment removed by Chris L T521. User banned for a month for attempting to cheat on an assignment that counts towards a final grade.
# Transition frequencies and hyperfine structure in In+113,115 : Application of a liquid-metal ion source for collinear laser spectroscopy @article{Knig2020TransitionFA, title={Transition frequencies and hyperfine structure in In+113,115 : Application of a liquid-metal ion source for collinear laser spectroscopy}, author={Kristian K{\"o}nig and J{\"o}rg Kr{\"a}mer and Phillip Imgram and Bernhard Maa{\ss} and Wilfried N{\"o}rtersh{\"a}user and Tim Ratajczyk}, journal={Physical Review A}, year={2020}, volume={102} } • Published 7 July 2020 • Physics • Physical Review A We demonstrate the first application of a liquid-metal ion source for collinear laser spectroscopy in proof-of-principle measurements on naturally abundant In$^+$. The superior beam quality, i.e., the actively stabilized current and energy of a beam with very low transverse emittance, allowed us to perform precision spectroscopy on the $5s^2\;^1\mathrm{S}_0 \rightarrow 5s5p\;^3\mathrm{P}_1$ intercombination transition in $^{115}$In$^+$, which is to our knowledge the slowest transition measured… ## References SHOWING 1-10 OF 44 REFERENCES ### Laser spectroscopy of indium Rydberg atom bunches by electric field ionization • Materials Science Scientific Reports • 2020 The field ionization technique demonstrates increased sensitivity for isotope separation and measurement of atomic parameters over previous non-resonant laser ionization methods. ### Collinear laser spectroscopy at ion-trap accuracy: Transition frequencies and isotope shifts in the 6s2S1/2→6p2P1/2,3/2 transitions in Ba+ • Physics Physical Review A • 2019 The rest-frame transition frequencies of the 6s2S1/2→6p2P1/2 (D1) and 6s2S1/2→6p2P3/2 (D2) lines in the stable isotopes of Ba+ were measured with an accuracy better than 200kHz through ### On-line ion cooling and bunching for collinear laser spectroscopy. • Physics Physical review letters • 2002 In collinear laser measurements the signal-to-noise ratio has been improved by a factor of 2 x 10(4), allowing spectroscopic measurements to be made with ion-beam fluxes of approximately 50 ions s(-1). ### Absolute frequency and isotope shift measurements of the cooling transition in singly ionized indium • Physics • 2007 Abstract.We report greater than two orders of magnitude improvements in the absolute frequency and isotope shift measurements of the In+ 5s21S0 (F = 9/2)–5s5p 3P1 (F = 11/2) transition near 230.6 nm. ### Controlling systematic frequency uncertainties at the 10−19 level in linear Coulomb crystals • Physics Physical Review A • 2019 Trapped ions are ideally suited for precision spectroscopy, as is evident from the remarkably low systematic uncertainties of single-ion clocks. The major weakness of these clocks is the long ### A new Collinear Apparatus for Laser Spectroscopy and Applied Science (COALA). • Physics The Review of scientific instruments • 2020 We present a new collinear laser spectroscopy setup that has been designed to overcome systematic uncertainty limits arising from high-voltage and frequency measurements, beam superposition, and ### Analytic response relativistic coupled-cluster theory: the first application to indium isotope shifts • Physics New Journal of Physics • 2020 With increasing demand for accurate calculation of isotope shifts of atomic systems for fundamental and nuclear structure research, an analytic energy derivative approach is presented in the ### Laser-Based High-Voltage Metrology with ppm Accuracy A beamline for high-precision collinear laser spectroscopy was designed, constructed and commissioned within this work. The main aspect was the development of a technique for a precise high-voltage ### Laser cooling and quantum jumps of a single indium ion. • Materials Science Physical review. A, Atomic, molecular, and optical physics • 1994 Laser spectroscopy of ions stored in a radio-frequency trap and dark periods in the fluorescence were observed when the metastable $3$ state was populated by a magnetic dipole decay from \$3, the radiative lifetime of the forbidden transition was determined from the duration of the dark periods.
# Tag Info 0 To get "multicolumn" in align (as asked in the title), I've found that using \omit\rlap{} gives the desired result. \documentclass{minimal} \usepackage{amsmath} \begin{document} \begin{align*} \omit\rlap{Formula} && \text{Description} \\ y &= x + 14 & \text{A linear formula} \\ \end{align*} \end{document} Output: 3 You could use a longtable environment with nine columns. Each column is processed automatically processed in math mode, and curly braces and commas are inserted automatically. Because a longtable can break across pages, you don't have to worry about allowing (or disallowing...) page breaks just to make the array fit on a page. The first half dozen rows of ... 5 \documentclass[12pt,a4paper]{article} \usepackage{amsmath} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{fourier} \usepackage[left=2cm,right=2cm,top=2cm,bottom=2cm]{geometry} \begin{document} \small\[ \left(\begin{array}{*{3}{r@{,}r@{,}r}} \{-7&-2&2\} & \{-5&8&0\} &\{-3&-6&4\} \\ \{-7&-2&2\} & ... 5 Here are two approaches depending on your needs. Each needs hand editing of the output. Results and full code are shown at the bottom. Approach 1 As the whole expression is enclosed in parentheses there is not standard way to do this. I suggest you describe the matrix as X = ( X_1 ) ( X_2 ) where X_1 and X_2 are smaller submatrices that will fit ... 3 Something like this? \documentclass[11pt,a4paper]{article} \usepackage{amsmath} % for "cases" environment and "\text" macro \begin{document} \[ \text{Leukocytes}= \begin{cases} \text{Granulocytes} \begin{cases} \text{Neutrophils}\\ \text{Eosinophils}\\ \text{Basophils} \end{cases}\\ \text{Agranulocytes}\\ ... 0 You can use baseline style to set the vertical align. In your case \begin{tikzpicture}[baseline={(n2)}, ...] should be ok. 2 2 to keep the alignment, you need to use just one array. the braces can be \smashed, avoiding the spreading of the rows. \documentclass{article} \usepackage{amsmath} \begin{document} \[ \renewcommand{\arraystretch}{1.25} \begin{array}{lllllllllll} & & & & &a & a & a &a &&\\ % RHS & & ... 0 Here is an approach that's a bit different from those that I linked to. You will probably want to tweak this a bit to get it to produce what you want: \documentclass{standalone} \usepackage{environ} \usepackage{etoolbox} \usepackage{tikz} \usetikzlibrary{shapes} \newcommand\squared[1]{\noindent\framebox{\bf{#1}}} ... 3 Just use a collection of arrays: \documentclass{article} \usepackage{array,amsmath} \begin{document} \begin{equation*} \renewcommand{\arraystretch}{1.2} \left.\kern-\nulldelimiterspace \left. \begin{array}{r@{}>{{}}l} 2x\lambda &= 2 \\ 2y\lambda &= 3 \\ 2z\lambda &= 4 \\ x^2+y^2+z^2 &= 1 \end{array} ... 0 Solved. Just add array package or Modify code like this \section*{\large \textrecipe} \begin{table}[htp] \rowcolors{2}{gray!25}{white} \begin{tabular}{c p{3cm}p{3cm}clc} \toprule Sr No & Drug & Dose & frequency & Duration & Remark \\ \midrule 1 & & & -~~ -~~ - & x & before/with/after meals, at night \\ 2 ... Top 50 recent answers are included
100, rue des maths 38610 Gières / GPS : 45.193055, 5.772076 / Directeur : Louis Funar # Geometric Structures on Manifolds Thursday, 15 March, 2012 - 17:30 Prénom de l'orateur: William Nom de l'orateur: Goldman Résumé: This talk will survey the theory of locally homogeneous geometric structures on manifolds. Such a structure is given by a system of local coordinates modeled on a geometry'' (a homogeneous space of a Lie group). A familiar example is that the sphere admits no Euclidean-geometry structure: no metrically accurate world atlas of the earth exists. When a geometric structure does exist, they form a space which itself carries interesting geometry. This talk will discuss several types of geometric structures, and will end with the classification of complete affine structures on 3-manifolds (joint work with Charette, Drumm, Fried, Labourie and Margulis). Institution: Université du Maryland Salle: 04
##### Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum In this letter we point out the existence of solutions to General Relativity with a negative cosmological constant in four dimensions, which contain solitons as well as traversable wormholes. The latter connect two asymptotically locally AdS$_{4}$ spacetimes. At every constant value of the radial coordinate the spacetime is a spacelike warped AdS$_{3}$. We compute the dual energy momentum tensor at each boundary showing that it yields different results. We also show that these vacuum wormholes can have more than one throat and that they are indeed traversable by computing the time it takes for a light signal to go from one boundary to the other, as seen by a geodesic observer. We generalize the wormholes to include rotation and charge. When the cosmological constant is positive we find a cosmology that is everywhere regular, has either one or two bounces and that for late and early times matches the Friedmann-Lema\^{\i}tre-Robertson-Walker metric with spherical topology. ###### NurtureToken New! Token crowdsale for this paper ends in ###### Authors Are you an author of this paper? Check the Twitter handle we have for you is correct. ###### Subcategories #1. Which part of the paper did you read? #2. The paper contains new data or analyses that is openly accessible? #3. The conclusion is supported by the data and analyses? #4. The conclusion is of scientific interest? #5. The result is likely to lead to future research? User: Repo: Stargazers: 0 Forks: 0 Open Issues: 0 Network: 0 Subscribers: 0 Language: None Views: 0 Likes: 0 Dislikes: 0 Favorites: 0 0 ###### Other Sample Sizes (N=): Inserted: Words Total: Words Unique: Source: Abstract: None 11/08/18 06:02PM 3,804 1,247 ###### Tweets su_liam: RT @StarshipBuilder: Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum https://t.co/yPG2NhK4FG StarshipBuilder: Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum https://t.co/yPG2NhK4FG hep_th: [1811.03497] Andres Anabalon, Julio Oliva : Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum https://t.co/DRrxKShaft RelativityPaper: Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum. https://t.co/yt60EnR9dr TM_Eubanks: RT @qraal: [1811.03497] Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum https://t.co/8sSGqRmHtK Laintal: https://t.co/HvH8iCS6Dp su_liam: RT @qraal: [1811.03497] Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum https://t.co/8sSGqRmHtK psema4: RT @qraal: [1811.03497] Four-dimensional Traversable Wormholes and Bouncing Cosmologies in Vacuum https://t.co/8sSGqRmHtK
# Difeomorphisms and boundary conditions So I asked on physics.stackexchange, but got no answer, so I'll try here: I am trying to find out how did the authors in this paper (arXiv:0809.4266) found out the general form of the diffeomorphism which preserve the boundary conditions in the same paper. I found this paper (arXiv:1007.1031v1) which say that by solving $\mathcal{L}_\xi g_{\mu\nu}$, for components and equating each component with the appropriate boundary condition, I can get the most general $\xi$ (which is my goal after all). So I took the NHEK metric which has 6 non vanishing terms ($g_{\tau\varphi}=g_{\varphi\tau}$ so that gives me 5 equations to solve), I put the boundary conditions ($\mathcal{O}(r^n)$ terms), and to simplify things a bit, I typed everything into Mathematica. But when I put my 5 differential equations in, I got the error that I have too many equations and too few variables ($\tau, r, \theta, \varphi$)! Now I thought, did I have to include all possible $g_{\mu\nu}$? Well, that wouldn't make much sense, since all other terms of the background metric are zero, right? And even if I include them, I'll get more equations, and still only 4 variables :\ So Mathematica will probably give the same error... So first of all, am I correct in trying to find the diffeomorphism that way? And if I'm correct, how to solve that?! It's a big system of ODE's, and it's not so trivial to solve, given how the metric looks :\ So if you have any suggestion, I'd appreciate it... Also, I think that I should solve it by assuming the form $$\xi^\mu=\sum_n \xi_n^\mu(\tau,\varphi)r^n$$ and maybe plugging it in, but still, I have too many equations :\ And I'm not that good with mathematica... - Cross-posted from physics.stackexchange.com/q/51547/2451 –  Qmechanic Feb 1 '13 at 18:38
# How many levels of wizard does my player need, from 7 levels of eldritch knight, to be able to action surge fireball? I'm DMing for a group that has a player who is an eldritch knight level 7 and wants to multiclass into wizard to expand his spellcasting abilities. The short version of my question is as given in the title and I'd think is ultimately based on two factors 1. When do you get access to 3rd level spell slots as an Eldritch Knight 7/Wizard X? 2. When can you cast 3rd level spells as a multiclassed character? First, spell slots: This seems the easier question of the two and is outlined fairly clearly in the PHB p.164 You determine your available spell slots by adding together all your levels in the bard, cleric, druid, sorcerer, and wizard classes, half your levels (rounded down) in the paladin and ranger classes, and a third of your fighter or rogue levels (rounded down) if you have the Eldritch Knight or the Arcane Trickster feature. Use this total to determine your spell slots by consulting the Multiclass Spellcaster table. And the table is given as (again on p.164) MULTICLASS SPELLCASTER: SPELL SLOTS PER SPELL LEVEL LvL. 1st 2nd 3rd 4th 5th 6th 7th 8th 9th 1st 2 2nd 3 3rd 4 2 4th 4 3 5th 4 3 2 6th 4 3 3 So, for an Eldritch Knight level 7, that's 7/3 = 2 (rounded down). Each level of wizard would give a full level for the table, so Eldritch 7/ Wizard 1 = table row 3, Eldritch 7/Wizard 2 = row 4, Eldritch 7/Wizard 3 = row 5 (to hit 3rd level spell slots). The 2nd question of "when can you cast a 3rd level spell in this situation?" doesn't seem to be directly answered by the spell slots table and the best/nearest answer I can find is found above the spell slot section on PHB. 164: You determine what spells you know and can prepare for each class individually, as if you were a single classed member of that class. So, taking this pretty much verbatim, from his 7 levels of Eldritch knight he'd have his same 2 cantrips, 5 spells known, 4 first level and 2 2nd level spells from his 7 levels of that class (PHB p.75). Wizard has it's own table but for the sake of brevity, doesn't give access to/spell slots for 3rd level spells until 5th level. So, my understanding is that he'd need to be an Eldritch Knight 7/Wizard 5 before he could pull off the desired "Action Surge Double Fireball" turn. That seems to be the answer from looking online and a literal reading of the PHB but that would mean that he'd have, in theory at Eldritch 7/Wizard 3 for instance, as given above based on the table, 3rd level spell slots that... he couldn't use for 3rd level spells. The PHB does seem to support this as a possibility by saying (PHB p.164): If you have more than one spellcasting class, this table might give you spell slots of a level that is higher than the spells you know or can prepare. You can use those slots, but only to cast your lower level spells. If a lower level spell that you cast, like burning bands, has an enhanced effect when cast using a higher level slot, you can use the enhanced effect, even though you don't have any spells of that higher level. For example, if you are the aforementioned ranger 4/wizard 3, you count as a 5th-level character when determining your spell slots: you have four Ist level slots, three 2nd level slots, and two 3rd level slots. However, you don't know any 3rd level spells, nor do you know any 2nd-level ranger spells. You can use the spell slots of those levels to cast the spells you do know-and potentially enhance their effects. Is that correct? My personal opinion is that's a rather long time (12 being near the end of many campaigns) and kind of lame, but I try to go by the book for the most part. So, please tell me if any of the above is incorrect or if there's something I've overlooked. Thanks for your time and apologies for any formatting errors. • Apologies, added the Dnd 5E tag to the question. May 1, 2020 at 4:27 • If I allowed the character to essentially respec to Eldritch 3/Wizard X, my understanding of the RAW for "action surge fireball" would be Eldritch 3/Wizard 5. Could even technically go Fighter 2 if you just cared about the action surge aspect but he's more focused as a fighter and just wanted bonus spellcasting, rather than focusing on the inverse. May 1, 2020 at 5:02 • @Izzy it's been pointed out by various people that that's not the correct interpretation of the rules and it's even RAI based on a sage advice answering the exact same question here and another tweet from Crawford here. Also it's an incredibly niche thing to do that wouldn't even be a concern in most campaigns. May 1, 2020 at 14:21 • Thanks for clarifying. That is an interesting, though incredibly unintuitive, ruling. I deleted my comment. – Izzy May 1, 2020 at 15:20 ## 5 levels of Wizard is fastest, but 6 more levels of Eldritch Knight fighter is not much slower Your understanding is essentially correct; a Wizard X/Eldritch Knight 7 usually doesn't get fireball any faster than if they were just a Wizard X (the exception has to do with certain readings of the Wizard spell-transcription rules that are not commonly used in play and make single-classed wizards weaker and multi-classed wizards overwhelmingly more powerful). Eldritch Knights get fireball at level 13, so multiclassing the 5 levels of Wizard it takes only gets you fireball 1 level earlier. Your player would be better advised to just wait to get it natively from the subclass or to try and negotiate a respec (Fighter 2/Wizard 5 can also cast fireball via action surge). • Can you elaborate on the "certain readings" comment? The main point of contention I could think of is that, technically, eldritch knight goes off the same spell list as a wizard, so there should be some "benefit" of not actually multiclassing that much. By the book that wouldn't seem to change the math though. May 1, 2020 at 5:00 • @Redrascal Not being able to copy spellscrolls/spell book spells of a level you have slots for via multiclassing is the majority reading of the text, but it's actually less textually supported than being able to do that. That said, it's not very good for most games to take the interpretation that Wizard1/Cleric 19, for example, can learn whatever wizard spells they want from spell scrolls. It's not particularly relevant here, because the Eldritch Knight half doesn't have 3rd level slots and if it did the player could just grab fireball May 1, 2020 at 5:24 • But essentially if they were Wizard 1/EK7, they could, with certain interpretations, cast 2nd level wizard spells via shenanigans. I'm including the parenthetical just to avoid making absolute statements that are wrong, basically. May 1, 2020 at 5:25 • "When you find a wizard spell of 1st level or higher, you can add it to your spellbook if it is of a level for which you have spell slots and if you can spare the time to decipher and copy it." You're correct it seems in that the only real determining factor of copying a spell into your spellbook is the material cost + "having the spell slot" and the rules for preparing spells as a wizard don't seem to directly contradict the loose interpretation. That's clearly problematic though for the reasons you described. May 1, 2020 at 6:37 • Fyi the new printing of the players handbook has changed the wording to" a level of spell you can prepare". So it's no longer an issue. May 1, 2020 at 8:49 Yes, that is correct. The short version is: you determine spell slots by your combined levels, but spells known by your individual class levels.
SCEC Award Number 14060 View PDF Proposal Category Collaborative Proposal (Integration and Theory) Proposal Title Forecasting focal mechanisms and assessing their skill Investigator(s) Name Organization David Jackson University of California, Los Angeles Yan Kagan University of California, Los Angeles Other Participants SCEC Priorities 2a, 2b, 2d SCEC Groups Seismology, CSEP, EFP Report Due Date 03/15/2015 Date Report Submitted N/A Project Abstract Forecasts of the focal mechanisms of future earthquakes are important for seismic hazard estimates and other models of earthquake occurrence. The method was originally proposed by Kagan & Jackson in 1994. An important problem is how to evaluate the skill of the focal mechanism forecast and optimize this forecast. In our recent paper (Kagan & Jackson, 2014) we started to investigate this problem. In previous publications we reported forecasts of 0.5 degrees spatial resolution, covering the latitude range from -75 to +75 degrees, based on the Global Central Moment Tensor earthquake catalog. In this project we perform a high-resolution global forecast of earthquake rate density as a function of location, magnitude, and focal mechanism. In these forecasts we've improved the spatial resolution to 0.1 degrees and the latitude range from pole to pole. Our focal mechanism estimates require distance-weighted combinations of observed focal mechanisms within 1000 km of each grid point. Simultaneously we calculate an average rotation angle between the forecasted mechanism and all the surrounding mechanisms. This average angle reveals the level of tectonic complexity of a region and indicates the potential accuracy of the prediction. Thus deformation complexity displays itself in the average rotation angle and in the Gamma-index. Initially we have used the GCMT catalog, which has a significant number of shallow earthquakes, that allow us to test forecast verification procedures. However, the large number of parameters needed to evaluate future focal mechanisms makes such work time-intensive. We constructed a simple tentative solution but extensive additional theoretical and statistical analysis is needed. Intellectual Merit Our ultimate objective is to construct a model for computing and testing the probability that an earthquake of any size will occur within a specified region and time and determine its possible focal mechanism. The advantage of such our approach is that earthquake rate prediction can be adequately combined with focal mechanism forecast, if both are based on the likelihood scores, resulting in a general forecast optimization. These aims correspond specifically to SCEC research objectives. Broader Impacts Evaluating the future rate of earthquake occurrence in time-space-magnitude-focal mechanism orientation in any given region is important for designing critical facilities, for comparing earthquake and tectonic moment rates, and for understanding the relationship of earthquakes to stress, material properties, fault and plate geometry, and many other features which might affect earthquake rupture. The developed method can be used by engineers and decision makers to estimate earthquake hazards. This project provided technical experience and training to UCLA graduate students Anne Strader. Exemplary Figure Figure 1 Distribution of rotation angles for shallow (depth 0--70~km) earthquakes in the GCMT catalog, 1977--2007/2008--2012, = 6$~km, latitude range$[ 75^\circ S-75^\circ N ]$, earthquake number =1069$. Scatterplot of interdependence of the predicted $\Phi_1$ and observed $\Phi_2$ angles (Eqs.~\ref{eq0},\ref{eq-1}). Both angles are calculated for the cells in which earthquakes of the test period occurred. Blue lines from top to bottom are 75\%, 50\%, and 25\% $\Phi_2$ quantiles for a $\Phi_1$ angle subdivision with equal number of events in 10 subsets.
other querymodes : Identifierquery Coordinatequery Criteriaquery Referencequery Basicquery Scriptsubmission TAP Outputoptions Help 2011A&A...528A..10P - Astronomy and Astrophysics, volume 528A, 10-10 (2011/4-1) Nearby early-type galaxies with ionized gas. VI. The Spitzer-IRS view. Basic data set analysis and empirical spectral classification. PANUZZO P., RAMPAZZO R., BRESSAN A., VEGA O., ANNIBALI F., BUSON L.M., CLEMENS M.S. and ZEILINGER W.W. Abstract (from CDS): A large fraction of early-type galaxies (ETGs) shows emission lines in their optical spectra, mostly with LINER characteristics. Despite the number of studies, the nature of the ionization mechanisms is still debated. Many ETGs also show several signs of rejuvenation episodes. We aim to investigate the ionization mechanisms and the physical processes of a sample of ETGs using mid-infrared spectra. We present here low resolution Spitzer-IRS spectra of 40 ETGs, 18 of which from our proposed Cycle 3 observations, selected from a sample of 65 ETGs showing emission lines in their optical spectra. We homogeneously extract the mid-infrared (MIR) spectra, and after the proper subtraction of a passive'' ETG template, we derive the intensity of the ionic and molecular lines and of the polycyclic aromatic hydrocarbon (PAH) emission features. We use MIR diagnostic diagrams to investigate the powering mechanisms of the ionized gas. The mid-infrared spectra of early-type galaxies show a variety of spectral characteristics. We empirically sub-divide the sample into five classes of spectra with common characteristics. Class-0, accounting for 20% of the sample, are purely passive ETGs with neither emission lines nor PAH features. Class-1 show emission lines but no PAH features, and account for 17.5% of the sample. Class-2, in which 50% of the ETGs are found, as well as having emission lines, show PAH features with unusual ratios, e.g. 7.7µm/11.3µm≤2.3. Class-3 objects (7.5% of the sample) have emission lines and PAH features with ratios typical of star-forming galaxies. Class-4, containing only 5% of the ETGs, is dominated by a hot dust continuum. The diagnostic diagram [NeIII]15.55µm/[NeII]12.8µm vs. [SIII]33.48µm/[SiII]34.82µm, is used to investigate the different mechanisms ionizing the gas. According to the above diagram most of our ETGs contain gas ionized via either AGN-like or shock phenomena, or both. Most of the spectra in the present sample are classified as LINERs in the optical window. The proposed MIR spectral classes show unambiguously the manifold of the physical processes and ionization mechanisms, from star formation, low level AGN activity, to shocks (H2), present in LINER nuclei. Journal keyword(s): galaxies: elliptical and lenticular, cD - galaxies: fundamental parameters - galaxies: evolution - galaxies: ISM Full paper 2022.07.01-18:57:20
# zbMATH — the first resource for mathematics Inessential combinations and colorings of models. (Russian, English) Zbl 1079.03022 Sib. Mat. Zh. 44, No. 5, 1132-1141 (2003); translation in Sib. Math. J. 44, No. 5, 883-890 (2003). Summary: We define the operations of an inessential combination and an almost inessential combination of models and theories. We establish basedness for an (almost) inessential combination of theories. We also establish that the properties of smallness and $$\lambda$$-stability are preserved upon passing to (almost) inessential combinations of theories. We define the notions of coloring of a model, colored model, and colored theory, and transfer the assertions about combinations to the case of colorings. We characterize the inessential colorings of a polygonometry. ##### MSC: 03C45 Classification theory, stability, and related concepts in model theory Full Text:
Up to $10^6$: $\sigma(8n+1) \mod 4 = OEIS A001935(n) \mod 4$ (Number of partitions with no even part repeated ) - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-21T23:28:41Z http://mathoverflow.net/feeds/question/59178 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/59178/up-to-106-sigma8n1-mod-4-oeis-a001935n-mod-4-number-of-partition Up to $10^6$: $\sigma(8n+1) \mod 4 = OEIS A001935(n) \mod 4$ (Number of partitions with no even part repeated ) joro 2011-03-22T12:13:22Z 2011-03-23T13:31:07Z <p>Up to $10^6$: </p> <p>$\sigma(8n+1) \mod 4 = OEIS A001935(n) \mod 4$ </p> <p><a href="https://oeis.org/A001935" rel="nofollow">A001935 Number of partitions with no even part repeated</a></p> <p>Is this true in general?</p> <p>It would mean relation between restricted partitions of $n$ and divisors of $8n+1$.</p> <p>Another one up to $10^6$ is:</p> <p>$\sigma(4n+1) \mod 4 = A001936(n) \mod 4$</p> <p><a href="https://oeis.org/A001936" rel="nofollow">A001936 Expansion of q^(-1/4) (eta(q^4) / eta(q))^2 in powers of q</a></p> <p>$\sigma(n)$ is sum of divisors of $n$.</p> <blockquote> <p>sigma(8n+1) mod 4 starts: 1, 1, 2, 3, 0, 2, 1, 0, 0, 2, 1, 2, 2, 0, 2, 1, 0, 2, 0, 2, 0, 3, 0, 0, 2, 0, 0, 0, 3, 2</p> <p>sigma(4n+1) mod 4 starts: 1, 2, 1, 2, 2, 0, 3, 2, 0, 2, 2, 2, 1, 2, 0, 2, 0, 0, 2, 0, 1, 0, 2, 0, 2, 2</p> </blockquote> <p><strong>Update</strong></p> <p>Up to 10^7</p> <p>A001935 mod 4 is zero for n = 9m+4 or 9m+7</p> <p>A001936 mod 4 is zero for n = 9m+5 or 9m+8</p> <p><a href="http://mathoverflow.net/questions/59192/is-oeis-a001935-number-of-partitions-with-no-even-part-repeated-efficiently-com" rel="nofollow">Question about computability</a></p> http://mathoverflow.net/questions/59178/up-to-106-sigma8n1-mod-4-oeis-a001935n-mod-4-number-of-partition/59185#59185 Answer by Gjergji Zaimi for Up to $10^6$: $\sigma(8n+1) \mod 4 = OEIS A001935(n) \mod 4$ (Number of partitions with no even part repeated ) Gjergji Zaimi 2011-03-22T14:00:00Z 2011-03-22T16:14:18Z <p>Let's call A001936(n) by $a(n)$. Here is a sketch of why $$a(n)\equiv \sigma(4n+1)\pmod{4}$$ Firs note that the generating function of $a(n)$ is $$A(x)=\sum_{n\geq 0}a(k)x^n=\prod_{k\geq 1}\left(\frac{1-x^{4k}}{1-x^k}\right)^2$$ for $\sigma(2n+1)$ the generating function is $$B(x)=\sum_{k\geq 0}\sigma(2k+1)x^k=\prod_{k\geq 0}(1-x^k)^4(1+x^k)^8$$ So $$B(x)\equiv \prod_{k\geq 1}(1+x^{2k})^2(1+x^{4k})^2\equiv \prod_{k\geq 1}(\frac{1-x^{8k}}{1-x^{2k}})^2\equiv A(x^2)\pmod{4}$$ Now the proof is complete once we know that $$B(x)\equiv \sum_{k\geq 0} \sigma(4n+1)x^{2n}\pmod{4}$$ this is an other way of saying $\sigma(4n-1)$ is divisible by $4$, which can be shown by pairing up the divisors $d+\frac{4n-1}{d}\equiv 0\pmod{4}$.</p> <p>The proof for the other congruence is similar, but slightly longer, I might update this post later to include it. </p> <hr> <p>Let's prove that $\sigma(8n+1)\equiv q(n)\pmod{4}$, where $q(n)$ is the number of partitions with no even part repeated. The generating function is $$Q(x)=\sum_{n\geq 0}q(n)x^n=\prod_{k\geq 1}\frac{1-x^{4k}}{1-x^k}$$ Since we know from above that $$\sum_{n\geq 0}\sigma(4n+1)x^{2n}\equiv \prod_{k\geq 1}(1+x^{2k})^2(1+x^{4k})^2 \pmod{4}$$ we conclude that $$L(x)=\sum_{n\geq 0}\sigma(4n+1)x^n\equiv Q(x)^2 \pmod{4}$$ so that $$\sum_{n\geq 0} \sigma(8n+1)x^{2n}\equiv \frac{L(x)+L(-x)}{2}\pmod{4}$$ So to finish off the proof we need the following $$\frac{Q(x)^2+Q(-x)^2}{2}\equiv Q(x^2)\pmod{4}$$ <s>which I will leave as an exercise</s> Actually let me write the proof, just to make sure I didn't mess up calculations. This reduces to proving $$\frac{\prod_{k\geq 1}(1+x^{2k})^4(1+x^{2k-1})^2+\prod_{k\geq 1}(1+x^{2k})^4(1-x^{2k-1})^2}{2}$$ $$\equiv \prod_{k\geq 1}(1+x^{4k-2})(1+x^{4k})^2 \pmod{4}$$ and since $$(1+x^{2k})^4\equiv (1+x^{4k})^2 \pmod{4}$$ this reduces to $$\frac{\prod_{k\geq 1}(1+x^{2k-1})^2+\prod_{k\geq 1}(1-x^{2k-1})^2}{2}\equiv \prod_{k\geq 1} (1+x^{4k-2})\pmod{4}$$ but we can write $$\prod_{k\geq 1}(1-x^{2k-1})^2\equiv \left(\prod_{k\ geq 1}(1+x^{2k-1})^2\right) \left(1-4\sum_{k\geq 1}\frac{x^{2k-1}}{(1+x^{2k-1})^2}\right)\pmod{8}$$ therefore now we have to show $$\prod_{k\geq 1}(1+x^{2k-1})^2\left(1-2\sum_{k\geq 1}\frac{x^{2k-1}}{(1+x^{2k-1})^2}\right)\equiv \prod_{k\geq 1}(1+x^{4k-2})\pmod{4}$$ Now everything is clear since $$\prod_{k\geq 1}(1+x^{2k-1})^2\equiv \prod_{k\geq 1}(1+x^{4k-2})\left(1+2\sum_{k\geq 1}\frac{x^{2k-1}}{(1+x^{2k-1})^2}\right)\pmod{4}$$</p>
One of the well-established methods for causal inference is based on the Inverse Propensity Weighting (IPW). In this post we will use a simple example to build an intuition for IPW. Specifically, we will see how IPW is derived from a simple weighted average in order to account for varying treatment assignment rates in causal evaluation. Let’s consider the simple example where we want to estimate the average effect of running a marketing coupon campaign on customer spending. We run the campaign in two stores by randomly assigning a coupon to existing customers. Suppose both stores have same number of customers and, unknown to us, spending among treated customers is distributed as $$N(20,3^2)$$ and $$N(40,3^2)$$ in stores 1 and 2, respectively. Throughout the example $$Y_i(1)$$ represents an individual’s spending if they receive a coupon, $$T_i=1$$, and $$Y_i(0)$$ represents their spending if they don’t, $$T_i=0$$. These random variables are called potential outcomes. The observed outcome $$Y_i$$ is related to potential outcomes as follows: $Y_i = Y_i(1)T_i + Y_i(0)(1-T_i)$ Our estimand, the thing that we want to estimate, is the population mean spending given a coupon, $$E[Y_i(1)]$$. If we randomly assign coupons to the same number of customers in both stores, we can get an unbiased estimate of this by simply averaging the observed spending of the treated customers, which is $$0.5*\20 + 0.5*\40 = \30$$. Mathematically, this looks as follows: \begin{align*}E[Y_i|T_i=1] &= E[Y_i(1)T_i + Y_i(0)(1-T_i)|T_i=1]\\ &= E[Y_i(1)|T_i=1]\\ &= E[Y_i(1)] \end{align*} where the first line is due to the potential outcomes, and the third line follows from random assignment of treatment, which makes potential outcomes independent of treatment assignment. $Y_i(1), Y_i(0) \perp T_i$ ### Simple Average Let’s define a function that generates a sample of $$2000$$ customers, randomly assigns $$50%$$ of them to treatment in both stores, and records their average spending. Let’s also run a simulation that calls this function for $$1000$$ times. def run_campaign(biased=False): true_mu1treated , true_mu2treated = 20 , 40 n, p , obs = 1, .5 , 2000 # number of trials, probability of each trial, # number of observations store = np.random.binomial(n, p, obs)+1 df = pd.DataFrame({'store':store}) probtreat1 = .5 if biased: probtreat2 = .9 else: probtreat2 = .5 treat = lambda x: int(np.random.binomial(1, probtreat1, 1))\ if x==1 else int(np.random.binomial(1, probtreat2, 1)) spend = lambda x: float(np.random.normal(true_mu1treated, 3, 1))\ if (x[0]==1 and x[1]==1)\ else ( float(np.random.normal(true_mu2treated, 3, 1) ) ) df['treated'] = df['store'].apply(treat) df['spend'] = df[['store','treated']].apply(tuple,1).apply(spend) simple_value_treated = np.mean(df.query('treated==1')['spend']) return [simple_value_treated] sim = 1000 values = Parallel(n_jobs=4)(delayed(run_campaign)() for _ in tqdm(range(sim))) results_df = pd.DataFrame(values, columns=['simple_treat']) The following plot shows us that the distribution of the average spending is centered around the true mean. Now, suppose for some reason the second store assigned coupons to $$90\%$$ of the customers, whereas the first store assigned it to $$50\%$$. What happens if we ignore this and use the same approach as previously and take an average of all treated customers’ spending? Because customers of the second store have a higher treatment rate, their average spending will take a larger weight in our estimate and thereby result in an upward bias. In other words, we no longer have a truly randomized experiment because the probability of receiving a coupon now depends on the store. Moreover, because treated customers in the two stores also have substantially different spending on average, the store a customer belongs to is a confounding variable in causal inference speak. Mathematically, if we use the simple average spending of treated customers, this time, instead of having this: \begin{align*}E[Y_i|T_i=1] = E[Y_i(1)|T_i=1]= E[Y_i(1)] \end{align*} we end up with this: \begin{align*}E[Y_i|T_i=1] = E[Y_i(1)|T_i=1]> E[Y_i(1)] \end{align*} Indeed, repeating the simulation and plotting the results, we see that the distribution of the average spending is now centered far from the true mean. sim = 1000 values = Parallel(n_jobs=4)(delayed(run_campaign)(biased=True) for _ in tqdm(range(sim))) results_df = pd.DataFrame(values, columns=['simple_treat']) ### Weighted Average All is not lost, however. Since we know that our experiment was messed up because assignment rates were different between stores, we can correct it by taking a weighted average of treated customers’ spending, where weights represent the proportion of customers in each store. This means, we can reclaim random assignment of treatment once we condition on the store information, $Y_i(1), Y_i(0) \perp T_i|X_i$ where $$X_i \in \lbrace{S_1,S_2\rbrace}$$ represents store membership of customer $$i$$, and obtain unbiased estimates of our causal estimand, $$E[Y_i(1)]$$. The math now works as follows: \begin{align*}E[Y_i(1)] =& E[E[Y_i(1)|X_i]]\\ =& E[E[Y_i(1)|T_i=1,X_i]]\\ =& E[Y_i|T_i=1,X_i=S_1] p(X_i=S_1)+\\ & E[Y_i|T_i=1,X_i=S_2] p(X_i=S_2) \end{align*} where the first equation is due to the law of iterated expectations and the second one is due to conditional independence. Let $$n_1$$ and $$n_2$$ denote the number of customers in both stores. Similarly, let $$n_{1T}$$ and $$n_{2T}$$ represent the number of treated customers in both stores. Then the above estimator can be computed from the data as follows: $$$\underbrace{(\frac{n_1}{n})}_{\text{Prop. of cust. in }S_1} \underbrace{\frac{1}{n_{1T}}\sum_{i \in S_{1}, i \in T} Y_i}_{\text{Mean spending of treated in }S_1} + \underbrace{(\frac{n_2}{n})}_{\text{Prop. of cust. in }S_2} \underbrace{\frac{1}{n_{2T}}\sum_{i \in S_{2}, i \in T} Y_i}_{\text{Mean spending of treated in }S_2}$$$ Sure enough, if we repeat the previous sampling process def run_campaign2(): true_mu1treated , true_mu2treated = 20, 40 n, p , obs = 1, .5 , 2000 # number of trials, probability of each trial, # number of observations store = np.random.binomial(n, p, obs)+1 df = pd.DataFrame({'store':store}) probtreat1 = .5 probtreat2 = .9 treat = lambda x: int(np.random.binomial(1, probtreat1, 1)) if x==1 else int(np.random.binomial(1, probtreat2, 1)) spend = lambda x: float(np.random.normal(true_mu1treated, 3, 1)) if (x[0]==1 and x[1]==1) else ( float(np.random.normal(true_mu2treated, 3, 1) ) ) df['treated'] = df['store'].apply(treat) df['spend'] = df[['store','treated']].apply(tuple,1).apply(spend) simple_value_treated = np.mean(df.query('treated==1')['spend']) prob1 = df.query('store==1').shape[0]/df.shape[0] prob2 = df.query('store==2').shape[0]/df.shape[0] est_mu1treated = np.mean(df.query('treated==1 & store==1')['spend']) est_mu2treated = np.mean(df.query('treated==1 & store==2')['spend']) weighted_value_treated = prob1*est_mu1treated + prob2*est_mu2treated return [simple_value_treated, weighted_value_treated] sim=1000 values = Parallel(n_jobs=4)(delayed(run_campaign2)() for _ in tqdm(range(sim)) ) results_df = pd.DataFrame(values, columns=['simple_treat','weighted_treat']) we see that the average of weighted averages is again right on the true mean. Let’s now do some algebraic manipulation by rewriting the mean spending in store 1: $\frac{1}{n_{1T}}\sum_{i \in S_{1}, i \in T} Y_i = \frac{1}{n_1} \sum_{i \in S_{1}, i \in T} \frac{Y_i}{(n_{1T}/n_1)} = \frac{1}{n_1} \sum_{i \in S_{1}} \frac{T_i }{(n_{1T}/n_1)}Y_i$ Doing the same for store 2 and plugging them back into (1) we have the following: $$$(\frac{n_1}{n})\frac{1}{n_1} \sum_{i \in S_{1}} \frac{T_i }{(n_{1T}/n_1)}Y_i + (\frac{n_2}{n}) \frac{1}{n_{2}}\sum_{i \in S_{2}} \frac{T_i }{(n_{2T}/n_2)}Y_i$$$ Denote the proportion of treated customers in store 1 as $$p(S_{1i}) = (n_{1T}/n_1)$$ and similarly for store 2, then we can simplify (2) into: $$$\frac{1}{n} \sum^n_{i=1}\frac{T_i}{p(X_i)}Y_i$$$, where $$p(X_i)$$ is the probability of receiving treatment conditional on the confounding variable, aka the propensity score, $p(X_i) = P[T_i = 1 |X_i]$ Notice, we started with one weighted average and ended up with just another weighted average that uses $$\frac{T_i}{p(X_i)}$$ as weights. This is the well-known inverse propensity weighted estimator. Running the previous analysis with this estimator def run_campaign3(): true_mu1treated , true_mu2treated = 20, 40 n, p , obs = 1, .5 , 2000 # number of trials, probability of each trial, number of observations store = np.random.binomial(n, p, obs)+1 df = pd.DataFrame({'store':store}) probtreat1 = .5 probtreat2 = .9 treat = lambda x: int(np.random.binomial(1, probtreat1, 1)) if x==1 else int(np.random.binomial(1, probtreat2, 1)) spend = lambda x: float(np.random.normal(true_mu1treated, 3, 1)) if (x[0]==1 and x[1]==1) else ( float(np.random.normal(true_mu2treated, 3, 1) ) ) df['treated'] = df['store'].apply(treat) df['spend'] = df[['store','treated']].apply(tuple,1).apply(spend) prob1 = df.query('store==1').shape[0]/df.shape[0] prob2 = df.query('store==2').shape[0]/df.shape[0] simple_value_treated = np.mean(df.query('treated==1')['spend']) #estimate propensity score: ps1 = df.query('treated==1 & store==1').shape[0]/df.query('store==1').shape[0] ps2 = df.query('treated==1 & store==2').shape[0]/df.query('store==2').shape[0] df['ps'] = pd.Series(np.where(df['store']==1, ps1, ps2)) ipw_value_treated = np.mean( (df['spend']*df['treated'])/df['ps']) return [simple_value_treated, ipw_value_treated] sim=1000 values = Parallel(n_jobs=4)(delayed(run_campaign3)() for _ in tqdm(range(sim)) ) results_df = pd.DataFrame(values, columns=['simple_treat','ipw_treat']) give us the same unbiased estimate as before. ### Estimating the Average Treatment Effect Now, our ultimate goal is to learn the average incremental spending that the marketing campaign has generated, aka the average treatment effect. To do that we need to also estimate the the population mean spending not given a coupon, $$E[Y_i(0)]$$ and compare it against $$E[Y_i(1)]$$. Our estimand is now this: $\tau = E[Y_i(1)] - E[Y_i(0)]$ Towards this, first we repeat the same argument for non-treated and obtain an unbiased estimate for $$E[Y_i(0)]$$ as follows: $\begin{equation*}\frac{1}{n} \sum^n_{i=1}\frac{(1-T_i)}{1-p(X_i)}Y_i \end{equation*}$ and finally combine them into estimating $$\tau$$: $\hat{\tau} = \frac{1}{n} \sum^n_{i=1}\frac{T_i}{p(X_i)}Y_i - \frac{1}{n} \sum^n_{i=1}\frac{(1-T_i)}{1-p(X_i)}Y_i$ Let’s now extend our previous analysis into estimating the impact of the campaign. Suppose spending among non-treated customers is distributed as $$N(10,2^2)$$ and $$N(30,2^2)$$ in stores 1 and 2, respectively, so that the true effect of the campaign is $$\10$$ in both stores, and therefore we have $$\tau = \10$$. def run_campaign4(): true_mu1treated , true_mu2treated = 20, 40 true_mu1control , true_mu2control = 10, 10 n, p , obs = 1, .5 , 2000 # number of trials, probability of each trial, number of observations store = np.random.binomial(n, p, obs)+1 df = pd.DataFrame({'store':store}) probtreat1 = .5 probtreat2 = .9 treat = lambda x: int(np.random.binomial(1, probtreat1, 1)) if x==1 else int(np.random.binomial(1, probtreat2, 1)) spend = lambda x: float(np.random.normal(true_mu1treated, 3, 1)) if (x[0]==1 and x[1]==1) else ( float(np.random.normal(true_mu2treated, 3, 1) ) if (x[0]==2 and x[1]==1) else (float(np.random.normal(true_mu1control, 2, 1) ) if (x[0]==1 and x[1]==0) else float(np.random.normal(true_mu2control, 2, 1)) ) df['treated'] = df['store'].apply(treat) df['spend'] = df[['store','treated']].apply(tuple,1).apply(spend) prob1 = df.query('store==1').shape[0]/df.shape[0] prob2 = df.query('store==2').shape[0]/df.shape[0] simple_value_treated = np.mean(df.query('treated==1')['spend']) simple_value_control = np.mean(df.query('treated==0')['spend']) simple_tau = simple_value_treated - simple_value_control est_mu1treated = np.mean(df.query('treated==1 & store==1')['spend']) est_mu2treated = np.mean(df.query('treated==1 & store==2')['spend']) weighted_value_treated = prob1*est_mu1treated + prob2*est_mu2treated est_mu1control = np.mean(df.query('treated==0 & store==1')['spend']) est_mu2control = np.mean(df.query('treated==0 & store==2')['spend']) weighted_value_control = prob1*est_mu1control + prob2*est_mu2control weighted_tau = weighted_value_treated - weighted_value_control #estimate propensity score: ps1 = df.query('treated==1 & store==1').shape[0]/df.query('store==1').shape[0] ps2 = df.query('treated==1 & store==2').shape[0]/df.query('store==2').shape[0] df['ps'] = pd.Series(np.where(df['store']==1, ps1, ps2)) ipw_value_treated = np.mean( (df['spend']*df['treated'])/df['ps']) ipw_value_control = np.mean( (df['spend']*(1-df['treated']) )/(1-df['ps'] )) ipw_tau = ipw_value_treated - ipw_value_control return [simple_tau, weighted_tau, ipw_tau] sim=1000 values = Parallel(n_jobs=4)(delayed(run_campaign4)() for _ in tqdm(range(sim)) ) results_df = pd.DataFrame(values, columns=['simple_tau','weighted_tau','ipw_tau']) As shown below, both the weighted average and the IPW estimator are centered around the true effect of $$\20$$, whereas the distribution of the simple average without controlling for store membership is centered around $$\23$$, $$\%15$$ larger than the true effect. ### Conclusion The IPW estimator has a long history in causal inference. The goal of this post was to develop an intuition for this well-known estimator through a simple example. Using a marketing case we have seen that the hallmark of this method is to correct for unequal treatment assignment mechanisms. Moreover, we have shown that the method is an extension of the weighted average estimator. ### Code The original notebook can be found in my repository. ### References [1] Richard K. Crump, V. Joseph Hotz, Guido W. Imbens, Oscar A. Mitnik. Dealing with limited overlap in estimation of average treatment effects. (2009), Biometrika. [2] Stefan Wager, Stats 361: Causal Inference (Spring 2020), Stanford University. Posted on: January 7, 2023 Length:
# Is my matrix perturbation analysis legitimate? I am not a matrix theorist, or numerical linear algebra expert, but I have a problem and my proposed solution leads me to a question that I cannot answer. I can give more details, but the gist is that I have a matrix $A$ which depends (smoothly) on a variable $x$. For a nice value of $x$, call it $x_0$, I can symbolically compute lots (but not all) about $A$ and its eigenstructure. In particular, I can find its leading eigenvalue $\lambda$, the corresponding eigenvector $v$, and I also know the derivative $K$ of the leading eigenvalue wrt $x$. I want to know what happens when $x_0$ is perturbed to $x_0 + \epsilon$ where I don't care how small $\epsilon$ is. In particular, I want to know how the entries in the leading eigenvector change (relative to each other). So what I want to do is to express everything as series expansions, truncated at the linear term, in $\epsilon$, so I will have • $A$ becomes $A + \epsilon A'$ (where I know $A'$) • $v$ becomes $v + \epsilon v'$ (where $v'$ is what I am trying to find) • $\lambda$ becomes $\lambda + K \epsilon$ (because I know $K$) Then I can solve the equation $$(A + \epsilon A') (v + \epsilon v') = (\lambda + K \epsilon) (v + \epsilon v')$$ and determine $v'$, thus discovering how the entries in the leading eigenvector are perturbed as $x$ changes. The only trouble is that I know that this general procedure is not always valid, because it makes various assumptions about the existence / convergence of these series expansions that are not always true for nasty matrices. My matrix (at $x=x_0$) is probably not nasty, but I don't know how to tell for sure - I have tried leafing through Stewart & Sun, and even briefly attempted looking in Kato, but that made my head hurt. But I somehow feel that an experienced matrix perturbation analyst would just be able to tell immediately whether this is safe or not, and hence my question: Under what conditions on the matrix is it legitimate to analyse perturbations via the "expand in series and truncate" method outlined above? Here are some additional facts about my matrix that may or may not be of importance. Things I feel are probably good • at $x_0$ it has real eigenvalues • at $x_0$ the leading eigenvalue is simple • at $x_0$ the matrix is diagonalisable Things that may not be so good • at $x_0$ the matrix has a 6-dimensional nullspace (but all other eigenvalues are simple) • the matrix is not symmetric, nor is it non-negative Things that are probably good, but that I can only see empirically (i.e. by computer) • the matrix has distinct real eigenvalues throughout some interval of the form $(x_0, x_0+\epsilon)$ • working to high precision with Mathematica, everything that I have said above about how terms vary can be validated for specific small values of $\epsilon$
# Laplace expansion (potential) explained In physics, the Laplace expansion of potentials that are directly proportional to the inverse of the distance ( 1/r ), such as Newton's gravitational potential or Coulomb's electrostatic potential, expresses them in terms of the spherical Legendre polynomials. In quantum mechanical calculations on atoms the expansion is used in the evaluation of integrals of the inter-electronic repulsion. The Laplace expansion is in fact the expansion of the inverse distance between two points. Let the points have position vectors bf{r} and bf{r}' , then the Laplace expansion is 1 |r-r'| = infty \sum \ell=0 4\pi 2\ell+1 \ell \sum m=-\ell (-1)m r{\scriptscriptstyle< \ell \ell+1 }{r \scriptscriptstyle> } -m Y \ell(\theta, \varphi) m Y \ell(\theta', \varphi'). Here bf{r} has the spherical polar coordinates (r,\theta,\varphi) and bf{r}' has (r',\theta',\varphi') with homogeneous polynomials of degree \ell . Further r< is min(r, r′) and r> is max(r, r′). The function m Y \ell is a normalized spherical harmonic function. The expansion takes a simpler form when written in terms of solid harmonics, 1 |r-r'| = infty \sum \ell=0 \ell \sum m=-\ell (-1)m -m I \ell(r) m R \ell(r')   with |r|>|r'|. ## Derivation The derivation of this expansion is simple. By the law of cosines, 1 |r-r'| = 1 \sqrt{r2+(r')2-2rr'\cos\gamma } = \frac \quad\hbox\quad h := \frac . We find here the generating function of the Legendre polynomials P\ell(\cos\gamma) : P\ell(\cos\gamma)= 4\pi 2\ell+1 \ell \sum m=-\ell (-1)m -m Y \ell(\theta, \varphi) m Y \ell (\theta',\varphi') gives the desired result. ## Neumann Expansion A similar equation has been derived by Neumann[1] that allows expression of 1/r in prolate spheroidal coordinates as a series: 1 |r-r'| = 4\pi a infty \sum \ell=0 \ell \sum m=-\ell (-1)m (\ell-|m|)! (\ell+|m|)! |m| l{P} \ell (\sigma<) |m| l{Q} \ell (\sigma>) m(\arccos\tau,\varphi) Y \ell m* Y \ell (\arccos\tau',\varphi') where m l{P} \ell (z) and m l{Q} \ell (z) are associated Legendre functions of the first and second kind, respectively, defined such that they are real for z\in(1,infty) . In analogy to the spherical coordinate case above, the relative sizes of the radial coordinates are important, as \sigma<=min(\sigma,\sigma') and \sigma>=max(\sigma,\sigma') . ## References • Griffiths, David J. (David Jeffery). Introduction to Electrodynamics. Englewood Cliffs, N.J.: Prentice-Hall, 1981. ## Notes and References 1. Rüdenberg . Klaus . A Study of Two‐Center Integrals Useful in Calculations on Molecular Structure. II. The Two‐Center Exchange Integrals . The Journal of Chemical Physics . AIP Publishing . 19 . 12 . 1951 . 0021-9606 . 10.1063/1.1748101 . 1459–1477.
# Large prime numbers in encryption? I have recently been reading about encryption and the importance of prime numbers and I have some questions that I would really appreciate some answers to, if possible: 1. Is it correct that when creating encryption keys you take one large prime number, and then multiply it by another prime number to leave you with an even larger prime number? 2. If 1 is correct, then is it correct to say "the reason for the large prime number calculation is it is very difficult and time consuming to work out what the initial prime numbers were used in the original calculation"? 3. What constitutes as a large prime number? The reason for this question is I have been doing some reading about encryption as stated above, and assuming the above statements are correct I think I have a really simple formula for working out what the original prime numbers used were. I have tested this prime numbers all the way up to $299993$. And I must be doing this wrong because it can not be this simple, I assume someone will point out where I am going wrong. • In the context of cryptography based on integer factorization, 299993 (which is 20-bit) is a small prime, and has been so since at least the 1960s. In the present century, large never started before 250 bits, and ECM has been used to pull factors of about that size. – fgrieu Sep 21 '16 at 11:10 Is it correct that when creating encryption keys you take one large prime number multiple it by another prime number to leave you with a even larger prime number? Any number that is a multiple of two primes is by definition not prime. This creates a semiprime: a number that has only two prime factors. This approach is only used in a few cryptosystems, the most common of which is RSA. Many other cryptosystems exist that do not rely on integer factorization. Some of these systems (e.g., AES, ChaCha20) are symmetric algorithms unlike RSA, and some (e.g., ECC) are asymmetric like RSA. RSA is gradually being phased out in favor of modern systems based on elliptic curves. If 1 is correct, then is it correct to say "the reason for the large prime number calculation is it is very difficult and time consuming to work out what the initial prime numbers were used in the original calculation"? Yes. As far as we know, integer factorization is a hard problem. What constitutes as a large prime number? 2048-bit RSA is a typical current recommendation. In this case, the modulus (e.g., the semiprime) is the part that is 2048 bits, so each prime is roughly 1024 bits long. For scale, a 1024-bit prime will be over 150 decimal digits long, so they are quite large. • Many thanks for your response made it a lot clearer. – Joseph Howarth Sep 20 '16 at 9:49 • There are other cryptosystems based on the factorization being hard, e.g. Paillier and Rabin. Maybe put an actual 4096 bit number in the answer, to show how far a tiny number like $299993$ is from what is considered a number large enough. (Although I would say recommendations differ quite a bit, and 4096 is mentioned not that often - 2048 or 3072 are more common) – tylo Sep 20 '16 at 12:03 • @JosephHowarth, it looks like you may have accidentally created two accounts. If you are the same person as "Joe" who posted the question. If that is the case, follow this guidance for how to merge your accounts. – mikeazo Sep 20 '16 at 12:12 • In my experience RSA 2048 is the typical current recommendation. – CodesInChaos Sep 20 '16 at 15:02 • Updated to reflect comments by tylo and CodesinChaos. – Stephen Touset Sep 20 '16 at 19:06
# Where is a particle bound in a delta potential? I can picture a bound state in a harmonic oscillator, or in an infinite square well, but where is a particle bound in a delta potential? - This is a good question because it seems like you can only measure the particle with negative kinetic energy. That is, the spatial part where the particle is in the well has width zero. –  santa claus Mar 17 '13 at 19:40 ## 1 Answer Where a particle is, in my opinion, it's not a very good to question to ask in the context of quantum mechanics. You can solve the delta problem and then compute the probability density for the particle. That will give you information of where may the particle be if you try to measure its position. But before doing that, it does not make too much to ask where the particle is. - See my comment above. It sort of does make sense to ask this, because the probability that you measure the particle in the well itself is vanishingly small. So it seems you can only "measure" the particle with negative kinetic energy (though I'm sure the measurement itself would kick the electron out of the bound state -- perhaps you can elaborate on this.) –  santa claus Mar 17 '13 at 19:42 I'm sorry but I can't actually say anything about how to do the actual measurement, but it is indeed a very good thought, you should consider posting it as a question. –  Jorge Mar 17 '13 at 20:34
## Abstract In many different fields, social scientists desire to understand temporal variation associated with age, time period, and cohort membership. Among methods proposed to address the identification problem in age-period-cohort analysis, the intrinsic estimator (IE) is reputed to impose few assumptions and to yield good estimates of the independent effects of age, period, and cohort groups. This article assesses the validity and application scope of IE theoretically and illustrates its properties with simulations. It shows that IE implicitly assumes a constraint on the linear age, period, and cohort effects. This constraint not only depends on the number of age, period, and cohort categories but also has nontrivial implications for estimation. Because this assumption is extremely difficult, if not impossible, to verify in empirical research, IE cannot and should not be used to estimate age, period, and cohort effects. ## Introduction For more than a century, social scientists have attempted to separate cohort effects from age and period effects on various social phenomena, including mortality, disease rates, and inequality (e.g., Fu 2000; Holford 1983; Mason et al. 1973; O’Brien 2000; Winship and Harding 2008). Whereas age effects represent the variation associated with growing older, period effects refer to effects due to social and historical shifts—such as economic recessions and prevalent unemployment—that affect all age groups simultaneously. Cohort refers to a group of people who experience an event—such as birth—at the same age. Cohort effects are defined as the formative effects of social events on individuals at a specific period during their life course (Ryder 1965). Age-period-cohort (APC) models, in which the three variables are simultaneously considered in a statistical equation, have been the conventional framework for quantifying age, period, and cohort effects. Unfortunately, such APC models suffer from a logical identification problem: when any two of the three variables (age, period, or cohort) are known, the value of the third is determined—that is, because Cohort = Period – Age. Because of this exact linear dependency, there exist no valid estimates of the distinct effects of the three variables. Various methods have been developed to address this identification problem. For example, Mason et al. (1973) introduced the APC multiple classification model and suggested the constrained generalized linear model (CGLM) as a means of estimating the independent effects of age, period, and cohort. More recently, Fu (2000) and Yang and colleagues (2004) proposed a new APC method, called the intrinsic estimator (IE). They recommended IE as “a general-purpose method of APC analysis with potentially wide applicability in the social sciences” (Yang et al. 2008:1699) on the grounds that IE has desirable statistical properties such as unbiasedness and consistency. However, in this article, I show that IE cannot be used to recover the true age, period, and cohort effects because IE, like CGLM, imposes a constraint on parameter estimation that is difficult to verify using theories or empirical evidence; that is, the validity of IE relies on assumptions that are very difficult to verify in applied practice. In this sense, IE is no better than CGLM. In fact, IE is equivalent to the principal component estimator, an estimator with a potential for bias that was noted by its developers (Kupper et al. 1985). Unfortunately, this has not been understood by the community of demographers, sociologists, and epidemiologists who have used IE in a wide variety of research applications. As I demonstrate in this article, many researchers have misunderstood what IE actually estimates and how IE estimates should be interpreted, resulting in inappropriate applications of IE in empirical research and potentially misleading substantive conclusions. This article contributes to the literature in two ways: First, although O’Brien (2011a) clarified that IE assumes a special constraint—the null-vector constraint—on parameters, it is challenging for researchers to fully appreciate and evaluate the appropriateness of this constraint when applying IE in substantive studies. In this article, I derive an easily understood form of IE’s constraint on the linear components of age, period, and cohort effects so that the implications of using IE to estimate the true age, period, and cohort effects can be better understood.1 Second, although scholars agree that IE is a constrained estimator, they debate whether IE can provide reliable estimates of the true age, period, and cohort trends (see Fu et al. 2011; O’Brien 2011b). I address this debate using several types of simulated data generated based on social theories. By comparing IE estimates with the true effects in various circumstances, I show that IE does not work better than CGLM for recovering the true age, period, and cohort trends in empirical research. This article is organized as follows. I begin with an introduction of the APC multiple classification model and the identification problem. While reviewing the methodological challenge that has hampered APC research for decades, this section establishes a framework for discussing the nature and limitations of different constrained APC estimators, including IE and CGLM. I then review how IE’s developers have described IE and how applied researchers have understood and used it in substantive studies: the two are often not the same. As a result, many scholars have misunderstood IE, causing misuse of this technique in empirical research. To clarify this common misunderstanding and avoid further misuse, in the section on the linear constraint implied by IE, I derive the constraint that IE imposes on the linear components of age, period, and cohort effects. In the section on IE’s application scope that follows the technical discussion of IE’s linear constraint, I use simulations to demonstrate how this constraint affects estimation of age, period, and cohort effects. Based on these mathematical derivations and simulation evidence, I conclude that IE cannot and should not be used to estimate true age, period, and cohort effects without theoretical justification. ## The Identification Problem To develop a framework for understanding the nature of IE and other constrained estimators, I first review the identification problem that these methods are intended to address. In APC analysis, researchers have conventionally used an analysis of variance (ANOVA) model to separate the independent age, period, and cohort effects: $gEYij=μ+αi+βj+γk,$ (1) for age groups i = 1, 2, . . . , a; periods j = 1, 2, . . . , p; and cohorts k = 1, 2, . . . , (a + p – 1); where ∑ i = 1aαi = ∑ j = 1pβj = ∑ k = 1a + p − 1γk = 0. E(Yij) denotes the expected value of the outcome of interest Y for the ith age group in the jth period of time; g is the “link function”; αi denotes the mean difference from the global mean μ associated with the ith age category; βj denotes the mean difference from μ associated with the jth period; and γk denotes the mean difference from μ related to the membership in the kth cohort. The usual ANOVA constraint applies where the sum of coefficients for each effect is set to zero. For a normally distributed outcome Yij, the ANOVA model above can also be written in a generic regression fashion: $Y=Xb+ε,$ (2) where Y is a vector of outcomes; X is the design matrix; b denotes a parameter vector with elements corresponding to the effects of age, period, and cohort groups; and ε denotes random errors with distribution centered on zero. Then the estimated age, period, and cohort effects can be obtained using the ordinary least squares (OLS) method: $b^=XTX−1XTY.$ (3) Unfortunately, the inverse (XTX)− 1 does not exist because of the age-period-cohort linear dependency, so the parameter vector b is inestimable. This is the identification problem in APC analysis: no unique set of coefficients can be obtained because an infinite number of solutions give identical fits to the data. This identification problem can be shown more explicitly. For simplicity, suppose that the data used are perfect, without random or measurement errors, so that ε = 0; then the problem is mathematical rather than statistical, and the regression model is $Y=Xb.$ (4) Because of the linear dependency between age, period, and cohort, there exists a nonzero vector b0—a linear function of the design matrix X—such that the product of the design matrix and the vector equals zero: $Xb0=0.$ (5) In other words, b0 represents the null space of the design matrix X, which has dimension equal to 1. (The null space has dimension 1 by the specification of model (1), and the value of b0 is given in Eq. (10).) It follows that the parameter vector b can be decomposed into components: $b=b1+s⋅b0,$ (6) where s is an arbitrary real number corresponding to a specific solution to Eq. (4), and b1 is a linear function of the parameter vector b, corresponding to the projection of b on the nonnull space of the design matrix X, orthogonal to the null space. Thus, b1 and b0 are orthogonal to each other. That is, b1 is the part of b that is in the nonnull space of the design matrix X, orthogonal (perpendicular) to the null space, so that b0 is orthogonal to b1; that is, b1 ⋅ b0 = 0. Given Eqs. (4) and (6), the following equation must hold: $Y=Xb=Xb1+s⋅b0=Xb1+s⋅Xb0.$ (7) However, Xb0 = 0, and thus s ⋅ Xb0 = 0, so equation Eq. (7) is true for all values of s. That is, s can be any real number, and each distinct value of s gives a distinct solution to Eq. (4). Therefore, an infinite number of possible solutions exists for b, and no solution can be deemed the uniquely preferred or “correct” solution without additional constraints on b. To illustrate, suppose the data have three age groups, three periods, and five cohorts, and that error is zero for ease of presentation (and without loss of generality). Table 1 presents three different parameter vectors bT = (μ,α1,α2,α3,β1,β2,β3,γ1,γ2,γ3,γ4,γ5) arising from three different values of s: namely 0, 2, and 10. In the top panel of Table 2, the observed value in each cell is represented in terms of the unknown parameters αi, βj, and γk. The bottom panel of Table 2 shows the fitted values μ + αi + βj + γk based on Table 1’s three different values of s in the same tabular form as in the top panel of Table 2. Note that these three sets of fitted values are identical, although the parameter vectors in Table 1 differ. In fact, these parameter vectors are not just different; their age and period effects change directions depending on s, and the data cannot distinguish between different values of s. Taken together, Tables 1 and 2 show that for a single data set, an infinite number of possible solutions for age, period, and cohort effects exist; and each solution corresponds to a specific value of s. Therefore, any solution—or alternatively, none of these solutions—can be viewed as reflecting the “true” effects even though different values of s give radically different age, period, and cohort effects. In social science research, data inevitably contain random and/or measurement errors, so researchers will not have the perfect fit of the idealized data here; however, the fundamental identification problem remains. Various methods have been developed to address the identification problem and find a set of uniquely preferred estimates. In the following section, I consider IE and other solutions to the identification problem that impose a constraint on b. ## The Constrained Approach: IE and CGLM A large body of literature dating back to the 1970s has addressed the identification problem. Mason et al. (1973) explicated the identification problem in APC analysis and proposed the constrained generalized linear model (CGLM), a coefficient-constrained approach that has been used as a conventional method for APC analysis. This method places at least one identifying restriction on the parameter vector b in Eq. (2). For example, the effects of the first two age groups, periods, or cohorts are usually constrained to be equal based on theoretical or external information. With this additional constraint, the APC model becomes just-identified, and unique OLS and maximum likelihood (ML) estimators exist. However, such theoretical information often does not exist or cannot easily be verified. Also, different choices of identifying constraint can produce widely different estimates for age, period, and cohort effects. That is, CGLM estimates are quite sensitive to the choice of constraints (Glenn 2005; Rodgers 1982a, b). More recently, a group of scholars developed a new APC estimator, called the intrinsic estimator (IE). They argued that IE has clear advantages over CGLM2 and can produce valid estimates of the true age, period, and cohort effects (see Fu 2000, Fu and Hall 2006; Yang et al. 2004, 2008). The most compelling evidence they provided to support this claim is from a simulation in which IE and CGLM estimates were compared with the true effects of age, period, and cohort (see Yang et al. 2008:1718–1719). They concluded that IE outperforms CGLM because IE estimates are closer to the true parameters that generate the data than CGLM (Yang et al. 2008:1719–1722). This evidence could easily be interpreted as confirmation that IE produces unbiased estimates of the true age, period, and cohort effects. Unfortunately, few clarifications are provided, and the developers of IE themselves are sometimes unclear about what IE actually estimates. For example, they state that for a finite number of time periods of data, the IE produces an unbiased estimate of the coefficient vector. (Yang 2008:400) Because of its estimability and unbiasedness properties, the IE may provide a means of accumulating reliable estimates of the trends of coefficients across the categories of the APC accounting model. (Yang et al. 2008:1711) The IE, by its very definition and construction, satisfies the estimability condition. . . . If other estimators do indeed satisfy the estimability condition, then they also produce unbiased estimates of the A, P, and C effect coefficients. If not, then the estimates they produce are biased. (Yang et al. 2008:1710) Perhaps most importantly for empirical applications of APC analysis, the IE produces estimated age, period, and cohort coefficients and their standard errors in a direct way, without the necessity of choosing among a large array of possible constraints on coefficients that may or may not be appropriate for a particular analysis. (Yang et al. 2004:105) Many researchers conducting substantive APC analyses have interpreted these and other statements to mean that IE produces unbiased estimates of true age, period, and cohort effects. Consequently, they have used IE in empirical research to address substantive issues, including mortality, disease, and religious activity (e.g., Keyes and Miech 2013; Langley et al. 2011; Miech et al. 2011; Schwadel 2011; Winkler and Warnke 2013). These authors seem convinced that IE produces unbiased estimates of age, period, and cohort effects: Recent advances in modeling APC effects with repeated cross-sectional data allow age, period, and cohort effects to be simultaneously estimated without making subjective choices requiring constraining data or dropping age, period, or cohort indicators from the model. In particular, APC intrinsic estimator models provide unbiased estimates of regression coefficients for age groups, time periods, and birth cohorts (Fu 2000). (Schwadel 2011:183) The intrinsic estimator provides unbiased estimates of age, period, and cohort effects. (Schwadel 2011:184) The IE model has been recommended as a better alternative to the widely discussed constrained generalized linear model (CGLM) (Yang et al. 2004). We used the IE model to estimate individual effects of age, period, and cohort for males and females separately. (Langley et al. 2011:106) The IE is an approach that places a constraint on the model, but not a constraint that affects the estimation of regression parameters for age, period, and cohort in any way. That is, the regression parameter estimates are unbiased by the constraint placed, and a unique set of regression estimates can be estimated. (Keyes and Miech 2013:2) Unfortunately, claims of this sort are incorrect. As I demonstrate in this article, IE does impose constraints that are as consequential as those imposed by CGLM. To help researchers better understand the constraint imposed by IE and make informed decisions in choosing an APC estimator, I first derive an easily understood form of IE’s constraint. Because an unbiased and consistent estimator is desirable and necessary to produce reliable and valid results, I then address how IE’s constraint affects these key properties: • Unbiasedness: Is the expectation of IE the “true” age, period, and cohort effects? • Consistency: As the sample size increases, does IE converge to the “true” effects? ## The Linear Constraint Implied by IE To understand IE’s constraint and its implications for estimation, it is helpful to review IE’s conceptual foundation and computational algorithm. IE can be viewed as an extension of principal component (PC) analysis, a multipurpose technique that can be used to deal with identification problems when explanatory variables are highly correlated. By transforming correlated explanatory variables to a set of orthogonal linear combinations of these variables, called principal components, PC analysis can be a useful tool for reducing data redundancy and developing predictive models. In contrast, the goal of IE is neither data reduction nor prediction, but rather estimation of the effects of, and capturing the general trends of, age, period, and cohort.3 IE’s computational algorithm includes five steps: (1) transform the design matrix X to the PC space using its eigenvector matrix; (2) in the PC space, identify the “null eigenvector”—the special eigenvector that corresponds to an eigenvalue of zero—and the corresponding null subspace (with 1 dimension) and nonnull subspace (with m – 1 dimensions, where m denotes the number of coefficients to be estimated); (3) in the nonnull subspace of m – 1 dimensions, regress the outcome of interest using OLS or ML on the m – 1 PCs to obtain m – 1 coefficient estimates; (4) extend the m – 1 coefficient estimates to the whole PC space of dimension m by adding an element corresponding to the null eigenvector direction and arbitrarily setting it to zero; and (5) use the eigenvector matrix to transform the extended coefficient vector estimated in the PC space, including the added zero element, back to the original age-period-cohort space to obtain estimates for age, period, and cohort effects (see Yang 2008; Yang et al. 2008).4 The fourth step—extend the m – 1 coefficient estimates to the whole PC space of dimension m by adding an element corresponding to the null eigenvector direction and arbitrarily setting it to zero—carries the key assumption of the IE approach to APC analysis. This assumption is implicit yet has major implications for the validity and application of the IE approach. Specifically, setting the coefficient of the null eigenvector, s, to zero is equivalent to assuming that $b⋅b0=0.$ (8) That is, the projection of b on b0 is zero, where b and b0 are as defined in Eq. (6). Kupper and colleagues (1985) provided a closed-form representation for the eigenvector b0. Using vector notation,5 $b0=0,A,P,CT,$ (9) where $A=1−1+a2,…,a−1−1+a2$ $P=−1⋅1−1+p2,…,p−1−1+p2$ $C=1−a+p2,…,a+p−2−a+p2.$ For example, when a = 3 and p = 3 (i.e., for three age groups and three time periods), $b0=0,–1,0,1,0,−2,−1,0,1T,$ (10) where A = (–1,0), P = (1,0), and C = (–2,–1,0,1). What does Eq. (8) mean? What is the specific form of this constraint for data sets with varying number of age, period, and cohort groups? To illustrate, suppose that age, period, and cohort each have effects on the outcome variable that show a linear trend. Denote these trends as ka, kp, and kc, respectively; the intercepts for the three variables as ia, ip, and ic; and the overall mean as μ. Thus, the effects associated with the three age categories are ia, ia + ka, and ia + 2 ⋅  ka, respectively. Similarly, the effects related to the three periods are ip, ip + kp, and ip + 2 ⋅ kp, respectively. For the five cohorts, the effects are ic, ic + kc, ic + 2 ⋅ kc, ic + 3 ⋅ kc, and ic + 4 ⋅ kc, respectively. Then, the parameter vector, b, can be written as $b=μ,ia,ia+ka,ip,ip+kp,ic,ic+kc,ic+2⋅kc,ic+3⋅kcT,$ (11) where the last category of each variable is omitted as the reference group. According to the constraint for age effects in model (1), we know that $∑i=1aαi=ia+ia+ka+ia+2⋅ka=3⋅ia+3⋅ka=0,$ (12) which implies that $ia=−ka.$ (13) Similarly, it can be shown using the constraint for period and cohort effects in model (1) that $ip=−kp,$ (14) and $ic=−2⋅kc.$ (15) Using Eqs. (13), (14), and (15), Eq. (11) can be simplified as $b=μ,−ka,0,−kp,0,−2⋅kc,−kc,0,kcT.$ (16) Because the constraint that IE implicitly imposes is b ⋅ b0 = 0, by Eqs. (8), (10), and (16), the specific form of IE’s linear constraint (LC) for APC data with three age categories, three periods, and five cohorts are $b⋅b0=μ⋅0+−ka⋅−1+0⋅0+−kp⋅1+0⋅0+−2⋅kc⋅−2+−kc⋅−1+0⋅0+kc⋅1=ka−kp+6⋅kc=0.$ (17) In other words, when age, period, and cohort show linear trends, IE’s implicit constraint is that these linear trends must satisfy Eq. (17). If, in fact, the true age, period, and cohort trends do not satisfy this equation, then the implicit LC imposed by IE is incorrect. To illustrate the implications of IE’s LC, I simulate normally distributed data as follows. For those at age i in period j, the mean response is 10 + ka ⋅ agei + kp ⋅ periodj + kc ⋅ cohortij, and the standard deviation of error ε equals 0.1. The number of age and period groups is fixed at three each. I consider three sets of true ka, kp, and kc: (1) ka = 1, kp = 7, kc = 1; (2) ka =1, kp = 7, kc = 10; and (3) ka = 3, kp = 1, kc = 4. For each selection of true ka, kp, and kc, I simulate 1,000 such data sets by drawing random errors. As shown in Table 3, for data set 1, the true effects for the three age categories are –1, 0, and 1, respectively, so ka (the linear trend in age effects) equals 1. The period effects are –7, 0, and 7, respectively, so kp is 7. Similarly, since the cohort effects are –2, –1, 0, 1, and 2, kc is 1. For this data set, $ka−kp+6⋅kc=1−7+6⋅1=0.$ (18) That is, the relationship between the linear trends in the true age, period, and cohort effects satisfies Eq. (17), the LC implicit in IE. However, for data sets 2 and 3 generated by the other sets of true ka, kp, and kc in Table 3, Eq. (17) does not hold. Specifically, for the second set, ka = 1, kp = 7, and kc = 10, so $ka−kp+6⋅kc=1−7+6⋅10=54≠0.$ (19) And for the third set, ka = 3, kp = 1, and kc = 4, so $ka−kp+6⋅kc=3−1+6⋅4=26≠0.$ (20) Table 3 presents IE estimates, averaged over the 1,000 simulated data sets, for the three sets of age, period, and cohort effects. The bias of IE is estimated by the difference between the truth and the averaged IE estimates. Table 3 shows that for data set 1, IE yields good estimates because the true ka, kp, and kc in the data satisfy Eq. (17), the implicit LC that IE imposes. Specifically, the estimated slopes for age, period, and cohort are $k^a=0.999$, $k^p=7.001$, and $k^c=1.000$, respectively. In contrast, IE returns highly biased estimates, very different from the true effects, for the second and third data sets because the true ka, kp, and kc do not satisfy IE’s LC. For example, for data sets 2 and 3, the estimated age effects, averaged over the 1,000 simulations, show a downward trend ($k^a=−5.750$ for data set 2 and $k^a=−2.582$ for data set 3) when the true trend is upward. (The true age slopes are ka = 1 for data set 2 and ka = 3 for data set 3.) Note that Eq. (17) is derived for the simplest scenario in which the age, period, and cohort trends are purely linear. For more complex scenarios in which these trends are not purely linear, IE’s constraint depends on the nonlinear components of the age, period, and cohort effects.6 For example, suppose that age, period, and cohort each has effects on the outcome of interest that include a linear and a quadratic trend. Denote the quadratic trends as , , and , respectively. Using the same derivation in this section, the specific form of IE’s constraint for APC data with three age categories, three periods, and five cohorts is (21) That is, when age, period, and cohort effects include quadratic components, these effects must satisfy Eq. (21) in order for IE to yield good estimates. Equation (17) can be viewed as a special case of Eq. (21) when there are no quadratic or higher-order nonlinear components in the age, period, and cohort effects. Alternatively, because the linear dependency between age, period, and cohort does not affect the identification of nonlinear effects, IE’s constraint can be said to bind only on the linear age, period, and cohort trends; the specific value of the constraint on the linear effects is determined by the nonlinear effects, which are estimable. For any coefficient-constraint approach, such as CGLM and IE, “the choice of constraint is the crucial determinant of the accuracy in the estimated age, period, and cohort effects” (Kupper et al. 1985:822). Because the constraint assumption strongly affects estimation results, no matter what constraint a statistical method assumes, that method produces good estimates only when its assumption approximates the true structure of the data under investigation. It follows that when there are three age groups, three periods, and five cohorts and their effects are purely linear, IE can yield accurate estimates only when these linear effects of age, period, and cohort satisfy Eq. (17). Unfortunately, researchers usually have no a priori knowledge about true age, period, and cohort effects that would allow them to evaluate whether the constraint implied in Eq. (17) holds. Therefore, researchers cannot assess whether IE produces unbiased estimates of age, period, and cohort effects for their data. Thus, IE is no better than CGLM in this respect. More importantly, the exposition in this section indicates that the LC assumed by IE also depends on the design matrix X—that is, on the number of age, period, and cohort groups. For example, if one age group is added to the example presented here, such that there are now four age groups, three periods, and six cohorts, then the LC implied by IE is $b⋅b0=2.75⋅ka−kp+11.25⋅kc=0,$ (22) or $b⋅b0=ka−kp+6⋅kc+1.75⋅ka+5.25⋅kc=0.$ (23) Compared with Eq. (17) for the case of three age groups, three periods, and five cohorts, Eqs. (22) and (23) show that adding an age group dramatically changes the constraint so that the true effects satisfying IE’s LC with three age categories no longer satisfy this LC when an age category is added. Readers can verify that increasing or reducing the number of periods or cohorts also greatly alters IE’s LC. These examples demonstrate that not only does IE rely on a constraint like CGLM does, but unlike CGLM, in which the constraint (e.g., equal effects for the first two age groups) is explicit and rationalized by theoretical account or side information, the LC of IE is implicit and varies depending on the number of age, period, and cohort groups. Although this constraint has been described as minimal (e.g., Schwadel 2011; Yang et al. 2008), in fact, it can have major implications for the quality of substantive results, as shown. Theoretically speaking, the limitation of IE results from a misinterpretation of the constraint that IE imposes on parameter estimation. It is true that b0, the null eigenvector, is determined by the design matrix, but it is incorrect to conclude that therefore b0 “should not play any role in the estimation of effect coefficients” (Yang et al. 2008:1705). Rather, both the null eigenvector and nonnull eigenvectors (with nonzero eigenvalues) are determined by the design matrix—that is, by the number of age, periods, and cohort groups. To this extent, it is no less likely that the data contain a significant component in the b0 direction than in the directions of the nonnull eigenvectors. The fact that s, the coefficient for b0, can be any real number without changing the fitted values Xb simply means that variation in Y in the direction of b0 is not estimable. If the data have variation in this direction, IE will mistakenly attribute that variation to other columns in the design matrix, causing significant errors in estimation. ## The Implications of IE’s Constraint: Is IE an Unbiased and Consistent Estimator? Because IE imposes a constraint on the linear age, period, and cohort trends, IE yields reliable estimates only when the true trends satisfy its constraint. However, Yang and colleagues argued that “because of its estimability and unbiasedness properties, the IE may provide a means of accumulating reliable estimates of the trends of coefficients across the categories of the APC accounting model” (Yang et al. 2008:1711). In the following discussion, I clarify that IE is not an unbiased estimator of the “true” age, period, and cohort effects. I also use concrete examples to illustrate that IE is not consistent and to explain why IE appears to be converging to the truth in the Yang et al. (2008) article. This section may be particularly helpful for nontechnical researchers. ## Biasedness By definition, an estimator δ is an unbiased estimator of a parameter θ if the expectation of δ over the distribution that depends on θ is equal to θ, or Eθ(δ) = θ. It follows that for an unbiased APC estimator, its expectation must be the true effects of age, period, and cohort.7 Per this definition, if IE is an unbiased estimator, the expected value of IE must be the true age, period, and cohort effects. The following mathematical computation shows, however, that the expectation of the IE estimator is not the true effects unless those true effects happen to satisfy IE’s implicit constraint. As noted in the preceding section, the key computation of IE is to extend the coefficient estimates in the PC space, b′, (24) by adding a zero element, such that (25) where corresponds to the projection of the coefficient vector b in the nonnull space, that is, b1 in Eq. (6). IE then transforms the extended coefficient vector , including the added zero element, back to the original age-period-cohort space to obtain coefficient estimates for age, period, and cohort. Given that OLS and ML estimators have been proven unbiased in simpler, identifiable problems with normally distributed errors as in Eq. (2), and because IE uses these methods to obtain estimates for b1, whose projection in the PC space corresponds to the extended coefficient vector , IE yields unbiased estimates for b1. In other words, $EbIE=b1.$ (26) Based on the preceding discussion of the identification problem, the true parameter space b can be decomposed into two orthogonal subspaces corresponding to b1 and b0 in Eq. (6), which is equivalent to $b1=b−s⋅b0.$ (27) Substituting Eq. (27) in Eq. (26) results in $EbIE=b1=b−s⋅b0.$ (28) Equation (28) means that the expectation of the IE estimator will be different from the true effects b unless s ⋅ b0 = 0—that is, unless s = 0. IE assumes s = 0; thus, IE is a biased estimator when the true value of s is anything but 0. The larger the absolute value of s, the more biased the IE estimates become. For researchers who wish to investigate age, period, and cohort effects for the purposes of substantive demographic, social, or other applied research, there exists little theoretical or empirical knowledge about the value of s and what b0—the null eigenvector—may imply about the outcome variable. In specific applications, then, IE must be assumed to be biased, resulting in misleading conclusions about the true age, period, and cohort effects unless proven otherwise. Note that IE’s developers argue that IE satisfies the “estimability criterion” proposed by Kupper et al. (1985), so IE is, in that sense, an unbiased estimator. However, estimability of a function of b implies unbiased estimation only of the estimable function of b, not necessarily of the true parameter b itself. The projection of the parameter vector onto the nonnull space, b1, is indeed an estimable function of b, the true parameter vector, and thus IE is an unbiased estimator of b1. However, IE is a biased estimator for the true APC effects when b1 is different from b. Therefore, it is not accurate to say that “Kupper et al. (1985) . . . suggested that an estimable function satisfying this condition resolves the identification problem” as claimed in Yang et al. (2008:1703). To emphasize, estimability in the nonnull space does not imply unbiasedness in estimating the true age, period, and cohort effects. Discovering a set of estimable functions is not the same as solving the identification problem. ## Consistency In statistics, for an estimator δ to be a consistent estimator of an unknown parameter space θ, δ must converge in probability to θ as the sample size grows. If δ is unbiased, consistency usually follows immediately. A biased estimator can be consistent if its bias decreases as the sample size increases. However, the bias of IE, s ⋅ b0, does not necessarily shrink as the sample size grows. Thus, IE is not a consistent estimator of the coefficient vector b. This theoretical argument can be illustrated with simulations. I simulate normally distributed data using the same function as that for data set 1 in Table 3: for those at age i in period j, the mean response is 10 + 1 ⋅ agei + 7 ⋅ periodj + 1 ⋅ cohortij, and the standard deviation of error is 0.1. I begin with 3 age groups and 3 periods, and then increase the number of periods to 6 and 12, respectively. For each scenario, I simulate 1,000 such data sets by drawing random errors. If IE is a consistent estimator, then as the number of periods increases, the resulting estimates should get closer and closer to the true effects that we know based on the simulation function. Table 4 presents the IE estimates, averaged over 1,000 data sets, for the three scenarios in which the number of periods is set at 3, 6, and 12, respectively. It shows that the IE estimates are not converging to the truth, and the bias appears to increase as the number of periods increases from 3 to 12. Specifically, when p (the number of periods) equals 6 and 12, although IE correctly captures the direction of the age, period, and cohort trends, there is no evidence that these estimates are converging to the truth; the estimated age, period, and cohort slopes are $k^a=2.144$, $k^p=5.857$, and $k^c=2.144$, respectively, when p = 6; and $k^a=3.017$, $k^p=4.983$, and $k^c=3.017$, respectively, when p increases to 12. In fact, even with an unrealistically large number of periods (e.g., 100 periods), as I show in Fig. S1 in Online Resource 1, the IE estimates do not appear to converge to the truth. The developers of IE correctly noted that the estimation of period and cohort effects will not improve with more time periods because “adding a period to the data set does not add information about the previous periods or about cohorts not present in the period just added” (Yang et al. 2008:1718). However, when they simulated data, the IE estimates for age effects did appear to become closer and closer to the true values as the number of periods increased. They simulated data using the following function: (29) It appears that IE estimates of the age effects converge to the true effects in this simulation as the number of periods increases because IE’s implicit LC is not satisfied by the true age, period, and cohort effects in the simulation mechanism Eq. (29) with five periods (b ⋅ b0 = − 0.339), but the true effects do approximately satisfy the LC (b ⋅ b0 = − 0.036) when the number of periods increases to 50. In other words, IE appears to perform better as the number of periods increases, not because IE is a consistent procedure but because the true effects used in the data-generating function Eq. (29) conform better to IE’s implicit LC as the number of periods increases. For demographic or social data in which the linear trends in the three variables are unknown, adding more periods or cohorts promises nothing about the accuracy of the coefficient estimation for age, period, or cohort effects. That is, even with a sufficiently large sample, researchers using IE to estimate the true age, period, and cohort effects are not guaranteed to have desirable results that are close to the true values. ## Application Scope: IE Versus CGLM The preceding discussions of IE’s linear constraint (LC) and statistical properties are fairly technical. In this section, I use several types of simulated data to illustrate how the implicit LC of IE affects its ability to recover the underlying age, period, and cohort effects in social science research.8 This exercise is important because scholars have debated the application scope of IE in empirical research. As Fu et al. (2011:455) suggested, “the important statistical issue about APC modeling is how to identify the trend that helps to resolve the real-world problem for a given APC data set.” Thus, I examine whether compared with CGLM, IE yields better (if not unbiased) estimates of the true age, period, and cohort patterns that may be observed in empirical research. IE’s developers provided simulations in which IE estimates are closer to the true age, period, and cohort effects than CGLM results. This, they argued, supports their conclusion that IE has clear advantages over CGLM. However, as noted earlier, the true age, period, and cohort effects in Yang et al.’s (2008) simulation in fact approximately satisfy the LC that IE imposes (b ⋅ b0 = − 0.036).9 For age, period, and cohort effects that do not satisfy IE’s implicit constraint, IE will not necessarily perform better than CGLM and may perform much worse. Thus, IE is no better than CGLM because the restriction that IE imposes is essentially no different from the constraints assumed in CGLM. To illustrate, I show simulations, as Yang and colleagues did, to compare the CGLM and IE estimates. However, here the data-generating mechanisms satisfy the constraint assumed by CGLM but not the constraint assumed by IE. Moreover, I simulate data from four models that embody specific social theories and thus conform to empirical reality. The first data set is simulated to represent the observation that overall health for adults deteriorates as they grow older, and that while recent developments in health knowledge and technology have improved health conditions for the entire population, people born in more recent years are likely to be healthier than older cohorts. On the other hand, the demographic literature has also suggested that age, period, or cohort effects may not all exist (Alwin 1991; Fabio et al. 2006; Preston and Wang 2006; Winship and Harding 2008). Accordingly, the other three simulations approximate likely empirical situations in which one of the three variables has little impact on the outcome variable. Specifically, I fix the number of age groups at 9 and the number of periods at 50 in all these simulations with little loss of generality. I then generate 1,000 data sets from each of the following four models: (30) (31) (32) (33) For instance, in Eq. (30), the outcomes for people with age i in period j are normally distributed with mean (10 +2⋅agei − 0.5 ⋅ agej2 + 1 ⋅ periodj−0.015 ⋅ periodj2 + 0.15 ⋅ cohortij + 0.03 ⋅ cohortij2), and standard deviation σ = 0.1. In Eqs. (31), (32), and (33), one of the age, period, and cohort effects is not present, while the effects for the other two variables are the same as in Eq. (30). Note that none of these models satisfies IE’s constraint. Specifically, for the first model, b ⋅ b0 = 115.01; and for the second, third, and last model, b ⋅ b0 = 115.72, 130.41, and 16.12, respectively. Figure 1 compares, for the simulated data from the four models, IE estimates and CGLM estimates using two different constraints. The IE estimates, averaged over 1,000 data sets, are largely different from the true effects for all models because for all four models, the constraint that IE assumes is not satisfied. For example, in Scenario 3 in Fig. 1, when there is no period effect in the data-generating mechanism Eq. (32), the IE estimates suggest a substantially positive period trend on top of inaccurate estimates for age and cohort effects. In contrast, the CGLM assuming equal age effects for the first and third age groups produces close estimates for all four models. It is equally important to note that the performance of the CGLM estimator also depends on whether its assumption approximates the truth. For instance, in Scenario 4, whereas the CGLM that assumes equal age effects for the first and third group yields good estimates, the same method with a different constraint—that is, the age effects are the same for the first and second groups—results in biased estimates. In sum, it must be concluded that (1) if there is a priori information or theoretical justification, the constrained solution that corresponds to such information (e.g., CGLM estimates assuming equal effects for the first and third age groups in data-generating functions Eqs. (30) to (33)) will yield better estimates than IE; and (2) without such a priori knowledge, IE is not necessarily better than other constrained estimators including CGLM. Without such knowledge, neither IE nor CGLM results are valid. ## Conclusion and Discussion In this article, I focus on the intrinsic estimator (IE), a statistical method intended to separate the independent effects of age, period, and cohort on various outcomes. I discuss the nature and application scope of IE theoretically and illustrate it with simulated data. This article shows that IE assumes a specific constraint on the linear age, period, and cohort effects. This assumption not only depends on the number of age, period, and cohort groups, but also is extremely difficult, if not impossible, to verify in empirical research. This feature of IE is no different from the constraint assumed in CGLM except that the CGLM constraint does not change automatically as the numbers of age, period, and cohort groups change. The conclusion is that IE is not an unbiased or consistent estimator of the “true” age, period, and cohort effects. Therefore, for demographers and social scientists whose goal is to understand the “true, simultaneously independent effects” of age, period, and cohort, IE’s strategy of circumventing the identification problem can yield biased and potentially misleading estimates. There is no doubt that Yang and associates have revitalized APC research and inspired many scholars. However, IE is nothing new in APC analysis. Kupper and his colleagues introduced the IE solution to APC analysts, calling this solution the principal component estimator (PCE) (Kupper et al. 1983:2795–2797). As O’Brien (2011a:420) noted, such an estimator “produces coefficients identical to those of the recently introduced intrinsic estimator.” However, instead of concluding that IE is preferable to CGLM, Kupper et al. (1983:2797) clearly stated that PCE (that is, IE) “could lead to more bias than the use of some other constraints.” As a result, Kupper and associates did not advocate PCE/IE as a general solution, then or subsequently. Generally speaking, PCE/IE or any other constrained estimator provides just one possible solution from the infinite number of solutions for an underdetermined problem (i.e., the rank deficiency problem in APC analysis). That said, the PCE/IE solution should not be regarded as the true solution or the uniquely preferred solution without theoretical justification. In fact, the statistical literature has recognized a variety of constrained estimators, including other types of generalized inverse solutions. Demographers and sociologists should understand that the PCE/IE estimates are not necessarily better (i.e., closer to the true parameters) than other constrained estimators. What should well-intentioned researchers who wish to investigate the age, period, and cohort patterns do? On the one hand, several alternative methods have been developed, some of which are more theoretically driven, taking external information into account,10 and some of which are statistical approaches.11 Although each method has advantages and limitations and a thorough examination is a topic for future research, I caution that purely statistical techniques are unlikely to yield accurate estimates. The methodological problem of IE and its nontrivial implications for empirical research identified in this article are not unique to IE. The biostatistics literature shows that use of the APC model (1), regardless of estimation technique, precludes valid estimation and meaningful interpretations of the linear components of age, period, and cohort effects (see, e.g., Holford 1983; Kupper et al. 1985). Therefore, my position is to encourage development of APC models that are informed by social theories and thus different from model (1) in basic structure. On the other hand, although the statistical difficulty in quantifying independent effects of age, period, and cohort was recognized long ago, decades of effort has only resulted in unsatisfactory solutions. Thus, it is not unreasonable to ask, Is this unusual challenge suggesting a problem that is not statistical but theoretical in nature? In other words, is the identification problem pointing to a more fundamental problem in the theoretical framework of APC analysis? Should the answers to these questions be positive, the identification problem inherent in model (1) “is a blessing for social science” (Heckman and Robb 1985:144) because it warns scientists that they want something—a general statistical decomposition of data—for nothing. ## Acknowledgments I am grateful to James Hodges, John Robert Warren, Robert O’Brien, Christopher Winship, Daniel Powers, Carolyn Liebler, Samir Soneji, Ann Meier, Ian Ross Macmillan, Caren Arbeit, Julia Drew, Catherine Fitch, Julian Wolfson, and Wenjie Liao for their helpful comments. I also thank the Maryland Population Research Center for support. A version of this article was presented at the 2012 meeting of the Population Association of America, San Francisco, CA. ## Notes 1 One way to characterize the effects of an interval variable like time is to break the effect into two components: linear and nonlinear (curvature or deviations from linearity) trends. It has been known at least since Holford (1983) that the linear components of age, period, and cohort effects cannot be estimated without constraints because they are not identified. In contrast, nonlinear age, period, and cohort trends can be estimated without bias. 2 Yang et al. (2008) refer to this as “CGLIM.” 3 It is important to distinguish data reduction or prediction from coefficient estimation. Because the identification problem does not prevent obtaining a set of solutions with good fit to the data, one can still make good predictions. The PC technique treats such problems as data redundancy and allows obtaining one solution. However, as noted earlier, none of these solutions is the uniquely preferred solution: the solution that APC techniques, including IE, aim to discover. Therefore, providing a solution for the purpose of prediction is not the same as finding a uniquely preferred solution for estimation of separate age, period, and cohort effects. 4 Alternatively, Yang (2008:413) described the computational algorithm of IE as follows: after obtaining r – 1 coefficients in the PC space (w2, . . . , wr), “set coefficient w1 equal to 0 and transform the coefficients vector w = (w1, . . . , wr)T,” where w1 corresponds to the null eigenvector direction. 5 Yang et al. (2004, 2008) used $b0*=b0b0$, where ‖b0‖ is the length of b0, so b0* has a length of 1. b0 is used in this article because it is simply a multiple of b0* and is simpler for exposition and computation. 6 The constraint imposed by IE depends on how model (2) is parameterized. If the model is parameterized in terms of orthogonal polynomial contrasts for each of the age, period, and cohort effects, as in Holford (1983), then IE imposes a constraint solely on the linear contrasts of age, period, and cohort effects irrespective of any nonlinear trends that are present. The parameterization used here is more common (e.g., Kupper et al. 1985), and in this parameterization, the constraint on the linear components of the age, period, and cohort effects depends on the nonlinear components when both components are present. 7 Yang and colleagues have used “unbiasedness” in a different sense to mean that the expectation of IE is equal to b1, the projection of parameter vector b onto the nonnull space of design matrix X (see, e.g., Yang et al. 2008:1709). This is an important distinction because the true parameter vector b can be very different from its projection b1 onto the nonnull space, the vector that IE actually estimates. Because APC analysts are usually interested in estimating the true age, period, and cohort effects, the classic concept of unbiasedness is more relevant to APC research than that used by IE’s proponents. Thus, I use “unbiasedness” in its classic sense in my discussion. 8 Yang and colleagues have used empirical data, in which the true effects are unknown, to assess the properties and performance of IE (Yang et al. 2008:1712–1716). However, it is logically impossible to assess the performance of an estimator when the true effects are unknown. If such a cross-model validation of IE for a specific empirical data set were to show that IE yields reasonable estimates, this can only depend on having selected examples that are consistent with the IE’s constraint. Therefore, cross-model comparisons using empirical data are not an appropriate method to validate IE. 9 Although Yang and colleagues correctly pointed out that IE estimates the projection of the “true” effects onto the nonnull space, they compared IE estimates with the “true” parameters, not to the projection (Yang et al. 2008:1718–1722). This is key because the true parameter vector can be very different from its projection onto the nonnull space (the vector that IE actually estimates). That is, what IE actually estimates can be very different from the true APC effects if the true effects do not at least approximately satisfy the LC implicit in IE. 10 Examples include age-period-cohort characteristic models developed by O’Brien (2000) and the mechanism-based approach proposed by Winship and Harding (2008). 11 Examples are the cross-classified random-effects models created by Yang and Land (2006, 2008). ## References Alwin, D. F. ( 1991 ). Family of origin and cohort differences in verbal ability . American Sociological Review , 56 , 625 638 . 10.2307/2096084 Fabio, A., Leober, R., Balasubramani, G. K., Roth, J., Fu, W., & Farrington, D. P. ( 2006 ). Why some generations are more violent than others: Assessment of age, period, and cohort effects . American Journal of Epidemiology , 164 , 151 160 . 10.1093/aje/kwj172 Fu, W. ( 2000 ). Ridge estimator in singular design with applications to age-period-cohort analysis of disease rates . Communications in Statistics Theory and Method , 29 , 263 278 . 10.1080/03610920008832483 Fu, W. J., & Hall, P. ( 2006 ). Asymptotic properties of estimators in age-period-cohort analysis . Statistics & Probability Letters , 76 , 1925 1929 . 10.1016/j.spl.2006.04.051 Fu, W. J., Land, K. C., & Yang, Y. ( 2011 ). On the intrinsic estimator and constrained estimators in age-period-cohort models . Sociological Methods & Research , 40 , 453 466 . 10.1177/0049124111415355 Glenn, N. D. ( 2005 ). Cohort analysis . Thousand Oaks, CA : Sage Publications . Heckman, J., & Robb, R. ( 1985 ). Using longitudinal data to estimate age, period, and cohort effects in earnings equations . In Mason, W. M., & Fienberg, S. E. (Eds.), Cohort analysis in social research (pp. 137 150 ). New York : Springer-Verlag . Holford, T. R. ( 1983 ). The estimation of age, period and cohort effects for vital rates . Biometrics , 39 , 311 324 . 10.2307/2531004 Keyes, K., & Miech, R. ( 2013 ). Age, period, and cohort effects in heavy episodic drinking in the US from 1985 to 2009 . Drug and Alcohol Dependence , . Kupper, L. L., Janis, J., Karmous, A., & Greenberg, B. G. ( 1985 ). Statistical age-period-cohort analysis: A review and critique . Journal of Chronic Diseases , 38 , 811 830 . 10.1016/0021-9681(85)90105-5 Kupper, L. L., Janis, J., Salama, I. A., Yoshizawa, C. N., Greenberg, B. G., & Winsborough, H. H. ( 1983 ). Age-period-cohort analysis: An illustration of the problems in assessing interaction in one observation per cell data . Communications in Statistics—Theory and Methods , 12 , 2779 2807 . Langley, J., Samaranayaka, A., Davie, J., & Campbell, A. J. ( 2011 ). Age, cohort and period effects on hip fracture incidence: Analysis and predictions from New Zealand data 1974–2007 . Osteoporosis International , 22 , 105 111 . 10.1007/s00198-010-1205-6 Mason, K. O., Mason, W. H., Winsborough, H. H., & Poole, W. K. ( 1973 ). Some methodological issues in cohort analysis of archival data . American Sociological Review , 38 , 242 258 . 10.2307/2094398 Miech, R., Koester, S., & Dorsey-Holliman, B. ( 2011 ). Increasing US mortality due to accidental poisoning: The role of the baby boom cohort . , 106 , 806 815 . 10.1111/j.1360-0443.2010.03332.x O’Brien, R. M. ( 2000 ). Age period cohort characteristic models . Social Science Research , 29 , 123 139 . 10.1006/ssre.1999.0656 O’Brien, R. M. ( 2011 ). Constrained estimators and age-period-cohort models . Sociological Methods & Research , 40 , 419 452 . 10.1177/0049124111415367 O’Brien, R. M. ( 2011 ). Intrinsic estimators as constrained estimators in age-period-cohort accounting models . Sociological Methods & Research , 40 , 467 470 . 10.1177/0049124111415369 Preston, S. H., & Wang, H. ( 2006 ). Sex mortality differences in the United States: The role of cohort smoking patterns . Demography , 43 , 631 646 . 10.1353/dem.2006.0037 Rodgers, W. L. ( 1982 ). Estimable functions of age, period, and cohort effects . American Sociological Review , 47 , 774 787 . 10.2307/2095213 Rodgers, W. L. ( 1982 ). Reply to comment by Smith, Mason, and Fienberg . American Sociological Review , 47 , 793 796 . 10.2307/2095215 Ryder, N. B. ( 1965 ). The cohort as a concept in the study of social change . American Sociological Review , 30 , 843 861 . 10.2307/2090964 2011 ). Age, period, and cohort effects on religious activities and beliefs . Social Science Research , 40 , 181 192 . 10.1016/j.ssresearch.2010.09.006 Winkler, R., & Warnke, K. ( 2013 ). The future of hunting: An age-period-cohort analysis of deer hunter decline . Population Environment , 34 , 460 480 . 10.1007/s11111-012-0172-6 Winship, C., & Harding, D. J. ( 2008 ). A mechanism-based approach to the identification of age-period-cohort models . Sociological Methods & Research , 36 , 362 401 . 10.1177/0049124107310635 Yang, Y. ( 2008 ). Trends in U.S. adult chronic disease mortality, 1960–1999: Age, period, and cohort variations . Demography , 45 , 387 416 . 10.1353/dem.0.0000 Yang, Y., Fu, W. J., & Land, K. C. ( 2004 ). A methodological comparison of age-period-cohort models: The intrinsic estimator and conventional generalized linear models . Sociological Methodology , 34 , 75 110 . 10.1111/j.0081-1750.2004.00148.x Yang, Y., & Land, K. C. ( 2006 ). A mixed models approach to the age-period-cohort analysis of repeated cross-section surveys, with an application to data on trends in verbal test scores . Sociological Methodology , 36 , 75 97 . 10.1111/j.1467-9531.2006.00175.x Yang, Y., & Land, K. C. ( 2008 ). Age–period–cohort analysis of repeated cross-section surveys: Fixed or random effects? . Sociological Methods & Research , 36 , 297 326 . 10.1177/0049124106292360 Yang, Y., Schulhofer-Wohl, S., Fu, W. J., & Land, K. C. ( 2008 ). The intrinsic estimator for age-period-cohort analysis: What it is and how to use it . American Journal of Sociology , 113 , 1697 1736 . 10.1086/587154
Hi John. A couple of small typos: \$\$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x_{f^{\ast}(Q) } x'. \$\$ should be: \$\$ x \sim_{f^{\ast}(P \wedge Q)} x' \textrm{ if and only if } x \sim_{ f^{\ast}(P)} x' \textrm{ and } x \sim_{f^{\ast}(Q) } x'. \$\$ The next equation also has a small typo and should be: \$\$ f(x) \sim_{P \wedge Q} f(x') \textrm{ if and only if } f(x) \sim_P f(x') \textrm{ and } f(x) \sim_Q f(x'). \$\$ Also I think the join \\(P \vee Q \\) has two parts: \$\$ P \vee Q = \\{\\{11,12,13,22,23\\},\\{21\\}\\} . \$\$
# What are the units of the Bekenstein bound? Working with the Wikipedia definition of the Bekenstein bound: $S \leq \frac{2 \pi R k_bE}{\hbar c}$ $2\pi R \$ is $m^2$ $k_b$ is $\frac{J}{K}$ $E$ is $J$ $\hbar$ is $J*s$ $c$ is $\frac{m}{s}$ $\frac{m^2 \frac{J}{K} J}{(Js \frac{m}{s})} = m \frac{J}{K}$ Am I overlooking something? - In theoretical physics, entropy is typically dimensionless. For example, instead of defining $S=k_B \log W$, we would define $S=\log W$. This is precisely what has been done in this equation: $\hbar \cdot c$ has units of $J \cdot m$, which cancels the units up top.
# Wilson loop "Wilson line" redirects here. For the Wilson Line shipping company, see Thomas Wilson Sons & Co. In gauge theory, a Wilson loop (named after Kenneth G. Wilson) is a gauge-invariant observable obtained from the holonomy of the gauge connection around a given loop. In the classical theory, the collection of all Wilson loops contains sufficient information to reconstruct the gauge connection, up to gauge transformation.[1] In quantum field theory, the definition of Wilson loop observables as bona fide operators on Fock spaces is a mathematically delicate problem and requires regularization, usually by equipping each loop with a framing. The action of Wilson loop operators has the interpretation of creating an elementary excitation of the quantum field which is localized on the loop. In this way, Faraday's "flux tubes" become elementary excitations of the quantum electromagnetic field. Wilson loops were introduced in the 1970s in an attempt at a nonperturbative formulation of quantum chromodynamics (QCD), or at least as a convenient collection of variables for dealing with the strongly interacting regime of QCD.[2] The problem of confinement, which Wilson loops were designed to solve, remains unsolved to this day. The fact that strongly coupled quantum gauge field theories have elementary nonperturbative excitations which are loops motivated Alexander Polyakov to formulate the first string theories, which described the propagation of an elementary quantum loop in spacetime. Wilson loops played an important role in the formulation of loop quantum gravity, but there they are superseded by spin networks (and, later, spinfoams), a certain generalization of Wilson loops. In particle physics and string theory, Wilson loops are often called Wilson lines, especially Wilson loops around non-contractible loops of a compact manifold. ## An equation The Wilson loop variable is a quantity defined by the trace of a path-ordered exponential of a gauge field ${\displaystyle A_{\mu }}$ transported along a closed line C: ${\displaystyle W_{C}:=\mathrm {Tr} \,(\,{\mathcal {P}}\exp i\oint _{C}A_{\mu }dx^{\mu }\,)\,.}$ Here, ${\displaystyle C}$ is a closed curve in space, ${\displaystyle {\mathcal {P}}}$ is the path-ordering operator. Under a gauge transformation ${\displaystyle {\mathcal {P}}e^{i\oint _{C}A_{\mu }dx^{\mu }}\to g(x){\mathcal {P}}e^{i\oint _{C}A_{\mu }dx^{\mu }}g^{-1}(x)\,}$, where ${\displaystyle x\,}$ corresponds to the initial (and end) point of the loop (only initial and end point of a line contribute, whereas gauge transformations in between cancel each other). For SU(2) gauges, for example, one has ${\displaystyle g^{\pm 1}(x)\equiv \exp\{\pm i\alpha ^{j}(x){\frac {\sigma ^{j}}{2}}\}}$; ${\displaystyle \alpha ^{j}(x)}$ is an arbitrary real function of ${\displaystyle x\,}$, and ${\displaystyle \sigma ^{j}}$ are the three Pauli matrices; as usual, a sum over repeated indices is implied. The invariance of the trace under cyclic permutations guarantees that ${\displaystyle W_{C}}$ is invariant under gauge transformations. Note that the quantity being traced over is an element of the gauge Lie group and the trace is really the character of this element with respect to one of the infinitely many irreducible representations, which implies that the operators ${\displaystyle A_{\mu }\,dx^{\mu }}$ don't need to be restricted to the "trace class" (thus with purely discrete spectrum), but can be generally hermitian (or mathematically: self-adjoint) as usual. Precisely because we're finally looking at the trace, it doesn't matter which point on the loop is chosen as the initial point. They all give the same value. Actually, if A is viewed as a connection over a principal G-bundle, the equation above really ought to be "read" as the parallel transport of the identity around the loop which would give an element of the Lie group G. Note that a path-ordered exponential is a convenient shorthand notation common in physics which conceals a fair number of mathematical operations. A mathematician would refer to the path-ordered exponential of the connection as "the holonomy of the connection" and characterize it by the parallel-transport differential equation that it satisfies. At T=0, where T corresponds to temperature, the Wilson loop variable characterizes the confinement or deconfinement of a gauge-invariant quantum-field theory, namely according to whether the variable increases with the area, or alternatively with the circumference of the loop ("area law", or alternatively "circumferential law" also known as "perimeter law"). In finite-temperature QCD, the thermal expectation value of the Wilson line distinguishes between the confined "hadronic" phase, and the deconfined state of the field, e.g., the quark–gluon plasma.
Article Search eISSN 0454-8124 pISSN 1225-6951 ### Article Kyungpook Mathematical Journal 2022; 62(3): 407-423 Published online September 30, 2022 ### On Representable Rings and Modules Seyed Ali Mousavi, Fatemeh Mirzaei and Reza Nekooei∗ Department of Pure Mathematics, Mahani Mathematical Research Center, Shahid Bahonar University of Kerman, Kerman, Iran e-mail : [email protected], [email protected] and [email protected] Received: November 11, 2020; Revised: October 8, 2021; Accepted: October 10, 2021 In this paper, we determine the structure of rings that have secondary representation (called representable rings) and give some characterizations of these rings. Also, we characterize Artinian rings in terms of representable rings. Then we introduce completely representable modules, modules every non-zero submodule of which have secondary representation, and give some necessary and sufficient conditions for a module to be completely representable. Finally, we define strongly representable modules and give some conditions under which representable module is strongly representable. Keywords: Secondary modules, Secondary representation, Representable rings and modules, Laskerian rings and modules, Semiperfect rings, Artinian rings, Noetherian spectrum, Compactly packed rings, Q-rings, AB5* modules, Linearly compact modules. ### 1. Introduction Throughout this paper, R will denote a commutative ring with a non-zero identity and every module will be unitary. Given an R-module M, we shall denote the annihilator of M (in R) by AnnR(M) or Ann(M). We shall follow Macdonald's terminology in [18] concerning secondary representation. Thus, an R-module N is secondary if N ≠ 0 and for each r ∈ R, either rN=N or there exists some positive integer n, such that rnN=0. If N is a secondary module then, Ann(N) is a primary ideal and hence P=Ann(N) is prime and we say that N is P-secondary. A secondary representation of M is an expression for M as a sum M=N1+N2++Nt of finitely many secondary submodules of M, such that Ni is Pi-secondary for i=1,,t. If such a representation exists, we shall say M is representable. Such a representation is said to be minimal if P1,,Pt are all different and none of the summands Ni are redundant. Every representable module has a minimal secondary representation. As for the primary decomposition of submodules, we have two uniqueness theorems for secondary representation of modules. The first uniqueness theorem (see [18, 2.2]) says that if M=N1++Nt, with Ni being Pi-secondary, is a minimal secondary representation of M, then the set {P1,...,Pt} is independent of the choice of minimal secondary representation of M. This set is called the set of attached prime ideals of M and is denoted by Att(M). Every Artinian module and every injective module over a Noetherian ring is representable. These and other propositions about representable modules can be found in [5, 18, 22]. An R-module M is said to be Laskerian if every proper submodule of M is an intersection of a finite number of primary submodules, i.e. has a primary decomposition. A ring R is Laskerian if it is Laskerian as an R-module over itself. We denote the set of all prime ideals and the set of all maximal ideals of a ring R by Spec(R) and Max(R), respectively. The Jacobson radical J(R) of a ring R is defined to be the intersection of all the maximal ideals of R. The set of all nilpotent elements of R is called the nilradical of R and denoted by N(R). The Krull dimension of R is denoted by dim(R). If I is a proper ideal of R, then I and Min(I) denote the radical ideal of I and the set of prime ideals of R minimal over I, respectively. A topological space X is said to be irreducible if X, and whenever X=Z1Z2 with Zi closed, we have X=Z1 or X=Z2. The maximal irreducible subsets of X are called irreducible components of X. A topological space X is said to be Noetherian if the ascending chain condition holds for open subsets of X. If X=Spec(R) with the Zariski topology, then X is Noetherian if and only if R satisfies the ascending chain condition for radical ideals. In Section 2, we investigate representable rings and show that these rings and Artinian rings have similar properties in some cases. We then show that representable rings are strictly between Artinian and semiperfect rings. Therefore, we determine the structure of these rings and characterize them. Next we characterize Artinian rings in terms of representable rings. In Section 3, we consider modules for which all non-zero submodules are representable and called these modules completely representable. Then we give some necessary and sufficient conditions for a module to be completely representable. In Section 4, we define strongly representable modules as modules that can be written as a direct sum of finitely many secondary submodules and then give some conditions such that a representable module is strongly representable. ### 2. Representable Rings A ring R is representable if it is representable as a module over itself. In this section, we first show that representable rings are strictly between Artinian and semiperfect rings. Then we determine the structure of these rings. Finally we give some characterization of these rings and characterize Artinian rings in terms of representable rings. By [18, 5.2], any Artinian ring is representable. But the converse is not true. Example 2.1. The ring R=k[X1,X2,]/(X1,X2,)2, where k is a field, is representable (in fact it is m=(x1,x2,)-secondary, where xi denotes the equivalence class of Xi in R) but it is not Artinian, because it is not Noetherian. Representable rings have similar properties to Artinian rings. Artinian rings are Noetherian. We show that representable rings are Laskerian. Proposition 2.2. Let R be a representable ring. We have the following statements. • (i) If R is a domain, then R is a field. • (ii) dim(R)=0. • (iii) J(R)=N(R), and hence J(R) is a nil ideal. • (v) Spec(R) is Noetherian. Proof. (i) Let R be a representable domain and R=I1+I2++It be a minimal secondary representation of R, where Ii is Pi-secondary (i=1,,t). Let rPi. Then there exists n1 such that rnIi=0. But Ii0, hence r=0. Thus, Pi=0 for all i, and hence R=I1. So, R is 0-secondary and hence is a field. (ii) Let P be a prime ideal of R. Then R/P is a representable R-module and also a representable domain. Hence by part (i), R/P is a field. Thus, P is a maximal ideal. (iii) It follows by part (ii). (iv) Let I be a proper ideal of a representable ring R. Then R/I is a representable R-module. Let R/I=J1++Jt be a minimal secondary representation of R/I. Then I=Ann(R/I)= i=1tAnn(Ji) is a primary decomposition for I. (v) It follows from (iv) and [10, Theorem 4]. Every representable ring has finitely many prime ideals. Indeed, prime ideals are maximal, and we have the following. Proposition 2.3. Let R be a Laskerian ring. If dim(R)=0, then it has finitely many maximal ideals. In particular, every representable ring has finitely many maximal ideals. Proof. Since R is Laskerian, it follows from [10][Theorem 4] that R has Noetherian spectrum. Hence by [20, Theorem 3.A.16], the Spec(R) has finitely many irreducible components and by [20, Corollary 3.A.14], these components are of the forms V(P)={QSpec(R)PQ}, where P is a minimal prime ideal of R. Since dim(R)=0, Min(R)=Max(R). Thus, V(P)={P}, for every PMin(R). Therefore, Spec(R) is finite. The last statement follows from Proposition 2.2 (ii), (iv). In the following, we show that every representable ring is semiperfect. By [27, 42.6], a ring R is semiperfect if and only if R/J(R) is semisimple and idempotents lift modulo J(R). Proposition 2.4. Every representable ring is semiperfect. Proof. Let R be a representable ring and let Max(R)={m1,m2,,mt}=Spec(R). By the Chinese Remainder Theorem, R/J(R)i=1tR/mi. Hence R/J(R) is semisimple. By Proposition 2.2 (iii), J(R) is a nil ideal. Thus by [27, 42.7], idempotents lift modulo J(R). Therefore, R is a semiperfect ring. The converse of Proposition 2.4, is not true in general. For example, the ring of formal power series R=k[[X]] where k is a field, is semiperfect (because it is local) but it is not representable (because dim(R)0). Thus, representable rings are strictly between Artinian and semiperfect rings. In the following, we will determine the structure of representable rings. To this purpose we need the following. Proposition 2.5. Let R be a ring and e ∈ R be an idempotent element. Then we have the following statements. • (i) Re is a ring with 1Re=e. • (ii) Every ideal of Re is of the form Ie, where I is an ideal of R. • (iii) If m is a maximal (prime) ideal of R and e∉ m, then me is a maximal (prime) ideal of Re. • (iv) If I is an P-secondary (as R-module) ideal of R, then Ie is either zero or Pe-secondary (as Re-module) ideal of Re. Proof. (i), (ii) and (iv) are easily proven by definitions. For (iii), one can consider ring isomorphism Re/Ie(R/I)(e+I). Lemma 2.6. A ring R is representable and local if and only if it is secondary as an R-module. Proof. Let (R,m) be a representable and local ring. Then m=N(R), and hence every elements of R is a unit or nilpotent. Thus, R is m-secondary. Conversely, if R is m-secondary, then RU(R)=m, where U(R) is the set of unit elements of R. Thus, R is local with unique maximal ideal m. Lemma 2.7. Let R be a representable ring with Max(R)={m1,,mt} and e∈ R be an idempotent element such that e1m1 and e i=2tmi. Then Re is a representable local ring with unique maximal ideal m1e. Proof. According to Lemma 2.6, it is sufficient to show that the ring Re is m1e-secondary. Let R=I1+I2++Is be a minimal secondary representation for R, where Ii be mi-secondary (i=1,,s). In fact, s=t. Because, if s<t then for rmt i=1smi, we have rR=rI1++rIs=I1++Is=R. So r is unit, which is a contradiction. Since em1 and emi(2is), we have, Re=I1e(=I1). Therefore, Re is m1e-secondary. Theorem 2.8. (Structure theorem for representable rings) A representable ring is uniquely (up to isomorphism) a finite direct product of representable local rings. Proof. Let Max(R)={m1,,mt}. Since J(R) is a nil ideal, hence by [27, 42.7], every idempotent in R/J(R) can be lifted to an idempotent in R. But by Chines Reminder Theorem, R/J(R) i=1tR/mi. Thus, there exists a set {e1,,et} of orthogonal idempotents (i.e. ei2=ei for all i and eiej=0 for ij) in R such that ei1mi and ei jitmj, for i=1,,t. Thus, we have (e11)(e21)(et1)J(R)=N(R). So there exists n1, such that (e11)n(e21)n(et1)n=0. Hence e1+e2++et=1. By [13, Exercise 24, page 135], we have RRe1××Ret. But by Lemma 2.7, each Rei is representable and local. Therefore, R is a finite direct product of representable local rings. For uniqueness, suppose R i=1sRi, where the (Ri,mi) are representable local (=mi-secondary) rings. For each i, we have a natural injective homomorphism φi:RiR. Let Ii=Im(ϕi). Then one can simply see that R=I1++Is is a minimal secondary representation of R. On the other hand by proof of Lemma 2.7, R=Re1++Ret is a minimal secondary representation of R and all the secondary components Rei are isolated. Hence by 2nd uniqueness theorem for secondary representation (see [18, 3.2]), we have s=t and RiIiRei for all i, and the proof is complete. Corollary 2.9. Let R be a representable ring. Then every non-zero ideal of R is representable (as an R-module). Proof. By Theorem 2.8, R=R1××Rn, where Ri is a representable local ring (or equivalently secondary ring by Lemma 2.6). Let I be a non-zero ideal of R. Then I=I1××In, where Ii is an ideal of Ri for all i. Since every non-zero ideal of a secondary ring is secondary, Ii is secondary for all i with Ii0. For simplicity, we can assume that all of Ii's are non-zero. Then I=(I1×0××0)++(0××0×In), is a secondary representation of I and the proof is complete. Proposition 2.10. Let R be a ring. Then R is a representable ring if and only if Spec(R) is Noetherian and for every non-zero ideal I of R and every minimal prime ideal P of Ann(I), there exists rRP such that rI is P-secondary. Proof. Let R be a representable ring. Then by Proposition 2.2 (v), Spec(R) is Noetherian. Let I be a non-zero ideal of R and P be a minimal prime ideal of Ann(I). By Corollary 2.9, I has a minimal secondary representation, say, I=I1++It. Thus, Ann(I)= i=1tPiP. So, there exists i such that PiP, and hence Pi=P because dim(R)=0. By rearranging Ii's (if necessary), we can assume that i=1. If t=1, then I=I1 is P-secondary and the proof is complete for r=1. Otherwise, suppose s( i=2tPi)P. Then there exists k1 such that skAnn(Ii) for all i, 2it. Let r=sk. Then rI=rI1++rIt=rI1=I1 is P-secondary. Conversely, let P0 be a minimal prime ideal of R=I0 . Then by assumption, there exists r0RP0 such that r0R is P0-secondary. Let I1=(0:Rr0)=AnnR(Rr0). Since r02I0=r0I0, so we have R=r0I0+I1. If I1=0, then R=r0I0 is a representation of R. Otherwise, let P1 be a minimal prime ideal of Ann(I1). Again by assumption, there exists r1RP1 such that r1I1 is P1-secondary and hence I1=r1I1+I2 , where I2=(0:Rr1). Thus, R=r0I0+r1I1+I2. If I2=0, then R=r0I0+r1I1 is representation of R. Otherwise we continue this process and claim that there exists some n1 such that In=0. If not, then since I3I2I1I0=R and hence Ann(R)Ann(I1)Ann(I2) . Since Spec(R) is Noetherian, there exists n1 such that Ann(In)=Ann(In+1). Thus, there exists t1 such that rntIn=0. But rntIn=rnIn. So rnAnn(In)Pn, which is a contradiction. Therefore, there exists n0 such that R=r0I0+r1I1++rnIn, a representation of R and the proof is complete. In the next theorem we give some characterization of representable rings. A ring R is called von Neumann regular ring if for each a ∈ R, there exists b ∈ R such that a=a2b. A ring R is said to be a Q-ring, if every ideal of R is a finite product of primary ideals. A ring R is called a completely packed ring if whenever I αΓPα, where I is an ideal and Pα's are prime ideals of R then IPβ, for some βΓ. It is well known that R is a compactly packed ring if and only if every prime ideal is the radical of a principle ideal [24, Theorem]. We need the following lemma. Lemma 2.11. Let I⊆ P be ideals of a ring R with P a prime ideal. Then the following statements are equivalent. • (i) P is a minimal prime ideal of I. • (ii) For each x ∈ P, there is yRP and a positive integer n such that yxnI. Proof. [12, Theorem 2.1, Page 2]. Theorem 2.12. Let R be a ring. The following statements are equivalent. • (i) R is a representable ring. • (ii) R/J(R) is a semisimple ring and J(R) is a nil ideal. • (iii) R/N(R) is a Noetherian and von Neumann regular ring. • (iv) R is a zero dimensional compactly packed ring. • (v) R has Noetherian spectrum and dim(R)=0. • (vi) R is a zero dimensional Q-ring. Proof. (i)(ii) It follows from Propositions 2.2 and 2.4. (ii)(iii) A ring is semisimple if and only if it is Noetherian Von Neumann regular ring [17, Theorem 4.25]. Since J(R) is a nil ideal, J(R)=N(R). Hence the result follows. (iii)(iv)(v)(vi) [15, Theorem 2.12]. Now by Proposition 2.10, we show that the equivalent conditions (iv) and (v) imply (i). Let I be a non-zero ideal of R and P a minimal prime ideal of Ann(I). By [24][Theorem], there exists a ∈ R such that P=(a). Now by Lemma 2.11, there exists rRP and n1 such that ranAnn(I). We claim that rI is P-secondary. Since r∉ P, rI ≠ 0. Let s ∈ R. If s ∈ P, then there exists m1 such that sm(a). Thus sm=ta, for some t ∈ R. We have snm(rI)=tnan(rI)=tn(ranI)=0. If s ∉ P, since P is maximal, then R=Rs+P. Therefore, 1=ys+x, for some y ∈ R and s ∈ P. Since P=(a), sk=za for some k1 and z ∈ R. Now we have 1=(ys+x)kn=αs+xkn=αs+znan, for some αR. Thus, rI=(αsr+znanr)I=αsrI. So, s(rI)rI=s(αrI)s(rI). Therefore, s(rI)=rI and the proof is complete. The following result is the expected generalization of [3, Theorem 8.5]. Corollary 2.13. A ring R is representable if and only if R is Laskerian and dim(R)=0. Proof. It follows from Proposition 2.2, [10, Theorem 4] and Theorem 2.12 (v). Corollary 2.14. Let R be a local ring. Then R is representable if and only if dim(R)=0. Proof. It follows form Proposition 2.2 (ii) and Theorem 2.12 (iii). Corollary 2.15. A ring R is reduced and representable if and only if R is a Noetherian von Neumann regular ring (or equivalently a semisimple ring). Proof. Let R be reduced (i.e. N(R)=0) and representable. Then by Theorem 2.12 (iii), R is a Noetherian von Neumann regular ring. Conversely, if R is a Noetherian von Neumann regular ring, then it is semisimple [17, Theorem 4.25]. So it is a finite direct product of fields and hence is representable and reduced. Representability is not a local property. However, we have the following theorem. Theorem 2.16. Let R be a ring. Then Rm is a representable ring for all maximal ideal m of R if and only if dim(R)=0. Proof. If dim(R)=0 then by Corollary 2.14, Rm is a representable ring, for all maximal ideals m of R. Now let Rm be a representable ring, for all maximal ideal m of R. Let q be a prime ideal and m be a maximal ideal of R such that q ⊆ m. Then by Proposition 2.2 (ii), dim(Rm)=0. So qm=mm, and hence q=m. This shows that dim(R)=0. Corollary 2.17. Let R be a ring. Then R is reduced and Rm is a representable ring, for all maximal ideal m of R if and only if R is a von Neumann regular ring. Proof. It follows from Theorem 2.16 and [12, Remark, Page 5]. Now we characterize Artinian rings in terms of representable rings. Theorem 2.18. Let R be a ring. The following statements are equivalent. • (i) R is an Artinian ring. • (ii) R is a representable ring and locally Noetherian. • (iii) R is a Noetherian ring and Rm is a representable ring, for all maximal ideals m of R. Proof. (i)(ii) Follows from [18, 5.2] and [3, Theorem 8.5]. (ii)(iii) By Theorem 2.12 (vi), R is a Q-ring with dim(R)=0. Now by [14, Theorem 3] and Theorem 2.16, (iii) holds. (iii)(i) Follows from Theorem 2.16 and [3, Theorem 8.5]. ### 3. Completely Representable Modules In this section, we consider modules that all non-zero submodules are representable. This is similar to definition of Laskerian modules and, in a sense, a dual of that notion. We call these modules "completely representable". Artinian modules (see [18]), representable modules over regular rings ([8, Theorem 2.3]) and modules over local rings that their maximal ideals are nilpotent, are examples of such modules. Definition 3.1. An R-module M is said to be completely representable, if M ≠ 0 and every non-zero submodule of M is representable. A ring R is completely representable, if it is completely representable as a module over itself. Remark 3.2. Both representable and completely representable modules are generalizations of Artinian modules. But the second seems to be a better generalization, because we know that every submodule of an Artinian module is again Artinian, and we may want this feature to be preserved in generalization and this holds in definition of completely representable modules (but not in representable modules). For the case of rings, representable and completely representable are the same. Proposition 3.3. A ring R is completely representable if and only if it is representable. Proof. If R is completely representable, then R is representable by definition. The converse is Corollary 2.9. By definition, every completely representable module is representable. But the converse is not true. Example 3.4. The -module is (0)-secondary and hence representable. But it is not completely representable, because the submodule is not representable. In the next, we give some necessary and some sufficient conditions for a module to be completely representable. An R-module M is said to satisfy (dccr) if the descending chain INI2N terminates, for every submodule N of M and every finitely generated ideal I of R. The following result mentioned in [25, Proposition 3] without complete proof. We give its proof. Proposition 3.5. Let M be a completely representable R-module. Then M satisfies (dccr). Proof. By [25, Theorem, Page 2 ], M satisfies (dccr) if and only if for any submodule N of M and a ∈ R, N=akN+(0:Nak), for all large k. Let N=N1++Nt be a minimal secondary representation of N with Ni, Pi-secondary. If a i=1tPi, then aNi=Ni, for all i. Hence N=aN and the proof is complete. Otherwise, let (by rearranging if necessary) a i=1lPi and a i=l+1tPi. Then there exists an integer k such that akNi=0, for all i, 1il. So, akN=Nl+1++Nt and N1++Nl(0:Nak). Hence N=akN+(0:Nak) and the proof is complete. The following examples show that the converse of Proposition 3.5, is not true in general. Example 3.6. Let R be a non Noetherian von Neumann regular ring (e.g. R= i=12). Then dim(R)=0. So, by [26, Proposition 1.2], R satisfies (dccr). By Theorem 2.12, R is not representable. (Note that any von Neumann regular ring is reduced). Example 3.7. Let M=pp, where is the set of all prime numbers. Then by [26, Remark 1.10], M satisfies (dccr) condition as a -module. But this module is not representable (and therefore, is not completely representable). Because if N is a p-secondary submodule of M, for some p, then every component of every element of N is zero except probably the component that belongs to p. Obviously, the finite sum of this submodules can not be equal to M. Therefore, the class of completely representable modules is strictly between modules that satisfies (dcc) (i.e. Artinian modules) and modules that satisfies (dccr) condition. Also, by Proposition 3.3 and [26, Proposition 1.2], representable rings are strictly between Artinian rings and rings with zero dimension. Although, the converse of Proposition 3.5, is true under some additional conditions. A module M over a Noetherian ring R is said to have finite Goldie dimension, if M does not contain an infinite direct sum of non-zero submodules. Proposition 3.8. Let M be a module of finite Goldie dimension over a commutative Noetherian ring R. Then M is completely representable if and only if M satisfies (dccr). Proof. [25, Proposition 3]. Bourbaki in [4, Chap. IV, Sect. 2, Exercise 23, Page 295], give a necessary and sufficient conditions for a finitely generated module to be Laskerian. In the following we dualize these conditions for a module to be completely representable. Let N be an R-module and S be a multiplicatively closed subset of R. We consider S(N)= rSrN. If P be a prime ideal of R and S=RP, we denote S(N) by SP(N). Proposition 3.9. Let M be a completely representable R-module. Then we have • (i) For every non-zero submodule N of M and every multiplicatively closed subset S of R, there exists r ∈ S such that S(N)=rN. • (ii) For every non-zero submodule N of M, every increasing sequence (Sn(N))n1 is stationary, where (Sn)n1 is any decreasing sequence of multiplicatively closed subset of R. Proof. (i) Let N= i=1tNi be a minimal secondary representation with Ann(Ni)=Pi, (1it). If SPi= for all i, then for every s ∈ S, sNi=Ni for all i, and hence sN=N. Thus, S(N)=N and (i) holds, for r=1∈ S. Otherwise, there exists l, 1l<t, such that PiS=, (1il) and PiS (l+1it). Thus, for every s ∈ S, sN=N1++Nl+sNl+1++sNt. On the other hand, there exists r ∈ S such that rNi=0, (l+1it). So rN=N1++NlS(N)rN=N1++Nl. Thus, S(N)=rN and (i) holds. (ii) Suppose the contrary is true; i.e., there exists a decreasing sequence (Sn)n1 such that S1(N)S2(N). Let PiS1=(i=1,,l) and PiS1(i=l+1,,t). Since S2S1 and S1(N)S2(N), we have S2Pk=, for some k, (l+1kt). If similarly continue this, then there exists some i, such that SiPj=(j=1,2,,t), and so N=Si(N)=Si+1(N)=, which is a contradiction. Corollary 3.10. Let M be a completely representable R-module. Then for every non-zero submodule N of M and every minimal prime ideal P over Ann(N), there exists rRP such that rN is P-secondary. Proof. Let S=RP. Then by Proposition 3.9 (i), there exists rRP such that SP(N)=rN. Let s ∈ R. If s ∉ P, rN=SP(N)srNrN. Thus, s(rN)=rN. If s∈ P, by Lemma 2.11, there exists tRP and n1 such that tsnAnn(N). So, tsnN=0, and hence tsnrN=0. But trN=rN. Thus, sn(rN)=0. Therefore, rN is P-secondary. Conditions of Proposition 3.9 are sufficient for a finitely cogenerated AB5* module to be completely representable. An R-module M is said to be finitely cogenerated if for every family {Mi}iI of submodules of M with iIMi=0, there is a finite subset FI with iFMi=0. It is clear that every submodule of a finitely cogenerated module is finitely cogenerated. Also if N and M/N are finitely cogenerated then so is M. Hence every direct sum of finitely cogenerated modules is finitely cogenerated. A family {Mi}iI of submodules of a module M is called inverse (direct) if, for all i,j ∈ I there exists k ∈ I such that MkMiMj (Mi+MjMk). For example, every chain of submodules is an inverse and direct family. The module M is said to be satisfy the AB5* (AB5) condition (and is called an AB5* (AB5) module) if, for every submodule K of M and every inverse (direct) family {Mi}iI of submodules of M, K+ iIMi= iI(K+Mi) (K( iIMi)= iIKMi). Every Artinian module is finitely cogenerated and AB5*. For more information about this class of modules one can see [6, 9, 19, 27]. Theorem 3.11. Let M be a finitely cogenerated AB5* module. Then M is completely representable if and only if • (i) For every non-zero submodule N of M and every minimal prime P over Ann(N), there exists r∈ R∖ P such that SP(N)=rN (or equivalently rN is P-secondary). • (ii) For every non-zero submodule N of M, every increasing sequence (Sn(N))n1 is stationary, where (Sn)n1 is any decreasing sequence of multiplicatively closed subset of R. Proof. () It follows by Proposition 3.9. () Let conditions (i) and (ii) holds and let N be a non-zero submodule of M. Let P1 be a minimal prime of Ann(N). Then by (i), and by proof of Corollary 3.10, there exists r1RP1 such that Q1=r1N is P1-secondary. Let N1=(0:Nr1). Since r12N=r1N, so we have N=Q1+N1. If N1=0, then N=Q1 is representable. Otherwise, let Σ={0GMN=Q1+G,GN1}. N1Σ, and hence Σ. Since M is finitely cogenerated and AB5* module, so every chain in Σ has a lower bound. Hence by Zorn's lemma, Σ has a minimal element with respect to inclusion, N1 say. Let P2 be a minimal prime of Ann(N1) and r2RP2 such that Q2=r2N1 is P2-secondary. Then N1=Q2+N2 where N2=(0:N1r2). Thus, N=Q1+Q2+N2. If N2=0, N=Q1+Q2 is representable. Otherwise, let Σ={0GMN=Q1+Q2+G,GN2}. Again by Zorn's lemma, Σ has a minimal element N2 with respect to inclusion. We continue this process and claim that there exists n1 such that Nn=0. Suppose on the contrary that Nn0 for all n. Let Sn=R i=1 nPi (n=1,2,3,). We show that SnAnn(Nn). If SnAnn(Nn)=, then Ann(Nn) i=1 nPi and by Prime Avoidance Theorem, Ann(Nn)Pi for some i, 1in. But, N2N2N1N1N. So, NnNiNi, and hence, Ann(Ni)Ann(Ni)Ann(Nn)Pi. Thus, riPi, a contradiction. Therefore, SnAnn(Nn). Let sSnAnn(Nn). Then Q1++QnSn(N)sN=Q1++Qn, so; Sn(N)=Q1++Qn (n=1,2,). Now we have a decreasing sequence (Sn)n1 of multiplicatively closed subsets of R, such that the sequence (Sn(N))n1 is strictly increasing. Because if Sn(N)=Sn+1(N) for some n; then Qn+1Q1++Qn. Hence, N=Q1++Qn+Nn+1. But, Nn+1Nn, and hence by minimality of Nn, Nn+1=Nn. Since, rn+1Nn+1=0, thus rn+1Nn+1=0. So, rn+1Nn=0, and hence rn+1Ann(Nn)Pn+1. Thus, rn+1Pn+1, which is a contradiction. Therefore, the sequence (Sn(N))n1 is not stationary, which contradicts condition (ii). Thus, there exists n1 such that Nn=0, and hence N=Q1++Qn is representable. Remark 3.12. We note that the notion of "completely representable" is the dual of the notion "primary decomposition". In some basic theorems for primary decomposition, the authors assume that the condition "finitely generated" (see [4, Chap. IV, Sect. 2, Exercise 23]). Also the AB5 condition is true for all modules (see [19, Lemma 6.22]). Since the "cofinitely generated" and "AB5*" are the dual notions of "finitely generated" and "AB5" conditions respectively, so it is natural to use these conditions in the proof or results about "completely representable". Proposition 3.13. Let M be a finitely cogenerated AB5* module and N be a non-zero submodule of M. Then M is completely representable if and only if both N and M/N are completely representable. Proof. If M is completely representable, then it is straightforward to show that N and M/N are completely representable. Conversely, Let N and M/N be completely representable. Let K be a non-zero submodule of M and P be a minimal prime of Ann(K). By condition (i) of Proposition 3.9, there exist r1, r2RP such that SP(KN)=r1(KN) and SP(K/KN)=r2(K/KN). Let r=r1r2. We show that SP(K)=rK. If αrK, then there exists xK such that α=rx. We have r2(x+KN)r2(K/KN)=SP(K/KN)\$. So, r2x+KN sRPs(K/KN)= sRP((sK+KN)/KN)= sRP(sK+KN)/KN. Thus, r2x sRP(sK+KN). Hence, α=r1r2xr1( sRP(sK+KN)) sRP(r1sK+r1(KN)). On the other hand, it is obvious that for every sRP, r1(KN)=r1s(KN). So α sRP(sK+r1sK)= sRPsK=SP(K). Thus, αSP(K). Now we check condition (ii) of Theorem 3.11. Let (Sn)n1 be a decreasing sequence of multiplicatively closed subset of R. Then there exists n1 and r1,r2Sn+1 such that Sn+1(KN)=Sn(KN)=r1(KN) and Sn+1(K/KN)=Sn(K/KN)=r2(K/KN). Let r=r1r2. Then Sn(K)=rK=Sn+1(K). Thus, conditions (i) and (ii) of Theorem 3.11 are hold. So M is completely representable. Proposition 3.14. Let M1, M2,,Mn be R-modules such that are finitely cogenerated, AB5* and completely representable. If M=M1M2Mn is AB5*, then M is completely representable. Proof. We prove this by induction on n. For n=1, M=M1 is completely representable. Now assume the assertion is true for n=k. For n=k+1, M=M1M2MkMk+1. Since M is AB5*, M=M1M2Mk is AB5*. Hence by induction hypothesis, it is completely representable. Since MM/Mk+1 and Mk+1 are completely representable, by Proposition 3.13, M is completely representable and the proof is complete by induction. Note that finite direct sum of AB5* modules need not to be AB5* ([6, Lemma 2.5]). A family of sets has the finite intersection property if every finite subfamily has a nonempty intersection. A module M is linearly compact (with respect to the discrete topology) if every collection of cosets of submodules of M which has the finite intersection property has non-empty intersection. Linearly compact modules are AB5* (see [9, Lemma 7.1]). Also every finite direct sum of linearly compact modules is linearly compact (see [27, 29.8(2)]). So, we have the following Corollary. Corollary 3.15. Let M1, M2,,Mn be linearly compact, finitely cogenerated and completely representable R-modules. Then M=M1M2Mn is completely representable. For relation between linearly compact and AB5* modules, see ([1],[2],[6]). Heinzer, in [11, Proposition 2.1], gives a new restatement of Bourbaki's conditions. Inspired by this, we have the following results. Theorem 3.16. Let M be an R-module such that Spec(R/Ann(M)) is Noetherian and for all non-zero submodule N of M, there exists a minimal prime P of Ann(N) and rRP such that rN is P-secondary. Then M is completely representable. Proof. Let N be a non-zero submodule of M. Then by assumption, there exists a minimal prime P1 of Ann(N) and r1RP1 such that r1N is P1-secondary. Let N1=(0:Nr1). Since r12N=r1N, so we have N=r1N+N1. If N1=0, then N=r1N is representation of N. Otherwise, there exists a minimal prime P2 of Ann(N1) and r2RP2 such that r2N1 is P2-secondary and N1=r2N1+N2, where N2=(0:N1r2). Thus, N=r1N+r2N1+N2. If N2=0, then N=r1N+r2N1 is representation of N. Otherwise, we continue this process and claim that there exists some n1 such that Nn=0. If not, then since N3N2N1NM and hence Ann(M)Ann(M)Ann(N)Ann(N1)Ann(N2). Since Spec(R/Ann(M)) is Noetherian so there exists n1 such that Ann(Nn)=Ann(Nn+1). Thus, there exists t1 such that rn+1tNn=0. But rn+1tNn=rn+1Nn. So rn+1Ann(Nn)Pn+1, rn+1Pn+1, which is a contradiction. Therefore, there exists n1 such that N=r1N+r2N1++rnNn+1, a representation of N and the proof is complete. For the case of rings, these conditions are also necessary. Theorem 3.17. A ring R is completely representable if and only if Spec(R) is Noetherian and for every non-zero ideal I of R, there exists minimal prime P of Ann(I) and rRP such that rI is P-secondary. Proof. () It follows from Theorem 3.16. () Let R be completely representable. Then R is representable and by Proposition 2.2 (iv), it is Laskerian. So by [10, Theorem 4], R has Noetherian spectrum. Now by Corollary 3.10, the proof is complete. Remark 3.18. Note that by Corollary 3.10, Theorem 3.17, is the same as Proposition 2.10. In [7], authors have considered representable linearly compact modules. In the next theorem we give a necessary and sufficient conditions for a finitely cogenerated linearly compact module to be completely representable. Theorem 3.19. Let M be a finitely generated, linearly compact and finitely cogenerated R-module. Then M is completely representable if and only if Spec(R/Ann(M)) is Noetherian and for all non-zero submodule N of M, there exists a minimal prime ideal P of Ann(N) and rRP such that rN is P-secondary. Proof. () Theorem 3.16. () By Corollary 3.10, second condition is true. We show that R/Ann(M) has Noetherian spectrum. Let {x1,x2,...,xn} be a set of generators of M. Let φ:RMMM by r(rx1,rx2,,rxn). Then φ is an R-homomorphism, and hence R/Ann(M) is a submodule of MMM. But by Corollary 3.15, MMM is completely representable. So, R/Ann(M) is a representable ring. Therefore, by [10, Theorem 4], R/Ann(M) has Noetherian spectrum and the proof is complete. In the following, we show that, if dim(R)=0, then the converse of Theorem 3.16 is true. For this purpose, we need the following lemmas. Lemma 3.20. Let I be a primary ideal of a ring R such that every regular element of R/I is unit. Then R/I is a secondary ring. Proof. By [23, Lemma 4.3], R/I is non-zero and every zero divisor in R/I is nilpotent. Thus, every element of R/I is unit or nilpotent. Hence R/I is a secondary ring. Lemma 3.21. Let R be a ring with dim(R)=0. Then every regular element of R is unit. Proof. Let r ∈ R be a regular element. By [26][Proposition 1.2], R satisfies (dccr). Thus, there exists integer n1 such that Rrn=Rrn+1. So rn=rn+1s, for some s ∈ R. Since r is regular, we have rs=1. Proposition 3.22. Let M be a representable R-module and M=N1++Nt be a minimal secondary representation of M, with Ni, Pi-secondary (1it). Let Ii=Ann(Ni), (1it). Suppose, Pi's are pairwise comaximal and every regular element of R/Ii, (1it), is unit. Then Spec(R/Ann(M)) is Noetherian. Proof. Since (Ii=)Pi's are pairwise comaximal, by [3, Proposition 1.16], Ii's are also pairwise comaximal. Therefore, by Chines Reminder Theorem, R/Ann(M)i=1tR/Ii. So, by Lemma 3.20, R/Ann(M) is representable. Thus, by Proposition 2.2 (v), Spec(R/Ann(M)) is Noetherian. Corollary 3.23. Let R be a ring with dim(R)=0. Then R-module M is completely representable if and only if Spec(R/Ann(M)) is Noetherian and for every non-zero submodule N of M and every minimal prime ideal P over Ann(N), there exists rRP such that rN is P-secondary. Proof. () Follows form Proposition 3.22 and Corollary 3.10. () Follows form Theorem 3.16. ### 4. Strongly Representable Modules By definition of representable modules, we can also consider the following definition. Definition 4.1. Let M be an R-module. We say M is a strongly representable module if there exists secondary submodules N1,,Nt such that M=N1Nt. Obviously, every strongly representable module is representable. But the converse is not true in general, see [21, Example 2.4]. We give some condition such that representable modules be strongly representable. Theorem 4.2. Let M be a representable R-module such that the elements of Att(M) are pairwise comaximal. Then M is strongly representable. Proof. Let M=N1++Nt be a minimal secondary representation of M with Ni, Pi-secondary (1it). Thus, Att(M)={P1,,Pt}. Let Ii=Ann(Ni) for i=1,,t. Since (Ii=)Pi's are pairwise comaximal, Ii's are also pairwise comaximal by [3, Proposition 1.16]. We show that Ni( jiNj)=0, for all i=1,,t. For simplicity, let i=1. Suppose xN1( j=2tNj). By [23, Proposition 3.59], I1+ j=2tIj=R. So, there exists αI1 and β j=2tIj such that α+β=1. Thus, x=1x=(α+β)x=αx+βx=0+0=0. Therefore, M=N1Nt. Hence M is strongly representable. Corollary 4.3. Let M be a representable R-module such that Att(M)Max(R). Then M is strongly representable. Proof. Every two distinct maximal ideals are comaximal. Hence Corollary follows form Theorem 4.2. Corollary 4.4. Let R be a ring with dim(R)=0. Then every representable R-module is strongly representable. In particular, every representable ring is strongly representable. Proof. If dim(R)=0, then Max(R)=Spec(R) and result follows from Corollary 4.3. The "in particular" statement follows from Proposition 2.2 (ii). Remark 4.5. According to Proposition 3.3 and Corollary 4.4, representable, completely representable and strongly representable rings are the same. Proposition 4.6. Let R be a domain with dim(R)=1 (e.g. R be a Dedekind domain) and M be a representable R-module such that contains no non-zero divisible submodule. Then M is strongly representable. Proof. Over a domain, divisible modules and 0-secondary modules are the same. Since M contains no divisible submodule, Att(M)Spec(R){0}=Max(R). Hence, result follows from Corollary 4.3. Corollary 4.7. Every representable module over a Dedekind domain is strongly representable. Proof. By [16, Theorem 8], every module M over a Dedekind domain R can be decomposed as M=DE, where D is divisible and E has no non-zero divisible submodules. If M is representable, then E(M/D) is also representable (if non-zero). Hence result follows form Proposition 4.6. Finally, we show that every finitely generated Artinian module is strongly representable. For this, we need the following lemma. Lemma 4.8. Let P be a maximal ideal of a ring R and M be an R-module such that PnM=0, for some integer n1. Then M is a P-secondary R-module. Proof. Let r ∈ R. If r ∈ P, then rnM=0. If rP, then since P is maximal, P+Rr=R. Hence 1=α+sr, for some αP and s ∈ R. Thus, 1=αn+γr, for some γR. So, for every x ∈ M, x=αnx+γrx=γrx. Hence rM=M. Proposition 4.9. Every finitely generated Artinian module is strongly representable. Proof. By [18, 6.3], if M is an Artinian R-module, then there exist (distinct) maximal ideals m1,,mt of R such that M=M(m1)M(mt), where M(I)= n=1(0:MIn)={xM|Inx=0,n1}, for every ideals I of R. If M is finitely generated, then every M(mi) is finitely generated and hence will be annihilate by some power of mi. So by Lemma 4.8, M(mi) is mi-secondary (1it). Therefore, M=M(m1)M(mt) is a secondary representation of M and M is strongly representable. 1. P. N. Anh, Morita Duality, Linear Compactness and AB5∗, Math. Appl., 343(1995). 2. P. N. Anh, D. Herbera and C. Menini, AB5∗ and Linear Compactness, J. Algebra, 200(1)(1998), 99-117. 3. M. F. Atiyah and I. G. MacDonald. Introduction to commutative algebra. Addison-Wesley; 1969. 5. M. Brodmann and R. Y. Sharp. Local Cohomology: An Algebraic Introduction with Geometric Applications. Cambridge: Cambridge University Press; 1998. 6. G. M. Brodskii and R. Wisbauer, On duality theory and AB5∗ modules, J. Pure Appl. Algebra, 121(1)(1997), 17-27. 7. N. T. Cuong and L. T. Nhan, On representable linearly compact modules, Proc. Amer. Math. Soc., 130(2002), 1927-1936. 8. S. E. Atani, Submodules of secondary modules, Int. J. Math. Math. Sci., 31(6)(2002), 321-327. 9. L. Fuchs and L. Salce, Modules over Non-Noetherian Domains, American Mathema-ical Society, (2001). 10. R. Gilmer, W. Heinzer and The Laskerian property, power series rings and Noetherian spectra, Proc. Amer. Math. Soc., 79(1)(1980), 13-16. 11. W. Heinzer and D. Lantz, The Laskerian property in commutative rings, J. Algebra, 72(1981), 101-114. 12. J. Huckaba. Rings with Zero-Divisors. New York/Basil: Marcel Dekker; 1988. 13. T. W. Hungerford. Algebra, Holt. New York: Rinehart and Winston; 1974. 14. C. Jayaram, Almost Q-rings, Arch. Math. (Brno), 40(2004), 249-257. 15. C. Jayaram, K. H. Oral and U. Tekir, Strongly 0-dimensional rings, Comm. Algebra, 41(6)(2013), 2026-2032. 16. I. Kaplansky, Modules over Dedekind rings and valuation rings, Trans. Amer. Math. Soc., 72(1952), 327-340. 17. T. Y. Lam. A First Course in Noncommutative Rings. Graduate Texts in Mathematics. Berlin-Heidelberg, New York: Springer-Verlag; 1991. 18. I. G. Macdonald, Secondary representation of modules over commutative rings, Symposia Matematica, Istituto Nazionale di Alta Matematica, Roma, (1973), 23-43. 19. W. K. Nicholson and M. F. Yousif. Quasi-Frobenius Rings. Cambridge Tracts in Mathematics. Cambridge University Press; 2003. 20. D. P. Patil and U. Storch, Introduction to Algebraic Geometry and Commutative Algebra, IISc Lecture Notes Series, Volume 1(2010). 21. N. Pakyari, R. Nekooei and E. Rostami, Associated and attached primes of local cohomology modules over almost Dedekind domains, Submitted, (). 22. R. Y. Sharp, Secondary representation for injective modules over commutative Noethe-rian rings, Proc. Edinburgh Math. Soc., 20(2)(1976), 143-151. 23. R. Y. Sharp. Steps in commutative algebra. Cambridge University Press; 2000. 24. W. W. Smith, A Covering Condition for Prime Ideals, Proc. Amer. Math. Soc., 30(1971), 451-452. 25. A. J. Taherizadeh, Modules satisfying dcc on certain submodules, Southeast Asian. Bull. Math., 26(2002), 517-521. 26. A. J. Taherizadeh, On modules satisfying DCCR*, Southeast Asian. Bull. Math., 32(2008), 321-325. 27. R. Wisbauer. Foundations of module and ring theory, A handbook for study and research. Revised and translated from the 1988 German edition. Philadelphia: Gordon and Breach Science Publishers; 1991.
# What is the verb of everyone should work ###### Question: What is the verb of everyone should work ### Find S10 and explain why it is impossible to find so for the series2, 6, 18, ...Find the number of terms in the series + 8 -12 Find S10 and explain why it is impossible to find so for the series 2, 6, 18, ... Find the number of terms in the series + 8 -12 +18 - -307.546875 ANSWER BOTH WITH WORK SHOWN. USE FORMULAS sn = n/2(t1 + (n-1)d) OR sn t1(1-r^n)/ 1-r... ### For a project, Mr. Green's physics students made rockets. During class last week, the students launched their rockets from the For a project, Mr. Green's physics students made rockets. During class last week, the students launched their rockets from the bleachers of the school's outdoor football stadium. The bleacher location where Martin launched his rocket was 48 feet off the ground. The height of Martin's rocket in feet,... ### The dot plots below show the ages of students belonging to two groups of painting classes: A dot plot shows two divisions labeled The dot plots below show the ages of students belonging to two groups of painting classes: A dot plot shows two divisions labeled Group A and Group B. The horizontal axis is labeled as Age of Painting Students in years. Group A shows 1 dot at 9, 7 dots at 10, 8 dots at 15, 8 dots at 17, and 6 dots a... ### Find the sum of the first 8 terms of a geometric series in which a1 = 3 and r = 2. Find the sum of the first 8 terms of a geometric series in which a1 = 3 and r = 2.... ### HELP PLEASE, ITS DUE TODAY HELP PLEASE, ITS DUE TODAY $HELP PLEASE, ITS DUE TODAY$$HELP PLEASE, ITS DUE TODAY$$HELP PLEASE, ITS DUE TODAY$... ### What does jem call miss caroline’s teaching methods? What does jem call miss caroline’s teaching methods?... ### Explain how surface and deep-water circulation patterns impact energy transfer in the environment. Explain how surface and deep-water circulation patterns impact energy transfer in the environment.... ### D/12 = 0.25 bbhhgshdhrjrhfbbdb D/12 = 0.25 bbhhgshdhrjrhfbbdb... ### Round the number 296,386,482 to the nearest million Round the number 296,386,482 to the nearest million... ### Papa Bear growled when he found porridge in his bowl.lessfewer Papa Bear growled when he found porridge in his bowl. less fewer... ### The act of creating an agency relationship in which the principal accepts the conduct of someone who acted without prior The act of creating an agency relationship in which the principal accepts the conduct of someone who acted without prior authorization is called what?... ### A ramp forms the angles shown to the right. What are the values of a and​ b? A ramp forms the angles shown to the right. What are the values of a and​ b?... ### 4. What was the context or the primary source' production by the poem of Pag big sa tinubuang lupa​ 4. What was the context or the primary source' production by the poem of Pag big sa tinubuang lupa​... ### What is the number code for 19,000 ohmss resistor What is the number code for 19,000 ohmss resistor... ### What’s your understanding of anomie? what is a component of this concept that is particularly striking What’s your understanding of anomie? what is a component of this concept that is particularly striking to you?... ### A medical specialist is a doctor who specializes in a particular branch of medicine.TrueFalse A medical specialist is a doctor who specializes in a particular branch of medicine. True False... ### 3.2. Assume at time 5, no system resources are being used except for the processor and memory. Now consider the following events: 3.2. Assume at time 5, no system resources are being used except for the processor and memory. Now consider the following events: At time 5: P1 executes a command to read from disk unit 3. At time 15: P5’s time slice expires. At time 18: P7 executes a command to write to disk unit 3. At time 20: P... ### Define sustainability from an environmental science point of view and explain its importance. Define sustainability from an environmental science point of view and explain its importance....
# Please help me. topic: ( motion with constant acceleration : free fall ) ## Homework Statement A scaffolding is rising at 1.0 m/s when at a height of 50.0 m , then the man on the scaffolding drop a can. determine: a. the time needed for the can to reach the ground . b. what is the final velocity ? ## Homework Equations note: g=-9.8 m/s the letter o in the equation stands for initial example : Vo( initial velocity) . V=Vo +(-g)t y=yo + Vot + 1/2(-g)T^2 V^2=Vo^2 + 2(-g)(y-yo) y-yo=(v+vo)/2 (t) t = time y=position v = velocity g=gravity ## The Attempt at a Solution The illustration was illustrated by our professor . the attempt that i was trying to solve is i tried to get each of all the final velocity. guys can you help me on how to solve this ! thank you ! #### Attachments • illustration.jpg 15.1 KB · Views: 405 ## Answers and Replies Doc Al Mentor note: g=-9.8 m/s That's confusing. I assume you mean that a = -g, where g = 9.8 m/s^2. y=yo + Vot + 1/2(-g)T^2 Use this formula to solve for the time. That's confusing. I assume you mean that a = -g, where g = 9.8 m/s^2. Use this formula to solve for the time. sir yes ! a= g . and the g is equals to -9.8 m/s^2 sir ! how many times should i compute for the time ? is it 1 time or 4 times ? base on the illustration ??? Doc Al Mentor sir yes ! a= g . and the g is equals to -9.8 m/s^2 If you insist on g being negative, then your formula is incorrect. (It has a -g in it.) sir ! how many times should i compute for the time ? is it 1 time or 4 times ? base on the illustration ??? Why not just once? Solve for the final time when it hits the ground. What values will you put for y, y0, and v0? If you insist on g being negative, then your formula is incorrect. (It has a -g in it.) Why not just once? Solve for the final time when it hits the ground. What values will you put for y, y0, and v0? sir ! our professor says that it is always negative . so he is wrong ? and the right is -g=9.8 m/s^2 ? right ? thank you ! so sir we will disregards the illustration ?? and compute the final velocity just once ? Doc Al Mentor sir ! our professor says that it is always negative . so he is wrong ? and the right is -g=9.8 m/s^2 ? right ? Taking up as positive, the acceleration is -9.8 m/s^2. (The constant g is usually taken as positive, so the acceleration would be -g.) Taking up as positive, the acceleration is -9.8 m/s^2. (The constant g is usually taken as positive, so the acceleration would be -g.) ah i see sir ! thank you. sir so i will disregard the illustration that our prof was made ? and i will focus in solving using the given value's in the problem ? Doc Al Mentor ah i see sir ! thank you. sir so i will disregard the illustration that our prof was made ? and i will focus in solving using the given value's in the problem ? I suppose the illustration is there to help you visualize what's going on. (I think the velocity at point 3 should be negative.) But you can solve for the final time in one step, you don't need to solve for all the intermediate points. I suppose the illustration is there to help you visualize what's going on. (I think the velocity at point 3 should be negative.) But you can solve for the final time in one step, you don't need to solve for all the intermediate points. so sir. i will be focus on the formula that u was given to me ? Doc Al Mentor so sir. i will be focus on the formula that u was given to me ? Yes. Yes. t^2=-2(y)/g t^2=2(50)/9.8 t=3.19 s y-50=v^2-(1.0)^2/2(9.8) v^2-(1.0)^2 = (-50)*2(9.8) v^2-1 =-980 V^2/-1=-980/-1 v^2=980 V=31.30 m/s sir am i right ? regards on solving the problem ? thank you ! Doc Al Mentor t^2=-2(y)/g t^2=2(50)/9.8 t=3.19 s Why didn't you use the formula I indicated in post #2? Why didn't you use the formula I indicated in post #2? sir i derived the formula that you indicate like what i saw in my lecture . Doc Al Mentor sir i derived the formula that you indicate like what i saw in my lecture . You posted some formulas. I quoted one and said "use this one". So use it. (The one you derived assumes that the initial speed is zero.)
### Pythagoras theorem: find vertical and horizontal distance Question Sample Titled 'Pythagoras theorem: find vertical and horizontal distance' In the figure, ${A}{B}={3}$ $\text{cm}$ , ${F}{G}={G}{H}={H}{I}={I}{J}={4}$ $\text{cm}$ , ${C}{D}={D}{E}={E}{F}={5}$ $\text{cm}$ and ${B}{C}={6}$ $\text{cm}$ . Find the distance between ${A}$ and ${J}$ correct to ${4}$ significant figures. ${A}$${D}$${E}$${H}$${I}$${B}$${C}$${F}$${G}$${J}$ A ${19.24}$ $\text{cm}$ B ${22.00}$ $\text{cm}$ C ${19.10}$ $\text{cm}$ D ${19.92}$ $\text{cm}$ Horizontal distance $={B}{C}+{D}{E}+{F}{G}+{H}{I}$ $={6}+{5}+{4}+{4}$ $={19}$ Vertical displacement (take upwards positive) $={A}{B}-{C}{D}+{E}{F}-{G}{H}+{I}{J}$ $={3}-{5}+{5}-{4}+{4}$ $={3}$ ${A}{J}$ $=\sqrt{{{\left({19}\right)}^{{2}}+{\left({3}\right)}^{{2}}}}$ $={19.235384061671343}$ # 專業備試計劃 Level 4+ 保證及 5** 獎賞 ePractice 會以電郵、Whatsapp 及電話提醒練習 ePractice 會定期提供溫習建議 Level 5** 獎勵:會員如在 DSE 取得數學 Level 5** ,將獲贈一套飛往英國、美國或者加拿大的來回機票,唯會員須在最少 180 日內每天在平台上答對 3 題 MCQ。 Level 4 以下賠償:會員如在 DSE 未能達到數學 Level 4 ,我們將會全額退回所有會費,唯會員須在最少 180 日內每天在平台上答對 3 題 MCQ。 # FAQ ePractice 是甚麼? ePractice 是一個專為中四至中六而設的網站應用程式,旨為協助學生高效地預備 DSE 數學(必修部分)考試。由於 ePractice 是網站應用程式,因此無論使用任何裝置、平台,都可以在瀏覽器開啟使用。更多詳情請到簡介頁面。 ePractice 可以取代傳統補習嗎? 1. 會員服務期少於兩個月;或 2. 交易額少於 HK\$100。 Initiating... HKDSE 數學試題練習平台
# How to enumerate all combinations of $n$ binary variables s.t. their sum is $k$? Suppose we are given $n$ variables $X_i, i=1,\dots,n$, each taking values from $\{0,1\}$, and a constant integer $k$ with $0\leq k \leq n$. What are some efficient ways to enumerate all possible combinations of values of $X_i$'s subject to the constraint $\sum_{i=1}^n X_i = k$? A naive way is to first enumerate one by one all possible combinations of values of $X_i$'s without the constraint $\sum_{i=1}^n X_i = k$, and for each combination, check if it satisfies $\sum_{i=1}^n X_i = k$ (if it does, keep it; if it doesn't, discard it). That naive way may be inefficient. For example, when $k=1$, a more efficient way will be for each $i$, letting $X_i=1$ and $X_j =0, \forall j \neq i$. So I wonder for general $k$, what are some efficient ways to do the above task? • Are you asking for an ${n}\choose{k}$-iterator? You can check out this source code of NChooseKIterator. – Pål GD Mar 28 '13 at 9:11 • @PålGD: Thanks, and close (probably yet). Is there some source for description of the algorithm? I am not familiar with Java, but I will have to implement it in Matlab. – Tim Mar 28 '13 at 9:13 • See also this stackoverflow question: n choose k implementation. – Pål GD Mar 28 '13 at 10:41 • What does "efficient" mean here? Note that for $k=n/2$, we must enumerate exponentially many (in $n$) items. – Raphael Mar 28 '13 at 10:44 • Efficiency is not big-oh, but compact ways of computing the next item. So, elegant iterators. With combinatorial explosions that can save quite some (real) time. Knuth's Volume 4 has extensive info on generating partitions, combinations, and partitions, so it is not a what-ever implementation question. (end of proof by authority) – Hendrik Jan Mar 28 '13 at 11:10 ## 1 Answer First, separate the set of variables $I = \{1,\dots,n\}$ according to their values: \qquad\begin{align} I_+ &= \{ i \mid X_i = 1\} \text{ and} \\ I_- &= \{ i \mid X_i = 0\} \;. \end{align} Let $n_+ = |I_+|$ and $n_- = |I_-|$. Given $X_1, \dots, X_n$ and $k$, it is clear that we have to choose $k$ indices from $I_+$ and arbitrarily many from $I_-$. That means we will have to output $\qquad \binom{n_+}{k} \cdot 2^{n_-}$ solutions. Note that if we assume $\binom{a}{b} = 0$ for $b<0$ and $b>a$, this extends nicely to inputs that allow no solution. Generating all valid solutions in a list can be done by two nested recursions: the outer creates all $k$-subsets of $I_+$, the inner all subsets of $I_-$. This way, no illegal solutions are investigated, which is as efficient as you can expect (up to list operations). outer(I+, 0, I-) = inner(I-) | outer(I+, k, I-) = result = [] foreach ( i in I+ ) { suffices = outer(I+ - i, k-1, I-) result ++= (suffices map { [i] ++ _ }) } return result inner([]) = [] inner(x::rest) = smaller = inner(rest); return smaller ++ (smaller map { [x] ++ _ }) If you want an iterator, note that you can replace recursion with stacks (which allows you to compute it step by step) and emit solutions in the innermost call of inner. Of course, you'll have to reverse the way solutions are constructed (while descending, as opposed to during ascension as I did above).
# A bound on the number of bilinear functions needed in order to obtain the minmax For $n\in\mathbb N$, let $\Delta(n)=\{x\in\mathbb R^n:x_i\geq 0, \sum_ix_i=1\}$ be the set of probability vectors in $\mathbb R^n$. Is there a function $m:\mathbb N\to\mathbb N$ such that for any finite list $A_1,A_2,\ldots,A_k$ of $n\times n$ (real valued) matrices, there exists a subset $J\subset[k]$ with cardinality $m(n)$ such that $$\min_{x,y\in\Delta(n)}\max_{i\in[k]} xA_i y \ = \min_{x,y\in\Delta(n)}\max_{i\in J} xA_i y\quad ?$$ Moreover, can $m(n)$ be polynomial in $n$? For $n=2$, let $m=m (2)$. Take matrices $A_0,\ldots,A_k$ such that $$(x,1-x)A_i (y,1-y)=-(x-\tfrac i k)(y-\tfrac i k).$$ By setting $i$ as close as possible to $\frac {x+y}{2}k$, it follows that the min-max over all $i\in\{0,\ldots,k \}$ is at least $-(\frac {1}{2k})^2$. Whereas, a subset $J\subset\{0\leq i_1<\cdots <i_m\leq k \}$ defines partition the unit interval into $m+1$ intervals of the form $[\frac {i_j}{k},\frac {i_{j+1}}{k}]$; setting $x=y$ at the center of the largest interval shows that the min-max over $J$ is at most $-(\frac {1}{2(m+1)})^2$.
# NEET Chemistry Hydrogen Questions Solved Which of the following statement regarding ${\mathrm{H}}_{2}{\mathrm{O}}_{2}$ is wrong. (A) It is stable in acid medium (B) It acts as oxidising as well as reducing agent (C) It has zero dipole moment (D) Pure ${\mathrm{H}}_{2}{\mathrm{O}}_{2}$ is slightly acidic (C).  ${\mathrm{H}}_{2}{\mathrm{O}}_{2}$ is a polar compound and its dipole moment is not zero. It is stable in acid medium because its decomposition can be retarded by adding small amounts of acid. It also acts as oxidising and reducing agent. Pure ${\mathrm{H}}_{2}{\mathrm{O}}_{2}$ is slightly acidic, because its aqueous solution is slightly or weakly acidic. Difficulty Level: • 22% • 7% • 56% • 17%
## Thinking Mathematically (6th Edition) $\frac{y}{12}$ + $\frac{1}{6}$ = $\frac{y}{2}$ - $\frac{1}{4}$ First, we can do away with the fractions if we find the least common denominator and multiply each term of the equation by that number. We will write the number over one so that it is in fraction form. The least common denominator for the problem is 12, so we will multiply every term by $\frac{12}{1}$. Multiply every term by $\frac{12}{1}$ ($\frac{12}{1}$)($\frac{y}{12}$) + ($\frac{12}{1}$)($\frac{1}{6}$) = ($\frac{12}{1}$)($\frac{y}{2}$) - ($\frac{12}{1}$)($\frac{1}{4}$) Completing the multiplication, we getL $\frac{12y}{12}$ + $\frac{12}{6}$ = $\frac{12y}{2}$ - $\frac{12}{4}$ We can now simplify each fraction to get: y + 2 = 6y - 3 Subtract 6y from both sides of the equation. y - 6y + 2 = 6y - 6y - 3 Complete the arithmetic to get: -5y + 2 = -3 Now, subtract 2 from each side. -5y + 2 - 2 = -3 - 2 Complete the arithmetic. -5y = -5 Divide both sides by -5. $\frac{-5y}{-5}$ = $\frac{-5}{-5}$ Complete the arithmetic and get: y = 1 Checking our answer by substituting 1 back in to the original equation for y. $\frac{1}{12}$ + $\frac{1}{6}$ = $\frac{1}{2}$ - $\frac{1}{4}$ Rewrite each fraction as an equivalent fraction using the least common denominator of 12. $\frac{1}{12}$ + $\frac{2}{12}$ = $\frac{6}{12}$ - $\frac{3}{12}$ Perform the arithmetic and we get $\frac{3}{12}$ = $\frac{3}{12}$ Since out last line matches, we know that our solution of y = 1 is correct.
Weak solutions of a parabolic-elliptic type system for image inpainting ESAIM: Control, Optimisation and Calculus of Variations, Volume 16 (2010) no. 4, p. 1040-1052 In this paper we consider the initial boundary value problem of a parabolic-elliptic system for image inpainting, and establish the existence and uniqueness of weak solutions to the system in dimension two. DOI : https://doi.org/10.1051/cocv/2009032 Classification:  35D05,  68U10 Keywords: weak solutions, parabolic-elliptic system, image inpainting @article{COCV_2010__16_4_1040_0, author = {Jin, Zhengmeng and Yang, Xiaoping}, title = {Weak solutions of a parabolic-elliptic type system for image inpainting}, journal = {ESAIM: Control, Optimisation and Calculus of Variations}, publisher = {EDP-Sciences}, volume = {16}, number = {4}, year = {2010}, pages = {1040-1052}, doi = {10.1051/cocv/2009032}, zbl = {1205.35041}, mrnumber = {2744161}, language = {en}, url = {http://www.numdam.org/item/COCV_2010__16_4_1040_0} } Jin, Zhengmeng; Yang, Xiaoping. Weak solutions of a parabolic-elliptic type system for image inpainting. ESAIM: Control, Optimisation and Calculus of Variations, Volume 16 (2010) no. 4, pp. 1040-1052. doi : 10.1051/cocv/2009032. http://www.numdam.org/item/COCV_2010__16_4_1040_0/ [1] C. Ballester, M. Bertalmio, V. Caselles, G. Sapiro and J. Verdera, Filling-in by joint interpolation of vector fields and gray levels. IEEE Trans. Image Process. 10 (2001) 1200-1211. | Zbl 1037.68771 [2] M. Bertalmio, G. Sapiro, V. Caselles and C. Ballsester, Image Inpainting. Computer Graphics, SIGGRAPH (2000) 417-424. [3] M. Bertalmio, A. Bertozzi and G. Sapiro, Navier-Stokes, fluid-dynamics and image and video inpainting, in Proc. Conf. Comp. Vision Pattern Rec. (2001) 355-362. [4] F. Catte, P.L. Lions, J.M. Morel and T. Coll, Image selective smoothing and edge detection by nonlinear diffusion. SIAM. J. Num. Anal. 29 (1992) 182-193. | Zbl 0746.65091 [5] T. Chan and J. Shen, Variational restoration of nonflat image feature: Models and algorithms. SIAM J. Appl. Math. 61 (2000) 1338-1361. | Zbl 1011.94001 [6] T. Chan and J. Shen, Mathematical models for local nontexture inpaintings. SIAM J. Appl. Math. 63 (2002) 1019-1043. | Zbl 1050.68157 [7] T. Chan, S. Kang and J. Shen, Euler's elastica and curvature based inpaintings. SIAM J. Appl. Math. 63 (2002) 564-592. | Zbl 1028.68185 [8] T. Chan, J. Shen and L. Vese, Variational PDE models in image processing. Notices Am. Math. Soc. 50 (2003) 14-26. | Zbl 1168.94315 [9] L.C. Evans, Partial differental equations. American Mathematical Society (1998). | Zbl 1194.35001 [10] Z.M. Jin and X.P. Yang, Viscosity analysis on the BSCB models for image inpainting. Chinese Annals of Math. (Ser. A) (to appear). | Zbl pre05846732 [11] J.L. Lions, Quelques méthodes de résolution des problèmes aux limites non linéaries. Dunod (1969). | Zbl 0189.40603 [12] S. Masnou, Disocclusion: a variational approach using level lines. IEEE Trans. Image Process. 11 (2002) 68-76. [13] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Machine Intell. 12 (1990) 629-639. [14] X.C. Tai, S. Osher and R. Holm, Image inpainting using a TV-Stokes equation, in Image Processing based on partial differential equations, X.C. Tai, K.-A. Lie, T.F. Chan and S. Osher Eds., Springer, Heidelberg (2007) 3-22. | Zbl 1107.68518
## Mapping Language Models to Grounded Conceptual Spaces ### Roma Patel · Ellie Pavlick [ Abstract ] Mon 25 Apr 10:30 a.m. PDT — 12:30 p.m. PDT Abstract: A fundamental criticism of text-only language models (LMs) is their lack of grounding---that is, the ability to tie a word for which they have learned a representation, to its actual use in the world. However, despite this limitation, large pre-trained LMs have been shown to have a remarkable grasp of the conceptual structure of language, as demonstrated by their ability to answer questions, generate fluent text, or make inferences about entities, objects, and properties that they have never physically observed. In this work we investigate the extent to which the rich conceptual structure that LMs learn indeed reflects the conceptual structure of the non-linguistic world---which is something that LMs have never observed. We do this by testing whether the LMs can learn to map an entire conceptual domain (e.g., direction or colour) onto a grounded world representation given only a small number of examples. For example, we show a model what the word left" means using a textual depiction of a grid world, and assess how well it can generalise to related concepts, for example, the wordright", in a similar grid world. We investigate a range of generative language models of varying sizes (including GPT-2 and GPT-3), and see that although the smaller models struggle to perform this mapping, the largest model can not only learn to ground the concepts that it is explicitly taught, but appears to generalise to several instances of unseen concepts as well. Our results suggest an alternative means of building grounded language models: rather than learning grounded representations from scratch'', it is possible that large text-only models learn a sufficiently rich conceptual structure that could allow them to be grounded in a data-efficient way. Chat is not available.
# ☑ Passwords: You’re doing it wrong There are few technical topics about which there’s more FUD than picking a strong password. I’m getting a little sick of how much misinformation there is about passwords. More or less everyone who’s been online for any length of time knows that picking a “good” password is important. After all, it’s more or less the only thing which stands between a potential attacker and unlimited access to your online services. So, what constitutes “good”? Well, many, many services will recommend that you make your passwords at least 8 characters in length, avoid words that you’d find in a dictionary and make sure you include mixed case letters, numbers and symbols. Now, if you need a password which is both good and short then this is probably quite reasonable advice. If you picked 8 characters truly at random from all letters, numbers and symbols, that’s something like 4 x 1015 (four million billion) passwords to choose from, and that’s secure enough. However, these passwords have a bit of a drawback — humans are bloomin’ awful at remembering things like this: 1Xm}4q3=. Hence, people tend to start with something which is a real word — humans are good at remembering words. Then apply various modifications like converting letters to similar digits. This process gives people a warm, fuzzy feeling because their password ends up with all these quirky little characters in it, but in reality something like sH33pd0g is a lot worse than just picking random characters. Let’s say there are about 10,0001 common words and that each letter can be one of four variants2. That gives around 6 x 108 (six hundred million) possibilities. This might seem like a lot, but if someone can check a thousand passwords every second then it’ll only take them about a week to go through them all. So, if that’s a bad way to pick memorable passwords, what’s a good way? Well, there are a few techniques which can help if you still need a short password. One of the best I know of is to pick a whole sentence and simply include the first letter of each word. Then mix up the case and swap letters for symbols as already discussed. This at least keeps the password away from being a dictionary word that can be easily guessed, although it’s still a pain to have to remember which letters have been swapped for which numbers and so on. But wait, do we really need such a short password? If we make it longer, perhaps we don’t need to avoid dictionary words at all? After all, the best password is one that is both secure and memorable, so we don’t have to write it down on a Post-It note3 stuck on our computer. Fortunately the repository of all knowledge xkcd has the answer4! As it turns out, picking four words more or less at random is much better from a security point of view and, for most people at least, is probably quite a bit easier to remember. Using the same 10,000 word estimate from earlier, picking four words entirely at random gives us 1 x 1016 (ten quadrillion, or ten million billion) possibilities. At a thousand per second this would take over 300,000 years to crack. The beauty is that you don’t need to remember anything fancy like strange numbers of symbols — just four words. Is it any more memorable? Well, a random series of characters doesn’t give you anything to get to grips with — it’s a pure memory task, and that’s tough for a lot of people. However, if you’ve got something like enraged onlooker squanders dregs or scandalous aardvark replies trooper then you can immediately picture a scene to help you remember it. So let’s stop giving people daft advice about passwords and certainly let’s remove all those irritating checks on passwords to make sure they’re “secure”, when in reality the net effect of forcing people to use all these numbers and strange symbols more or less the opposite. Most of all, let’s make sure that our online services accept passwords of at least 128 characters so that people can pick properly good passwords, not the ones that everyone’s been browbeaten into believing are good. As an aside, even with this scheme it’s still really important to pick words at random and that’s something humans don’t do very well. Inspired by the xkcd comic I linked earlier, this site was set up to generate random passwords. Alternatively, if you’re on a Linux system you could use a script something like this to pick one for you5: 1 2 3 4 5 6 7 8 9 #!/usr/bin/python import random import string with open("/usr/share/dict/words", "r") as fd: words = set("".join(c for c in line.strip().lower() if c in string.ascii_lowercase) for line in fd) print("Choosing 10 phrases from dict of %d words\n" % (len(words),)) print("\n".join(" ".join(random.sample(words, 4)) for i in range(10))) One final point. You might be thinking it’s going to be a lot slower to type four whole words than eight single characters, but actually it’s often almost as fast once you don’t need to worry about fiddling around with the SHIFT key and all those odd little symbols you never normally use, like the helium-filled snake (~)6 and the little gun (¬)7. Especially on a smartphone keyboard. Let’s face it, if you’ve just upgraded your phone and are seeking help with it in some online forum, there isn’t a way to ask “how do I get the little floating snake?” without looking like a bit of an idiot — clearly this is the most important advantage of all. 1. While it’s true that the OED has something like a quarter of a million words, the average vocabulary is typically significantly smaller. 2. So o could also be O, 0 or (), say. 3. Although the dangers of writing passwords down is generally heavily overestimated by many people. I’m not saying it’s a good idea, but having a really good password on a scrap of paper in your wallet, say, is still a lot better for most average users than a really poor password that you can remember. 4. Note that Randall’s figures differ from mine somewhat and are probably rather more accurate — I was just plucking some figures in the right order of magnitude out of the air to illustrate the issues involved. 5. It’s written to run on a Linux system but about the only thing it needs that’s platform-specific is the filename of a word list. 6. Yes, yes, I know it’s called a tilde really. 7. OK, fine, it’s a logical negation symbol. Still looks like a little gun to me; or maybe hockey stick; or an allen key; or the edge of a table; a table filled with cakes… Yes, I like that one best. Hm, now I’m hungry. 11 Jul 2013 at 3:55PM by Andy Pearce in Software  | Photo by Jose Fontano on Unsplash  | Tags: security python
# Tag Info ## Hot answers tagged vlf 6 If you zoom in on these signals, there are labels describing what the stations are. Example, many of the signals around 20kHz are military: If you mouse over the labels, it will expand them. You can zoom with the mouse wheel or with the controls in the "Waterfall view" section. 6 There are plenty of ways to make antennas smaller. Unfortunately, all of these things also make the antenna less efficient. Economics provides a compelling proof: if efficient, small, low-frequency antennas could be realized, why do AM broadcast stations erect huge, expensive towers to support their enormous antennas? Antenna inefficiency isn't necessarily ... 5 I don't think it will work to add suppression to the fence. The fence charger is designed to support heavily reactive loads by generating pulses every second or two. The pulses are formed by dumping the charge of a capacitor bank through a step up transformer. Any additional reactance will be a very minor load delta. My experience (several miles of e fence ... 4 "Ferrite" is not a fungible material. There are many kinds of ferrite materials, each with very different properties. To make a comparison, you need the material datasheets, which answer questions like: What is the relative permittivity? What is the loss at the frequencies you intend to use them? What is the Curie temperature of the material? You can see ... 4 At 4 kHz you could use your laptop as a rudimentary two-channel oscilloscope. First find out if it has a useful two-channel input sound card, and find a way of recording a stereo 16-bit WAV and processing it. Octave or Python would work, or you could write your own. Then build a circuit like this, adjusting parameters to suit: simulate this circuit –... 4 A squarish shape antenna is the best sender/receiver There is no practical basis for this assertion. Wavelength of the 15Khz [SIC] frequency is around 14990 meter [SIC] The formula for the reasonable approximation of wavelength is $$\lambda (in\space meters) = \frac{300}{Frequency\space in\space MHz} \tag 1$$ Since 15 kHz is 0.015 MHz, it has a ... 3 Below is a rough estimate of the near fields calculated by NEC4.2 for an elevation of 3 meters above level Earth from a transmit antenna and system as generally described in the OP, and the followup comments of the OP. "Use with due diligence." 3 Estimating the near field is best done with modeling software or empirical measurement. The details of the antenna construction and environment (the ground and any nearby conductive objects in particular) can significantly impact the near field. That said, 100W is not much, and often regulations specify that no particular evaluation need be done if the ... 3 display_qt is a GNU Radio example program. It does not display any actually existing electromagnetic signal but internally generates a 1500 Hz sinusoid. The code creates a sine source and a noise source, // Source will be sine wave in noise src0 = analog::sig_source_f::make(rate, analog::GR_SIN_WAVE, 1500, 1); src1 = analog::noise_source_f::make(analog::... 3 The point of a huge antenna is efficiency. Ground rods alone will result in a very small signal being received, and high transmitter mismatch. VLF antennas are massive and require both very large grounding radials as well as antennas. The links from the VLF page at Wikipedia, especially the Long Wave Radio Club of America and K4NYW's page on vintage Navy ... 3 Your method only crudely measures the magnitude of the feedline impedance, without taking into account the fact that it is probably a complex value, with a phase shift between voltage and current. Also, the output impedance of most modern audio power amplifiers is very low — usually way less than an ohm. I'm surprised that you get any useful answers ... 1 I want to add to Phils answer. The VLF around 3-30 kHz are used for submarine communication. From WikiPedia: VLF radio waves (3–30 kHz) can penetrate seawater to a depth of approximately 20 meters. Hence a submarine at shallow depth can use these frequencies. A vessel more deeply submerged might use a buoy equipped with an antenna on a long cable. The ... 1 See the US allocation of frequencies for the source data for some of this. Right now, the current users of VLF/LF are either maritime mobile, radio navigation, time frequencies, or aeronautical use. There are a few stations that have experimental privileges, as ARRL mentions. The bandwidth for these bands drops dramatically, often only a few kilohertz. ... 1 If you're going to be using this antenna strictly for reception, then increasing the windings is fine, but as the previous answer stated you're on increasing inductance, and when you increase inductance, you are by nature adding more radiation resistance. So, for transmitting that is not ideal at the wavelengths of VLF. These are small loops and can only ... Only top voted, non community-wiki answers of a minimum length are eligible
# [OS X TeX] Square cells in tabular Will Robertson will at guerilla.net.au Thu Dec 23 04:23:22 EST 2004 Hi all Arno contacted me off-list about some help with this problem, and I've posted my solution here for all to see. Hope it's still of help and sorry about the delay. Regards, Will === snip LaTeX document from here to end === \documentclass{article} \usepackage{array,calc} \setlength\parindent{0pt} \setlength\parskip{1em} \begin{document} \section*{Square cells in a tabular} Arno Kruse asked if it was possible to create a tabular with square cells. The solution involves some use of the \textsf{array} package, uses several \LaTeX\ length commands and the \textsf{calc} package. I thought that the curious might like to look at my proposal for interest in the solution, critique of my coding style, or (I hope!) to learn a for further explanation. First, set every column to some width (I have used 2\,em in the current font). This is done with the \verb|p{...}| column specifier. \newlength\squarecell \setlength\squarecell{2em} \begin{tabular}{|p{\squarecell}|p{\squarecell}|} \hline 1 & 2 \\ \hline 3 & 4 \\ \hline \end{tabular} If you measure the widths of the cells, however, you'll notice that they are in fact \emph{larger} than 2\,em since the tabular environment inserts space in between each column. (This is controlled by the \verb|\tabsepcol| length, but I leave it alone in case other tabulars are being used in the document.) The intercolumn space may be suppressed by using the \verb|@{}| column specifier. Now each cell is exactly \verb|\squarecell|. I have also added a centring command to each cell of the tabular. \newcolumntype{C}{@{}>{\centering\arraybackslash}p{\squarecell}@{}} \begin{tabular}{|C|C|} \hline 1 & 2 \\ \hline 3 & 4 \\ \hline \end{tabular} Now we need to set the height of each cell equal to the width. This can be achieved by inserting a strut (a zero width rule) of a certain height and a certain depth. First, I measure the height of a left parenthesis (\verb|(|') to get the approximate height required to vertically centre the text. Then I subtract this value from the square cell size and divide by two to get the surrounding space. \newlength\fontheight \newlength\surroundcellheight \settoheight\fontheight{(} \setlength\surroundcellheight{(\squarecell - \fontheight)/2} \newcolumntype{S}{ @{} >{\centering\arraybackslash} p{\squarecell} <{\rule{0pt}{\fontheight + \surroundcellheight}\rule[- \surroundcellheight]{0pt}{\surroundcellheight}} % change 0pt to 1pt to see what the rules do. @{} } \begin{tabular}{|S|S|} \hline 1 & 2 \\ \hline 3 & 4 \\ \hline \end{tabular} \newpage \section*{Example} So now that's done, a proper example. I define a new environment for setting these special tables called \texttt{squarecells}. \newenvironment{squarecells}[1] {\setlength\squarecell{2em} % set the cell dimension to 2em in the current font \settoheight\fontheight{(} \setlength\surroundcellheight{(\squarecell - \fontheight)/2} \begin{tabular}{#1}} {\end{tabular}} \hfill \begin{minipage}{0.3\textwidth} \large \begin{squarecells}{|S|S|S|S|} \hline 17 & & 0 & 11 \\ \hline & & 16 & 6 \\ \hline & 4 & & \\ \hline 2 & 16 & & 3 \\ \hline \end{squarecells} \end{minipage} \hfill \begin{minipage}{0.45\textwidth} \begin{verbatim} \begin{squarecells}{|S|S|S|S|} \hline 17 & & 0 & 11 \\ \hline & & 16 & 6 \\ \hline & 4 & & \\ \hline 2 & 16 & & 3 \\ \hline \end{squarecells} \end{verbatim} \end{minipage} \hfill\null \normalsize So, in order to get this working in your own document, you'll need something along the lines of the following into your preamble. (I'm too lazy to make a proper package for it.) It doesn't work perfectly, but it scales reasonably well so I hope it may be of use to at least Arno. \begin{verbatim} \newlength\squarecell \newlength\fontheight \newlength\surroundcellheight \newenvironment{squarecells}[1] {\setlength\squarecell{2em} \settoheight\fontheight{(} \setlength\surroundcellheight{(\squarecell - \fontheight)/2} \begin{tabular}{#1}} {\end{tabular}} \end{verbatim} \end{document} --------------------- Info --------------------- Mac-TeX Website: http://www.esm.psu.edu/mac-tex/ & FAQ: http://latex.yauh.de/faq/ TeX FAQ: http://www.tex.ac.uk/faq List Post: <mailto:MacOSX-TeX at email.esm.psu.edu> `
# Dual operator relationship with complex conjugate. Let $V$ be a $n$ dimensional vector space spanned by $\{e_{i}\}_{i=1}^{n}$. Let $T:V\to V$ be a linear operator with matrix transformation $A$. Is there any relationship between the dual operator $T^{*}:V^{*}\to V^{*}$, and the complex conjugate $A^{*}$ of $A$? - –  wj32 Nov 9 '12 at 23:33 So in part (2) where they explain it, is it just the regular transpose that corresponds to $T^{*}$, not the complex conjugate? –  Kyle Schlitt Nov 9 '12 at 23:35 I've heard of "dual space" and "dual basis" but not "dual operator". Is there any link to it or short explanation? –  DonAntonio Nov 9 '12 at 23:36 @DonAntonio: Usually we define it as $(T^*f)(v)=fTv$. –  wj32 Nov 9 '12 at 23:38 Thanks, @wj32... –  DonAntonio Nov 9 '12 at 23:39 See Stone and Goldbart, Mathematics for Physics, page 753 for a brief explanation. The dual operator is linear, so it does not have a simple relationship to the complex conjugate of an operator. -
# Field lines question A friend of mine asked me this question, that is asked in an entrance examination. It shouldn't be that difficult, but I fail to find a rigorous answer for it. The figure shows three charges, that are fixed on a line, so that they can't move. Now the question is, if $q$ is an amount of positive charge, then what are the charge of $A$, $B$ and $C$? It would be nice if anyone could give a rigorous answer, because it seems at the moment that this question has to be solved by intuition, which is strange for an entrance examination. - Electric fields go from what kind of charge to what kind of charge? –  Michael Brown Jun 1 '13 at 12:45 Also: think about the total charge. –  Vibert Jun 1 '13 at 12:46 BTW, read the homework policy if you are confused about the homework tag. It doesn't just apply to assigned homework problems. –  Michael Brown Jun 1 '13 at 12:47 @Michael Brown: this question can't be answered just by recalling to which direction the field lines point. You can only exclude one answer by doing that. Thanks for the homework tag tip though, I wasn't familiar with that. –  yarnamc Jun 1 '13 at 19:22 It's not intuition.It's a problem which can be solved. First we identify the sign of the charges. By seeing the direction of field lines we can see that the sign of charges. Field lines originate from $+ve$ and end at $-ve$ charges. Next by Definition of Flux, The number of field lines cutting per unit surface surface . And Gauss' Law The flux through a closed surface is equal to $\dfrac {Q_{enclosed}}{\epsilon_0}$ Now create imaginary spheres of same surface area enclosing each charge at a time, calculate flux and use Gauss' Law, and you'll get the ratio of charges . - +1 And for a test-taking situation, you can simplify it a little further. Just counting the field lines shows (via Guass's Law) that the central charge is larger than the side charges, and there's only one option satisfying that requirement. No need to wait to calculate ratios! –  Mike Jun 1 '13 at 13:16 @Mike: This being a HW question, I prefer not to provide full solution till the answer. As stated by DavidZ –  Mr.ØØ7 Jun 1 '13 at 13:17 The problem is nicely stated as it does not require to remember the direction of field lines between charges. –  babou Jun 1 '13 at 15:50 Well, the problem does not display all field lines. There can be lines going from A to C without passing through B. Also, there can be lines going from A to infinity. As a consequence, you cannot really count lines and use Gauss' law. –  fffred Jun 1 '13 at 16:29 Thanks, I was not thinking about Gauss law, because this question can normally be answered with knowledge of secondary school, because it comes from an entrance examination and Gauss law isn't taught in secondary school here. –  yarnamc Jun 1 '13 at 19:19
# How to change an int value in the Unity editor The player of my game can collect logs and can use them for hold on the fires. Each fire has its own status. So one is off and the other one is for 50% on. I created a public int inside my script that is linked to my gameObject. Because it is public, I can change the value also in the Unity editor. See image: The int is linked to a int parameter of "Intensity". And use this as follow: public Animator anim; public int campfireStatus; public float timer; public float beforeDecrease; // timer for campfires bool inTrigger = false; // Use this for initialization void Start () { anim = GetComponent<Animator>(); } void OnTriggerEnter2D(Collider2D other) { if (other.gameObject.tag=="Player"){ inTrigger = true; } } void OnTriggerExit2D(Collider2D other) { if (other.gameObject.tag=="Player"){ inTrigger = false; } } // Update is called once per frame void Update () { anim.SetInteger("Intensity", campfireStatus); //Debug.Log (campfireStatus); if(inTrigger && Input.GetKeyDown(KeyCode.E)) { if (WoodBehaviour.count > 0) { campfireStatus++; } } timer+= Time.deltaTime; if (timer > beforeDecrease) { timer = 0; if (campfireStatus !=0){ campfireStatus--; Debug.Log ("Decrease"); } } } But the added value inside the Unity editor will not be use by the script. It will start from the default state, as you can see here: Each transition, like the one is selected, will have the condition: intensity is equals to 1 till 4. So I look for a solution, so that I am be able to change which start animation should fired for that object, by changing the Campfire Status inside the Unity editor with the values of 1 till 4. Look out for some advice! • try putting in 1. since it will start from defaul animation, a value of 2 won't trigger your animator as it requires to be 1 to go into the next state. if you want to skip 1 state you should put a transition from default to the desired state – Leggy7 Apr 3 '15 at 11:23 • I don't know what's the exit condition from the second animation, but it looks like it exits, go back to idle and cannot trigger with 2. try putting >= 1 for first transition, >= 2 for transition from 2 to 3 and so on – Leggy7 Apr 3 '15 at 12:38 • The conditional statement is equal to – Caspert Apr 3 '15 at 12:46 • I have not unity here at work but can't you spec just >? in that case to have >= 2 just put >1 – Leggy7 Apr 3 '15 at 12:47 ## 1 Answer According to what evaluated in the comment section, your problem was that you had to set transition conditions to be greater then some_value instead of equal.
Choose the most appropriate alternative from the options given below to complete the following sentence: Despite several ________ the mission succeeded in its attempt to resolve the conflict. 1. attempts 2. setbacks 3. meetings 4. delegations Despite several $\color{purple}{\underline{setbacks}}$ the mission succeeded in its attempt to resolve the conflict 540 points 3 3
# Variation of resistivity with temperature and current voltage relation ## (7) variation of resistivity with temperature • Resistance and hence resistivity of conductor depends on mubers of factors • One of the most important factors is dependence of resistance of metals on temperature • Resistivity of the metallic conductor increases with increase on temperature • when we increase the temperature of the metallic conductor,its constituent atoms vibrate with greater amplitudes then usual.This results to the more frequent collison between ions and electrons • As a result average time between the two successive collision decreases resulting the decrease in drift velocity • Thus increase collison with the increase in tempearture results in increase resistivity • For small temperature variations ,resistivity of the most of the metals varies according to the following relations ρ(T)=ρ(T0)[1 + α(T-T0)]                    (14) Where ρ(T) and ρ(T0) are the resistivies of the material at temperature T and T0 respectively and α is the constant for given materail and is known as coefficient of resistivity. • Since resistance of a given conductors depends on the length and crosssectional area of the conductor through the relation R=ρl/A Hence temperature variation of the resistance can be given as R=R(T0)[1 + α(T-T0)]                    (15) • Resistivity of alloys also increase with temperature but this increase is much small as compared to metals • Resistivies of the non-metals decreases with increase in temperature .This is because at high temperature more electrons becomes avialable for conduction as they set themselves loose from atoms and hence temperature coefficient of resistivity is negative for non-metals • A similar behavior occurs in case of semi-conductors .temperature coefficient of resistivity is negative for semi-conductors and its value is often large for a semi-conductor materials ## (8) Current Voltage relations • We know that current through any electrical device such as resistors depends on potential defference between the terminals • Devices obeying ohm's law follow a linear relationship between current following and potential applied where current is directly proportional to voltage applied .Graphical relation between V and I is shown below in figure • Graph for a resistor obeying ohm's law is a straight line through the origin having some finite slope • There are many electrical devices that does not obey the ohm's law and current may depends on voltage in more complicated ways.Such devices are called non-ohmic devices for examples vaccum tubes,semiconductor diodes ,transistors etc • Consider the case of a semi conductor junction diode which are used to convert alternating current to direct current and are used to perform variety of logic functions is a non=ohmic device • Graphical voltage relation for a diode is shown below in the figure • Figure clearly shows a non linear depeence of current on voltage and diode clearly does not follow the ohm's w • When a device does not follow obey ohm's law,it has non linear voltage -current relation and the quantity V/I is no longer a constant however ratio is still known as resistance which now varies with current • In such cases we define a quantity dV/dI known as dynamic resistance which expresses the relation between smaal change in current and resulting change in voltage • Thus for non-ohmic electrical devices resistance is not constant for different values of V and I
## User: aangajala Reputation: 0 Status: New User Location: Last seen: 3 months, 1 week ago Joined: 1 year, 4 months ago Email: a********@mytu.tuskegee.edu #### Posts by aangajala <prev • 15 results • page 1 of 2 • next > 1 191 views 1 ... I apologize for this,thanks so much for your time.  ... written 3 months ago by aangajala0 1 191 views 1 ... You just made my day saying so :) I did a lot of work yesterday with out removing. So, I was worried. ... written 3 months ago by aangajala0 1 191 views 1 ... What is the difference between , using group and interaction? Is it going to give same results? dds <- DESeqDataSetFromMatrix(countData = cts,                               colData = coldata,                               design = ~ type + ER) dds$group <- factor(paste0(dds$type, dds$ER)) des ... written 3 months ago by aangajala0 1 answers 191 views 1 answers ... Thank you. What will happen if I do not remove. Then it will just overwrite the dds, so basically it is new dds right? or does it append the dds , I mean mix with the previous one? ... written 3 months ago by aangajala0 1 answers 191 views 1 answers ... Here is my Coldata, I am trying to verify what I am doing is correct? ... written 3 months ago by aangajala0 1 answers 191 views 1 answers ... > head(coldata) sampletype race androgen_receptor_statu TCGA.3C.AAAU.01A.11R.A41G.13 Primary Tumor white positive TCGA.3C.AALI.01A.11R.A41G.13 Primary Tumor black or african american negative TC ... written 3 months ago by aangajala0 1 answer 191 views 1 answer ... Question #1: I tried to look the documentation for interaction.But, Unable to understand difference between groups and interaction.So difference between following codes, Is it basically same, different ways of doing it? dds$group <- factor(paste0(dds$race, dds$sampletype)) design(dds) <- ~ g ... written 3 months ago by aangajala0 • updated 3 months ago by Michael Love17k 1 118 views 1 ... Sorry... design(ddsMF) <- formula(~ type + condition) My question is how many variables can we give. Here in this code only two variables are used? Or is there a limit? ... written 3 months ago by aangajala0 • updated 3 months ago by Michael Love17k 1 118 views 1 ... Hello, I have multiple conditions, such as condition1( Normal and Cancer), condition2( age bellow 50 and above50), so I want to see DEGs of age in Normal samples and cancer. So for this do I have produce dds twice by separating coldata , one for normal and one for primary. And then use condition fo ... written 3 months ago by aangajala0 • updated 3 months ago by Michael Love17k 1 1.1k views 1 Comment: C: Normalization using DESeq ... Okay thank you so much. ... written 16 months ago by aangajala0 #### Latest awards to aangajala Popular Question 14 months ago, created a question with more than 1,000 views. For Normalization using DESeq Content Help Access Use of this site constitutes acceptance of our User Agreement and Privacy Policy.
to Scalable Global Optimization via Local Bayesian Optimization Bayesian optimization has recently emerged as a popular method for the sample-efficient optimization of expensive black-box functions. However, the application to high-dimensional problems with several thousand observations remains challenging, and on difficult problems Bayesian optimization is often not competitive with other paradigms. In this paper we take the view that this is due to the implicit homogeneity of the global probabilistic models and an overemphasized exploration that results from global acquisition. This motivates the design of a local probabilistic approach for global optimization of large-scale high-dimensional problems. We propose the $\texttt{TuRBO}$ algorithm that fits a collection of local models and performs a principled global allocation of samples across these models via an implicit bandit approach. A comprehensive evaluation demonstrates that $\texttt{TuRBO}$ outperforms state-of-the-art methods from machine learning and operations research on problems spanning reinforcement learning, robotics, and the natural sciences. Learning Search Space Partition for Black-box Optimization using Monte Carlo Tree Search High dimensional black-box optimization has broad applications but remains a challenging problem to solve. Given a set of samples $\{\vx_i, y_i\}$, building a global model (like Bayesian Optimization (BO)) suffers from the curse of dimensionality in the high-dimensional search space, while a greedy search may lead to sub-optimality. By recursively splitting the search space into regions with high/low function values, recent works like LaNAS shows good performance in Neural Architecture Search (NAS), reducing the sample complexity empirically. In this paper, we coin LA-MCTS that extends LaNAS to other domains. Unlike previous approaches, LA-MCTS learns the partition of the search space using a few samples and their function values in an online fashion. While LaNAS uses linear partition and performs uniform sampling in each region, our LA-MCTS adopts a nonlinear decision boundary and learns a local model to pick good candidates. If the nonlinear partition function and the local model fits well with ground-truth black-box function, then good partitions and candidates can be reached with much fewer samples. LA-MCTS serves as a \emph{meta-algorithm} by using existing black-box optimizers (e.g., BO, TuRBO) as its local models, achieving strong performance in general black-box optimization and reinforcement learning benchmarks, in particular for high-dimensional problems. Adaptive Local Bayesian Optimization Over Multiple Discrete Variables In the machine learning algorithms, the choice of the hyperparameter is often an art more than a science, requiring labor-intensive search with expert experience. Therefore, automation on hyperparameter optimization to exclude human intervention is a great appeal, especially for the black-box functions. Recently, there have been increasing demands of solving such concealed tasks for better generalization, though the task-dependent issue is not easy to solve. The Black-Box Optimization challenge (NeurIPS 2020) required competitors to build a robust black-box optimizer across different domains of standard machine learning problems. This paper describes the approach of team KAIST OSI in a step-wise manner, which outperforms the baseline algorithms by up to +20.39%. We first strengthen the local Bayesian search under the concept of region reliability. Then, we design a combinatorial kernel for a Gaussian process kernel. In a similar vein, we combine the methodology of Bayesian and multi-armed bandit,(MAB) approach to select the values with the consideration of the variable types; the real and integer variables are with Bayesian, while the boolean and categorical variables are with MAB. Empirical evaluations demonstrate that our method outperforms the existing methods across different tasks. High-Dimensional Bayesian Optimization with Manifold Gaussian Processes Bayesian optimization (BO) is a powerful approach for seeking the global optimum of expensive black-box functions and has proven successful for fine tuning hyper-parameters of machine learning models. The Bayesian optimization routine involves learning a response surface and maximizing a score to select the most valuable inputs to be queried at the next iteration. These key steps are subject to the curse of dimensionality so that Bayesian optimization does not scale beyond 10--20 parameters. In this work, we address this issue and propose a high-dimensional BO method that learns a nonlinear low-dimensional manifold of the input space. We achieve this with a multi-layer neural network embedded in the covariance function of a Gaussian process. This approach applies unsupervised dimensionality reduction as a byproduct of a supervised regression solution. This also allows exploiting data efficiency of Gaussian process models in a Bayesian framework. We also introduce a nonlinear mapping from the manifold to the high-dimensional space based on multi-output Gaussian processes and jointly train it end-to-end via marginal likelihood maximization. We show this intrinsically low-dimensional optimization outperforms recent baselines in high-dimensional BO literature on a set of benchmark functions in 60 dimensions. Scaling Gaussian Process Regression with Derivatives Gaussian processes (GPs) with derivatives are useful in many applications, including Bayesian optimization, implicit surface reconstruction, and terrain reconstruction. Fitting a GP to function values and derivatives at $n$ points in $d$ dimensions requires linear solves and log determinants with an ${n(d 1) \times n(d 1)}$ positive definite matrix-- leading to prohibitive $\mathcal{O}(n 3d 3)$ computations for standard direct methods. We propose iterative solvers using fast $\mathcal{O}(nd)$ matrix-vector multiplications (MVMs), together with pivoted Cholesky preconditioning that cuts the iterations to convergence by several orders of magnitude, allowing for fast kernel learning and prediction. Our approaches, together with dimensionality reduction, allows us to scale Bayesian optimization with derivatives to high-dimensional problems and large evaluation budgets. Papers published at the Neural Information Processing Systems Conference.
# American Institute of Mathematical Sciences 2019, 15: 209-236. doi: 10.3934/jmd.2019019 ## The local-global principle for integral Soddy sphere packings Department of Mathematics, Rutgers University, 110 Frelinghuysen Rd., Piscataway, NJ 08854, USA Received  November 08, 2017 Revised  March 23, 2019 Published  August 2019 Fund Project: The author is partially supported by an NSF CAREER grant DMS-1254788 and DMS-1455705, an NSF FRG grant DMS-1463940, an Alfred P. Sloan Research Fellowship, and a BSF grant Fix an integral Soddy sphere packing $\mathscr{P}$. Let $\mathscr{B}$ be the set of all bends in $\mathscr{P}$. A number $n$ is called represented if $n\in \mathscr{B}$, that is, if there is a sphere in $\mathscr{P}$ with bend equal to $n$. A number $n$ is called admissible if it is everywhere locally represented, meaning that $n\in \mathscr{B}( \operatorname{mod} q)$ for all $q$. It is shown that every sufficiently large admissible number is represented. Citation: Alex Kontorovich. The local-global principle for integral Soddy sphere packings. Journal of Modern Dynamics, 2019, 15: 209-236. doi: 10.3934/jmd.2019019 ##### References: show all references
# Math Help - Solve this polynomial inequality algebraically (number line)? (7x+2) (1-x)(2x-5)>0 1. ## Solve this polynomial inequality algebraically (number line)? (7x+2) (1-x)(2x-5)>0 Could you explain it to me thoroughly please So far, I have: -2/7<x<1 or -2.5<x<1 or x>1 Am I wrong? 2. Solve this polynomial inequality algebraically (number line)? (7x+2) (1-x)(2x-5)>0 set each factor equal to 0 and solve for x ... you should get $x = -\frac{2}{7}$ , $x = 1$ , and $x = \frac{5}{2}$ plot these three x-values on a number line in correct order. the three plotted values of x break up the number line into four intervals ... $x < -\frac{2}{7}$ $-\frac{2}{7} < x < 1$ $1 < x < \frac{5}{2}$ $x > \frac{5}{2}$ pick any single value in each interval, substitute it into the original inequality, and see if the result makes the original inequality true or false. if true, then all values in that interval will make the inequality true ... the interval is part of the solution set. if false, reject that interval as part of the solution set. 3. A quick way to do that is to recognize that x- a is negative if x< a, positive if x> a. In (7x+2) (1-x)(2x-5)= (7x+2)(-1)(x-1)(2x-5)>0 there are three factors which are 0 at x= -2/7, x= 1, and x= 5/2 and a single (-1) factor. Notice that I changed 1-x to (1)(x- 1) to get the form "x-a". Start with x< -2/7. Then x is less than all three of those numbers so all 3 factors are negative and we have (-1) which is always negative for 4 negative factors. Since there are an even number of negatives, the product is positive. Now jump past -2/7: -2/7< x< 1. The factor 7x+2 changes to positive while the other three factors remain negative. Since there are an odd number of negatives, the product is negative. Jump past 1 to 1< x< 5/2. The factor x-1 changes to positive so now we have two negative factors. Since there are an even number of negatives, the product is positive. Finally jump past 5/2 to 5/2< x. Now all the factors except (-1) are positive so we have only one negative factor. Since there are an odd number of negatives, the product is negative. Summary: the product is positive for x< -2/7 and for 1< x< 5/2. This works nicely when the function is a polynomial that can be factored or a rational function in which the numerator and denominator can both be factored. Skeeter's method, choose one point in each interval, will work for all kinds of functions. 4. so are my answers right or wrong? I don't get it... 5. oh and sorry, the question was actually: (7x+2) (1-x) (2x+5) and my answers were -2/7<x<1 or -2.5<x<1 or x>1 6. Originally Posted by skeske1234 oh and sorry, the question was actually: (7x+2) (1-x) (2x+5) and my answers were -2/7<x<1 or -2.5<x<1 or x>1 sorry ... if the inequality is (7x+2) (1-x) (2x+5) > 0 , then your solution set is incorrect. the correct solution set is $x < -\frac{5}{2}$ or $-\frac{2}{7} < x < 1 $ 7. Originally Posted by skeeter sorry ... if the inequality is (7x+2) (1-x) (2x+5) > 0 , then your solution set is incorrect. the correct solution set is $x < -\frac{5}{2}$ or $-\frac{2}{7} < x < 1 $ ok, but could you explain to me how you got your answers using the algebraic method? please 8. one more time ... $(7x+2) (1-x) (2x+5) > 0$ 1. set each factor equal to 0 and solve for x. this time, you should get $x = -\frac{2}{7}$ , $x = 1$ , and $x = -\frac{5}{2}$ 2. plot these three x-values on a number line in correct order. the three plotted values of x break up the number line into four intervals ... $x < -\frac{5}{2}$ $-\frac{5}{2} < x < -\frac{2}{7}$ $-\frac{2}{7} < x < 1$ $x > 1$ 3. pick any single value in each interval, substitute it into the original inequality, and see if the result makes the original inequality true or false. if true, then all values in that interval will make the inequality true ... the interval is part of the solution set. if false, reject that interval as part of the solution set. follow the above directions carefully, and you will find the solution set. 9. A quick way on solving inequalities like this one is the following: locate critical points then pick every factor and construct the following table, $\begin{array}{*{20}c} {}&\vline &{\left( {-\infty ,-\dfrac{5} {2}} \right)} &\vline & {\left( {-\dfrac{5} {2},-\dfrac{2} {7}} \right)} &\vline & {\left( {-\dfrac{2} {7},1} \right)} &\vline & {(1,\infty )}\\ \hline {7x + 2}&\vline&-&\vline &-&\vline &+&\vline&+\\ \hline {1 - x}&\vline&+&\vline &+&\vline &+&\vline&-\\ \hline {2x + 5}&\vline&-&\vline&+&\vline&+&\vline&+\\ \hline{}&\vline&+&\vline&-&\vline&+&\vline&- \end{array}$ So, the solution is $S=\left( -\infty ,-\frac{5}{2} \right)\cup \left( -\frac{2}{7},1 \right).$ 10. Originally Posted by skeeter one more time ... $(7x+2) (1-x) (2x+5) > 0$ 1. set each factor equal to 0 and solve for x. this time, you should get $x = -\frac{2}{7}$ , $x = 1$ , and $x = -\frac{5}{2}$ 2. plot these three x-values on a number line in correct order. the three plotted values of x break up the number line into four intervals ... $x < -\frac{5}{2}$ $-\frac{5}{2} < x < -\frac{2}{7}$ $-\frac{2}{7} < x < 1$ $x > 1$ 3. pick any single value in each interval, substitute it into the original inequality, and see if the result makes the original inequality true or false. if true, then all values in that interval will make the inequality true ... the interval is part of the solution set. if false, reject that interval as part of the solution set. follow the above directions carefully, and you will find the solution set. ok, so if all answers give me 0, does that mean it's true? and all are right? 11. Originally Posted by skeske1234 ok, so if all answers give me 0, does that mean it's true? and all are right? no 12. Do you understand what this question is asking? "If all answers give me 0" then its NOT "> 0" which is what you initially asked. 13. ok, so it's all wrong.
## Michigan Tech Publications #### Title An optimal upper bound on the tail probability for sums of random variables Article 10-22-2019 #### Department Department of Mathematical Sciences #### Abstract Let \$s\$ be any given real number. An explicit construction is provided of random variables (r.v.'s) \$X\$ and \$Y\$ such that \$\sup{P}(X+Y\ge s)\$ is attained, where the \$\sup\$ is taken over all r.v.'s \$X\$ and \$Y\$ with given distributions. #### Publication Title Theory of Probability & Its Applications COinS
# Properties Label 119952.ct Number of curves $1$ Conductor $119952$ CM no Rank $1$ # Related objects Show commands: SageMath sage: E = EllipticCurve("ct1") sage: E.isogeny_class() ## Elliptic curves in class 119952.ct sage: E.isogeny_class().curves LMFDB label Cremona label Weierstrass coefficients j-invariant Discriminant Torsion structure Modular degree Faltings height Optimality 119952.ct1 119952cm1 $$[0, 0, 0, -63, 154]$$ $$81648/17$$ $$5757696$$ $$[]$$ $$16128$$ $$0.012181$$ $$\Gamma_0(N)$$-optimal ## Rank sage: E.rank() The elliptic curve 119952.ct1 has rank $$1$$. ## Complex multiplication The elliptic curves in class 119952.ct do not have complex multiplication. ## Modular form 119952.2.a.ct sage: E.q_eigenform(10) $$q - q^{5} - q^{13} - q^{17} + O(q^{20})$$
The hadronic vacuum polarization with twisted boundary conditions The hadronic vacuum polarization with twisted boundary conditions Abstract The leading-order hadronic contribution to the anomalous magnetic moment of the muon is given by a weighted integral over the subtracted hadronic vacuum polarization. This integral is dominated by euclidean momenta of order the muon mass which are not available on current lattice volumes with periodic boundary conditions. Twisted boundary conditions can in principle help access momenta of any size even in a finite volume. We investigate the implementation of twisted boundary conditions both numerically (using all-mode averaging for improved statistics) and analytically, and present our initial results. \ShortTitle The hadronic vacuum polarization with twisted boundary conditions \FullConference31st International Symposium on Lattice Field Theory LATTICE 2013 July 29 - August 3, 2013 Mainz, Germany 1 Introduction The leading-order hadronic (HLO) contribution to the anomalous magnetic moment of the muon of the muon is given by the integral [1, 2]2 aHLOμ = 4α2∫∞0dp2f(p2)(Πem(0)−Πem(p2)) , (1) f(p2) = m2μp2Z3(p2)1−p2Z(p2)1+m2μp2Z2(p2) , Z(p2) = √(p2)2+4m2μp2−p22m2μp2 , where is the muon mass, and for non-zero momenta is defined from the hadronic contribution to the electromagnetic vacuum polarization : Πemμν(p)=(p2δμν−pμpν)Πem(p2) (2) in momentum space. Here is the euclidean momentum flowing through the vacuum polarization. The integrand in Eq. (1) is dominated by momenta of order the muon mass; it typically looks as shown in Fig. 1, with the peak located at . The figure includes lattice data for the vacuum polarization on the GeV MILC Asqtad ensemble and the curve is a [1,1] Padé fit to this data [4]. The resulting value for is extremely sensitive to the fitting choice, and while a fit may give a “good result” (i.e., may have a reasonable per degree of freedom), it has been recently shown in Ref. [5] that such fits may not reproduce the correct result. For a precision computation of this integral using lattice QCD, one would therefore like to access the region of this peak. In a finite volume with periodic boundary conditions, for the smallest available non-vanishing momentum to be at this peak, we require  fm, which is out of reach of present lattice computations, if the lattice spacing is chosen to be such that one is reasonably close to the continuum limit. Clearly, a different method for reaching such small momenta is needed, and here we discuss the use of twisted boundary conditions in order to vary momenta arbitrarily in a finite volume. More details on this work can be found in Ref. [6]. 2 Twisted Boundary Conditions Given the electromagnetic current, Jemμ(x)=∑iQi¯¯¯qi(x)γμqi(x) , (3) in which runs over quark flavors, and quark has charge , we wish to calculate the connected part of the two-point function in a finite volume, but with an arbitrary choice of momentum.3 In order to do this, we will employ quarks which satisfy twisted boundary conditions [7, 8, 9], qt(x)=e−iθμqt(x+^μLμ) ,¯¯¯qt(x)=¯¯¯qt(x+^μLμ)eiθμ , (4) where the subscript indicates that the quark field obeys twisted boundary conditions, is the linear size of the volume in the direction ( denotes the unit vector in the direction), and is the twist angle in that direction. For a plane wave , the boundary conditions (4) lead to the allowed values for the momenta (we set ) pμ=2πnμ+θμLμ ,nμ∈{0,1,…,Lμ−1} . (5) The twist angle can be chosen differently for the two quark lines in the connected part of the vacuum polarization, resulting in a continuously variable momentum flowing through the diagram. (Clearly, this trick does not work for the disconnected part.) If this momentum is chosen to be of the form (5), then allowing to vary over the range between 0 and allows to vary continuously between and . This momentum is realized if, for example, we choose the anti-quark line in the vacuum polarization to satisfy periodic boundary conditions (i.e., Eq. (4) with for all ), and the quark line twisted boundary conditions with twist angles . Thus, we define two currents4 j+μ(x) = 12[¯¯¯q(x)γμUμ(x)qt(x+^μ)+¯¯¯q(x+^μ)γμU†μ(x)qt(x)] , (6) j−μ(x) = 12[¯¯¯qt(x)γμUμ(x)q(x+^μ)+¯¯¯qt(x+^μ)γμU†μ(x)q(x)] . (7) In the case where we remove the twist (), these become equal to each other and the standard conserved vector current used for lattice calculations. We thus consider a mixed-action theory, where we have periodic sea quarks and (quenched) twisted valence quarks. Formally this amounts to quarks with periodic boundary conditions, quarks with twisted boundary conditions, and ghost quarks with the same twisted boundary conditions. The ghost quarks thus cancel the fermionic determinant of the twisted quarks. Then, under the field transformations, δq(x) = iα+(x)e−iθx/Lqt(x) ,δ¯¯¯q(x)=−iα−(x)eiθx/L¯¯¯qt(x) , (8) δqt(x) = iα−(x)eiθx/Lq(x) , δ¯¯¯qt(x)=−iα+(x)e−iθx/L¯¯¯q(x) , in which we abbreviate . We obtain, following the standard procedure, the Ward-Takahashi Identity (WTI) ∑μ∂−μ⟨j+μ(x)j−ν(y)⟩+12δ(x−y)⟨¯¯¯qt(y+^ν)γνU†ν(y)qt(y)−¯¯¯q(y)γνUν(y)q(y+^ν)⟩ −12δ(x−^ν−y)⟨¯q(y+^ν)γνU†ν(y)q(y)−¯¯¯qt(y)γνUν(y)qt(y+^ν)⟩=0 , (9) where is the backward lattice derivative, which for this paper always acts on : . From this WTI, we define the vacuum polarization function as Π+−μν(x−y) = ⟨j+μ(x)j−ν(y)⟩−14δμνδ(x−y)(⟨¯¯¯q(y)γνUν(y)q(y+^ν)−¯¯¯q(y+^ν)γνU†ν(y)q(y)⟩ (10) In the case where we set the twist to zero in all directions, , this definition reduces to the standard result for the vacuum polarization and is transverse. However in the twisted case, is not transverse, but instead obeys the identity ∑μ∂−μΠ+−μν(x−y)+14(δ(x−y)+δ(x−^ν−y))⟨jtν(y)−jν(y)⟩=0 , (11) in which and are currents defined by jμ(x) = (12) jtμ(x) = 12(¯¯¯qt(x)γμUμ(x)qt(x+^μ)+¯¯¯qt(x+^μ)γμU†μ(x)qt(x)) . It is important to note that other choices for are possible, but there will always be a non-vanishing contact term in the WTI. The reason is that the contact term in Eq. (11) (or, equivalently, in Eq. (2)) cannot be written as a total derivative, because the fact that and fields satisfy different boundary conditions breaks explicitly the isospin-like symmetry that otherwise would exist. (For constant and , Eq. (8) is an isospin-like symmetry of the action. As a check, we see that for , i.e., for , the contact term in Eq. (11) vanishes.) The resulting non-transverse part of therefore will need to be subtracted. 3 Subtraction of contact term In momentum space, we can decompose the vacuum polarization tensor as Π+−μν(^p)=(^p2δμν−^pμ^pν)Π+−(^p2)+δμνa2Xν(^p) ,^pμ=2asin(apμ2) , (13) and as such, we can determine by using the WTI in momentum space, i∑μ^pμΠ+−μν(^p) = −cos(apν/2)⟨jtν(0)⟩=i^pνa2Xν(^p) (14) ⇒Xν(^p) = (15) There is a pole in only when is equal to an integer multiple of , which is only possible if for our allowed values of , but then this term would vanish because for the current from which is constructed is conserved. From dimensional analysis and axis-reversal symmetry, we see for small : ⟨jtν(y)⟩=−ica2^θν[1+O(^θ2)] . (16) This must be odd under the interchange , and we see that this vanishes when we take away the twisting (so that for all ). In that case, is conserved. We can determine the vacuum polarization at one-loop in perturbation theory to get an estimate for the size of this effect. In the twisted case, we have for colors (again for ), Π+−μν(p) = −NcV∑ktr[γμcos(kμ+pμ/2)i∑κγκsin(kκ+pκ)+mγνcos(kν+pν/2)i∑λγλsinkκ+m] +i2δμνNcV∑ktr[γν(sinkνi∑κγκsinkκ+m+sin(kν+^θν)i∑κγκsin(kκ+^θκ)+m)] , and the WTI, i∑μ^pμΠ+−μν(p) = −2icos(pν/2)NcV∑k(sin(2kν)∑κsin2kκ+m2−sin(2(kν+^θν))∑κsin2(kκ+^θk)+m2) = 2icos(pν/2)^θ[NcV∑k(2cos(2kν)∑ksin2kκ+m2−sin2(2kν)(∑ksin2kκ+m2)2)]+O(^θ3) . For the MILC Asqtad ensemble with and a light quark mass of , this gives ⟨jtν(0)⟩=(7.30×10−5)i for a twist of in the spatial directions. Thus, generally this effect could be very small. As the WTI holds on a configuration-by-configuration basis, it is straightforward to test Eq. (14) numerically. In Fig 2(a) we show the ratio of the right-hand side to the left-hand side of Eq. (13). This was performed on a typical configuration on an Asqtad MILC ensemble with , GeV, , . In this case, the stopping residual for the conjugate gradient was . For small momenta this ratio is near one, and at most deviates from one by about 0.3% for larger momenta. The ratio is expected to numerically converge to one as the CG stopping criterion is reduced. As one is interested in using twisted boundary conditions for lower momenta this does not appear to introduce a significant systematic. In Fig 2(b) we plot the quantity Xν(^p)a2Π+−νν(^p) (19) on the same configuration as in Fig 2(a). For very small momenta, this counterterm can become quite significant, especially in the primary region of interest. While averaging over configurations seems to diminish this effect, this is still under investigation. Of course, even if averaging over an ensemble reduces the effect of the counterterm, one must worry about the systematic error introduced in such large cancellations during such averaging. 4 Conclusions The use of twisted boundary conditions is promising in obtaining the connected portion of the leading hadronic contribution to the muon anomalous magnetic moment. While the introduction of twisted boundary conditions does not allow one to write a purely transverse vacuum polarization, it is straightforward to subtract the term which arises due to the partial twisting of the quarks. Currently it appears as though averaging over an ensemble makes a large effect (on each configuration) negligible. The reason for this is under investigation, and there is no guarantee that it will be true for all ensembles. As such, when attempting to obtain a high-precision calculation of the muon , it is imperative that one gets a measurement of the contact term that arises in the vacuum polarization and subtract it if it is not negligible, as small errors in the low momentum region can lead to large errors in the final determination of the muon . Footnotes 1. Permanent address: Department of Physics and Astronomy, San Francisco State University, San Francisco, CA 94132, USA 2. For an overview of lattice computations of the muon anomalous magnetic moment, see Ref. [3] and references therein. 3. This method is only useful for the connected part of the two-point function, although for a full calculation of the photon vacuum polarization one must also look at the disconnected part. 4. Note this is shown here for naïve quarks, but the arguments that follow would hold for any other discretization in which a conserved vector current can be defined. For example, for staggered quarks we merely make the replacement and carry through the argument. References 1. T. Blum, Lattice calculation of the lowest order hadronic contribution to the muon anomalous magnetic moment, Phys.Rev.Lett. 91 (2003) 052001, [hep-lat/0212018]. 2. B. Lautrup, A. Peterman, and E. De Rafael, On sixth-order radiative corrections to a(mu)-a(e), Nuovo Cim. A1 (1971) 238–242. 3. T. Blum, M. Hayakawa, and T. Izubuchi, Hadronic corrections to the muon anomalous magnetic moment from lattice QCD, PoS LATTICE2012 (2012) 022, [arXiv:1301.2607]. 4. C. Aubin, T. Blum, M. Golterman, and S. Peris, Model-independent parametrization of the hadronic vacuum polarization and g-2 for the muon on the lattice, Phys.Rev. D86 (2012) 054509, [arXiv:1205.3695]. 5. M. Golterman, K. Maltman, and S. Peris, Tests of hadronic vacuum polarization fits for the muon anomalous magnetic moment, arXiv:1310.5928. 6. C. Aubin, T. Blum, M. Golterman, and S. Peris, The hadronic vacuum polarization with twisted boundary conditions, Phys.Rev. D88 (2013) 074505, [arXiv:1307.4701]. 7. P. F. Bedaque, Aharonov-Bohm effect and nucleon nucleon phase shifts on the lattice, Phys.Lett. B593 (2004) 82–88, [nucl-th/0402051]. 8. G. de Divitiis, R. Petronzio, and N. Tantalo, On the discretization of physical momenta in lattice QCD, Phys.Lett. B595 (2004) 408–413, [hep-lat/0405002]. 9. C. Sachrajda and G. Villadoro, Twisted boundary conditions in lattice simulations, Phys.Lett. B609 (2005) 73–85, [hep-lat/0411033]. You are adding the first comment! How to quickly get a good reply: • Give credit where it’s due by listing out the positive aspects of a paper before getting into which changes should be made. • Be specific in your critique, and provide supporting evidence with appropriate references to substantiate general statements. • Your comment should inspire ideas to flow and help the author improves the paper. The better we are at sharing our knowledge with each other, the faster we move forward. The feedback must be of minimum 40 characters and the title a minimum of 5 characters
# Associators with Frozen Feet (diff) ← Older revision | Latest revision (diff) | Newer revision → (diff) ## The Goal The purpose of the paperlet is to find an explicit formula for an associator with frozen feet. As I'm starting to write, I don't know such a formula. My hope is that as I type up all the relevant equations, a solution will emerge. I'll be just as happy if it emerges in somebody else's mind, provided (s)he shares her/his thoughts. The Pentagon and the Hexagons for Parenthesized Braids A horizontal associator is a solution $\Phi\in A_3$ of the pentagon equation and the hexagon equations (, , ): [Pentagon] $\Phi^{123}\cdot(1\otimes\Delta\otimes 1)(\Phi)\cdot\Phi^{234}=(\Delta\otimes 1\otimes 1)(\Phi)\cdot(1\otimes 1\otimes\Delta)(\Phi)$     in     $A_4$, [Hexagons] $(\Delta\otimes 1)(R^{\pm 1}) = \Phi^{123}\cdot (R^{\pm 1})^{23}\cdot(\Phi^{-1})^{132}\cdot(R^{\pm 1})^{13}\cdot\Phi^{312}$     in     $A_3$. Here $A_n$ is the algebra of horizontal chord diagrams on $n$ vertical strands; i.e., $A_n=\left.\left\langle t^{ij}=t^{ji}\right\rangle\right/[t^{ij},t^{kl}]=[t^{ij},t^{ik}+t^{jk}]=0$, with $i$, $j$, $k$ and $l$ all different integers between $1$ and $n$. Also, here $R\in A_2$ will be $\exp\,t^{12}$ (slightly different than the normal convention of $\exp\,t^{12}/2$, just to save some denominators). The frozen feet quotient of any associative algebra $A$ is the quotient $A^{ff}:=A\left/\left\langle xyz-xzy\right\rangle\right.$. (In a vertical presentation of chord diagrams we may think of the first letter of a word as its foot. The frozen feet relation makes all letters of a word commute, except perhaps the first one. "Molten bodies" is perhaps more accurate, but it is definitely less catchy). What we are looking for is an associator with frozen feet, a solution of exactly the same equations, except regarded with the frozen feet quotients $A^{ff}_n$ of $A_n$. ## Why Bother? This isn't the place to explain the need for an explicit associator. The need is there even if it is a bit under-appreciated. Why frozen feet? Because finding an honest explicit horizontal associator seems too hard, and as a warm up and perhaps a step, it seems worthwhile to look for associators in some quotient spaces. The frozen feet quotient is the simplest quotient I'm aware off for which the answer is unknown. ## Why might it be doable? Because frozen feet are really molten bodies. It is known that there exists and associator of the form $\Phi=1+F(t^{13},t^{23})$, where $F$ is some non-commutative power series with no constant term. Once the body melts (but keeping the feet frozen), this becomes $\Phi=1+t^{13}F_{13}(t^{13},t^{23})+t^{23}F_{23}(t^{13},t^{23})$, where $F_{13}$ and $F_{23}$ are commutative power series (i.e., "functions"). So finding an associator with frozen feet reduces to finding two, just two, functions of two, just two, variables, satisfying some algebraic equations. How hard can that be? Well, it's actually even easier. Taking $\Phi$ to be the projection from $A_3$ to $A^{ff}_3$ of a group like horizontal associator we find (by means to be documented later) that me may actually take it to be of the form [Phi] $\Phi=1+t^{13}t^{23}F(t^{13},t^{23})-t^{23}t^{13}F(t^{13},t^{23})$, where $F$ is a single unknown function of two commuting variables. ## So is it doable? Me? Why are you asking me? My forte is in daydreaming; not in solving math problems. Anyway, let's try. The hexagon equations are easier (they are written in $A_3$ and not the "bigger" $A_4$) and in some sense more important (they force $\Phi$ to be non-trivial; $\Phi=1$ solves the pentagon but not the hexagons), so we will start with the hexagons. First we simply the hexagon in $A^{ff}_3$, factor by factor: Factor Written in $A^{ff}_3$ Comments $(\Delta\otimes 1)(R^{\pm 1})$ $1+t^{13}\frac{\exp(\pm t^{13}\pm t^{23})-1}{t^{13}+t^{23}}+t^{23}\frac{\exp(\pm t^{13}\pm t^{23})-1}{t^{13}+t^{23}}$ We use $e^{\pm x}=1+x\frac{e^{\pm x}-1}{x}$ to write exponentials with their frozen parts made explicit $\Phi^{123}$ $1+t^{13}t^{23}F(t^{13}, t^{23})-t^{23}t^{13}F(t^{13}, t^{23})$ $(R^{\pm 1})^{23}$ $1+t^{23}\frac{\exp(\pm t^{23})-1}{t^{23}}$ $(\Phi^{-1})^{132}$ $(R^{\pm 1})^{13}$ $\Phi^{312}$ ## References [Bar-Natan_97] ^  D. Bar-Natan, Non-associative tangles, in Geometric topology (proceedings of the Georgia international topology conference), (W. H. Kazez, ed.), 139-183, Amer. Math. Soc. and International Press, Providence, 1997. [Drinfeld_90] ^  V. G. Drinfel'd, Quasi-Hopf algebras, Leningrad Math. J. 1 (1990) 1419-1457. [Drinfeld_91] ^  V. G. Drinfel'd, On quasitriangular Quasi-Hopf algebras and a group closely connected with $\operatorname{Gal}(\bar{\mathbb Q}/{\mathbb Q})$, Leningrad Math. J. 2 (1991) 829-860.
### Introduction This Section introduces you to the basic ideas of hypothesis testing in a non-mathematical way by using a problem solving approach to highlight the concepts as they are needed. We only consider situations involving a single sample. In Section 41.3 we will introduce you to situations involving two samples and while the basic ideas will follow through, their practical application is a little more complex than that met in this Workbook. However, once you have learned how to apply the basic ideas of hypothesis testing covered in this Workbook, you should be capable of applying hypothesis testing to a very wide range of practical problems and learning about methods of hypothesis testing which are not covered here. #### Prerequisites • be familiar with the results and concepts met in the study of probability • be familiar with a range of statistical distributions • understand the term hypothesis • understand the concepts of Type I error and Type II error #### Learning Outcomes • apply the ideas of hypothesis testing to a range of problems underpinned by elementary statistical distributions and involving only a single sample. 1.1 Problem 1 1.3 Problem 2
# zbMATH — the first resource for mathematics Elementary duality of modules. (English) Zbl 0815.16002 The duality that is the topic of this paper is one which connects the model theories of right and left modules over a given ring. It was defined at the level of formulas by the reviewer [M. Prest, J. Lond. Math. Soc., II. Ser. 38, 403-409 (1988; Zbl 0674.16019)] and, at this level, it can also be discerned, in rather different form, in earlier work of L. Gruson and C. U. Jensen [Lect. Notes Math. 867, 234-294 (1981; Zbl 0505.18005)] (also see [B. Zimmermann- Huisgen and W. Zimmermann, Trans. Am. Math. Soc. 320, 695-713 (1990; Zbl 0699.16019)]). In Herzog’s paper, this duality is extended considerably and put to many uses. For example, the author shows that the right and left Ziegler spectra over a ring are essentially homeomorphic and that there is a bijection between theories of right and left $$R$$-modules. He shows that a wide class of indecomposable pure-injectives (= points of the Ziegler spectrum) have well-defined duals. The paper contains interesting and sometimes startling results on a variety of special and general topics. These include modules over Dedekind domains, localisation at a closed subset of the Ziegler spectrum, strongly minimal and $$pp$$-simple indecomposables, local purity, the relation between the dual $$DU$$ of an indecomposable pure-injective $$U$$ and $$\operatorname{Hom}(U,DU\otimes U)$$, duality between flat and absolutely pure modules, totally transcendental modules, Morita duality and elementary duality versus Hom-duality. The paper is well written and the background material is summarised. An early, key, lemma is a reformulation of the usual criterion for a tensor to be zero: if $$\overline a$$ is an $$n$$-tuple from a left $$R$$-module $$M$$ and if $$\overline c$$ is an $$n$$-tuple from a right $$R$$-module $$N$$ then $$\overline a\otimes\overline c=$$ (that is, $$\sum a_ i\otimes c_ i = 0$$) iff there is some $$pp$$ formula $$\varphi$$ for left modules such that $$M$$ satisfies $$\varphi(\overline a)$$ and $$N$$ satisfies $$D\varphi(\overline c)$$, where $$D \varphi$$ denotes the dual of $$\varphi$$. In many parts of the paper the author works in the context of a category which he denotes $$(R\text{-Mod})^{\text{eq}}$$ – this can be thought of as “positive-$$^{\text{eq}}$$” of the incomplete theory of left $$R$$-modules, with $$pp$$-defined sorts for objects and $$pp$$-definable morphisms between sorts. This category, which is a very natural one for the study of modules using model-theoretic ideas, is, in fact, equivalent to the category ($$R\text{-mod},Ab)^{fp}$$ of finitely presented additive functors from the category of finitely presented $$R$$-modules to the category of abelian groups. The ideas and techniques of this paper have already found many uses and are sure to find many more. ##### MSC: 16B70 Applications of logic in associative algebras 16D90 Module categories in associative algebras 03C60 Model-theoretic algebra 16D40 Free, projective, and flat modules and ideals in associative algebras Full Text: ##### References: [1] Paul Eklof and Gabriel Sabbagh, Model-completions and modules, Ann. Math. Logic 2 (1970/1971), no. 3, 251 – 295. · Zbl 0227.02029 [2] Ivo Herzog, Some model theory of modules, Doctoral Dissertation, Univ. Notre Dame, 1989. · Zbl 0898.03014 [3] Carl Faith, Algebra. II, Springer-Verlag, Berlin-New York, 1976. Ring theory; Grundlehren der Mathematischen Wissenschaften, No. 191. · Zbl 0335.16002 [4] Mike Prest, Model theory and modules, London Mathematical Society Lecture Note Series, vol. 130, Cambridge University Press, Cambridge, 1988. · Zbl 0634.03025 [5] Gabriel Sabbagh and Paul Eklof, Definability problems for modules and rings, J. Symbolic Logic 36 (1971), 623 – 649. · Zbl 0251.02052 [6] Bo Stenström, Rings of quotients, Springer-Verlag, New York-Heidelberg, 1975. Die Grundlehren der Mathematischen Wissenschaften, Band 217; An introduction to methods of ring theory. · Zbl 0296.16001 [7] Martin Ziegler, Model theory of modules, Ann. Pure Appl. Logic 26 (1984), no. 2, 149 – 213. · Zbl 0593.16019 [8] Birge Zimmermann-Huisgen and Wolfgang Zimmermann, On the sparsity of representations of rings of pure global dimension zero, Trans. Amer. Math. Soc. 320 (1990), no. 2, 695 – 711. · Zbl 0699.16019 This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.
X Classified Rank-Maximal Matchings and Popular Matchings – Algorithms and Hardness Published in Springer Verlag 2019 Volume: 11789 LNCS Pages: 244 - 257 Abstract In this paper, we consider the problem of computing an optimal matching in a bipartite graph $$G=(A\cup P, E)$$ where elements of A specify preferences over their neighbors in P, possibly involving ties, and each vertex can have capacities and classifications. A classification $$\mathcal {C}_u$$ for a vertex u is a collection of subsets of neighbors of u. Each subset (class) $$C\in \mathcal {C}_u$$ has an upper quota denoting the maximum number of vertices from C that can be matched to u. The goal is to find a matching that is optimal amongst all the feasible matchings, which are matchings that respect quotas of all the vertices and classes. We consider two well-studied notions of optimality namely popularity and rank-maximality. The notion of rank-maximality involves finding a matching in G with maximum number of rank-1 edges, subject to that, maximum number of rank-2 edges and so on. We present an $$O(|E|^2)$$ -time algorithm for finding a feasible rank-maximal matching, when each classification is a laminar family. We complement this with an NP-hardness result when classes are non-laminar even under strict preference lists, and even when only posts have classifications, and each applicant has a quota of one. We show an analogous dichotomy result for computing a popular matching amongst feasible matchings (if one exists) in a bipartite graph with posts having capacities and classifications and applicants having a quota of one. To solve the classified rank-maximal and popular matchings problems, we present a framework that involves computing max-flows iteratively in multiple flow networks. Besides giving polynomial-time algorithms for classified rank-maximal and popular matching problems, our framework unifies several algorithms from literature [1, 10, 12, 15]. © Springer Nature Switzerland AG 2019. Journal Data powered by TypesetLecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Data powered by TypesetSpringer Verlag 03029743 No Concepts (13) • C (programming language) • Hardness • Iterative methods • Polynomial approximation • Bipartite graphs • MATCHINGS • MAXIMALITY • NP-HARDNESS RESULTS • Polynomial-time algorithms • POPULAR MATCHING PROBLEMS • POPULARITY • RANK-MAXIMAL MATCHING • Graph theory
## Cryptology ePrint Archive: Report 2013/205 Practical and Employable Protocols for UC-Secure Circuit Evaluation over $Z_n$ Jan Camenisch and Robert R. Enderlein and Victor Shoup Abstract: We present a set of new, efficient, universally composable two-party protocols for evaluating reactive arithmetic circuits modulo n, where n is a safe RSA modulus of unknown factorization. Our protocols are based on a homomorphic encryption scheme with message space $Z_n$, zero-knowledge proofs of existence, and a novel "mixed" trapdoor commitment scheme. Our protocols are proven secure against adaptive corruptions (assuming secure erasures) under standard assumptions in the CRS model (without random oracles). Our protocols appear to be the most efficient ones that satisfy these security requirements. In contrast to prior protocols, we provide facilities that allow for the use of our protocols as building blocks of higher-level protocols. An additional contribution of this paper is a universally composable construction of the variant of the Dodis-Yampolskiy oblivious pseudorandom function in a group of order n as originally proposed by Jarecki and Liu. Category / Keywords: cryptographic protocols / Two-party computation, Practical Protocols, UC-Security Publication Info: Accepted for publication at ESORICS 2013. Date: received 9 Apr 2013, last revised 7 Jan 2016 Contact author: eprint at e7n ch Available format(s): PDF | BibTeX Citation Note: This is the full version of a paper due to appear at the 18th European Symposium on Research in Computer Security (ESORICS 2013). Short URL: ia.cr/2013/205 [ Cryptology ePrint archive ]
# Cosecant squared formula ### Expansion form $\csc^2{\theta} \,=\, 1+\cot^2{\theta}$ ### Simplified form $1+\cot^2{\theta} \,=\, \csc^2{\theta}$ ## How to use The cosecant squared identity is used as a formula in two ways in trigonometry. 1. The square of cosecant function is expanded as sum of one and cotangent squared function. 2. The sum of one and cot squared function is simplified as cosecant squared function. #### Proof The cosecant squared formula is originally derived from the Pythagorean identity of co-secant and cot functions. If theta is used to denote angle of a right triangle, then the subtraction of squares of cot function from cosecant function equals to one. $\csc^2{\theta}-\cot^2{\theta} \,=\, 1$ $\therefore \,\,\,\,\,\, \csc^2{\theta} \,=\, 1+\cot^2{\theta}$ Therefore, it is proved that cosecant squared theta equals to the summation of one and cot squared theta. ##### Alternative form The cosecant squared identity is also usually written in terms of different angles. For example, if $x$ is used to write as angle of right angled triangle, then the csc squared formula is written as $\csc^2{x} \,=\, 1+\cot^2{x}$ Remember, the angle of a right triangle can be represented by any symbol but the csc squared formula has to be written in terms of the respective symbol.
Copied to clipboard ## G = C6.SD32order 192 = 26·3 ### 1st non-split extension by C6 of SD32 acting via SD32/Q16=C2 Series: Derived Chief Lower central Upper central Derived series C1 — C24 — C6.SD32 Chief series C1 — C3 — C6 — C12 — C2×C12 — C2×C24 — C2×C3⋊C16 — C6.SD32 Lower central C3 — C6 — C12 — C24 — C6.SD32 Upper central C1 — C22 — C2×C4 — C2×C8 — C2.D8 Generators and relations for C6.SD32 G = < a,b,c | a6=b16=1, c2=a3, bab-1=cac-1=a-1, cbc-1=b7 > Smallest permutation representation of C6.SD32 Regular action on 192 points Generators in S192 (1 73 126 44 23 191)(2 192 24 45 127 74)(3 75 128 46 25 177)(4 178 26 47 113 76)(5 77 114 48 27 179)(6 180 28 33 115 78)(7 79 116 34 29 181)(8 182 30 35 117 80)(9 65 118 36 31 183)(10 184 32 37 119 66)(11 67 120 38 17 185)(12 186 18 39 121 68)(13 69 122 40 19 187)(14 188 20 41 123 70)(15 71 124 42 21 189)(16 190 22 43 125 72)(49 174 149 139 103 87)(50 88 104 140 150 175)(51 176 151 141 105 89)(52 90 106 142 152 161)(53 162 153 143 107 91)(54 92 108 144 154 163)(55 164 155 129 109 93)(56 94 110 130 156 165)(57 166 157 131 111 95)(58 96 112 132 158 167)(59 168 159 133 97 81)(60 82 98 134 160 169)(61 170 145 135 99 83)(62 84 100 136 146 171)(63 172 147 137 101 85)(64 86 102 138 148 173) (1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16)(17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32)(33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48)(49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64)(65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80)(81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96)(97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112)(113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128)(129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144)(145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160)(161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176)(177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192) (1 58 44 132)(2 49 45 139)(3 56 46 130)(4 63 47 137)(5 54 48 144)(6 61 33 135)(7 52 34 142)(8 59 35 133)(9 50 36 140)(10 57 37 131)(11 64 38 138)(12 55 39 129)(13 62 40 136)(14 53 41 143)(15 60 42 134)(16 51 43 141)(17 102 67 173)(18 109 68 164)(19 100 69 171)(20 107 70 162)(21 98 71 169)(22 105 72 176)(23 112 73 167)(24 103 74 174)(25 110 75 165)(26 101 76 172)(27 108 77 163)(28 99 78 170)(29 106 79 161)(30 97 80 168)(31 104 65 175)(32 111 66 166)(81 117 159 182)(82 124 160 189)(83 115 145 180)(84 122 146 187)(85 113 147 178)(86 120 148 185)(87 127 149 192)(88 118 150 183)(89 125 151 190)(90 116 152 181)(91 123 153 188)(92 114 154 179)(93 121 155 186)(94 128 156 177)(95 119 157 184)(96 126 158 191) G:=sub<Sym(192)| (1,73,126,44,23,191)(2,192,24,45,127,74)(3,75,128,46,25,177)(4,178,26,47,113,76)(5,77,114,48,27,179)(6,180,28,33,115,78)(7,79,116,34,29,181)(8,182,30,35,117,80)(9,65,118,36,31,183)(10,184,32,37,119,66)(11,67,120,38,17,185)(12,186,18,39,121,68)(13,69,122,40,19,187)(14,188,20,41,123,70)(15,71,124,42,21,189)(16,190,22,43,125,72)(49,174,149,139,103,87)(50,88,104,140,150,175)(51,176,151,141,105,89)(52,90,106,142,152,161)(53,162,153,143,107,91)(54,92,108,144,154,163)(55,164,155,129,109,93)(56,94,110,130,156,165)(57,166,157,131,111,95)(58,96,112,132,158,167)(59,168,159,133,97,81)(60,82,98,134,160,169)(61,170,145,135,99,83)(62,84,100,136,146,171)(63,172,147,137,101,85)(64,86,102,138,148,173), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128)(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144)(145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176)(177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192), (1,58,44,132)(2,49,45,139)(3,56,46,130)(4,63,47,137)(5,54,48,144)(6,61,33,135)(7,52,34,142)(8,59,35,133)(9,50,36,140)(10,57,37,131)(11,64,38,138)(12,55,39,129)(13,62,40,136)(14,53,41,143)(15,60,42,134)(16,51,43,141)(17,102,67,173)(18,109,68,164)(19,100,69,171)(20,107,70,162)(21,98,71,169)(22,105,72,176)(23,112,73,167)(24,103,74,174)(25,110,75,165)(26,101,76,172)(27,108,77,163)(28,99,78,170)(29,106,79,161)(30,97,80,168)(31,104,65,175)(32,111,66,166)(81,117,159,182)(82,124,160,189)(83,115,145,180)(84,122,146,187)(85,113,147,178)(86,120,148,185)(87,127,149,192)(88,118,150,183)(89,125,151,190)(90,116,152,181)(91,123,153,188)(92,114,154,179)(93,121,155,186)(94,128,156,177)(95,119,157,184)(96,126,158,191)>; G:=Group( (1,73,126,44,23,191)(2,192,24,45,127,74)(3,75,128,46,25,177)(4,178,26,47,113,76)(5,77,114,48,27,179)(6,180,28,33,115,78)(7,79,116,34,29,181)(8,182,30,35,117,80)(9,65,118,36,31,183)(10,184,32,37,119,66)(11,67,120,38,17,185)(12,186,18,39,121,68)(13,69,122,40,19,187)(14,188,20,41,123,70)(15,71,124,42,21,189)(16,190,22,43,125,72)(49,174,149,139,103,87)(50,88,104,140,150,175)(51,176,151,141,105,89)(52,90,106,142,152,161)(53,162,153,143,107,91)(54,92,108,144,154,163)(55,164,155,129,109,93)(56,94,110,130,156,165)(57,166,157,131,111,95)(58,96,112,132,158,167)(59,168,159,133,97,81)(60,82,98,134,160,169)(61,170,145,135,99,83)(62,84,100,136,146,171)(63,172,147,137,101,85)(64,86,102,138,148,173), (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16)(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32)(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48)(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64)(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80)(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96)(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112)(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128)(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144)(145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160)(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176)(177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192), (1,58,44,132)(2,49,45,139)(3,56,46,130)(4,63,47,137)(5,54,48,144)(6,61,33,135)(7,52,34,142)(8,59,35,133)(9,50,36,140)(10,57,37,131)(11,64,38,138)(12,55,39,129)(13,62,40,136)(14,53,41,143)(15,60,42,134)(16,51,43,141)(17,102,67,173)(18,109,68,164)(19,100,69,171)(20,107,70,162)(21,98,71,169)(22,105,72,176)(23,112,73,167)(24,103,74,174)(25,110,75,165)(26,101,76,172)(27,108,77,163)(28,99,78,170)(29,106,79,161)(30,97,80,168)(31,104,65,175)(32,111,66,166)(81,117,159,182)(82,124,160,189)(83,115,145,180)(84,122,146,187)(85,113,147,178)(86,120,148,185)(87,127,149,192)(88,118,150,183)(89,125,151,190)(90,116,152,181)(91,123,153,188)(92,114,154,179)(93,121,155,186)(94,128,156,177)(95,119,157,184)(96,126,158,191) ); G=PermutationGroup([[(1,73,126,44,23,191),(2,192,24,45,127,74),(3,75,128,46,25,177),(4,178,26,47,113,76),(5,77,114,48,27,179),(6,180,28,33,115,78),(7,79,116,34,29,181),(8,182,30,35,117,80),(9,65,118,36,31,183),(10,184,32,37,119,66),(11,67,120,38,17,185),(12,186,18,39,121,68),(13,69,122,40,19,187),(14,188,20,41,123,70),(15,71,124,42,21,189),(16,190,22,43,125,72),(49,174,149,139,103,87),(50,88,104,140,150,175),(51,176,151,141,105,89),(52,90,106,142,152,161),(53,162,153,143,107,91),(54,92,108,144,154,163),(55,164,155,129,109,93),(56,94,110,130,156,165),(57,166,157,131,111,95),(58,96,112,132,158,167),(59,168,159,133,97,81),(60,82,98,134,160,169),(61,170,145,135,99,83),(62,84,100,136,146,171),(63,172,147,137,101,85),(64,86,102,138,148,173)], [(1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16),(17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32),(33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48),(49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64),(65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80),(81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96),(97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112),(113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128),(129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144),(145,146,147,148,149,150,151,152,153,154,155,156,157,158,159,160),(161,162,163,164,165,166,167,168,169,170,171,172,173,174,175,176),(177,178,179,180,181,182,183,184,185,186,187,188,189,190,191,192)], [(1,58,44,132),(2,49,45,139),(3,56,46,130),(4,63,47,137),(5,54,48,144),(6,61,33,135),(7,52,34,142),(8,59,35,133),(9,50,36,140),(10,57,37,131),(11,64,38,138),(12,55,39,129),(13,62,40,136),(14,53,41,143),(15,60,42,134),(16,51,43,141),(17,102,67,173),(18,109,68,164),(19,100,69,171),(20,107,70,162),(21,98,71,169),(22,105,72,176),(23,112,73,167),(24,103,74,174),(25,110,75,165),(26,101,76,172),(27,108,77,163),(28,99,78,170),(29,106,79,161),(30,97,80,168),(31,104,65,175),(32,111,66,166),(81,117,159,182),(82,124,160,189),(83,115,145,180),(84,122,146,187),(85,113,147,178),(86,120,148,185),(87,127,149,192),(88,118,150,183),(89,125,151,190),(90,116,152,181),(91,123,153,188),(92,114,154,179),(93,121,155,186),(94,128,156,177),(95,119,157,184),(96,126,158,191)]]) 36 conjugacy classes class 1 2A 2B 2C 3 4A 4B 4C 4D 4E 4F 6A 6B 6C 8A 8B 8C 8D 12A 12B 12C 12D 12E 12F 16A ··· 16H 24A 24B 24C 24D order 1 2 2 2 3 4 4 4 4 4 4 6 6 6 8 8 8 8 12 12 12 12 12 12 16 ··· 16 24 24 24 24 size 1 1 1 1 2 2 2 8 8 24 24 2 2 2 2 2 2 2 4 4 8 8 8 8 6 ··· 6 4 4 4 4 36 irreducible representations dim 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 4 4 4 4 type + + + + + - + + - + - - + - + image C1 C2 C2 C2 C4 S3 Q8 D4 D6 Q16 D8 Dic6 C4×S3 C3⋊D4 SD32 C3⋊Q16 D4⋊S3 D8.S3 C8.6D6 kernel C6.SD32 C2×C3⋊C16 C24⋊1C4 C3×C2.D8 C3⋊C16 C2.D8 C24 C2×C12 C2×C8 C12 C2×C6 C8 C8 C2×C4 C6 C4 C22 C2 C2 # reps 1 1 1 1 4 1 1 1 1 2 2 2 2 2 8 1 1 2 2 Matrix representation of C6.SD32 in GL4(𝔽97) generated by 0 1 0 0 96 1 0 0 0 0 1 0 0 0 0 1 , 79 21 0 0 3 18 0 0 0 0 87 53 0 0 44 87 , 80 50 0 0 33 17 0 0 0 0 63 94 0 0 94 34 G:=sub<GL(4,GF(97))| [0,96,0,0,1,1,0,0,0,0,1,0,0,0,0,1],[79,3,0,0,21,18,0,0,0,0,87,44,0,0,53,87],[80,33,0,0,50,17,0,0,0,0,63,94,0,0,94,34] >; C6.SD32 in GAP, Magma, Sage, TeX C_6.{\rm SD}_{32} % in TeX G:=Group("C6.SD32"); // GroupNames label G:=SmallGroup(192,49); // by ID G=gap.SmallGroup(192,49); # by ID G:=PCGroup([7,-2,-2,-2,-2,-2,-2,-3,56,589,36,346,192,851,102,6278]); // Polycyclic G:=Group<a,b,c|a^6=b^16=1,c^2=a^3,b*a*b^-1=c*a*c^-1=a^-1,c*b*c^-1=b^7>; // generators/relations Export ׿ × 𝔽
## English Clearspeak Direct speech attributes. Locale: en, Style: Verbose. 0 $\stackrel{↔}{\mathrm{AC}}$ AC line 1 $\stackrel{˙}{x}=\sigma \left(y-x\right)$ derivative of x is equal to sigma times left parenthesis y minus x right parenthesis 2 $\stackrel{↔}{\mathrm{AC}}$ A C under line 3 $\stackrel{↔}{\mathrm{AC}}$ AC line 4 $\stackrel{↔}{\mathrm{AC}}$ A C under line 5 $\stackrel{↔}{\mathrm{AC}}$ AC line 6 my glyph 7 your glyph 8 my glyph 9 your glyph 10 my glyph 11 your glyph 12 $\text{}$ my glyph 13 $\text{}$ your glyph 14 $""$ my glyph 15 $""$ your glyph 16 $+=$ 23braid plus 132braid equals 13braid 17 $N{M}_{1\subset }$ N M sub 1 subset of mfin ## English Mathspeak Direct speech attributes. Locale: en, Style: Verbose. 0 $\stackrel{↔}{\mathrm{AC}}$ ModifyingAbove upper A upper C With line 1 $\stackrel{˙}{x}=\sigma \left(y-x\right)$ derivative of x is equal to sigma left parenthesis y minus x right parenthesis 2 $\stackrel{↔}{\mathrm{AC}}$ A C under line 3 $\stackrel{↔}{\mathrm{AC}}$ ModifyingAbove upper A upper C With line 4 $\stackrel{↔}{\mathrm{AC}}$ A C under line 5 $\stackrel{↔}{\mathrm{AC}}$ ModifyingAbove upper A upper C With line 6 my glyph 7 your glyph 8 my glyph 9 your glyph 10 my glyph 11 your glyph 12 $\text{}$ my glyph 13 $\text{}$ your glyph 14 $""$ my glyph 15 $""$ your glyph 16 $+=$ 23braid plus 132braid equals 13braid 17 $N{M}_{1\subset }$ upper N upper M Subscript 1 subset of mfin
Euclidean space Euclidean space is the fundamental space of classical geometry. Originally, it was the three-dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean spaces of any nonnegative integer dimension,[1] including the three-dimensional space and the Euclidean plane (dimension two). It was introduced by the Ancient Greek mathematician Euclid of Alexandria,[2] and the qualifier Euclidean is used to distinguish it from other spaces that were later discovered in physics and modern mathematics. Ancient Greek geometers introduced Euclidean space for modeling the physical universe. Their great innovation was to prove all properties of the space as theorems by starting from a few fundamental properties, called postulates, which either were considered as evident (for example, there is exactly one straight line passing through two points), or seemed impossible to prove (parallel postulate). After the introduction at the end of 19th century of non-Euclidean geometries, the old postulates were re-formalized to define Euclidean spaces through axiomatic theory. Another definition of Euclidean spaces by means of vector spaces and linear algebra has been shown to be equivalent to the axiomatic definition. It is this definition that is more commonly used in modern mathematics, and detailed in this article.[3] In all definitions, Euclidean spaces consist of points, which are defined only by the properties that they must have for forming a Euclidean space. There is essentially only one Euclidean space of each dimension; that is, all Euclidean spaces of a given dimension are isomorphic. Therefore, in many cases, it is possible to work with a specific Euclidean space, which is generally the real n-space ${\displaystyle \mathbb {R} ^{n},}$ equipped with the dot product. An isomorphism from a Euclidean space to ${\displaystyle \mathbb {R} ^{n}}$ associates with each point an n-tuple of real numbers which locate that point in the Euclidean space and are called the Cartesian coordinates of that point.
## Introduction For many years, Landau’s theory of symmetry breaking was believed to be the ultimate explanation of continuous phase transitions1. In the liquid-crystal transition, for instance, the continuous translational and rotational symmetry at high temperatures break into a set of discrete symmetries in the low-temperature phase. This paradigm was challenged by Berezinskii, Kosterlitz, and Thouless (BKT) in the two-dimensional XY model2,3,4. For this model, the Mermin–Wanger theorem5 states that there is no ordered phase even at zero temperature, so that a phase transition in Landau’s sense cannot exist. Yet, BKT showed that, in fact, there is a finite temperature phase transition driven by topological defects: vortices and antivortices. At low temperature, vortex-antivortex pairs are bound together. Above the critical temperature, vortex-antivortex pairs unbind, moving freely on the surface. No symmetry is broken in the transition since both phases are rotationally invariant and so magnetization is zero in both phases. Topological order and topological phase transitions are nowadays fundamental to understand the properties of quantum matter6. We study this type of transition in the framework of complex networks, more specifically that of sparse geometric random network models. We use a geometric description of networks7 as it provides a simple and comprehensive approach to complex networks. The existence of latent metric spaces underlying complex networks offers a deft explanation for their intricate topologies, giving at the same time important clues on their functionality. The small-world property, high levels of clustering, heterogeneity in the degree distribution, and hierarchical organization are all topological properties observed in real networks that find a simple explanation within the network geometry paradigm7. Within this paradigm, the results found in this work hold in a very general class of spatial networks defined in compact homogeneous and isotropic Riemannian manifolds of arbitrary dimensionality8,9,10,11,12,13. Yet, in this paper, we focus on the $${{\mathbb{S}}}^{1}$$ model9 and its isomorphically equivalent formulation in the hyperbolic plane, the $${{\mathbb{H}}}^{2}$$ model14. Interestingly, many analytic results have been derived for the $${{\mathbb{S}}}^{1}/{{\mathbb{H}}}^{2}$$ model, e. g. degree distribution9,14,15, clustering14,15,16,17, diameter18,19,20, percolation21,22, self-similarity9, or spectral properties23 and it has been extended to growing networks24, as well as to weighted networks25, multilayer networks26,27, networks with community structure28,29,30 and it is also the basis for defining a renormalization group for complex networks31,32. The analytical tractability of the $${{\mathbb{S}}}^{1}$$ model makes it the perfect framework for our work. In this paper, we study a transition taking place in a very general class of sparse spatial random network models and show that it is, in fact, topological in nature. We show that both its thermodynamic properties as well as the finite size scaling behavior are, to the best of our knowledge, novel, and different to those observed in the BKT transition. We structure the paper in the following way: First, we introduce the $${{\mathbb{S}}}^{1}$$-model, which will be used to obtain both analytical as well as numerical results for the phase transition. Then, by mapping the network model to a model of non-interacting fermions, we are able to study analytically the behavior of the entropy at the critical point, showing that it diverges in the thermodynamic limit at the critical point, unlike in the case of the BKT transition. Next, we prove that the transition is topological in nature by noticing that in the transition, chordless cycles in the network play the role of topological defects with respect to a tree. The critical temperature separates a low-temperature phase, where the underlying metric space forces chordless cycles to be short range –mostly triangles– and a high-temperature phase, where chordless cycles decouple from the metric space and become of the order of the network diameter. This is similar to the unbinding of vortex-antivortex pairs in the BKT transition. These two distinct topological orders of the transition can be quantified by means of the average local clustering coefficient, a measure of the fraction of triangles attached to nodes. Clustering is finite in the geometric phase with short-range cycles and vanishes in the thermodynamic limit of the non-geometric phase with long-range chordless cycles. Thus, the local average clustering coefficient can be used to study the finite-size scaling behavior of the transition. This geometric to non-geometric phase transition shows interesting atypical scaling behavior as compared with standard continuous phase transitions, where one observes a power law decay at the critical point and a faster decay in the disordered phase. Instead, at the critical point, the average local clustering coefficient decays logarithmically to zero for very large systems and, in the non-geometric phase, where the coefficient decays as a power law, we discover a quasi-geometric region where the exponent that characterizes this decay depends on the temperature. ## Results and discussion ### The $${{\mathbb{S}}}^{1}$$-model In the $${{\mathbb{S}}}^{1}$$ model, nodes are assumed to live in a metric similarity space, where similarity refers to all the attributes that control the connectivity in the network, except for the degrees. At the same time, nodes are heterogeneous, with nodes with different levels of popularity coexisting within the same system. The popularity of a given node is quantified by its hidden degree. In our model, expected degrees can match observed degrees in real networks and we fix the positions of nodes in the metric space so that generated networks can be compared against real networks. This imposes constraints on the connection probability. Specifically, a link between a pair of nodes is created with a probability that resembles a gravity law, increasing with the product of nodes’ popularities and decreasing with their distance in the similarity space. We further ask the model to define an ensemble of geometric random graphs with maximum entropy under the constraints of having a fixed expected degree sequence. This determines completely the form of the connection probability depending on the value of one of the model parameters: temperature8. Next, we describe the $${{\mathbb{S}}}^{1}$$ model in the low and high-temperature regimes. Further technical details can be found in Supplementary Note 1.1. The $${{\mathbb{S}}}^{1}$$ is a model with hidden variables representing the location of the nodes in a similarity space and their popularity within the network. Specifically, each node is assigned a random angular coordinate θi distributed uniformly in [0, 2π], fixing its position in a circle of radius R = N/2π. In this way, in the limit N 1 nodes are distributed in a line according to a Poisson point process of density one with periodic boundary conditions. Each node is also given a hidden degree κi, which corresponds to its ensemble expected degree. In the low temperature regime, each pair of nodes is connected with probability $${p}_{ij}=\frac{1}{1+{\left(\frac{{x}_{ij}}{\hat{\mu }{\kappa }_{i}{\kappa }_{j}}\right)}^{\beta }},$$ (1) where xij = RΔθij is the distance between nodes i and j along the circle, and β > βc = 1 and $$\hat{\mu } = \frac{\beta }{2\pi \langle k\rangle }\sin \frac{\pi }{\beta }$$ are model parameters fixing the average clustering coefficient ($$\overline{c}$$) and average degree ($$\langle k \rangle$$) of the network, respectively8. In this representation, the parameter β plays the role of the inverse temperature, controlling the level of noise in the system. In the high temperature regime β < βc we again fix the angular coordinate and expected degree of the nodes (κi, θi) so that the degree distribution of the network remains unaltered when temperature is increased beyond the critical point and the model can be directly compared with real networks. Under these constraints, maximizing the entropy of the ensemble leads to the following connection probability8 $${p}_{ij}=\frac{1}{1+\frac{{x}_{ij}^{\beta }}{\hat{\mu }{\kappa }_{i}{\kappa }_{j}}},$$ (2) with $$\hat{\mu }\simeq (1-\beta ){2}^{-\beta }{N}^{\beta -1}/\langle k\rangle$$ for β < 1 and $$\hat{\mu }\simeq {(2\langle k\rangle \ln N)}^{-1}$$ when β = 1 (Here we define ‘AB’ as ‘A is asymptotically equal to B’, i.e. that the equality becomes exact as N → . This in contrast to ‘A ~ B’ which means that A and B are asymptotically proportional to one another). Notice that this definition of the model converges to the soft configuration model with a given expected degree sequence33,34,35,36 in the limit of infinite temperature β = 0. As we show in Supplementary Note 1.2, in this regime long range connections dominate, which causes the entropy density to scale as $$\ln N$$ (see Fig. 1) in the whole interval β [0, 1] (and so to diverge in the limit N → ) and the clustering to vanish in the thermodynamic limit. ### Entropy and the phase transition Now that we have defined the model both above and below the critical point βc = 1, we can study if the transition in the local properties (the presence of triangles attached to nodes) affects the global behavior of the system (codified by the thermodynamic properties, specifically the entropy). To this end, we show that, for β > βc, the networks generated by the $${{\mathbb{S}}}^{1}$$-model can be mapped exactly to a gas of identical particles with Fermi statistics. First, we note that the connection probability in Eq. (1) can be rewritten as the Fermi distribution8 $${p}_{ij}=\frac{1}{{e}^{\beta ({\epsilon }_{ij}-\mu )}+1},$$ (3) where the energy of state ij is $${\epsilon }_{ij}=\ln \left[\frac{{x}_{ij}}{{\kappa }_{i}{\kappa }_{j}}\right]$$ (4) and where the chemical potential $$\mu =\ln \hat{\mu }$$ fixes the expected number of links, as in the grand canonical ensemble. Second, links in our model are unlabeled –and so indistinguishable– objects. Third, the model generates simple graphs such that only one link can occupy a given state of energy ϵij, which implies that the links respect the Fermi exclusion principle. Finally, such a state is occupied with the probability given in Eq. (3), which is the occupation probability of the Fermi statistics in the grand canonical ensemble. Thus, the $${{\mathbb{S}}}^{1}$$ model is equivalent to a system of noninteracting fermions at temperature $$T=\frac{1}{\beta }$$8,14. These Fermi-like “particles” correspond to the links of the network and live on a discrete phase space defined by the N(N − 1)/2 pairs among the N nodes of the network. Each such state ij has an associated energy given by ϵij, which grows slowly with the distance between nodes i and j in the metric space. Despite the fact that links in the model are noninteracting particles, the system undergoes a continuous phase transition at a critical temperature $${T}_{c}={\beta }_{c}^{-1}=1$$, separating a geometric phase, with a finite fraction of triangles attached to nodes induced by the triangle inequality, and a non-geometric phase, where clustering vanishes in the thermodynamic limit9. We can analyze the nature of the transition by studying the entropy of the ensemble. Given the mapping of the $${{\mathbb{S}}}^{1}$$ model to a system of non-interacting fermions in the grand canonical ensemble, we start from the grand canonical partition function $$\ln {{{{{{{\mathcal{Z}}}}}}}}=\mathop{\sum}\limits_{i < j}\ln \left[1+{\left(\frac{{x}_{ij}}{\hat{\mu }{\kappa }_{i}{\kappa }_{j}}\right)}^{-\beta }\right],$$ (5) where $$\hat{\mu }=\exp \mu$$. Given the homogeneity and rotational invariance of the distribution of nodes in the similarity space, we can place the i’th node on the origin, leading to N identical terms. When the system size is large, we can approximate the sums in Eq. (5) by integrals. This leads to the following expression $$\ln {{{{{{{\mathcal{Z}}}}}}}} = \;N\iint {{{{{{{\rm{d}}}}}}}}\kappa {{{{{{{\rm{d}}}}}}}}\kappa ^{\prime} \rho (\kappa )\rho (\kappa ^{\prime} )\int\nolimits_{0}^{\infty }{{{{{{{\rm{d}}}}}}}}x\ln \left[1+{\left(\frac{x}{\hat{\mu }\kappa \kappa ^{\prime} }\right)}^{-\beta }\right]\\ = \;N\hat{\mu }{\langle k\rangle }^{2}\int\nolimits_{0}^{\infty }{{{{{{{\rm{d}}}}}}}}t\ln \left[1+{t}^{-\beta }\right]=N\frac{\hat{\mu }{\langle k\rangle }^{2}\pi }{\sin \frac{\pi }{\beta }}.$$ (6) We can then use the above expression to find the grand potential $${{\Xi }}=-{\beta }^{-1}\ln {{{{{{{\mathcal{Z}}}}}}}}$$ and the entropy as $$S={\beta }^{2}{(\frac{\partial {{\Xi }}}{\partial \beta })}_{\mu }$$ From this, we can find the entropy per link of the system as $$\frac{S}{E}=\beta -\pi \cot \frac{\pi }{\beta }\mathop{ \sim }\limits^{\beta \to {\beta }_{c}^{+}}\frac{1}{\beta -1},$$ (7) where in the last step $$\hat{\mu }$$ was plugged in. Note that E = Nk〉/2 is the number of links –and so particles– in the network. Interestingly, the entropy density is only a function of β, and so independent of the degree distribution. From Eq. (7), we see that the entropy per link diverges at the critical temperature $$\beta \to {\beta }_{c}^{+}=1$$. This implies that there is a sudden change in the behavior of the system at the critical point β = βc, which could indicate the presence of a phase transition. This transition is, however, anomalous –at odds with the continuous entropy density usually observed in continuous phase transitions– and thus cannot be described by Landau’s symmetry-breaking theory of continuous phase transitions. Figure 1 shows a numerical evaluation of the entropy for different system sizes in homogeneous networks confirming the divergence of the entropy per link at the critical temperature as predicted by our analysis. Nevertheless, as we show in Supplementary Note 1.2, entropy per link diverges logarithmically with the system size at β = βc so that the divergence can only be detected for very large systems. Notice that the $${{\mathbb{S}}}^{1}$$ model is rotationally invariant both above and below the critical temperature, which implies that there is no symmetry breaking at the critical point. In fact, we argue that βc separates two distinct phases with different organization of the cycles, or topological defects, in the network. Indeed, the cycle space of an undirected network with N nodes, E links, and Ncom connected components is a vector space of dimension E − N + Ncom37. This dimension is also the number of independent chordless cycles in the network as they form a complete basis of the cycle space. In complex networks, we are typically interested in connected or quasi-connected networks, with a giant connected component extending almost to the entire network. In the $${{\mathbb{S}}}^{1}$$ model this is achieved in the percolated phase when the average degree is sufficiently high, but still in the sparse regime so that the vast majority of cycles are contained in the giant component. In this case, by changing temperature without changing the degree distribution, the number of nodes, links, and components remain almost invariant and so does the number of chordless cycles. Thus, the two different phases correspond to a different arrangement of the chordless cycles of the network, as illustrated in the sketch in Fig. 1. This is again similar to the BKT transition since the number of vortices and antivortices is preserved in both phases. We, however, notice that the exact preservation of the number of cycles is not a necessary condition for the transition to take place. This difference in arrangement of the cycles is caused by the following process. At low temperatures, the high energy associated with connecting spatially distant points causes the majority of links attached to a given node to be local. This defines the geometric phase at β > βc where the triangle inequality plays a critical role in the formation of cycles of finite size. As temperature increases, the number of energetically feasible links connecting very distant pairs of nodes grows, and at ββc the number of available long-range states becomes macroscopic due to the logarithmic dependence of the energy on distance, which causes the entropy per link to be infinite in this regime. This defines a non-geometric phase where links are mainly long-ranged and the fraction of finite size cycles vanishes because the triangle inequality stops playing a role. This in turn implies that chordless cycles are necessarily of the order of the network diameter. In the geometric phase, there are finite cycles of any order, although, as we show in Fig. 2, the density of triangles is much higher than the density of squares, pentagons, etc. In the non-geometric phase, the cycles are of the order of the network diameter. However, due to the (ultra) small-world property and finite size effects, the diameter of the network can be quite small, so that the distinction between finite cycles of order higher than three and long-range cycles can be difficult. Therefore, the average local clustering coefficient –measuring the density of the shortest possible cycles, which are also the most numerous– is the perfect order parameter to quantify this topological phase transition. ### Finite size scaling behavior of the transition To quantify the behavior of clustering in this transition, we compute the average local clustering coefficient, $$\bar{c}$$, as the local clustering coefficient averaged over all nodes in a network. The local clustering coefficient for a given node i, with hidden variables (κi, θi), is defined as the probability that a pair of randomly chosen neighbors are neighbors themselves and, using results from38, can be computed as $${c}_{i}=\frac{{\sum }_{j\ne i}{\sum }_{k\ne i}{p}_{ij}{p}_{jk}{p}_{ik}}{{\left({\sum }_{j\ne i}{p}_{ij}\right)}^{2}}.$$ (8) In Supplementary Notes 1.3 and 1.4 we derive analytic results for the behavior of the average local clustering coefficient when hidden degrees follow a power law distribution ρ(κ) ~ κγ with 2 < γ < 3 and a cutoff κ < κc ~ Nα/2. Notice that the arguments above, presenting the average local clustering coefficient as an appropriate order parameter, should be valid for all choices of the distribution of the hidden degrees, as long as they lead to sparse graphs. Here, we choose this specific definition because it is the most common in the literature and allows for analytically tractable results. Notice also that it includes both the heterogeneous case with (α > 1) and without (0 < α ≤ 1) degree-degree correlations39, as well as the homogeneous case (α = 0, see Supplementary Note 1.3 for the derivation) where ρ(κ) = δ(κ − 〈k〉). When β > 1, i.e. in the geometric region, the average local clustering coefficient behaves as9 $$\mathop{\lim }\limits_{N\to \infty }\overline{c}(N,\beta )=Q(\beta ),$$ (9) for some constant Q(β) that depends on β. Moreover, there exists a constant $$Q^{\prime}$$ such that $$\mathop{\lim }\limits_{\beta \to {1}^{+}}\frac{Q(\beta )}{{(\beta -1)}^{2}}=Q^{\prime} .$$ (10) The analytic results for β ≤ 1 are derived by finding appropriate bounding functions $$f(N,\beta )\le \overline{c}(N,\beta )\le g(N,\beta )$$ that are both asymptotically proportional to Nσ(β)h(N, β), where h(N, β) represents some non-power law function of N, implying that $$\overline{c} \sim {N}^{-\sigma (\beta )}h(N,\beta )$$ as well. When $$\beta ^{\prime} < \beta \le 1$$, i.e. in the quasi-geometric region, $$\overline{c}(N,\beta ) \sim \left\{\begin{array}{ll}{(\log N)}^{-2} &{{{{{{{\rm{if}}}}}}}}\,\beta =1\\ {N}^{-2({\beta }^{-1}-1)} &{{{{{{{\rm{if}}}}}}}}\,\beta ^{\prime}\; < \; \beta \; < \; 1\end{array}\right.$$ (11) where the value of $$\beta ^{\prime}$$ depends on the parameter α. If α > 1 it is given by $$\beta ^{\prime} =2/\gamma$$ and if κc grows with N slower than any power law (α = 0) then $$\beta ^{\prime} =\frac{2}{3}$$. Notice that the behavior in a close neighborhood of βc is independent of γ. The fact that the microscopic details of the model, in particular the hidden degree distribution, do not affect this scaling behavior points to the universality of our results. Finally, when $$\beta \; < \; \beta ^{\prime}$$ (in the non-geometric region), the exact scaling behavior depends on α (see the Supplementary Note 1.3 for the case 0 < α ≤ 1): $$\overline{c}(N,\beta ) \sim \left\{\begin{array}{ll}{N}^{-(\gamma -2)}\log N\, & \quad {{{{{{{\rm{if}}}}}}}}\,\alpha \; > \; 1\\ {N}^{-1}\,\hfill & \quad {{{{{{{\rm{if}}}}}}}}\,\alpha =0.\end{array}\right.$$ (12) These results are remarkable in many respects. First, clustering undergoes a continuous transition at βc = 1, attaining a finite value in the geometric phase β > βc and becoming zero in the non-geometric phase β < βc in the thermodynamic limit. The approach to zero when $$\beta \to {\beta }_{c}^{+}$$ is very smooth since both clustering and its first derivative are continuous at the critical point. Second, right at the critical point, clustering decays logarithmically with the system size, and it decays as a power of the system size when β < βc. This is at odds with traditional continuous phase transitions, where one observes a power law decay at the critical point and an even faster decay in the disordered phase. Third, there is a quasi-geometric region $$\beta ^{\prime} \; < \; \beta \; < \; {\beta }_{c}$$ where clustering decays very slowly, with an exponent that depends on the temperature. Finally, for $$\beta \; < \; \beta ^{\prime}$$, we recover the same result as that of the soft configuration model for scale-free degree distributions34. The results in Eqs. (11) and (12)) around the critical point suggest that $${N}_{{{{{{{{\rm{eff}}}}}}}}}=\ln N$$ plays the role of the system size instead of N. Indeed, in terms of this effective size, we observe a power law decay at the critical point and a faster decay in the unclustered phase, as expected for a continuous phase transition. Consequently, we expect the finite size scaling ansatz of standard continuous phase transitions to hold with this effective size. We then propose that, in the neighborhood of the critical point, clustering at finite size N can be written as $$\bar{c}(\beta ,N)={\left[\ln N\right]}^{-\frac{\eta }{\nu }}f\left((\beta -{\beta }_{c}){\left[\ln N\right]}^{\frac{1}{\nu }}\right),$$ (13) with η = 2, ν = 1, and where f(x) is a scaling function that behaves as f(x) ~ xη for x → . We test these results with numerical simulations and by direct numerical integration of Eq. (8) using Eq. (1) for β > βc and Eq. (2) for ββc. Simulations are performed with the degree-preserving geometric (DPG) Metropolis-Hastings algorithm introduced in40, that allows us to explore different values of β while preserving exactly the degree sequence. Given a network, the algorithm selects at random a pair of links connecting nodes i, j and l, m and swaps them (avoiding multiple links and self-connections) with a probability given by $${p}_{swap}=\min \left[1,{\left(\frac{{{\Delta }}{\theta }_{ij}{{\Delta }}{\theta }_{lm}}{{{\Delta }}{\theta }_{il}{{\Delta }}{\theta }_{jm}}\right)}^{\beta }\right],$$ (14) where Δθ is the angular separation between the corresponding pair of nodes. This algorithm maximizes the likelihood that the network is $${{\mathbb{S}}}^{1}$$ geometric while preserving the degree sequence and the set of angular coordinates, and it does so independently of whether the system is above or below the critical temperature. Notice that the continuity of Eq. (14) as a function of β makes it evident that, even if the connection probability takes a different functional form above and below the critical point, the model is the same. Figure 3 shows the behavior of the average local clustering coefficient as a function of the number of nodes for homogeneous $${{\mathbb{S}}}^{1}$$ networks with different values of β, showing a clear power law dependence Nσ(β) in the non-geometric phase β < βc, with an exponent that varies with β as predicted by our analysis. These results are used to measure the exponent σ(β) as a function of the inverse temperature β, which in Fig. 4 are compared with the theoretical value given by Eqs. (11) and (12)). The agreement is in general very good, although it gets worse for values of β very close to βc and for very heterogeneous networks. This discrepancy is expected due to the slow approach to the thermodynamic limit in the non-geometric phase, which suggests that the range of our numerical simulations, N [5 × 102, 105], is too limited. To test for this possibility, we solve numerically Eq. (8) for sizes in the range N [5 × 105, 108] and measure numerically the exponent σ(β). In this case, the agreement is also very good for heterogeneous networks. The remaining discrepancy when β ≈ βc is again expected since, as shown in Eq. (11), right at the critical point clustering decays logarithmically rather than as a power law. Finally, Fig. 5 shows the finite size scaling Eq. (13) both for the numerical simulations and numerical integration of Eq (8). In both cases, we find a very good collapse with exponent η/ν ≈ 2 in all cases. The exponent ν, however, departs from the theoretical value ν = 1 in numerical simulations due to their small sizes but improves significantly with numerical integration for bigger sizes. We then expect Eq. (13) to hold, albeit for very large system sizes. The slow decay of clustering in the non-geometric phase implies that some real networks with significant levels of clustering may be better described using the $${{\mathbb{S}}}^{1}$$ model with temperatures in the quasi-geometric regime β < βc. Given a real network, the DPG algorithm can be used to find its value of β. To do so, nodes in the real network are given random angular coordinates in (0, 2π). Then the DPG algorithm is applied, increasing progressively the value of β until the average local clustering coefficient of the randomized network matches the one measured in the real network. Many real networks have very high levels of clustering and lead to values of β > βc. However, there are notable cases with values of β below the critical point. As an example, in Supplementary Note 2 we show values of β obtained for several real networks with values below or slightly above βc. In fact, some of them are found to be very close to the critical point, like protein-protein interaction networks of specific human tissues41, with β ≈ 1, or the genetic interaction network of the Drosophila Melanogaster42, β ≈ 1.1. ## Conclusions The $${{\mathbb{S}}}^{1}$$ model shows different behavior of the average local clustering coefficient on the left and right side of βc = 1. To understand if there is a phase transition that goes beyond clustering, we cast the model into a framework of Fermi statistics and compute the entropy of the ensemble. The result shows that the entropy diverges at the critical point, implying a change in the structural organization of the system as a whole. Because the model is rotational invariant in both regimes, one can conclude that this transition is not due to symmetry breaking. The behavior of clustering—non-zero on the right and vanishing on the left of the critical point—indicates that the transition is of topological nature related to the organization of cordless cycles. As the model around the critical point is in the small-world regime, the largest cycles are, at most, of the order $$\ln N$$ on both sides. This implies that short cycles, like triangles, are more appropriate as the order parameter to study the phase transition. As the $${{\mathbb{S}}}^{1}$$ model is geometric in nature, the set of states that edges—considered noninteracting particles in the Fermi description—can occupy are correlated by the triangle inequality in the underlying metric space. This correlation induces an effective interaction between particles, ultimately leading to a clustered phase at low temperatures and to the anomalous phase transition described above. Interestingly, the logarithmic dependence of the state-energy with the metric distance results in the divergence of the entropy at a finite temperature βc and, thus, to a different ordering of cycles below βc, where clustering vanishes in the thermodynamic limit. The finite size behavior of the transition is anomalous, with $$\ln N$$ and not N playing the role of the system size. This slow approach to the thermodynamic limit is relevant for real networks in the quasi-geometric phase $$\beta ^{\prime} \; < \; \beta \; < \; 1$$, for which high levels of clustering can still be observed. All together, our results describe an anomalous topological phase transition that cannot be described by the classic Landau theory but that, nevertheless, differs from other topological phase transitions, such as the BKT transition, in the behavior of thermodynamic properties.
# Question 5 10 pts (A) Calculate AG for the reaction: 2H20 (1) = H3O+ (aq) +... ###### Question: Question 5 10 pts (A) Calculate AG for the reaction: 2H20 (1) = H3O+ (aq) + OH- (aq) at 25°C given the following initial concentrations, and thermodynamic values: [H30*) - 1.0 x 10-12 M and (OH) = 2.0 x 108M Substance AG; (H) 34.0 H30+ (aq) OH(aq) H2O(1) 1595 -237 (B) Predict the direction in which the reaction will proceed spontaneously to establish equilibrium. For full credit be sure to upload a copy of All work necessary to arrive at the answer for this question. Please note that work submitted of any kind that is not clear, legible, complete or in a format that can be opened by the instructor will be given a grade of zero! Upload Choose a File #### Similar Solved Questions ##### 1. Let L: R2-R2 be defined by L(x.y) (x +2y, 2x - y). Let S be... 1. Let L: R2-R2 be defined by L(x.y) (x +2y, 2x - y). Let S be the natural basis of R2 and let T = {(-1,2), (2,0)) be another basis for R2 . Find the matrix representing L with respect to a) S b) S and1T c) T and S d) T e) Find the transition matrix Ps- from T basis to S basis. f) Find the transitio... ##### Show Intro/Instructions Using your favorite statistics software package, you generate a scatter plot with a regression... Show Intro/Instructions Using your favorite statistics software package, you generate a scatter plot with a regression equation and correlation coefficient. The regression equation is reported as y = 81.28.1 + 71.93 and the r = -0.85. What percentage of the variation in y can be explained by the var... ##### Two large containers A and B of the same size are filled with different fluids. The... Two large containers A and B of the same size are filled with different fluids. The fluids in containers A and B are maintained at 0C and 100C, respectively. A small metal bar, whose initial temperature is 100C, is lowered into container A. After 1 minute the temperature of the bar is 90C. After 2 m... ##### How many chiral carbon atoms exist in the following compound? (Fill in a number) How many... How many chiral carbon atoms exist in the following compound? (Fill in a number) How many possible stereoisomers are there of dextrose, an organic compound found in Lactaid and also in blood-sugar in hospital IVs? (Fill in a number) Is the letter "J" chiral or achiral? Achiral ... ##### Review the tenets of the Patient Bill of Rights. Are any of these protections more important... Review the tenets of the Patient Bill of Rights. Are any of these protections more important than the others? Which one stands out to you as the most important and why? A minimum word count of 150 words... ##### Problem 1 REGIONAL AIRLINES Regional Airlines is establishing a new telephone system for handling flight reservations.... Problem 1 REGIONAL AIRLINES Regional Airlines is establishing a new telephone system for handling flight reservations. During the 10:00 A.M. to 11:00 A.M. time period, calls to the reservation agent occur ran- domly at an average of one call every 3.75 minutes. Historical service time data show that... ##### Clifford, Inc., has a target debt-equity ratio of .65. Its WACC is 8.1 percent, and the... Clifford, Inc., has a target debt-equity ratio of .65. Its WACC is 8.1 percent, and the tax rate is 23 percent. a. If the company's cost of equity is 11 percent, what is its pretax cost of debt? (Do not round intermediate calculations and enter your answer as a percent rounded to 2 decimal place... ##### A certain process for removing copper from copper bearing ore had been in use for a... A certain process for removing copper from copper bearing ore had been in use for a period of time, and it is known that this process removes 36 pounds per ton on average. A new process that is supposed to increase the recovered amount is being tested. In a simple random sample of 100 batches of ore... ##### If the quadratic formula of a quadratic function yields 14, how many unique real and complex zero(s) does the function have? If the quadratic formula of a quadratic function yields 14, how many unique real and complex zero(s) does the function have?... ##### Scorer033 af 1 pt 9 of 10 (10 complete) HW Score: 79.67 % , 7.97 of 10 pt P 8-27 (book/static) EQuestion Heip You a... Scorer033 af 1 pt 9 of 10 (10 complete) HW Score: 79.67 % , 7.97 of 10 pt P 8-27 (book/static) EQuestion Heip You are considering making a movie. The movie is eapected to cost $10.0 million up front and take a year to produce Ater that, tis expected to make$5.0 milion in the year it is released and... ##### 6.28 Rank the three carbocations shown in terms of increasing stability: (a) 6.28 Rank the three carbocations shown in terms of increasing stability: (a)...
# A poll from a previous year showed that 10% of smartphone owners relied on their data plan as their primary form of internet access. Researchers were curious if that had changed, so they tested H_0:p=10% versus H_a:p=!10% where p is the proportion of smartphone owners who rely on their data plan as their primary form of internet access. They surveyed a random sample of 500 smartphone owners and found that 13% of them relied on their data plan. The test statistic for these results was z aprox 2.236, and the corresponding P-value was approximately 0.025.Assuming the conditions for inference were met, which of these is an appropriate conclusion? A poll from a previous year showed that 10% of smartphone owners relied on their data plan as their primary form of internet access. Researchers were curious if that had changed, so they tested ${H}_{0}:p=10\mathrm{%}$ versus ${H}_{a}:p\ne 10\mathrm{%}$ where p is the proportion of smartphone owners who rely on their data plan as their primary form of internet access. They surveyed a random sample of 500 smartphone owners and found that 13% of them relied on their data plan. The test statistic for these results was $z\approx 2.236$, and the corresponding P-value was approximately 0.025. Assuming the conditions for inference were met, which of these is an appropriate conclusion? a) At the $\alpha$=0.01 significance level, they should conclude that the proportion has changed from 10%. b) At the $\alpha$=0.01 significance level, they should conclude that the proportion is still 10%. c) At the $\alpha$=0.05 significance level, they should conclude that the proportion has changed from 10%. d) At the $\alpha$=0.05 significance level, they should conclude that the proportion is still 10%. The correct answer is c but why could it not have been b? Why is it c? You can still ask an expert for help • Questions are typically answered in as fast as 30 minutes Solve your problem for the price of one coffee • Math expert for every subject • Pay only if we can solve it snowman8842 Two issues: (c) has 0.05 while (b) has 0.01 Your choices are conventionally "reject the null hypothesis" or "do not reject the null hypothesis", but not "accept the null hypothesis" which is close to the wording of (b) and (d)
### Symmetry, Integrability and Geometry: Methods and Applications (SIGMA) SIGMA 11 (2015), 083, 11 pages      arXiv:1509.00886      https://doi.org/10.3842/SIGMA.2015.083 Contribution to the Special Issue on Orthogonal Polynomials, Special Functions and Applications ### Certain Integrals Arising from Ramanujan's Notebooks Bruce C. Berndt a and Armin Straub b a) University of Illinois at Urbana-Champaign, 1409 W Green St, Urbana, IL 61801, USA b) University of South Alabama, 411 University Blvd N, Mobile, AL 36688, USA Received September 05, 2015, in final form October 11, 2015; Published online October 14, 2015 Abstract In his third notebook, Ramanujan claims that $$\int_0^\infty \frac{\cos(nx)}{x^2+1} \log x \,\mathrm{d} x + \frac{\pi}{2} \int_0^\infty \frac{\sin(nx)}{x^2+1} \mathrm{d} x = 0.$$ In a following cryptic line, which only became visible in a recent reproduction of Ramanujan's notebooks, Ramanujan indicates that a similar relation exists if $\log x$ were replaced by $\log^2x$ in the first integral and $\log x$ were inserted in the integrand of the second integral. One of the goals of the present paper is to prove this claim by contour integration. We further establish general theorems similarly relating large classes of infinite integrals and illustrate these by several examples. Key words: Ramanujan's notebooks; contour integration; trigonometric integrals. pdf (295 kb)   tex (12 kb) References 1. Berndt B.C., Ramanujan's notebooks. Part I, Springer-Verlag, New York, 1985. 2. Berndt B.C., Ramanujan's notebooks. Part IV, Springer-Verlag, New York, 1994. 3. Gradshteyn I.S., Ryzhik I.M., Table of integrals, series, and products, 8th ed., Academic Press Inc., San Diego, CA, 2014. 4. Ramanujan S., Collected papers, Cambridge University Press, Cambridge, 1927, reprinted by Chelsea, New York, 1962, reprinted by Amer. Math. Soc., Providence, RI, 2000. 5. Ramanujan S., Notebooks. Vols. 1, 2, Tata Institute of Fundamental Research, Bombay, 1957.
home   |   primer index   |   1. Intro: Why 6502?   |   2. addr decode   |   3. mem map req.s   |   4. IRQ/NMI conx   |   5. 74 families & timing   |   6. clk gen   |   7. RST   |   8. mystery pins   |   9. AC performance construction   |   10. exp bus & interfaces   |   11. get more on a board   |   12. WW Q&A   |   13. custom PCBs   |   14. I/O ICs   |   15. displays   |   16. getting 65xx parts   |   17. project steps   |   18. program-writing   |   19. debugging   |   20. pgm tips   |   21. workbench equip   |   22. circuit potpourri 6502 PRIMER: Building your own 6502 computer Most circuits here are ones I've actually been using.  A few may be ideas I only sketched without trying—but I do have a strong record of circuits working right the first time. All links below have been verified or fixed Dec 12, 2020. Note that there are many ways to do most of these things, and probably just as many that a newcomer might think of that are DOA for various reasons.  The goal of this page is to make your home-made computer a useful "Swiss army knife" of the workbench, rather than just a novelty for fun only, or a consumer item.  There's no OS or kernel for it on my website yet, but I hope to publish my 6502 and 65816 Forths as time allows. First: a very basic whole-computer schematic Since I have been asked for a whole-computer schematic of a really basic 6502 computer, here's one.  It's the first time in 18 years that I've put the whole computer in one diagram.  It is a modified (actually simplified) version of the schematic for the computer shown near the bottom of the page on address decoding.  There was the temptation to add other easy things, but the point here is to keep it almost as simple as it could be. Notes: 1. Three clock options are shown.  You will need to pick one.  Do not install the parts for all three on the same board. • For using an oscillator can that goes into a 14-pin DIP socket, omit C4, C5, C6, Y1, and R4. • For connecting a crystal directly to the processor, omit U3 (the oscillator can) and C4, and make R4 220K.  The data books say to make C5 and C6 51pF and make R4 200K, but more-standard values are 47pF and 220K which I suspect will work fine.  The books also say 1MHz; and although I suspect you could go much higher and it would still work, I don't know how high. • For using a resistor and capacitor for a non-speed-critical timebase, omit U3, C5, C6, and Y1, and make C4 68pF and make R4 5.6K to get approximately 1MHz.  You could vary this also but again I don't know how high you can dependably go.  You won't hurt anything by trying. NOTE about the RC and crystal options above:  WDC no longer tests or specifies the gate delays between Φ0 in, Φ1 out, and Φ2 out for their newer 65c02's, and they would prefer that the designer use the external oscillator option.  All the same internal inverters seem to still be in place however, so I have little doubt that the circuit above will still work fine; but I had to pass the info on.  If you do it per WDC's preference, the output of your external oscillator (probably an oscillator can) goes to everything requiring Φ2, and pins 3 and 39 go unused. 2. The reason the quad NAND gate is a 74HC132 instead of a 74HC00 is that the reset circuit needs the schmitt-trigger input.  The '132 will be a little slower than the '00, but there's a maximum of only two gate delays and that's in the I/O-select circuit.  It will be plenty fast for operation at a few MHz.  You might be able to find a 74AC132 which would be faster than the 74HC132. 3. For the power input connector J1, choose a type that will not let you accidentally connect the power backwards.  DC-10 jacks are popular. 4. JW1 selects which interrupt input you use to the processor.  If you stay with only one I/O IC, you will normally use only IRQ, not NMI.  For this, connect JW1 pin 2 to pin 3, and also pin 4 to pin 5.  If you add more I/O ICs later, you may want jumper selections for each one; but be sure to observe the methods in the IRQ/NMI connections section. 5. JW2 needs to be shorted for non-WDC 65(c)02's.  For WDC, leave it open, because it is a vector-pull output instead of a ground connection. 6. JW3 selects whether you're using a 27c256 EPROM or a 28c256 EEPROM.  For the 27c256, connect 2 to 3, and 4 to 5.  For the 28c256, connect 1 to 2, and 3 to 4.  Slight additional circuit complexity will be needed to allow the computer to write to its own EEPROM (see Daryl Rictor's example code in this forum post), using a WR signal as shown near the bottom of section 6 of this primer, on clock generation. RAM: 0000-3FFF (using half of a 62256 32Kx8 SRAM.  Running OE to A14 keeps the top half from interfering.) I/O:    4000-7FFF (but the 6522 VIA shown above will be addressed at 6000-600F.  More on additional I/O below.) ROM: 8000-FFFF (using all 32KB of EPROM or EEPROM) 8. If you don't ever add another I/O IC, another way to do the address decoding would be to connect A15 to the VIA's CS2, and A14 to its CS1.  This removes one level of gate delay in the address decoding without changing the above addresses, but precludes ever adding more I/O ICs.  For most uses of the computer I would highly recommend at least one more VIA though! 9. For each additional I/O IC you may want to add, connect its CS pin to I/O SEL and its CS pin to the next address line down.  This way you can have up to ten I/O ICs with no additional glue logic.  You can probably see already that each IC will have more than one possible addresses range; but it is still possible to address each IC individually.  In decades of doing it this way, I have never had any problem with it. 10. The I/O pin headers are standard dual-row 14-pin headers with .025" square posts on .100" centers.  You will typically plug projects into these with IDCs on ribbon cables.  The pin numbering is given for as you look down on the pins from the top.  The organization is the same as Daryl's except that I made the power and ground connections at the ends such that accidentally reversing the IDC on the header will not swap power and ground and damage things.  If you want it compatible with his SBCs, make pins 1 & 2 to be ground, and 13 & 14 to be +5V.  Be sure to put enough room between the pin headers to plug the IDCs onto.  Insufficient room will prevent connecting both at once. 11. The capacitors shown in the bottom-right corner should go from power to ground as close as possible to the power and ground pins of each IC, with the shortest possible leads and connections.  The value is not critical, and 0.1µF is common and is fine; but depending on the construction, .01µF sometimes results in better bypassing at higher frequencies where groundbounce is more of a problem. In the tips below, you will see that you can interface an unexpectedly high amount of things on a single VIA; but again, I strongly recommend at least one additional VIA for most uses of the computer.  Note #9 above tells how to add more.  Even if you don't want to add it to start, if there's any possibility at all that you'll want it in the future, at least leave room for it and the connectors. Do get cozy with the VIA's registers, timers, and capabilities.  Its name, Versatile Interface Adapter, is quite appropriate.  If you have questions, you can email me at wilsonminesBdslextremeBcom (replacing the B's with @ and .) or post them to the 6502.org forum. Then: Connecting I/O devices Using the 6522's shift register for tons of output bits The 65(c)22 VIA's shift register (SR) has many modes of operation and uses.  When speed doesn't need to be superfast (as when controlling relays, audio connections, the output voltage of a programmable power supply, etc.), you can use the SR to expand the number of input and/or output bits, virtually without limit.  It becomes extra helpful when those bits need to be at a different logic voltage (like 12V), something other than the voltage the computer runs at.  (More on that later.)  First, consider a way to give lots of output bits at the computer's own logic voltage levels: Although four 74HC595's (giving 32 output bits) are shown, you can expand the chain almost indefinitely, as long as the 595's are CMOS (ie, have high-impedance inputs, unlike 74LS or other non-CMOS).  CA2 is shown as the output to latch the shifted values into the 595's, but you could use any output bit for that.  I like to use CA2 to leave PA and PB free for other uses where you might want all 8 bits of a port, like for an 8-bit A/D or D/A converter.  (I'll address that further down too.)  Use 6522 SR mode 101 to clock the data out under control of T2, or mode 110 to clock it at half the Φ2 rate.  Example code is given about 80% of the way down the front page of the 16-Bit Fixed-Point/Scaled-Integer Math section.  There's more there than you need for output only, but it shouldn't be hard to figure out.  I want to put code on this page here when I figure out how to put scrollable windows in it so the code doesn't make the page so long. Using the 6522's shift register for tons of input bits Similar to the above, the 65(c)22 VIA's shift register can be used to expand the number of input bits, again virtually without limit.  For this, use SR mode 001 to clock the data in under control of T2, or mode 010 to clock it at half the Φ2 rate.  These are bits 4, 3, and 2 of the VIA's auxiliary control register, or ACR. (The '165 signal pin not shown is the inverted serial output, pin 7.) Althought only three 74HC165's (giving 24 input bits) are shown, you can expand the chain almost indefinitely, as long as the 165's are CMOS (ie, have high-impedance inputs, unlike 74LS or other non-CMOS).  CA2 is shown as the output to load the 165's when it's low and allow them to shift the loaded data bits when it's high.  (Later, when we address using the same SR for both and input chain and an output chain, we will need a 74HC126, and the CA2-high output will then also enable the '126.)  Again, I like to use CA2 to leave PA and PB free for other uses where you might want all 8 bits of a port. "So why the 74HC74 D-type flip-flop at the end of the chain?" you might be asking.  The answer is that the VIA looks at the data line at the first rising Φ2 clock edge after the serial clock goes up; but the '165 also goes to the next bit when the serial clock goes up, so the first bit the VIA sees is bit 6, not bit 7.  Since the '165 chain gets ahead by one bit, we delay it by one bit with the flip-flop.  There are other ways to get around this, like putting the output of the last '165 around to the input of the first one and rotating in software (which is not practical if the chain spans multiple circuit boards), or offsetting the bits in the hardware connections (which might require adding an extra '165 to every board if the chain spans multiple boards). When we get to the software, note that after you latch the data into the 165's, you have to do a dummy read of the VIA's SR to start the shifting process.  Then allow enough time to finish the process before reading again—at least 16 Φ2 clocks in mode 010, or longer in mode 001, depending on what you have T2 set to.  Read once more for each additional '165.  To restart the process and get a new set of readings, put CA2 low to load the new set of data into all the 165's, and then put it back high again to shift, then begin the shifting again, starting with the dummy read. Using the 6522's shift register for both input and output The circuit at approximately the middle of the front page of the 16-bit scaled-integer math section gives the idea for combining the two uses above.  The circuit there is a last-resort method to interface to the huge math look-up tables in ROM if all your other I/O is taken.  Interfacing to huge memory this way is slow, but still much faster than actually calculating the various trigonometric, logarithmic, and other functions, and it's accurate to all 16 bits. SS22: Using the 6522's shift register for a link between computers Article is at http://forum.6502.org/viewtopic.php?f=4&t=2175 Converting to and from higher-voltage logic Some situations require interfacing to control circuitry that works on 12V or other voltage substancially higher than the 5V (or 3.3V, or whatever your creation works at).  I have, many times, used for example 4066, 4051, 4052, and 4053 analog switches to control 12V analog circuits (mostly audio), and these ICs needed the higher-voltage logic.  (Maxim has these available with 5V logic control, but it's a very expensive way to go.)  Consider the common and cheap 14-pin LM339.  This diagram shows going from TTL-level logic to 12V logic levels: If you take the top of R1 to a different voltage, you will need to change the resistor value.  If this goes to a separate board that only has 12V (or other high voltage) available and not 5V, you can of course take R1 to that voltage, and calculate the adjusted value.  1.4V across 22K gives 63.6µA, so take your high voltage minus 1.4V and divide that result by 63.6µA to get your resistor value.  It's not critical to get the reference voltage very exact in this case, so you can round to the nearest standard value within 20% or so and do just fine.  The input bias current of each LM339 section is typically only 25nA, not enough to make a dent in the calculation, so we can leave it out for simplicity's sake, even if you run the same reference to many LM339 sections. For going the other direction, ie, higher voltage to 5V logic, you can do something like this: It does not have to be 12V of course.  The LM339 can work up to 36V, and although its power supply voltage needs to be at least 2V, the input & output can go even much lower than that.  If 12V (or similar) logic is driving the input, you will do just fine if you want to connect the reference to the 5V supply, and then you won't need to get the reference from resistors. A comparator is similar to an op amp, but it's not an op amp.  The comparator is made to have its output hit the rails and recover more quickly than an op amp of similar price can.  The penalty is that the phase response is not kept under control to keep it stable in an analog output range; but that's not what it's for anyway.  One of the reasons for this comparator however is the open-collector output that lets the high output voltage be independent of the comparator's supply voltage. For best speed: • Bypass the 339's power to ground with a .1µF capacitor, like you would other logic ICs, with short connections. • Bypass the reference voltage input to ground with another .1µF capacitor. • Connect the 339's power pin to the higher voltage.  (This improves the speed a little but does not affect the output voltage, as the latter depends on what you pull it up to, not on the 339's power-supply voltage.) • Do not divide the signal input with resistors, since along with the input capacitance and other stray capacitance, you would get an RxC that would slow it down.  If you really must divide it down, see the forum topic at Mixed Voltage Systems: Interfacing 5V and 3.3V Devices.  (It's not very long.) • Keep the pull-up resistor at around 6.8K for 12V, 4.7K for 5V, and 3.3K for 3.3V.  There may be a temptation to increase the value to save power; but that will slow it down. • Minimize the capacitive load on the output. • Note that the LP339 is the lower-power version, but it is also much slower than the LM339. I've gotten over 4MHz with a 4049 following the LM339 this way.  The LM360 and probably others are faster, but often the circuitry on the 12V side can't use any more speed than the 339 can deliver anyway.  I used the LM339 in my home-made PIC microcontroller programmer because the workbench computer that runs it runs at 5V but the programmer can verify at anywhere from 2V to 6V (so yes, it is production-worthy), controlled by the workbench computer's D/A output.  This requires logic-level translation, hence the LM339; but although it worked for many years on many different PIC variations I programmed, there was trouble sometimes with the PIC16F72 and PIC16F74 which turned out to be because the slew rate on the clock line coming out of an LM339 section was not fast enough.  The solution was to add a pair of 74HC14 Schmitt-trigger inverter sections in series to improve the rise time.  (The 'HC14 can work at 2V to 6V, so it was ok.) For going between 5V and 3.3V or other low logic voltages, there are other ICs on the market that would be more suitable than the 339 (although I have used the 339 there too), like the 74LVCxx family.  Additionally, after I wrote the above, "ttlworks" on the forum alerted me to the 74LS06 (inverting) and 74LS07 (non-inverting) open-collector hex buffers which are much faster than the 339 and have a maximum output voltage of 30V but whose inputs do make for a greater load and are not available in CMOS; and Jeff Laughton alerted us to this series of TI videos on their LSF family of voltage-level translation ICs. Efficient bit-twiddling and testing The 74xx251 and '259 can be used to twiddle or test I/O bits independently of others in a set, much more efficiently than you can with just a 6522 VIA.  On the front page of this site, I offer a module with 8 input bits and 8 output bits for this.  It requires neither RMW instructions, nor AND nor ORA instructions, nor altering A, X, or Y.  The data sheet (.pdf) has the circuit, ideas for implementation, sample code, and more, which you can use to build such a circuit onto your own board, or you can buy mine.  The idea came from the book "Advanced 6502 Interfacing" by John M. Holland, first edition, pages 37 & 53. Driving 12V relay coils For driving 12V relay coils, the 339's output might be slightly anemic; so instead, it's probaly best to connect 5V logic outputs to the base of something like a 2N4401 transistor, connect the emitter to ground, and the collector to the low side of the relay coil.  Don't forget the diode to protect the transistor from the inductive turn-off transient! The resistor value was chosen for a 15mA coil and making sure the transistor is really saturated even if its gain is at the low extreme of its allowable current-gain range and the circuit feeding it can't pull up to more than about 3V.  CMOS will be able to pull it up higher but the overdrive is nowhere near enough to hurt anything. If you need a lot of these, you might do better to use an IC like the ULN2803 which has 8 Darlington drivers in an 18-pin package with integral resistors and protection diodes.  That would replace 24 discrete components, three for each circuit above, times eight.  The only negative about it is that being Darlington, the output voltage won't come down to much under a volt.  The configuration shown above however allows the output voltage to come nearly all the way to ground if the load is light enough and the bias current strong enough for the chosen transistor. High-voltage shift registers If you have a lot of higher-voltage (e.g., 10V, 12V, 15V) circuits you need to interface to your computer which has 5V I/O, it makes sense to use higher-voltage shift registers and only convert the three bits that go to and from the 6522's SR, instead of using the 74HC165's and 595's above and converting gobs of bits.  So use the LM339 circuits (three bits fit into a single 339's four sections with one left over), and, instead of the 165's and 595's, use 4094's for output bits and 4021's for input bits.  Again, the shift-register chains can be quite long.  4000-series logic is very slow compared to 74HC, 74AC, 74LS, etc., but is good for 18V max.  Note that there is no "74" in front of the 4000 numbers in this case. In the automated test equipment I designed, built, and programmed in approximately 1990, I drove approximately 75 relays by way of Allegro Microsystems UCN5821A (I think it was made by Sprague at the time) 8-bit, serial-in, parallel-out 16-pin shift registers which are rated for 50V, 350mA, made specifically for driving relays and other heavyish loads.  It looks like Micrel, now acquired by Microchip, sells it now under the part number MIC5821BN.  These were mixed in with lots of 4094's in the same serial chain all controlled by the same three pins of the 6522 VIA. There's also TI's TPIC6 line of logic ICs, with for example the TPIC6A595 shift register that is similar to the 74HC595 mentioned above in the section "Using the 6522's shift register for tons of output bits" but with open-drain outputs that can handle 50V when off and 350mA each when on, even though the IC's power supply and logic inputs are 5V, and it has extra ground pins to handle all that extra current (IOW, it cannot be used as a drop-in replacement for the 74HC595).  Also available are the '259 8-bit addressable latch and the '273 octal D-type latch, although these last two are much more expensive, at over $4 each in singles. Interfacing to I²C There are hundreds, possibly thousands, of peripheral ICs on the market, from many manufacturers, that are interfaced by way of I²C (commonly pronounced "eye squared see") which stands for "inter-integrated circuit" and was invented by Philips. Wikipedia has quite a write-up on it, and Philips has a seminar-style overview on it. It is a synchronous-serial interface that requires only two wires; a bi-directional data line and a clock line. Each transaction is begun with a "start" condition and then the address of the target device. The other devices stand by and don't respond or do anything until there's a "stop" condition followed by another "start" condition and their address. As long as you don't go too fast for the bus with its stray capacitance, there are no particular timing requirements like RS-232 has, so bit-banging it is very easy. More on that later. Here's one way to interface I²C to a 6522 VIA: You can use any I/O bits you want, but I chose the ones above for particular reasons. • Bit 0 makes a good clock bit, because as long as you know its value, you can pulse it with only two instructions, INC<port> DEC<port> to make a positive pulse, or swap them for a negative pulse, without affecting the other bit values. If you put the VIA in zero page, you could use the SMB and RMB instructions to put the clock on any other bit, or if you can keep the desired mask in the accumulator, you can use the TSB and TRB instructions, then it's still just a pair of instructions to pulse any bit in the port. Otherwise you would need to AND and OR the desired bit values. That's still perfectly doable, but it requires more instructions. New way: There is now another possibility for even faster twiddling of these bits, using illegal op codes of the 65c02. See Jeff Laughton's circuit tricks much further down the page! • Bit 7 (along with bit 6) are easily tested with the BIT instruction without affecting or needing any processor registers. BIT puts bit 7's value in the N flag and bit 6's value in the V flag, which you can then branch on with BMI, BPL, BVS, and BVC. If you have the VIA in zero page, you could use the BBS and BBR instructions and then it's only one instruction to test and branch on any bit you might want to put the data line on. Otherwise you would need to read the port into the accumulator and use AND to test the desired bit. That's still perfectly doable, but it requires more instructions. Again, see Jeff Laughton's circuit tricks, way down the page, for more possibilities. • The I²C spec does not tell how you have to apply power, but I used the method above because the VIA can be set up in software to toggle PB7 automatically by the T1 rollover, which allows you to produce a beep on a piezoelectric beeper with the software dictating the frequency but not having to babysit it in a loop; ie, the computer can be doing other things during the beep. The circuit above doubles up on PB7 (actually I use it for a third thing as well on my workbench computer), and beeping does not shut down the I²C power. You can shut it down when not beeping by setting PB7 high. Really the only reason to shut it down, even briefly, is to plug in and unplug I²C modules. Note that although many I²C devices can clock at 1MHz, others have a 400kHz speed limit, so if you run a 6502 at even a few MHz, you may exceed that and have to add NOP instructions to slow it down. In that case, using the bits I assigned above to reduce the number of instructions may not give any real advantage; so you might as well assign the bits any way you like. I will address SPI later. (I already have generic code to bit-bang SPI posted here.) SPI can go much, much faster, so you will probably want to reserve PA0 or PB0 to bit-bang the SPI clock unless you do SPI through something like Daryl Rictor's 65SPI chip. (And again, now there are Jeff Laughton's circuit tricks much further down the page.) In any case, I²C's passive pull-ups coupled with the unavoidable bus capacitance will always keep a lid on I²C's speed. Next you might be wondering about the pull-up resistors. This is how I²C manages to send signals in both directions without bus contention. Devices can only pull it down, not up. I've done it such that the master does pull the clock line (but not the data line) both up and down, and that's normally not a problem; but the spec does allow multiple masters, and also a few devices can hold the clock low to tell the master not to send the next bit yet (which would require that the master check that the clock line has floated up before it pulls it back down). Handshaking is usually accomplished through the ACKnowledge bit though, and that's on the data line at the end of a byte. Ok, so how do you have the VIA only pull down and not up? Simple—store a "0" in the applicable bit of the output register, then write to the data-direction register (DDRA or DDRB) for the particular port. When you want it to pull it down, you set the bit to be an output; or to let the pull-up resistor pull it up, you set it as an input, even if you don't particularly need to read it. (The interesting thing about that is that since a "1" in the data-direction register bit means "output" and a "0" means "input", pulling it down requires setting it to "1", and letting it float up means setting it to "0", instead of vice-versa.) You don't actually keep writing to the output register itself. This is one place where the 6522 is far more efficient codewise than the 6521 or 6520 which do not allow a light-footed method of quickly going from accessing the ports to accessing the data-direction registers and vice-versa. The I²C devices are all connected in parallel. Each class of device has an address group assigned to a few bits in the address byte, and then the device might also have a few extra pins that you can connect to Vcc or ground to provide more bits in the address byte and allow several devices of the same class to be on the bus at the same time. This way you can have more than one EEPROM, more than one digital thermometer, more than one D/A converter, etc.. If you want to make module to plug in, I would suggest the I2C-6 connector standard we devised on the 6502.org forum. The page has diagrams and photos. Use of the interrupt pin is optional, but it would be good if you incorporated it on your computer so that for example if you make a module for time and date and it has an alarm capability which uses the interrupt, or a keypad controller that interrupts when a key is pressed, you're all set. I offer a tiny I2C-6 32KB 24256 EEPROM module on the front page of my site. There is 65c02 assembly generic I²C code here. Various devices will have their own detailed operation which will be given in their data sheets; but there will be basic things that they all share. The link also has Forth and PIC code for 24xx I²C EEPROMs, and Forth code for the MAX520 I²C quad 8-bit D/A converter. Interfacing to SPI and Microwire Another popular synchronous-serial interface is SPI (pronounced "ess-pee-eye") which stands for "serial peripheral interface." In principle, the way it works is similar to the interfacing to dumb shift registers as shown further up this web page; but SPI devices tend to be much more intelligent. SPI was named by Motorola who rounded up various synchronous-serial protocols that were already in use and asigned the mode numbers and tried to make it all more understandable. If you implement SPI, you'll also have National Semiconductor's Microwire too, as it came first as is compatible with SPI mode 0. As a bonus, SD cards have an SPI mode. Wikipedia has quite a write-up on SPI. Unlike I²C, SPI can be full duplex; and because it is not limited by passive pull-ups like I²C is, SPI's maximum speeds are dozens of times as fast as I²C's. For this reason, the larger flash memories especially will be found in SPI and not I²C, as storing a 10-megapixel photograph for example to an I²C memory would take much too long. Bit-banging with a 6522 VIA as shown below will not operate SPI anywhere near its maximum speed, but it still enables us to help ourselves to hundreds (if not thousands) of great ICs on the market, made by dozens of manufacturers. (For maximum SPI speed available in a 65-family IC, see the 65SPI (not to be confused with 65SIB) I/O IC which is designed and sold by Daryl Rictor and provides direct and complete SPI support in a 65-family IC, without bit-banging.) Jeff Laughton's circuit tricks much further down this page also make possible much faster bit-banging, using illegal 65c02 op codes. For example, you can set or clear an output bit in a single clock cycle without affecting the other bits in the port! Not quite as fast as Jeff's, but more efficient than using a VIA, is the bit-I/O module I offer on the front page of this site, starting Aug 17, 2022. SPI normally involves four wires for one SPI device (or slave), plus an extra one for each additional device. There's a clock line (often abbreviated SCLK for "serial clock"), a master-in, slave-out (MISO) data line, a master-out, slave-in (MOSI) data line, plus one negative-logic chip-select (CS) for each device or slave. Each transaction is begun by putting the slave's CS line low and then sending the command, ie, telling the slave what you want it to do, whether to take the following data and do something with it, or output some data, etc.. Non-selected devices stand by and don't respond to anything on the clock, MISO, or MOSI lines. The SPI modes used by the different slaves on the bus do not need to match, as only one slave will be enabled at a time (unless you daisychain, something which will not be addressed here.) As long as you don't exceed the device's speed limits (which you're not likely to do if you bit-bang with a 6522 VIA), there are no particular timing requirements like RS-232 has, so bit-banging it is very easy. More on that later. Here's a really basic SPI connection: You can use any I/O bits you want, but I chose the ones above because again bit 0 of a port is the easiest and quickest to pulse in software (using INC and DEC), and bit 6 or 7 of a port are easiest to test (using the BIT instruction). The arrows show signal direction. Since normally no line is bi-directional, it makes it easy to convert logic voltage levels if necessary. It wouldn't be a bad idea to put weak pull-up resistors on all the lines so that no line floats between logic levels before the VIA is set up. Non-WDC VIAs (like Rockwell, even the CMOS ones) do effectively have a pull-up resistor similar to 74LS, but WDC's are truly high-impedance CMOS in the input mode. However, at least an accidentally selected SPI slave won't do anything if the clock and MOSI lines don't toggle in a sequence that it understands as a valid command. Depending on the SPI modes you need, you can sometimes share clock and data lines with I²C. You'll have to observe the following: • Never transition the data line when the clock is high except to produce start and stop conditions on I²C. • Keep the EN (of Microwire) or CS (of SPI) lines high (false) when addressing I²C devices. To interface to more devices, you will need more CS lines, using more VIA output bits. This is shown below, along with the entire 65SIB (6502.org Serial Interface Bus) circuit on my workbench computer. 65SIB is basically SPI but expanded in several directions at once, making it suitable as an external bus too, and accommodating devices ranging from dumb to highly intelligent. (Don't be scared off by the complexity of the circuit. Half of it is just to feed the annunciator LEDs.) Notes: 1. Solid arrows on connections (except to VR, the reference voltage) show signal direction. 2. There's sample code at SPI.ASM (augmented on 12/11/20). 3. I have a ton of I/O-bit sharing on my VIAs, and the various DIP switch sections allow me to isolate the 65SIB from other circuits if it becomes necessary. 4. You can use any I/O bits you want, but I chose the ones above because of which ones I had available on the workbench computer, and because again bit 0 of a port is the easiest and quickest to pulse in software (using INC and DEC), and bit 6 or 7 of a port are easiest to test (using the BIT instruction). 5. The LED current-limiting resistors on the IRQ and CONFIG lines a have lower value than the CLK one because these two LEDs will flash so briefly at at such a low duty cycle that they will need some help (by way of more current) to be seen. 6. The ±12V is not part of the SPI spec, but we put it in the 65SIB spec so that external equipment that does not consume much power could get power off the bus and not need its own power supplies. (Explanation is given in the 65SIB specification.) 7. The configuration line (CONFIG) is not part of the SPI spec either. Intelligent 65SIB devices can use it, but it will have no effect on non-intelligent devices, so in that case you could use it for something else on them if desired. For example, an SPI LCD may use it for a register-select line, and a flash memory might use it for a HOLD line. I have ones here that do that.) 8. The pull-up resistors as mentioned in the notes for the simpler diagram further up are included, to prevent ambiguous states before the VIA is initialized in software. The one on MISO is heavier, to get faster rise time if you use a 74LS07 or other open-collector parts for voltage translation on 65SIB devices. 9. PA6 and S7 allow both a software-controlled method and a manual method to disable 65SIB (and SPI) interrupts. 10. I used an LM339 open-collector quad comparator mainly because of the IRQ output; but there are different ways you could handle the comparator functions here. They do not need to be super fast. The LM339 does not do well with inputs that are within 1.5V of its Vcc, but I powered it with 12V as I have it available anyway for other things on the workbench computer. The 74LS07 hex open-collector buffer would work too. 11. To free up some VIA pins and make your bit-bang code a little faster and more compact, you can use the WM-3a bit-I/O module I offer on the front page of this site, or use ideas from its data sheet (.pdf) to implement the circuit on your own board. 12. To ease the construction of 65SIB devices, I plan to offer a 65SIB breakout board. The reason for it is that after we developed 65SIB on the 6502.org forum, I made a peripheral with the small graphic LCD and an SPI flash, and found it was a lot of work to wire up the connectors, regulator, and do the voltage translation to 3.3V. The breakout board takes care of all that, and has lots of stuffing options for voltage regulators, interfacing to 5V, 3.3V, or 2.5V parts, and more. (PCBs are already made, but modules have not been assembled yet.) 13. For other explanations on 65SIB, see the specification. We also have the SPI-10 connector standard for small modules, similar to I2C-6 above. See the spec here. I have a tiny SPI-10 flash memory module available on the front page of this site. When I built my first 65SIB device on a proto board (not a PCB), I found that it was quite a bit of work to wire up the connectos and the voltage-translation circuits. So later I laid out a PCB for future projects, and am making it available to others as well, on the front page of this site. Keypads and keyboards There are many ways to implement a keypad or keyboard. The goal here is to give some basic ideas and some considerations the beginner might not have thought of, and then hopefully the reader's imagination will be freed up to meet the need. Starting at the simplest point, you can have a pushbutton switch from one of an I/O IC's input pins to ground, and a resistor of 4.7K or 10K to pull the pin up to Vcc when the button is not being pushed. Note that as long as you're not needing to push the button, you can use this pin for something else too. When I've designed products with PIC microcontrollers which are programmed with a serial connection with one line for clock and one for data, I've put pushbuttons on the same pins, as there will never be a need to push a button while the device is being programmed, and the programmer will never be connected when a button needs to be pushed. Going to the next step, you can have several buttons connected to output bits, with a common connection to a pulled-up input bit, like this: Note that the output bits can be connected to other things as well. Just don't press more than one key at a time if you keep the bits in output mode full time, or you'll have two or more outputs shorted together, unless you put a diode (preferably a Schottky one) above each key switch, with the cathode at the top. (There's another way to get around that, discussed in a minute.) I'm partly out to get maximum function with minimum parts though, as construction takes time, and hardware is a bigger job to swap out or modify than software is. Your software needs to bring one of the output lines down at a time, then read the input line and see if it's high or low. A high input bit means the key that is on the low output is not being pressed. Make the software cycle through them. More on software considerations in a minute. The five-key keypad is what I have on my workbench computer, and the five VIA output bits feeding the keys also go to the LCD and my printer interface. I do the software development on a DOS PC and send code over the RS-232 line; but the keypad is convenient for various things once a piece of code is set to running. The keys can be used for anything you determine in your software, but the functions I usually give mine are: YesEnterContinue Up NoCancelExit Down HelpMenuEdit I have this to the right of the LCD on my 6"x4.5" workbench computer, with 6mm tactile keyswitches soldered to DIP headers that are plugged into DIP sockets; but as I gave it commands from the PC on the desk opposite the workbench and turn around toward the workbench to see instant results, there were times that I wanted the keys within reach without having to get up each time; so I also made a remote keypad and added a pin header on the workbench computer to connect it with a long cord, like this: (This is in my Tip of the Day column too.) The connections to the two keypads are in parallel, so it all goes to the same I/O bits, and a given key has the same effect regardless of which keypad it is pressed on. The pin header it plugs into has a few extra pins so I could also put a duplicate LCD on the remote module. For more keys, even a full keyboard, you can do something like the following. (To save drawing time, I'll just use a circle to represent each switch connecting the vertical line and the horizontal line that go through that circle.) As above, pressing more than one key at a time could cause contention (depending on which ones they are), unless you add the diodes or other means to prevent shorting outputs to each other. The software needs to lower one output line at a time, then see if any of the input lines are low, then go to the next output line, cycling through the set. It might be best to have it return a pair of bytes with one bit to represent each of the 16 keys. Actually, there is a no-diode way to prevent the shorting-outputs problem. You can write 0's to the output bits (by writing to the VIA's ORA or ORB), and then make only one of the output bits an output at any given time (by writing to the VIA's data-direction registers DDRA or DDRB). When it's not pulling the line down, any given bit is an input. This has the effect of an open-drain (or open-collector) output. We pulled this trick above in the section on interfacing to I²C, which has sample code linked. The only pullups then are passive, ie, the resistors—and BTW, it might then be a good idea to put pull-up resistors on those as well, not just the inputs, so there won't be ambiguous logic levels at any pins when keys aren't being pressed. 16 keys is what I put on the last automated test equipment setup I designed, built, and programmed at my last place of work in about 1990: An earlier set I did used a full PC keyboard, and it just took a lot of space on the workbench, and it, plus the big monitor, turned out to somehow make the test operator think she knew more than she did and it became a bit of a pain. The smaller keypad afforded ways to get to all the menu items and enter the needed info just fine. The blank key could be used for a shift key to give other ones more functions. These ½" square Grayhill type 87 keys are really nice because their pins fit into prototyping board and they have clear caps you can remove to put labels underneath. (As you can see, I didn't go to the effort to make the labels look really professional.) You can get away with taking fewer bits on the 6522 VIA (or other I/O IC) if you use something like a 74HC138 3-to-8-line decoder. The above shows 16 keys on 8 I/O bits; but with the '138, you could put three 6522 outputs to the '138 and have the 8 outputs of the '138 feed 8 columns, then with only three inputs on the 6522, you could have 24 keys (3x8) with only 6 bits total on the 6522 instead of 8. You can take it further of course. A keypad is not a high-speed device by any means, and the serial I/O above, using 74HC165's and 74HC595's, can easily put an 8x8 (or bigger) key matrix on the 6522's shift-register port, along with lots of other things, and not take any additional pins on the 6522. The 6502-based Oric computer keyboard used a 4051 instead of a 74HC138. Since it's a 1-to-8 analog switch (with three control bits), the non-selected outputs were high-impedance, so no other provision was necessary for allowing multiple keys to be pressed at once. 4000-series logic is very slow and not suitable for a lot of computer use, but you won't be making hundreds of thousands of keystrokes per second anyway. Here's the Oric keyboard schematic: You must debounce! "What does 'debouncing' mean?", you might ask. Switches are not perfect, and if you look at the output as they make or break contact, it will not be a single, solid, fast edge, but a bunch of up-and-down for a few thousandths of a second until the switch quits bouncing and settles down. This was illustrated in the "Reset requirements" page, and it shows deboucing there in hardware because you cannot do it in software while you're resetting the processor. You have probably had the experience of a bad calculator or other keypad where it acted like you pressed a key many times even though you pressed it only once. The calculator probably had some debouncing, but not good enough for a less-than-new keypad. It's very irritating! It is much easier to do debouncing in software than hardware. What I have done for debouncing on many products is to require that the switch be closed for at least 30ms at a time before it is recognized. It is sampled many times in that amount of time, and if any sample shows it released again, the count starts over. The same goes for releasing it. You may think that 30ms is unreasonably long; and indeed new switches won't need that long, but as the switch gets older, it will need increasing amounts of debounce time. (What I plan to do in the future is to register the keypress almost immediately, but not register the next one until the needed debounce time is met and it has clearly been released and pressed again.) I frequently also add auto-repeat, and, on the workbench computer, I have a variable to set the delay before repeat begins, and another variable for repeat rate. Auto-repeat is useful even on a small keypad, because you might for example want to increment or decrement something with the up- and down-arrows while watching the results, and having sometimes even hundreds of button pushes (ie, no auto-repeat) would be tiresome, slow, and also reduce the life of the switches. Newcomers tend to do the timing in software delay loops, but I would encourage use of a real-time clock (RTC) instead, whether it's an RTC chip with adequate time resolution, or a software RTC that is interrupt-driven by timer 1 (T1) of the 6522 VIA, or other method. My interrupts primer tells how to set up the RTC using the 6522, and has complete code, starting a little over half way down the page. (Enjoy my out-of-date cartoons!) Having an RTC allows the computer to be doing something useful between key repeats, and the repeat rate stays consistent regardless of the varying size of the job between repeats. My article on simple methods to do multitasking without a multitasking OS has example code for doing keypad debouncing and handling a shift key plus auto repeat of the cursor keys, near the end, timing things while allowing the computer to be done other things at the same time. Two-key rollover can be done in sofware too. Just stop scanning the keypad or keyboard until the currently pressed key is released. Until then, you only watch that one key. For other keypad- & keyboard-interfacing possibilities, I should mention that there are many I²C keypad controllers on the market that have built-in debouncing and sometimes also drivers for 7-segment LEDs and other I/O built in, and you can connect them (along with other things) to your I²C interface described above. Note also that the common PS/2 keyboard interface is very similar to I²C. Daryl Rictor shows how to interface it to a 6522 here, with code, and shows how to do it with an AVR microcontroller here, again with code, and interface it to his 6502 SBCs. (Please note that the power pins on his are slightly different from what I put at the top of this page.) The darnedest thing I've ever seen in keypad interfacing was to connect the key switches to an arrary of resistors pulled up to Vcc and produce a single voltage to go into an A/D converter. The application was a microcontroller that was short on I/O pins but had a spare A/D input! Displays Displays are discussed in the Displays section of the primer but here I'll isolate the part of the schematic that I have on the workbench computer showing a simple interface to the common intelligent character LCDs: NOTES: 1. The first thing you might be thinking is "Where are D0 through D3?" They're not connected, because I'm using it in 4-bit-interface mode to save VIA pins. More below. 2. The 10K potentiometer adjusts the viewing angle. Note that many supertwist LCDs that are more attractive need a negative voltage for this. I find people have quite a bit of resistance to implementing voltages other than 5V, especially negative voltages, but there are various things that having a ±10 or ±12 volts (non-critical) will be useful for, so it would be good to have them available. There is not typically much current required of these. Farther down this page at "Non-typical power-supply circuits," I give easy ways to get these voltages without having to change your power supply. 3. Here we tie the R/W to ground. The main reason to read the LCD is to see if it's still busy with an operation before giving it something more to do; but if you don't need to go absolutely as fast as possible, you can just wait the maximum amount of time the data sheet says the various operations, and you'll be assured that you're in the clear. Otherwise we only need to write to it. It might be a bigger deal for a large display, but here we can save a pin. NOTE: It may be tempting to put a resistor between this pin and ground in case you later want to drive it. A jumper selection would be better, because it's like a LSTTL input which takes a lot to pull down. I wasted some time and discovered this the hard way. 4. Sample code to operate this is at LCDcode.asm. 5. Note again the doubling up (or in this case, tripling up) on VIA pins' usage. In another product I developed, the lines were also shared with a D/A converter. 6. Since a small display like 1x16 (one line of 16 characters) does not need the data fed super fast, you might even choose to feed it through 74HC595's on a serial chain as described above. It would partly pay for itself in that there would be little or no need to go with the 4-bit interface to save pins (as opposed to the 8-bit interface). The 8-bit would make the software a little simpler, not having to split bytes and feed the nybbles separately. 7. Pin-out may differ slightly between manufacturers, especially by swapping ground and Vcc, so do not use the pinout of mine without checking your data sheet. 8. Note that the E (enable) input on these LCDs is positive logic, ie, active high, not active low. Also, it must be gated by Φ2. I have interfaced to 1x16, 1x24, 2x20, 2x24, 2x40, and 4x40. The nice thing about these intelligent character displays is that they all use pretty much the same instruction set; so after you've used one, your software may not need to change to use one of another brand. The 4x40's work like a pair of 2x40's, one above the other, with all lines paralleled except E. I have only a mild interest in graphics. I made a very simple circuit and software to do raster graphics on an analog oscilloscope. See http://forum.6502.org/viewtopic.php?p=15348#p15348. I also have a small 128x64-dot graphics LCD that is interfaced by SPI, so I put it on the 65SIB. I have working sample code for it here. Here's a few seconds of demo video done for experiment purposes, using a random number generator to produce sets of 25 random segments then displaying the screen memory: Printer interface A parallel printer is written to similar to other kinds of latches—that you set up the data and then strobe a line to tell it to take it—but it has other lines to tell you if it's busy (mostly meaning its print buffer is full and it needs to catch up a bit before it can accept more data), has some error condition, is offline, etc., plus inputs for things like reset. 8/30/22 note: I just got a new Epson LX-350 to replace a couple of decades-old Epson printers here that were ailing. It was$217 brand new on eBay, no more than I was expecting to have to pay for a repair on the others.  I'm very pleased with it.  Print quality is excellent for a 9-pin, and it's faster than any I've seen before, maybe a little smaller too.  It has parallel (which I use), RS-232, and USB interfaces.  It's the cheapest of the fourteen (!) models of dot-matrix impact printers that Epson still offers.  They have 24-pin ones too.  They know that this market isn't going away, and they've picked up the slack where other manufacturers have stopped making them.  Epson puts the same command set in all of them, the ESC/P printer control language, the industry standard for simple, sophisticated, efficient operation of dot-matrix printers, and you can download the command reference here (.pdf) (since this information was not in the manual). The timing diagram from the manual for one of my several parallel printers shows: The lower two lines, data and strobe, are computer outputs to the printer, while the upper two, busy and acknowledge, are computer inputs from the printer.  You do need to pay attention to the busy line, but I've never paid attention to the acknowledge line.  Technically, the busy line is not an acknowledgement that the byte you just sent it was actually received; but this is the way I have done it since 1990, with a half-dozen different printers from four different manufacturers, and it has always worked flawlessly.  the reason I started doing it this way was probably to save another VIA I/O bit.  That additional line would have to be on one of the CA or CB lines for the edge-sensitivity since the ACK\ pulse is narrow enough that it might get missed if I only sampled it. Since there's no hurry with printers and the occasional poll will stay ahead of the printing and keep data in the buffer anyway, I've never put them on interrupts.  I might if I were spooling a super long printout, but I've never done more than a couple of pages at a time with the workbench computer and the automated test equipment at my last place of work; and in those cases, an interrupt-driven process could continue while the non-interrupt-driven print routine babysat the printer. Your software needs to check the printer's status before every single byte.  Before stopping with an error condition saying the printer is not ready, your software should allow a couple of seconds for the printer to say everything is ok.  Especially in non-graphics mode, you will be able to send data much faster than the printer can print it, meaning you may fill up its buffer and have to wait for the printer to be ready to take more data. Parallel printers normally have a 36-contact female Centronics connector, and for reasons unknown to me, computers with a parallel printer port usually have a female DB-25 connector on them, so the standard printer cables (before USB) are generally terminated in the male of each of these two connector connector types.  I put a 16-pin header on my workbench computer since it takes so much less board space, then made up an adapter cable to go from IDC (to plug into the pin header) to DB-25 (to mate with the standard printer cable).  The connections go this way: IDC pin DB-25 pin Centronics contact function (Note 1) into or out of printer? Notes 1 1 1 strobe IN 2 2 2 D0 IN 3 3 3 D1 IN 4 4 4 D2 IN 5 5 5 D3 IN 6 6 6 D4 IN 7 7 7 D5 IN 8 8 8 D6 IN 9 9 9 D7 IN 10 10 10 ACK OUT 2 11 11 11 Busy OUT 12 12 12 Paper out OUT 2 13 13 13 -- -- 14 14 auto feed IN 3 15 15 32 Error OUT 16 16 31 init IN -- 17 36 select IN 3 14 18 33 twisted pair ret Gnd 14 19 19 twisted pair ret Gnd 14 20 21 twisted pair ret Gnd 14 21 23 twisted pair ret Gnd 14 22 25 twisted pair ret Gnd 14 23 27 twisted pair ret Gnd 14 24 29 twisted pair ret Gnd 14 25 30 twisted pair ret Gnd Note 1.  Note the six negative-logic signals.  (Their overbars are not very visible.) Note 2.  These pins on my workbench computer's IDC are not connected. Note 3.  I did not connect these, but they have DIP switches in the printer so external connection is not always necessary. Columns 2 & 3 represent the standard parallel printer cable.  Columns 1 & 2 represent a ribbon cable I made up, but you might want to just put a DB-25 on your computer board for the purpose. From the other things on this page, you're probably getting the idea by now of how to connect this to a 6522 VIA or even through shift registers.  (After all, a line printer is another thing that's not exactly fast.  Putting it on the VIA's SR is not going to be a burden speedwise!)  To get as many things on the first VIA on the workbench computer, I even have to split the eight bits of printer data between five bits of port A (shared with the LCD and keypad, as shown in the schematic in the display section above) and three bits of port B.  (I know it sounds like a mess, but once the software for it is written, it just does its job and you forget all about it.) I might post code later.  All I have now is Forth code from various projects—not a single one in 6502 assembly. Digital-to-analog converters The DAC0808 is a simple, fast, 8-bit multiplying D/A converter that has been around for decades.  I have one on my workbench computer that gets used to do everything from setting power supply voltages, to simulating loads, to using it as a volume control (it is a multiplying DAC!), to arbitrary waveform generation, to doing audio sampling at rates of tens of thousands of samples per second.  Its settling time to ½lsb is 150ns (right—not µs, but ns).  Wanna see how easy the software is?  Suppose you have it on port B of a VIA, and have the value in the 6502's accumulator.  Just write the value to the port like this: STA VIA_PB That's it.  No separate .asm file for this one.  (Just make sure you've put the port bits in output mode, by storing $FF to the data-direction register.) Here's the circuit: Notes: 1. As shown, an input of 0 gives 0V output, and$FF gives 5V*255/256 output (ie, 4.98V if your supply really is 5V even).  It is quite accurate throughout the range.  Each step is about 19.5mV. 2. The ±10V shown is not critical at all.  I keep it in the neighborhood of ±9V, but the actual value does not affect the accuracy.  The op amp also uses it to perform well all the way down to ground on the output, and to output $FF which is 5V*255/256. The DAC needs a negative voltage anyway, and using the higher voltages makes the op amp work better than 5V rail-to-rail ones do. (Remember it was mentioned above for some supertwist LCDs' backplane bias, and I said it's good to have the ± voltages for several things. This is one.) I tell how to derive these voltages from 5V farther down, under "Non-typical power-supply circuits." 3. Purists would probably want you to use 1% resistors, but I've found the 5% carbon-film ones are usually not more than 1% off anyway. The important thing is not so much their absolute value, but their matching. As shown here, the 5V power supply is the reference; and if your +5V is within 1% or better, half lsb which would be 10mV, congratulations. If it's not close enough for your liking, you can adjust for power-supply inaccuracy by trimming R5. 4. C1 and C2 filter out noise that's on the power-supply line so the reference for the DAC is quiet. 5. The reference voltage input on the right end of R3 usually is left unconnected, but you can use the DAC as a multiplier for that voltage, even tens of thousands of times per second, so you can put not only DC on it but also an AC signal if you want. 6. For applications where you don't need much speed (like for setting a power-supply voltage), nearly any op amp that can work with the voltage range will do. I started with a common LM358 dual, but later changed to an LT1124 (also a dual) which is about ten times as good in every way (but very expesive, which was ok since I only needed one). The LM358 has a little crossover distortion when trying to drive the 2.2K R5 at higher speeds unless you bias the output pretty heavily with a pull-down resistor. I use the other half of the dual op amp in my A/D converter circuit (which I will present further down). 7. C4 is not critical either. With the 2.2K R5, 330pF puts the -3dB frequency at about 220kHz (actually a little lower since the op amp is not perfect). Since there's a little capacitance in the circuit at the inverting input, just make it standard practice to put in at least 22pF or so to get rid of the pole that could otherwise make the op amp oscillate. The 220-ohm resistor at the output serves a similar purpose. Since the op amp's output does have some resistance, capacitive loading produces another pole, and the feedback circuit gets its signal source phase-shifted accordingly. If the capacitive loading is heavy enough, you can lose enough phase margin for the op amp to oscillate. The 220-ohm load is enough to prevent that under all load conditions, and, at the operating frequencies of interest, probably won't have any effect on the circuit you're driving. Actually, depending on the op amp selected, you might be able to go as low as 100 ohms or even less. 8. Note the single-point ground connection. If the circuit you're driving with the DAC is not practically shoulder-to-shoulder close to this circuit, it would be good to put ferrite beads over the pair of wires, voltage output and analog ground (labeled AGND in the diagram), ie, two wires through the same bead, to cut common-mode signals and force the analog ground wire to carry the equal and opposite return current. You can see a bunch of ferrite beads in the first picture on the primer page here for construction for AC performance. The ferrite bead I use for so many things is the Fair-Rite 2643000801 which is 0.3" long and 0.3" diameter and has a minimum impedance of 50 ohms at 25MHz and approximately 90 ohms at 100MHz. Remember that even a 2MHz square wave has substancial harmonic content around 25MHz. This bead has a few ohms' impedance even at 1MHz, and of course does not affect DC. If you put two wires through it, differential-mode siganls, ie, where equal but opposite currents flow in the two wires, will be unaffected. In fact, the ferrite bead will try to make the two AC currents equal and opposite. The bead is$.08 each in singles at Mouser.  Keeping digital noise out of data converters is a science, but fortunately not too difficult for 8-bit.  The various A/D and D/A manufacturers have lots of ap. notes on it, and each data sheet will have a little on it. What I have not shown here is the sockets for plug-in modules for anti-alias filters and other signal conditioning.  (Depending on what you're doing, you may not initially have any need for that.)  I have various places to connect to the outside world.  I put 3.5mm plugs on the front panel which I can plug audio straight in, other projects have been plugged into the module sockets, and some went through the board-edge connector at the back.  The speaker amplifier can be switched between A/D input and D/A output, both post-anti-alias filters. If you need more sharing of VIA ports, like to be able to use the same VIA bits to talk to something else without changing the DAC's output, you could add an 8-bit latch such as the 74HC373 or '374 and use just one extra bit from the VIA to trigger the latch after putting the desired bit pattern out the 8-bit port.  The point of this particular DAC however is speed; so if you want to go with serial (which is much slower) to further extend the number of things the existing VIA pins can interface to, you might as well go with the I²C MAX520 or 521 discussed below which have 4 and 8 DACs and need virtually no support parts. This DAC0808 BTW is the same converter I used on my analog oscilloscope raster graphics circuit to take the counters' outputs and drive the X and Y inputs of the oscilloscope. In the automated test equipment I designed, built, and programmed at my last job in about 1990, I used DAC1220's which are nearly the same but in 12-bit, in an 18-pin DIP.  I used these to set power supply and other voltages and to control signal amplitudes.  The DAC1220 is apparently no longer made, but the AD7541A is nearly the same thing, somewhat improved. The MAX520 quad I²C 8-bit DAC in 16-pin DIP I mentioned earlier needs virtually no external components, so there's really no circuit to draw.  There's simple Forth code to drive it about 2/3 of the way down the GENRLI2C.ASM file which starts with general I²C-driver 6502 assembly code.  I might recommend the MAX521 in 20-pin DIP instead though, since it has 8 DACs instead of 4 and it has output buffer amps so it can drive a decent load without the output voltage drooping seriously.  The 520's outputs should only drive high-impedance loads. For a super-simple 9-level D/A converter, remember you can use the VIA's SR data output as mentioned earlier: shown with a little more explanation at http://wilsonminesco.com/6502interrupts/index.html#3.3 in my interrupts primer (where this figure came from).  Driver code is too simple for a separate file, so here it is: INIT_SR_DAC: LDA #00010000B ; Set SR to shift out free-running at STA VIA_ACR ; T2 rate. (T2CH does not get used.) STZ VIA_T2CL ; Zero the T2 counter's low-byte latch for fastest RTS ; possible shift rate, to make it the easiest to filter. ;------------------ _3BIT_TBL: ; "DFB" in C32 is "DeFine Byte." DFB 00000000B, 00010000B, 00100010B ; The point here is to distribute the bits DFB 01010010B, 10101010B, 10101101B ; such that the toggling frequency is kept DFB 11011101B, 11101111B, 11111111B ; as high as possible for easy filtering. WR_SR_DAC: ; Start with number in the range of 0-9 (inclusive) in A. TAX ; Skip the TAX if you already have the number in X. LDA _3BIT_TBL,X ; Get the distributed-bits pattern from the table, STA VIA_SR ; and write it to our make-shift D/A converter. RTS ;------------------ This is a Forth-to-assembly translation of code I used when I did DTMF output with this method.  Worked like a charm.  Believe it or not, it worked quite well for speech too, if the dynamic range is really compressed.  I don't remember trying it for music, but I can't imagine it would be very enjoyable!  Using a CMOS 6522 (65c22) will make give a higher, more-consistent output voltage, since CMOS can pull all the way up to the 5V (unlike NMOS), and with a lower impedance.  The SR always finishes sending out a byte before starting the next, so you don't have to worry about glitches from not timing the write just right.  And again, this SR mode keeps shifting the same value out over and over, free-running, until you write another one to it.  It uses T2 to govern the shift rate, so T1 is left available to produce interrupts for the sampling rate. Digital potentiometer Related to D/A converters (especially multiplying ones), I have used the National Semiconductor LM1973 (unfortunately now discontinued, as I found out in Dec 2020) 3-channel digital potentiometer in a product.  Working Forth code and an untested translation to 6502 assembly is in UPOT.ASM. Analog-to-digital converters The MAX153 is a half-flash, 8-bit A/D converter with internal track-and-hold, and it's almost fast enough to just read as a memory location.  I have one on my workbench computer that gets used to do everything from measure voltages and current in circuits, including as a digital oscilloscope, to temperature, to audio sampling at rates of tens of thousands of samples per second.  It has various modes of operation, but I set it up this way: Notes: 1. As shown, an input of 0V gives an outut of 0, and an input of 5V*255/256 gives $FFH. Each step is about 19.5mV. 2. The ±10V shown is just for the op amp power supply and is not critical at all. I keep it in the neighborhood of ±9V. If you want to try to use a rail-to-rail 5V op amp, be my guest. I always have several things that use the higher voltages, so I think it's just as well to have them available than to try to avoid everything outside of +5V. I tell how to derive these voltages from 5V further down, under "Non-typical power-supply circuits." 3. The op amp bias resistors shown are for when you want capacitive coupling. The trimmer pot is for adjusting the zero-input-signal output value. As shown, you can get approximately$0C above and below the $7F value. You can typically go a little more one side than the other depending on your op amp choice, because of its input bias current. 4. For applications where you don't need much speed (like for measuring temperature), nearly any op amp that can work with the voltage range will do. I started with a common LM358 dual, but later changed to an LT1124 (also a dual) which is about ten times as good in every way (but very expesive, which was ok since I only needed one). The latter was more for the D/A (presented above) than this A/D. 5. The ferrite bead shown on pin 12, along with the .1µF capacitor there, keep digital noise on the power supply line from affecting the readings. If you use a separate voltage reference, this may not be an issue. The ferrite bead I use for so many things is the Fair-Rite 2643000801 which is 0.3" long and 0.3" diameter and has a minimum impedance of 50 ohms at 25MHz and approximately 90 ohms at 100MHz. Remember that even a 2MHz square wave has substancial harmonic content around 25MHz. The impedance at even 1MHz will, along with the .1µF capacitor, give a lot of attenuation. There might be a temptation to use a resistor instead, but that would affect the reading a lot, since the ADC may have as little a 1K between Vref+ and Vref-. In that case, even a 4.7-ohm resistor would change it by 1 lsb. The ferrite might have more than that at 2MHz but with no DC resistance. It costs$.08 each in singles at Mouser. 6. The reference voltage output on the right is usually left unconnected, but may occasionally have a use, as in ratiometric measurements. 7. The .01µF capacitor and 330-ohm resistor at the ADC's input have a -3dB point of 48kHz, so it's essentially flat throught the entire audio range.  The 330-ohm resistor also helps preserve the phase margin on the op amp to keep it stable when there's a capacitive load which would otherwise produce a pole. 8. The Schottky diodes just help protect the ADC's input from excessive current from voltages outside the 0-5V range.  I used 1N5817's because I had a lot from making switching power supplies. 9. Consider pins 10 & 11 to be the center of the star of the grounding system for the ADC.  For best AC performance, capacitor leads should be as short as you can make them, since lead length adds inductance.  If the circuit you're connecting to the ADC is not practically shoulder-to-shoulder close to this circuit, it would be good to put ferrite beads over the pair of wires, voltage output and analog ground (labeled AGND in the diagram), ie, two wires through the same bead, to cut common-mode signals and force the analog ground wire to carry an equal and opposite return current.  You can see a bunch of ferrite beads in the first picture on the primer page here for construction for AC performance.  Keeping digital noise out of data converters is a science, but fortunately not too difficult for 8-bit.  The various A/D and D/A manufacturers have lots of ap. notes on it, and each data sheet will have a little on it. The code to set it up and to read it is: INIT_AtoD: LDA #00001110B TSB VIA_PCR ; Set CA2 high. I put the CS\ of the A/D on CA2. STZ VIA_DDRB ; Make all bits of Port B to be inputs. ; First reading after power-up is inaccurate, so get it out of the way, by continuing: READ_AtoD: LDA #00000010B ; No entry requirements for READ_AtoD. At end, A/D result is in Y. TRB VIA_PCR ; Set the CS\ of the A/D low. LDY VIA_PB ; Read the A/D result. At 5MHz phase-2 clock freq, you'll need one NOP between TRB & LDY. TSB VIA_PCR ; Set the CS\ of the A/D back high. RTS ;----------------------- What I have not shown above is the sockets for plug-in modules for anti-alias filters and other signal conditioning.  (Depending on what you're doing, you may not initially have any need for that.)  I have various places to connect to the outside world.  I put 3.5mm plugs on the front panel which I can plug audio straight in, other projects have been plugged into the module sockets, and some went through the board-edge connector at the back.  The monitor speaker amplifier can be switched between A/D input and D/A output, both post-anti-alias filters. I originally had an ADC0820 in the circuit which is nearly pin-compatible, but later went to the faster MAX153.  (I needed more NOPs with the ADC0820.)  It required very minimal change to the circuit.  Edit, 7/16/22:  I was just shopping for more parts, and found that the MAX153 has gotten super expensive, like $24US! The ADC0820 is one-quarter to one-third the price. The MAX114 seems to be the same thing inside as the MAX153 but is about half the price and yet has four input channels. The MAX118 is the same thing for eight channels instead of four. I'm finding that all the fast A/D converters are expensive now. My purpose for the '153 is speed, and getting exact timing on the readings; so if you don't need the speed but you want more resolution, you you could use SPI or I²C A/D converters, perhaps putting them in SPI-10 or I2C-6 or 65SIB devices you plug into those ports on your computer. For a home project I have not done yet, I piggy-backed a second MAX153 to get two in the space of one (but see my 7/16/22 update note above about the MAX114 which is a newer version, 4-channel and yet cheaper, and the MAX118 which is the 8-channel version): (BTW, you can see a lot of stacking of different ICs, sometimes to extremes, in the forum topic "OT: stacked (DIP) chips.") My double MAX153 above only takes one extra VIA pin to interface it, for the second CS. The data bits' connections are common. A note about A/D and D/A jitter (and other performance considerations) "Jitter" is the term used to refer to the slop, or uncertainty, in timing of the sampling of a changing signal. (Please note that this is separate from the matter of sampling rate and preventing aliasing.) Ideally the samples come at a very exact rate; for example, if you sample at 10,000 times per second, you really want them to be exactly 0.1ms apart (add as many zeroes after it as you like), every time, whether you just want a faithful reproduction upon playback, or you actually want to do some mathematical analysis on the set of samples (which I have done many times, with the fast fourier transform, or FFT for short). If the sample timing keeps "rattling around" within the jitter window, and the signal changes fast enough to experience a substancial change within that jitter window, noise and distortion result. The following sketch illustrates the problem: With something like an I²C data converter, the exact time of the sample may be difficult to nail down accurately; but we would probably not be using that interface for a signal that changes quickly, let alone for audio. SPI is considerably faster; but the parallel converters here would offer much better timing for audio if we interface them to a 6502 with timing based on interrupts. If the sytem clock (ie, Φ2) jitter and aperture jitter are negligible such that variations in interrupt latency make up basically the whole picture, the total RMS jitter for record-playback is the square root of the sum of the RMS record jitter squared and the playback jitter squared: $t_{j}\left&space;(&space;tot&space;\right&space;)=&space;\sqrt{t_{j}\left&space;(&space;rec&space;\right&space;)^{2}+t_{j}\left&space;(&space;PB&space;\right&space;)^{2}}$ 6502 instructions are mostly 2-6 cycles (7 is rare) and the average is 4. (Caveat: See the forum topic "A taken branch delays interrupt handling by one instruction.") Since all instructions take at least two cycles, an interrupt could hit during those last two on any of them; but as the cycle count increases, there are fewer and fewer instructions where an interrupt could more and more cycles before the end. With an equal distribution of 2-, 3-, 4-, 5-, and 6-cycle instructions (which is perhaps not the safest assumption), the median time from interrupt to the end of the instruction execution is 2 cycles, and the RMS jitter is 1.8 cycles for record and 1.8 again for playback, meaning a total (if you need both) of a little over 2.5 cycles. At 5MHz that's 500ns. The S/N ratio is: $20log_{10}\left&space;(&space;\frac{1}{2\pi&space;f&space;t_{j}}&space;\right&space;)$ where f is the analog input frequency, and tj is the RMS jitter time, or 500ns in this case. So at 4kHz input frequency, the noise resulting from jitter will be about 38dB below the amplitude of the signal, which means you're good to a maximum of about 6 bits (ratio of 80), even if you used a 16-bit converter. This applies no matter how high you get the sampling rate, even if over 100,000 samples per second, because jitter is separate from sampling rate. If you wanted to get that S/N ratio at 16kHz analog input frequency, with the sampling being interrupt-driven on a 6502, you'd need a 20MHz Φ2 (with no wait states), which is about as fast as you can reliably get with current-production, off-the-shelf 6502's. If you don't plan to play it back, like if you only record in order to do a mathematical analysis on it (as I've done many times), you're only dealing with the recording jitter; so the 4kHz input frequency for a 5MHz Φ2 and the jitter-induced noise being 38dB down from the signal becomes 5.66kHz. Since I was doing spectrum analysis on aircraft and vehicle noise coming through headset microphones whose frequency response took a nosedive at 4kHz, it worked out fine. (I also used a 5th-order 5.6kHz anti-alias filter.) That's really not bad for an 8-bit converter, considering that audio amplitudes tend to be very inconsistent and you probably won't be able to keep the signal in the top 1½ bits of the converter's resolution anyway. As the top end of your input signal's frequency spectrum drops, so will the problems resulting from jitter. The maximum S/N ratio you can get with a perfect 8-bit converter and no jitter is 50dB (from 8bits * 6.02dB + 1.76dB). (That's while there's signal. With no signal, the converter's output remains constant, with no noise at all if the reference voltage is quiet; so it's not like cassettes which gave tape hiss between songs.) Without stopping the 65c02 using the WAIt instruction for the immediate interrupt response it affords (at the expense of doing something useful while it's waiting), you can still reduce the jitter in an interrupt-driven sampling time driven by a timer like the VIA's T1, by putting code in the ISR to read T1 and see how many cycles ago it timed out and adjusting the delay before doing the next data conversion. It makes for more delay in the ISR, but you can make that delay consistent, getting rid of the jitter. I will post code later. There's a tutorial on jitter and ENB (effective number of bits) here, and an excellent lecture and demonstration of what is and is not important in audio, and down-to-earth proofs of the "golden ears" baloney, at http://www.youtube.com/watch?v=BYTlN6wjcvQ. Yes, it's on YouTube which compresses the audio and loses information, but he gives the URL where you can download the raw wave files if you want to, otherwise see what he does with various experiments right there. If you are sampling AC signals, it would be expected that you have at least a basic understanding of the Nyquist frequency and aliasing. I might comment however that there are times that it is appropriate to undersample if you are dealing with a limited bandwidth of for example 200-220kHz. You don't necessarily have to sample much above 40kSPS; but the jitter still needs to be suitable for a 220kHz signal, not 20kHz. Just timing the samples off of timer interrupts on a 6502 won't get you there. But outside of that, (read on...) How many samples do I need? and Do I need smoothing to produce a clean waveform? The answer might surprise you. Let's say you try to synthesize a sine wave with only 32 samples per cycle, or 8 samples per quarter cycle (90°). With no filtering, the first major harmonic distortion products will be the 31st and 33rd harmonics, down about 30dB from the fundamental. IOW, for a fundamental frequency above 600Hz, these will be out of the hearing range. The hottest harmonic distortion product below that is the 5th harmonic at about 54dB down from the fundamental. For a 16-sample sine wave, ie, only 4 samples per 90° (does that seem atrocious?), the first major harmonic distortion products are the 15th and 17th harmonics, down about 24db below the fundamental. The hottest harmonic distortion product below that is the 7th harmonic, at about 55dB down from the fundamental. If you go up to 64 samples per cycle, which is 16 samples per 90°, the first major harmonic distortion products are the 63rd and 65th harmonics at about 36dB below the fundamental. If you want to filter it, it will be easy, not requiring a many-order brick-wall filter, as long as you're not nearing the Nyquist frequency. How many bits do I really need? As mentioned above, an 8-bit converter can theoretically reach a SNR of 50dB when signal is present, and have an output as quiet as that of a 24-bit converter if there's no signal, since the number won't be changing. There were some fine-sounding music cassette tapes before CDs took over, and their SNR was virtually no better than what an 8-bit converter can give you; yet the cassettes' frequency response and distortion at high record levels were quite inferior to what you can get with a good 8-bit converter and adequate sampling rate! I know it's unlikely anyone will ever use 8-bit recording for music for enjoyment, but this might put things in perspective. For things like machine control and measurements in test equipment, you will have to determine how accurate you need them, and whether external scaling and offsets will let you get by with fewer bits than you might think. There is some discussion of this in the first quarter of the front page about large look-up tables for hyperfast, accurate, 16-bit scaled-integer math. I'm definitely not against using converters of 12, 14, 16, or more bits when they are right for the application (in fact, I'm about to start shopping for a multi-channel SPI A/D of at least 12 bits), but I always like to point out what might be a pleasant unexpected discovery of what can be done with less. Some tricks from Jeff Laughton (Dr Jefyll on 6502.org's forum) His website is http://laughtonelectronics.com/. He has given me permission to publish these here. Ultra-fast output port using 65C02 illegal instructions: This is a 5-bit parallel output port for 65c02 that uses the illegal op codes (65C02 NOPs which are only informally but not thoroughly documented by the manufacturers) in the _3 and _B columns to get single-cycle output, twice as fast as the otherwise 2-clock minimum for instructions, and you don't need extra instructions to first load a register. However, all five bits will be affected at once per the illegal op code used, unlike the 4- and 8-bit I/Os further down. See the description on the forum for more info. Get 4 bits of 65c02 single-cycle output, plus 4 bits of input requiring a single cycle to conditionally set the oVerflow flag to branch on afterward. (For the input, do a CLV first) This one (and the next one, below) lets you address one bit at a time, independently of the others, unlike the 5-bit one above. Get 8 bits of 65c02 single-cycle output (see the note on the usability of bit 6), plus 8 bits of input requiring a single cycle to conditionally set the oVerflow flag to branch on afterward. (For the input, do a CLV first.) This one also lets you address one bit at a time, independently of the others. Note again that if you have a 65c02 made by WDC (as opposed to Rockwell, GTE, CMD, Synertek, etc.), the added STP (SToP) and WAI (WAIt) op codes ($DB and $CB, respectively) make output bit 6 usable only to indicate that there was a STP instruction executed. It will normally come out of reset as low; then if a STP is executed, it goes high and stays high until there's a reset. Since STP stops the processor, a subsequent WAI will not happen before there's a reset. Neither IRQ nor NMI will wake it up from a STP. If you would prefer to give up input bit 6 rather than output bit 6, you can use this version instead. Re-mapping Op-codes (eg: to create virtual instructions) This causes illegal op codes (or any opcodes you choose!) to trigger a BReaK or other opcode, leading to clever software (supplied by you) to simulate new instructions. See the description on the forum for more info. Non-typical power-supply circuits run by the 65c22 VIA and digital components In some of our products, I have needed a higher voltage at negligible current for FET bias, well outside the power rails in order to get a good pinch-off for JFETs or saturation for MOSFETs, for controlling 12V analog signals. With a microcontroller there, it was natural to use a pin or two to to output a square wave to run a voltage multiplier, and then I could make the high voltage go off or on under software control. One way to do it with a 65c22 is the following: Notes: • The unmarked capacitors are .1µF. When I breadboarded this to check it, I used decades-old ceramic disc capacitors. • The diodes can be 1N4148, or, to get more output voltage, Schottky diodes. I tried this circuit both with 1N4148 and 1N5817. I have a lot of 1N5817's here (which I use in switching power supplies), but if you need to shop for Schottky diodes, you might as well get something smaller for this application. I might suggest the SD103C. Each Schottky diode will drop at least 0.3V less than plain silicon diodes, and the circuit above gave me almost three more volts output with the Schottky diodes (17.35V versus 14.68V connected as shown, with Point A fed by the VIA's PB7). • Point A is fed by (typically) a 5V square wave, possibly generated by a 65c22. Note however that it does not have to have a constant frequency or duty cycle, so even the unused SYNC output of a 65c02 would work. For more power, see the circuit below, with a dedicated oscillator feeding paralleled inverters for more drive strength. • Point B typically goes to ground (if you want the output to come all the way to ground when you stop the signal), or you can connect it to something higher (like 5V) if you want the output higher than it would otherwise be. (Connecting point B to 5V will add less than 5V to the output for a given load resistance though, since the higher voltage will spell higher output current, so there will be greater voltage drops in the circuit. • Point C may go to ground (as shown), or, for higher output voltage with a given number of capacitors and diodes, it can instead be connected to a square wave source that's out of phase with the one feeding point A. • The input frequency is not critical. 100kHz works well for many applications, and you can stray from that with half an order of magnitude with negligible effect. Higher frequencies allow smaller capacitors, but I already used 0.1µF's, so you won't get much smaller physically; and lower frequencies tend to make it more efficient. But again, you will see little or no difference from 30kHz to 300kHz, and very little difference going up to 1MHz or more. • For experimenting, I fed this one at point A with PB7 of a Rockwell VIA, run by T1 toggling it in free-run mode, and 100kHz worked best (although that was on a solderless breadboard, with wires feeding it from the workbench computer). To set it up, do this: LDA VIA_DDRB ORA #$80 STA VIA_DDRB ; Set the high bit in DDRB, to make PB7 an output. LDA VIA_ACR ; Set the two high bits in the ACR to get the ORA #$C0 ; square-wave output on PB7. (Don't enable the STA VIA_ACR ; T1 interrupt in the IER though.) LDA #$17 ; Set the T1 timeout period. $17 is for 100kHz if STA VIA_T1CL ; the Φ2 rate is 5MHz. To get it going, write STZ VIA_T1CH ; first to the ACR (above) and then to the counters. ; You can write directly to the latches later if desired. (The LDA/ORA pairs above could be replaced with single LDA#'s if you don't already have other bits set up in DDRB and the ACR that you want to preserve. The PB7 frequency is: fPB7=fΦ2/2(n+2), where n is the T1 latch value. Another option to feed it a continuous signal from the VIA is to use CB1 which is the clock output of the shift register, using mode 100 to shift out free-running at the T2 rate like above under "Digital-to-analog converters," after the second diagram, where it says, "For a super-simple 9-level D/A converter." (For that matter, you could also use the data line, CB2.) Set it up like this: LDA VIA_ACR AND #11110011B ; Clear bits 2 & 3, and ORA #00010000B ; set bit 4, to put the STA VIA_ACR ; SR into mode 100. LDA #$17 ; Set the T1 timeout period. \$17 is for 100kHz if STA VIA_T2CL ; the Φ2 rate is 5MHz. There is no need to write ; to T2CH, as it has no effect on the SR clock rate. LDA #10101010B ; These last two lines are only for if you want STA VIA_SR ; to use CB2, whether you also use CB1 or not. The CB1 frequency is:  fCB1=fΦ2/2(n+2), where n is the T2 latch value. • The 100K load resistor is really just to bleed off the output resistor when you stop the signal input.  FET bias, if that's what you want it for, takes negligible current—not even 1µA (which also means power efficiency is a non-issue).  With no load at all, the voltage will slowly rise much higher than expected, because of the peaks in the ringing. • The number of stages can be varied depending on how much output voltage you need.  You can add sections ad infinitum for higher voltages; but make sure the parts' voltage ratings are adequate, and remember that the available output current will drop as the number of stages increases. • Reversing the direction of the diodes will make the output voltage negative instead of positive.  Don't forget to turn the output capacitor around also, if it's polarized. I have used charge pumps for up to 8 or 10 watts.  Sometimes it's not very efficient, but it may be worth it anyway.  Here's a circuit I breadboarded recently to get -9V from +5V.  This circuit puts out over -19V with a very light load, but I put the output through an LM337T adjustable negative linear regulator set to put out -9V at up to -200mA from this. Notes: • There are 12 parallelled sections of 74AC14 for the top driver, and 11 for the bottom.  (The 12th one is used for the 160kHz oscillator.)  I used four 74AC14's, stacked, so all the inverters combined only took one 14-pin DIP socket space.  This shows how: The reason for the '14 instead of the '04 is that the oscillator needs to be Schmitt-trigger.  The reason for the 74AC (instead of 74HC) is that AC has a much stronger output.  Even 74ABT is not significantly stronger for this one, and 74ABT cannot pull up to 5V like 74AC can anyway, which is a big disadvantage. • The oscillator frequency will depend partly on the size of the hysteresis of the particular IC; but it will not vary enough to cause any significant effect on performance.  With different ICs, the 27K and 220pF got me anywhere from 140kHz to 220kHz. • The unmarked capacitors are 47µF.  I used plain aluminum electrolytics; but the best for switching or charge-pump power supplies is OS-CON.  They're kind of expensive, but much better than even tantalum for having low impedance at high frequencies.  This is a type of capacitor (Organic SemiCONductor), not a brand.  It is made by a few different companies.  They look like an electrolytic with epoxy on the bottom. • The diodes are 1N5817 Schottky diodes.  Plain silicon diodes (like 1N4001) will have a lot more voltage drop, so the output will be a couple of volts less. • D1 through D4 are there to protect the 74AC14 outputs in the event the load changes suddenly and makes the capacitors discharge into them.  (I found out about the need the hard way.) • Efficiency of this circuit (not counting the LM337 linear regulator I followed it with) was near 90% at light loads, and dropped below 50% near maximum load (near -200mA output). • Note that non-CMOS 74xx14's will give a much lower output voltage, because they cannot swing rail-to-rail like CMOS can, and they also cannot pull up hard like CMOS can. There are lots of ways to do the above, and I can't cover them all; so if you have an idea for a different one, about all I can say is to breadboard it and try it to see if you can get the needed voltage and current from it.  If you want my opinion on a circuit idea, you can email me at wilsonminesBdslextremeBcom (replacing the B's with @ and .). For other needs of doubling or negating a voltage, sometimes the 7660-related 8-pin DIP ICs are appropriate.  These are capacitive charge pumps.  There are many variations, each with their own strength; for example, some have a higher maximum operating voltage, some have higher operating current, and some even have regulators built in.  See the data sheets for the various operating modes.  Ones I have tried are (with a brief description of the part and a few actual numbers from my own experience with it) are: • Linear Technology LTC1044A:  1.5 to 12V input, and I used it in the inverting configuration to get an output that was the negative of the input.  With 15µF/20V OS-CON capacitors and 8.3V in (from a 9V battery), and 8 to 25mA load, I got an output ripple of 32mV peak-to-peak @ 10kHz switching frequency (10mV @ 60kHz), and a maximum output impedance of about 35Ω, less under some conditions.  The LTC1144 works to a higher voltage: up to 18V in. • TI LT1054:  +6 to +15V input, -5V output (in the configuration I built up, but the voltage can be changed), 100mA max.  With an 87mA load, I got a 0.67Ω output impedance (while able to regulate), with a 15mV peak-to-peak ripple @ 30kHz, with 15µF OS-CON capacitors for the charge pump, and 150µF OS-CON on the output.  Wow!  At the same time, I have used its pin-1 square-wave output to feed another diode-capacitor voltage multiplier to get +17V; so I got -5V and +17V from a 9V battery. • National Semiconductor LM2662:  1.5V to 5.5V input (2-3 AA batteries, including when they get really low), and it inverts the voltage.  The specs say 3.5Ω typical Routput (6.5Ω @ 1.5V in), 86% typical efficiency @ 200mA, with 5.5V input.  16kHz typical operating frequency.  I built it up but misplaced my test results.  I'll post them later if I find them or have a need to use the circuit again and make measurements. Switching regulators (which use inductors), whether buck or boost, are typically a better way than charge pumps to do power supplies, due to efficiency, particularly when regulation is needed; but they tend to do poorly on breadboards, and their PC-board layout is not for beginners.  (The one I've used most in our commercial products is the MAX732 which allows you to start with anywhere from 4V to 9.3V input and get 12V out.  Another is the MAX669 which let us start with a pair of AA batteries and get 12V out.)  One way around the difficulty that switching regulators present for the hobbyist is to use integrated switching regulators which put all the difficult stuff in a small pre-made module, sometimes with the same pinout as the popular 7805 for example.  I've used the Power Trends 78SR112 and 78ST105 three-pin 12V and 5V switching regulator modules in prducts.  They worked great but unfortunately TI took over Power Trends and discontinued these.  Pololu Robotics and Electronics, MicroPower Direct, and Traco Power are three other suppliers of switching-regulator modules. < I realize now I should add a circuit for a complete modem for storing data and programs on tape, since even at this late date (2022), we still frequently see interest on the forum.  Although tape is pretty much gone, people are using their cell phones and other digital devices to record audio. > workbench equip <--Previous last updated Feb 28, 2023
User: Rules of exponents Multiplying powers Remember that: $a^0=1\;\;and\;\;\;a\fs2^1\fs3=a$ Multiplying powers with the same base. You can multiply powers with the same base by adding the exponent. an · am = an + m $3\fs2^4\fs3\;\cdot\;\;3\fs2^7\fs3=3\fs2^{4+7}\fs3=3\fs2^{11}$ $2\fs2^5\fs3\;\cdot\;2\fs2^4\fs3\;\cdot\;2=2\fs2^{5+4+1}\fs3=2\fs2^{10}$ Multiplying powers with the same exponent. You can multiply powers with the same exponent by multiplying the bases. $a\fs2^n\fs3\;\cdot\;b\fs2^n\fs3=(a\;\cdot\;b)\fs2^n$ $3\fs2^4\fs3\;\cdot\;5\fs2^4\fs3=(3\;\cdot\;5)\fs2^4\fs3=15\fs2^4$ $(2\;\cdot\;3)\fs2^5\;=\fs3\;2\fs2^5\;\fs3\cdot\;3\fs2^5$ Fill in all the gaps, by typing in the correct answer: $8\fs2^8\cdot\;8=$
R news and tutorials contributed by (750) R bloggers Updated: 5 hours 34 min ago ### Applications of R at EARL San Francisco 2017 Fri, 06/16/2017 - 22:33 (This article was first published on Revolutions, and kindly contributed to R-bloggers) The Mango team held their first instance of the EARL conference series in San Francisco last month, and it was a fantastic showcase of real-world applications of R. This was a smaller version of the EARL conferences in London and Boston, but with that came the opportunity to interact with R users from industry in a more intimate setting. Hopefully Mango will return to the venue again next year, and if so I'll definitely be back! As always with EARL events, the program featured many interesting presentations of how R is used to implement a data-driven (or data-informed) policy at companies around the world. With a dual-track program I couldn't attend all of the talks, but here are some of the applications that caught my interest: • Ricardo Bion (AirBnB): An keynote with an update on data science practice at AirBnB: training for everybody, the Knowledge Repository, trends in Python and R usage, and even a version of the AirBnB app implemented in Shiny! • Hilary Parker (Stitchfix): A wonderful keynote on bringing principles of engineering (and in particular blameless post-mortems) to data science (slides) • Prakhar Mehrotra (Uber): Using R to forecast demand and usage (sadly no slides, but here's R charting Uber traffic data) • David Croushore (Pandora): Using R and Shiny to monitor and forecast demand by users for different music services (slides). • David Bishop (Hitachi Solutions). Using R in the clean energy industry, with a nice demo of a Power BI dashboard to predict wind turbine anomalies, and even visualize the turbines with HoloLens (slides) • Tyler Cole (Amgen). Using R packages and Microsoft R Server for clinical trial submissions to the FDA (slides) • Luke Fostvedt (Pfizer). How R and R packages are used at various stages of the drug development process at Pfizer (slides) • Gabriel Becker (Genentech). Processes and R packages used at Genentech to manage the various tradeoffs for reproducibility in a multi-collaborator environment (slides) • Shad Thomas (Glass Box Research): Using R and Shiny for segmentation analysis in market research (slides) • Aaron Hamming (ProAssurance): Using R to combat the opioid epidemic by identifying suspect prescribers (slides) • Eduardo Ariño de la Rubia (Domino Data Lab): Using R from APIs including Rook, Rapache, Plumbr, OpenCPU and Domino (slides forthcoming) • Madhura Raju (Microsoft). Using R to detect distributed denial-of-service attacks (slides) • Slides from my own presentation, Predicting patient length-of-stay in hospitals, are available too. There are many more interesting presentations to browse at the EARL San Francisco website. Just click on the presenter name to see the talk description and a link to slides (where available). EARL Conference: San Francisco 2017 var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: Revolutions. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### qualtRics 2.0 now available from CRAN Fri, 06/16/2017 - 21:37 (This article was first published on Jasper Ginn's blog, and kindly contributed to R-bloggers) qualtRics version 2.0 is now available from CRAN. This version contains several additional features and simplifies the process of importing survey exports from Qualtrics. Below, I outline the most important changes. You can check the full changelog here. qualtRics configuration file In previous versions, users needed to register their API key by calling the registerApiKey() function to avoid having to pass it to each function. However, they did need to pass the root url to each function, which is annoying and inconsistent. In this version, the registerApiKey() function has been replaced by registerOptions(), which stores both the API key and the root url in environment variables. It also stores several other options (e.g. verbose logs to the R console). You can find more information about this here This version also supports the use of a configuration file called ‘.qualtRics.yml’ that the user can store in the working directory of an R project. If a config file is present in the working directory, then it will be loaded when the user loads the qualtRics library, eliminating the need to register credentials at all. You can find more information about this here. getSurvey() supports all available parameters The getSurvey() function now supports all parameters that the Qualtrics API provides. Concretely, I’ve added the following parameters since version 1.0: 2. limit: Maximum number of responses exported. Defaults to NULL (all responses). 3. useLocalTime: Use local timezone to determine response date values. 4. includedQuestionIds: Export only specified questions. You can use a new function called getSurveyQuestions() to retrieve a data frame of question IDs and labels, which you can in turn pass to the getSurvey() function to export only a subset of your questions. getSurveys() retrieves > 100 surveys In previous versions of qualtRics, getSurveys() only retrieved 100 results. It now fetches all surveys. If you manage many surveys, this does mean that it could take some time for the function to complete. var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: Jasper Ginn's blog. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### Stats NZ encouraging active sharing for microdata access projects Fri, 06/16/2017 - 14:00 (This article was first published on Peter's stats stuff - R, and kindly contributed to R-bloggers) The Integrated Data Infrastructure (IDI) is an amazing research tool One of New Zealand’s most important data assets is the integrated data infrastructure: “The Integrated Data Infrastructure (IDI) is a large research database containing microdata about people and households. Data is from a range of government agencies, Statistics NZ surveys including the 2013 Census, and non-government organisations. The IDI holds over 166 billion facts, taking up 1.22 terabytes of space – and is continually growing. Researchers use the IDI to answer complex questions to improve outcomes for New Zealanders.” Quote from Stats NZ Security and access is closely controlled The IDI can be accessed only from a small number of very secure datalabs with frosted windows and heavy duty secure doors. We have one of these facilities at my workplace, the Social Investment Unit (to become the Social Investment Agency from 1 July 2017). Note/reminder: as always, I am writing this blog in my own capacity, not representing the views of my employer. User credentials and permissions are closely guarded by Stats NZ and need to be linked to clearly defined public-good (ie non-commercial) research projects. All output is checked by Stats NZ against strict rules for confidentialisation (eg random rounding; cell suppression in specified situations; no individual values even in scatter charts; etc) before it can be taken out of the datalab and shown to anyone. All this is quite right – even though the IDI does not contain names and addresses for individuals (they are stripped out after matching, before analysts get to use the data) and there are strict rules against trying to reverse the confidentialisation, it does contain very sensitive data and must be closely protected to ensure continued social permission for this amazing research tool. More can be done to build capability, including by sharing code However, while the security restrictions are necessary and acceptable, other limitations on its use can be overcome – and Stats NZ and others are working to do so. The biggest such limitation is capability – skills and knowledge to use the data well. The IDI is a largish and complex SQL Server datamart, with more than 40 origin systems from many different agencies (mostly but not all of them government) outside of Stats NZ’s direct quality control. Using the data requires not just a good research question and methodology, but good coding skills in SQL plus at least one of R, SAS or Stata. Familiarity with the many data dictionaries and metadata of varied quality is also crucial, and in fact is probably the main bottleneck in expanding IDI usage. A typical workflow includes a data grooming stage of hundreds of lines of SQL code joining many database tables together in meticulously controlled ways before analysis even starts. That’s why one of the objectives at my work has been to create assets like the “Social Investment Analytical Layer”, a repository of code that creates analysis-ready tables and views with a standardised structure (each row a combination of an individual and an “event” like attending school, paying tax, or receiving a benefit) to help shorten the barrier to new analysts. We publish that and other code with a GPL-3 open source license on GitHub and have plans for more. Although the target audience of analysts with access to the IDI is small, documenting and tidying code to publication standard and sharing it is important to help that group of people grow in size. Code sharing until recently has been ad hoc and very much depending on “who you know”, although there is a Wiki in the IDI environment where key snippets of data grooming code is shared. We see the act of publishing on GitHub as taking this to the next step – amongst other things it really forces us to document and debug our code very thoroughly! Stats NZ is showing leadership on open data / open science Given the IDI is an expensive taxpayer-created asset, my personal view is that all code and output should be public. So I was very excited a few weeks back when Liz MacPherson, the Government Statistician (a statutory position responsible for New Zealand’s official statistics system, and also the Chief Executive of Stats NZ) sent out an email to researchers “encouraging active sharing for microdata access projects” including the IDI. I obtained Liz’s permission to reproduce the full text of her email of 29 May 2017: Subject: Stats NZ: Encouraging active sharing for microdata access projects Stats NZ is all about unleashing the power of data to change lives. New Zealand has set a clear direction to increase the availability of data to improve decision making, service design and data driven innovation. To support this direction, Stats NZ is moving to an open data / open science environment, where research findings, code and tables are shared by default. We are taking steps to encourage active sharing for microdata access projects. All projects that are approved for access to microdata will be asked to share their findings, code and tables (once confidentiality checked). The benefits of sharing code and project findings to facilitate reproducible research have been identified internationally, particularly among research communities looking to grow access to shared tools and resources. To assist you with making code open there is guidance available on how to license and use repositories like GitHub in the New Zealand Government Open Access and Licensing (NZGOAL) Framework – Software Extension. Stats NZ is proud to now be leading the Open Data programme of work, and as such wants to ensure that all tables in published research reports are made available as separate open data alongside the reports for others to work with and build on. There are many benefits to sharing: helping to build communities of interest where researchers can work collaboratively, avoid duplication, and build capability. Sharing also provides transparency for the use that is made of data collected through surveys and administrative sources, and accountability for tax-payer dollars that are spent on building datasets and databases like the IDI. One of the key things Stats NZ checks for when assessing microdata access applications is that there is an intention to make the research findings available publicly. This is why the application form asks about intended project outputs and plans for dissemination. While I have been really pleased to see the use that is made of the IDI wiki and Meetadata, the sharing of findings, code and tables is still disproportionately small in relation to the number of projects each year. This is why I am taking steps to encourage sharing. Over time, this will become an expectation. More information will be communicated by the Integrated Data team shortly by email, at the IDI Forums and on Meetadata. In the meantime, if you have any suggestions about how to make this process work for all, or if you have any questions, please contact ….., Senior Manager, Integrated Data …… Warm regards Liz Liz MacPherson Government Statistician/Chief Executive This is great news, and a big step in the right direction. It really is impressive for the Government Statistician to be promoting reproducibility and open access to code, outputs and research products. I look forward to seeing more microdata analytical code sharing, and continued mutual support for raising standards and sharing experience. Now to get version control software into the datalab environment please! (and yes, there is a Stats NZ project in flight to fix this critical gap too). My views on the critical importance of version control are set out in this earlier post var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: Peter's stats stuff - R. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### A “poor man’s video analyzer”… Fri, 06/16/2017 - 11:30 (This article was first published on R – Longhow Lam's Blog, and kindly contributed to R-bloggers) Introduction Not so long ago there was a nice dataiku meetup with Pierre Gutierrez talking about transfer learning. RStudio recently released the keras package, an R interface to work with keras for deep learning and transfer learning. Both events inspired me to do some experiments at my work here at RTL and explore the usability of it for us at RTL. I like to share the slides of the presentation that I gave internally at RTL, you can find them on slide-share. As a side effect, another experiment that I like to share is the “poor man’s video analyzer“. There are several vendors now that offer API’s to analyze videos. See for example the one that Microsoft offers. With just a few lines of R code I came up with a shiny app that is a very cheap imitation Set up of the R Shiny app To run the shiny app a few things are needed. Make sure that ffmpeg is installed, it is used to extract images from a video. Tensorflow and keras need to be installed as well. The extracted images from the video are parsed through a pre-trained VGG16 network so that each image is tagged. After this tagging a data table will appear with the images and their tags. That’s it! I am sure there are better visualizations than a data table to show a lot of images. If you have a better idea just adjust my shiny app on GitHub…. Using the app, some screen shots There is a simple interface, specify the number of frames per second that you want to analyse. And then upload a video, many formats are supported (by ffmpeg), like *.mp4, *.mpeg, *.mov Click on video images to start the analysis process. This can take a few minutes, when it is finished you will see a data table with extracted images and their tags from VGG-16. Click on ‘info on extracted classes’ to see an overview of the class. You will see a bar chart of tags that where found and the output of ffmpeg. It shows some info on the video. If you have code to improve the data table output in a more fancy visualization, just go to my GitHub. For those who want to play around, look at a live video analyzer shiny app here. A Shiny app version using miniUI will be a better fit for small mobile screens. Cheers, Longhow var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R – Longhow Lam's Blog. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Using Partial Least Squares to conduct relative importance analysis in Displayr Fri, 06/16/2017 - 03:40 (This article was first published on R – Displayr, and kindly contributed to R-bloggers) Partial Least Squares (PLS) is a popular method for relative importance analysis in fields where the data typically includes more predictors than observations. Relative importance analysis is a general term applied to any technique used for estimating the importance of predictor variables in a regression model. The output is a set of scores which enable the predictor variables to be ranked based upon how strongly each influences the outcome variable. There are a number of different approaches to calculating relative importance analysis including Relative Weights and Shapley Regression as described here and here. In this blog post I briefly describe an alternative method – Partial Least Squares. Because it effectively compresses the data before regression, PLS is particularly useful when the number of predictor variables is more than the number of observations. Partial Least Squares PLS is a dimension reduction technique with some similarity to principal component analysis. The predictor variables are mapped to a smaller set of variables and within that smaller space we perform a regression against the outcome variable.  In contrast to principal component analysis where the dimension reduction ignores the outcome variable, the PLS procedure aims to choose new mapped variables that maximally explain the outcome variable. First I’ll add some data with Insert > Data Set > URL and paste in this link: http://wiki.q-researchsoftware.com/images/6/69/Stacked_Cola_Brand_Associations.sav Dragging Brand preference onto the page from the Data tree on the left table produces a table showing the breakdown of the respondents by category. This includes a Don’t Know category that doesn’t fit in the ordered scale from Love to Hate.  To remove Don’t Know I click on top of Brand preference in the Data tree on the left and then click on Value Attributes. Changing Missing Values for the Don’t Know category to Exclude from analyses produces the table below. Creating the PLS model Partial least squares is easy to run with a few lines of code. Select Insert > R Output and enter the following snippet of code into the R CODE box: dat = data.frame(pref, Q5r0, Q5r1, Q5r2, Q5r3, Q5r4, Q5r5, Q5r6, Q5r7, Q5r8, Q5r9, Q5r10, Q5r11, Q5r12, Q5r13, Q5r14, Q5r15, Q5r16, Q5r17, Q5r18, Q5r19, Q5r20, Q5r21, Q5r22, Q5r23, Q5r24, Q5r25, Q5r26, Q5r27, Q5r29, Q5r28, Q5r30, Q5r31, Q5r32, Q5r33) library(pls) library(flipFormat) library(flipTransformations) dat = AsNumeric(ProcessQVariables(dat), binary = FALSE, remove.first = FALSE) pls.model = plsr(pref ~ ., data = dat, validation = "CV") The first line selects pref as the outcome variable (strength of preference for a brand) and then adds 34 predictor variables, each indicating whether the respondent perceives the brand to have a particular characteristic. These variables can be dragged across from the Data tree on the left. Next, the 3 libraries containing useful functions are loaded. The package pls contains the function to estimate the PLS model, and our own publicly-available packages, flipFormat and flipTransformations are included for function to help us transform and tidy the data. Since the R pls package requires inputs to be numerical I convert the variables from categorical. In the final line above the plsr function does the work and creates pls.model. Automatically Selecting the Dimensions The following few lines recreate the model having found the optimal number of dimensions, # Find the number of dimensions with lowest cross validation error cv = RMSEP(pls.model) best.dims = which.min(cv$val[estimate = "adjCV", , ]) - 1 # Rerun the model pls.model = plsr(pref ~ ., data = dat, ncomp = best.dims) Producing the Output Finally, we extract the useful information and format the output, coefficients = coef(pls.model) sum.coef = sum(sapply(coefficients, abs)) coefficients = coefficients * 100 / sum.coef names(coefficients) = TidyLabels(Labels(dat)[-1]) coefficients = sort(coefficients, decreasing = TRUE) The regression coefficients are normalized so their absolute sum is 100. The labels are added and the result is sorted. The results below show that Reliable and Fun are positive predictors of preference, Unconventional and Sleepy are negative predictors and Tough has little relevance. TRY IT OUT You can perform this analysis for yourself in Displayr. var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R – Displayr. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Some R User Group News Fri, 06/16/2017 - 02:00 (This article was first published on R Views, and kindly contributed to R-bloggers) This week, members of the Bay Area useR Group (BARUG) celebrated the group’s one hundred and first meetup with beer, pizza and three outstanding presentations at the cancer diagnostics company GRAIL. Pete Mohanty began the evening with the talk Did “Communities in Crisis” Elect Trump?: An Analysis with Kernel Regularized Least Squares. Not only is the Political Science compelling, but the underlying Data Science is top shelf. The bigKRLS package that Pete and his collaborator Robert Shaffer wrote to support their research uses parallel processing and external memory techniques to make the computationally intensive Regularized Least Squares algorithm suitable for fairly large data sets. The following graph from Pete’s presentation gives some idea of the algorithm’s running time. In the second talk, long time BARUG contributor Earl Hubbell described the production workflow that supports GRAIL’s scientific work. “Rmarkdown, tidy data principles, and the RStudio ecosystem serve as one foundation for [GRAIL’s] reproducible research.” In the evening’s third talk, Ashley Semanskee the Kaiser Family Foundation is using R to automate the annual production of its Employer Health Benefits Survey. Also recently: Note that members of the Portland group also helped to organize the first ever CasdadiaRConf conference. Finally, I would like to mention that the recently reformed Austin R User Group is thinking big. They are planning Data Day Texas 2018! and have already assembled an impressive list of speakers. _____='https://rviews.rstudio.com/2017/06/16/some-r-user-group-news/'; var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R Views. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### An easy way to accidentally inflate reported R-squared in linear regression models Thu, 06/15/2017 - 23:43 (This article was first published on R – Win-Vector Blog, and kindly contributed to R-bloggers) Here is an absolutely horrible way to confuse yourself and get an inflated reported R-squared on a simple linear regression model in R. We have written about this before, but we found a new twist on the problem (interactions with categorical variable encoding) which we would like to call out here. First let’s set up our problem with a data set where the quantity to be predicted (y) has no real relation to the independent variable (x). We will first build our example data: library("sigr") library("broom") set.seed(23255) d <- data.frame(y= runif(100), x= ifelse(runif(100)>=0.5, 'a', 'b'), stringsAsFactors = FALSE) Now let’s build a model and look at the summary statistics returned as part of the model fitting process: m1 <- lm(y~x, data=d) t(broom::glance(m1)) ## [,1] ## r.squared 0.002177326 ## adj.r.squared -0.008004538 ## sigma 0.302851476 ## statistic 0.213843593 ## p.value 0.644796456 ## df 2.000000000 ## logLik -21.432440763 ## AIC 48.864881526 ## BIC 56.680392084 ## deviance 8.988463618 ## df.residual 98.000000000 d$pred1 <- predict(m1, newdata = d) I strongly prefer to directly calculate the the model performance statistics off the predictions (it lets us easily compare different modeling methods), so let’s also do that also: cat(render(sigr::wrapFTest(d, 'pred1', 'y'), format='markdown')) F Test summary: (R2=0.0022, F(1,98)=0.21, p=n.s.). So far so good. Let’s now remove the "intercept term" by adding the "0+" from the fitting command. m2 <- lm(y~0+x, data=d) t(broom::glance(m2)) ## [,1] ## r.squared 7.524811e-01 ## adj.r.squared 7.474297e-01 ## sigma 3.028515e-01 ## statistic 1.489647e+02 ## p.value 1.935559e-30 ## df 2.000000e+00 ## logLik -2.143244e+01 ## AIC 4.886488e+01 ## BIC 5.668039e+01 ## deviance 8.988464e+00 ## df.residual 9.800000e+01 d$pred2 <- predict(m2, newdata = d) Uh oh. That appeared to vastly improve the reported R-squared and the significance ("p.value")! That does not make sense, anything m2 can do m1 can also do. In fact the two models make essentially identical predictions, which we confirm below: sum((d$pred1 - d$y)^2) ## [1] 8.988464 sum((d$pred2 - d$y)^2) ## [1] 8.988464 max(abs(d$pred1 - d$pred2)) ## [1] 4.440892e-16 head(d) ## y x pred1 pred2 ## 1 0.007509118 b 0.5098853 0.5098853 ## 2 0.980353615 a 0.5380361 0.5380361 ## 3 0.055880927 b 0.5098853 0.5098853 ## 4 0.993814410 a 0.5380361 0.5380361 ## 5 0.636617385 b 0.5098853 0.5098853 ## 6 0.154032277 a 0.5380361 0.5380361 Let’s double check the fit quality of the predictions. cat(render(sigr::wrapFTest(d, 'pred2', 'y'), format='markdown')) F Test summary: (R2=0.0022, F(1,98)=0.21, p=n.s.). Ah. The prediction fit quality is identical to the first time (as one would expect). This is yet another reason to directly calculate model fit quality from the predictions: it isolates you from any foibles of how the modeling software does it. The answer to our puzzles of "what went wrong" is something we have written about before here. Roughly what is going on is: If the fit formula sent lm() has no intercept (triggered by the "0+") notation then summary.lm() changes how it computes r.squared as follows (from help(summary.lm)): r.squared R^2, the ‘fraction of variance explained by the model’, R^2 = 1 - Sum(R[i]^2) / Sum((y[i]- y*)^2), where y* is the mean of y[i] if there is an intercept and zero otherwise. This is pretty bad. Then to add insult to injury the "0+" notation also changes how R encodes the categorical variable x. Compare: summary(m1) ## ## Call: ## lm(formula = y ~ x, data = d) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.53739 -0.23265 -0.02039 0.27247 0.47111 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 0.53804 0.04515 11.918 <2e-16 *** ## xb -0.02815 0.06088 -0.462 0.645 ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.3029 on 98 degrees of freedom ## Multiple R-squared: 0.002177, Adjusted R-squared: -0.008005 ## F-statistic: 0.2138 on 1 and 98 DF, p-value: 0.6448 summary(m2) ## ## Call: ## lm(formula = y ~ 0 + x, data = d) ## ## Residuals: ## Min 1Q Median 3Q Max ## -0.53739 -0.23265 -0.02039 0.27247 0.47111 ## ## Coefficients: ## Estimate Std. Error t value Pr(>|t|) ## xa 0.53804 0.04515 11.92 <2e-16 *** ## xb 0.50989 0.04084 12.49 <2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ## Residual standard error: 0.3029 on 98 degrees of freedom ## Multiple R-squared: 0.7525, Adjusted R-squared: 0.7474 ## F-statistic: 149 on 2 and 98 DF, p-value: < 2.2e-16 Notice the second model directly encodes both levels of x. This means that if m1 has pred1 = a + b * (x=='b') we can reproduce this model (intercept and all) as m2: pred2 = a * (x=='a') + (a+b) * (x=='b'). I.e., the invariant (x=='a') + (x=='b') == 1 means m2 can imitate the model with the intercept term. The presumed (and I think weak) justification of summary.lm() switching the model quality assessment method is something along the lines that mean(y) may not be in the model’s concept space and this might lead to reporting negative R-squared. I don’t have any problem with negative R-squared, it can be taken to mean you did worse than the unconditional average. However, even if you accept the (no-warning) scoring method switch: that argument doesn’t apply here. m2 can imitate having an intercept, so it isn’t unfair to check if it is better than using only the intercept. var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R – Win-Vector Blog. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Finer Monotonic Binning Based on Isotonic Regression Thu, 06/15/2017 - 23:24 (This article was first published on S+/R – Yet Another Blog in Statistical Computing, and kindly contributed to R-bloggers) In my early post (https://statcompute.wordpress.com/2017/01/22/monotonic-binning-with-smbinning-package/), I wrote a monobin() function based on the smbinning package by Herman Jopia to improve the monotonic binning algorithm. The function works well and provides robust binning outcomes. However, there are a couple potential drawbacks due to the coarse binning. First of all, the derived Information Value for each binned variable might tend to be low. Secondly, the binning variable might not be granular enough to reflect the data nature. In light of the aforementioned, I drafted an improved function isobin() based on the isotonic regression (https://en.wikipedia.org/wiki/Isotonic_regression), as shown below. isobin <- function(data, y, x) { d1 <- data[c(y, x)] d2 <- d1[!is.na(d1[x]), ] c <- cor(d2[, 2], d2[, 1], method = "spearman", use = "complete.obs") reg <- isoreg(d2[, 2], c / abs(c) * d2[, 1]) k <- knots(as.stepfun(reg)) sm1 <-smbinning.custom(d1, y, x, k) c1 <- subset(sm1$ivtable, subset = CntGood * CntBad > 0, select = Cutpoint) c2 <- suppressWarnings(as.numeric(unlist(strsplit(c1Cutpoint, " ")))) c3 <- c2[!is.na(c2)] return(smbinning.custom(d1, y, x, c3[-length(c3)])) } Compared with the legacy monobin(), the isobin() function is able to significantly increase the binning granularity as well as moderately improve the Information Value. LTV Binning with isobin() Function Cutpoint CntRec CntGood CntBad CntCumRec CntCumGood CntCumBad PctRec GoodRate BadRate Odds LnOdds WoE IV 1 <= 46 81 78 3 81 78 3 0.0139 0.9630 0.0370 26.0000 3.2581 1.9021 0.0272 2 <= 71 312 284 28 393 362 31 0.0535 0.9103 0.0897 10.1429 2.3168 0.9608 0.0363 3 <= 72 22 20 2 415 382 33 0.0038 0.9091 0.0909 10.0000 2.3026 0.9466 0.0025 4 <= 73 27 24 3 442 406 36 0.0046 0.8889 0.1111 8.0000 2.0794 0.7235 0.0019 5 <= 81 303 268 35 745 674 71 0.0519 0.8845 0.1155 7.6571 2.0356 0.6797 0.0194 6 <= 83 139 122 17 884 796 88 0.0238 0.8777 0.1223 7.1765 1.9708 0.6149 0.0074 7 <= 90 631 546 85 1515 1342 173 0.1081 0.8653 0.1347 6.4235 1.8600 0.5040 0.0235 8 <= 94 529 440 89 2044 1782 262 0.0906 0.8318 0.1682 4.9438 1.5981 0.2422 0.0049 9 <= 95 145 119 26 2189 1901 288 0.0248 0.8207 0.1793 4.5769 1.5210 0.1651 0.0006 10 <= 100 907 709 198 3096 2610 486 0.1554 0.7817 0.2183 3.5808 1.2756 -0.0804 0.0010 11 <= 101 195 151 44 3291 2761 530 0.0334 0.7744 0.2256 3.4318 1.2331 -0.1229 0.0005 12 <= 110 1217 934 283 4508 3695 813 0.2085 0.7675 0.2325 3.3004 1.1940 -0.1619 0.0057 13 <= 112 208 158 50 4716 3853 863 0.0356 0.7596 0.2404 3.1600 1.1506 -0.2054 0.0016 14 <= 115 253 183 70 4969 4036 933 0.0433 0.7233 0.2767 2.6143 0.9610 -0.3950 0.0075 15 <= 136 774 548 226 5743 4584 1159 0.1326 0.7080 0.2920 2.4248 0.8857 -0.4702 0.0333 16 <= 138 27 18 9 5770 4602 1168 0.0046 0.6667 0.3333 2.0000 0.6931 -0.6628 0.0024 17 > 138 66 39 27 5836 4641 1195 0.0113 0.5909 0.4091 1.4444 0.3677 -0.9882 0.0140 18 Missing 1 0 1 5837 4641 1196 0.0002 0.0000 1.0000 0.0000 -Inf -Inf Inf 19 Total 5837 4641 1196 NA NA NA 1.0000 0.7951 0.2049 3.8804 1.3559 0.0000 0.1897 LTV Binning with monobin() Function Cutpoint CntRec CntGood CntBad CntCumRec CntCumGood CntCumBad PctRec GoodRate BadRate Odds LnOdds WoE IV 1 <= 85 1025 916 109 1025 916 109 0.1756 0.8937 0.1063 8.4037 2.1287 0.7727 0.0821 2 <= 94 1019 866 153 2044 1782 262 0.1746 0.8499 0.1501 5.6601 1.7334 0.3775 0.0221 3 <= 100 1052 828 224 3096 2610 486 0.1802 0.7871 0.2129 3.6964 1.3074 -0.0486 0.0004 4 <= 105 808 618 190 3904 3228 676 0.1384 0.7649 0.2351 3.2526 1.1795 -0.1765 0.0045 5 <= 114 985 748 237 4889 3976 913 0.1688 0.7594 0.2406 3.1561 1.1493 -0.2066 0.0076 6 > 114 947 665 282 5836 4641 1195 0.1622 0.7022 0.2978 2.3582 0.8579 -0.4981 0.0461 7 Missing 1 0 1 5837 4641 1196 0.0002 0.0000 1.0000 0.0000 -Inf -Inf Inf 8 Total 5837 4641 1196 NA NA NA 1.0000 0.7951 0.2049 3.8804 1.3559 0.0000 0.1628 Bureau_Score Binning with isobin() Function Cutpoint CntRec CntGood CntBad CntCumRec CntCumGood CntCumBad PctRec GoodRate BadRate Odds LnOdds WoE IV 1 <= 491 4 1 3 4 1 3 0.0007 0.2500 0.7500 0.3333 -1.0986 -2.4546 0.0056 2 <= 532 24 9 15 28 10 18 0.0041 0.3750 0.6250 0.6000 -0.5108 -1.8668 0.0198 3 <= 559 51 24 27 79 34 45 0.0087 0.4706 0.5294 0.8889 -0.1178 -1.4737 0.0256 4 <= 560 2 1 1 81 35 46 0.0003 0.5000 0.5000 1.0000 0.0000 -1.3559 0.0008 5 <= 572 34 17 17 115 52 63 0.0058 0.5000 0.5000 1.0000 0.0000 -1.3559 0.0143 6 <= 602 153 84 69 268 136 132 0.0262 0.5490 0.4510 1.2174 0.1967 -1.1592 0.0459 7 <= 605 56 31 25 324 167 157 0.0096 0.5536 0.4464 1.2400 0.2151 -1.1408 0.0162 8 <= 606 14 8 6 338 175 163 0.0024 0.5714 0.4286 1.3333 0.2877 -1.0683 0.0035 9 <= 607 17 10 7 355 185 170 0.0029 0.5882 0.4118 1.4286 0.3567 -0.9993 0.0037 10 <= 632 437 261 176 792 446 346 0.0749 0.5973 0.4027 1.4830 0.3940 -0.9619 0.0875 11 <= 639 150 95 55 942 541 401 0.0257 0.6333 0.3667 1.7273 0.5465 -0.8094 0.0207 12 <= 653 451 300 151 1393 841 552 0.0773 0.6652 0.3348 1.9868 0.6865 -0.6694 0.0412 13 <= 662 295 213 82 1688 1054 634 0.0505 0.7220 0.2780 2.5976 0.9546 -0.4014 0.0091 14 <= 665 100 77 23 1788 1131 657 0.0171 0.7700 0.2300 3.3478 1.2083 -0.1476 0.0004 15 <= 667 57 44 13 1845 1175 670 0.0098 0.7719 0.2281 3.3846 1.2192 -0.1367 0.0002 16 <= 677 381 300 81 2226 1475 751 0.0653 0.7874 0.2126 3.7037 1.3093 -0.0466 0.0001 17 <= 679 66 53 13 2292 1528 764 0.0113 0.8030 0.1970 4.0769 1.4053 0.0494 0.0000 18 <= 683 160 129 31 2452 1657 795 0.0274 0.8062 0.1938 4.1613 1.4258 0.0699 0.0001 19 <= 689 203 164 39 2655 1821 834 0.0348 0.8079 0.1921 4.2051 1.4363 0.0804 0.0002 20 <= 699 304 249 55 2959 2070 889 0.0521 0.8191 0.1809 4.5273 1.5101 0.1542 0.0012 21 <= 707 312 268 44 3271 2338 933 0.0535 0.8590 0.1410 6.0909 1.8068 0.4509 0.0094 22 <= 717 368 318 50 3639 2656 983 0.0630 0.8641 0.1359 6.3600 1.8500 0.4941 0.0132 23 <= 721 134 119 15 3773 2775 998 0.0230 0.8881 0.1119 7.9333 2.0711 0.7151 0.0094 24 <= 723 49 44 5 3822 2819 1003 0.0084 0.8980 0.1020 8.8000 2.1748 0.8188 0.0043 25 <= 739 425 394 31 4247 3213 1034 0.0728 0.9271 0.0729 12.7097 2.5424 1.1864 0.0700 26 <= 746 166 154 12 4413 3367 1046 0.0284 0.9277 0.0723 12.8333 2.5520 1.1961 0.0277 27 <= 756 234 218 16 4647 3585 1062 0.0401 0.9316 0.0684 13.6250 2.6119 1.2560 0.0422 28 <= 761 110 104 6 4757 3689 1068 0.0188 0.9455 0.0545 17.3333 2.8526 1.4967 0.0260 29 <= 763 46 44 2 4803 3733 1070 0.0079 0.9565 0.0435 22.0000 3.0910 1.7351 0.0135 30 <= 767 96 92 4 4899 3825 1074 0.0164 0.9583 0.0417 23.0000 3.1355 1.7795 0.0293 31 <= 772 77 74 3 4976 3899 1077 0.0132 0.9610 0.0390 24.6667 3.2055 1.8495 0.0249 32 <= 787 269 260 9 5245 4159 1086 0.0461 0.9665 0.0335 28.8889 3.3635 2.0075 0.0974 33 <= 794 95 93 2 5340 4252 1088 0.0163 0.9789 0.0211 46.5000 3.8395 2.4835 0.0456 34 > 794 182 179 3 5522 4431 1091 0.0312 0.9835 0.0165 59.6667 4.0888 2.7328 0.0985 35 Missing 315 210 105 5837 4641 1196 0.0540 0.6667 0.3333 2.0000 0.6931 -0.6628 0.0282 36 Total 5837 4641 1196 NA NA NA 1.0000 0.7951 0.2049 3.8804 1.3559 0.0000 0.8357 Bureau_Score Binning with monobin() Function Cutpoint CntRec CntGood CntBad CntCumRec CntCumGood CntCumBad PctRec GoodRate BadRate Odds LnOdds WoE IV 1 <= 617 513 284 229 513 284 229 0.0879 0.5536 0.4464 1.2402 0.2153 -1.1407 0.1486 2 <= 642 515 317 198 1028 601 427 0.0882 0.6155 0.3845 1.6010 0.4706 -0.8853 0.0861 3 <= 657 512 349 163 1540 950 590 0.0877 0.6816 0.3184 2.1411 0.7613 -0.5946 0.0363 4 <= 672 487 371 116 2027 1321 706 0.0834 0.7618 0.2382 3.1983 1.1626 -0.1933 0.0033 5 <= 685 494 396 98 2521 1717 804 0.0846 0.8016 0.1984 4.0408 1.3964 0.0405 0.0001 6 <= 701 521 428 93 3042 2145 897 0.0893 0.8215 0.1785 4.6022 1.5265 0.1706 0.0025 7 <= 714 487 418 69 3529 2563 966 0.0834 0.8583 0.1417 6.0580 1.8014 0.4454 0.0144 8 <= 730 489 441 48 4018 3004 1014 0.0838 0.9018 0.0982 9.1875 2.2178 0.8619 0.0473 9 <= 751 513 476 37 4531 3480 1051 0.0879 0.9279 0.0721 12.8649 2.5545 1.1986 0.0859 10 <= 775 492 465 27 5023 3945 1078 0.0843 0.9451 0.0549 17.2222 2.8462 1.4903 0.1157 11 > 775 499 486 13 5522 4431 1091 0.0855 0.9739 0.0261 37.3846 3.6213 2.2653 0.2126 12 Missing 315 210 105 5837 4641 1196 0.0540 0.6667 0.3333 2.0000 0.6931 -0.6628 0.0282 13 Total 5837 4641 1196 NA NA NA 1.0000 0.7951 0.2049 3.8804 1.3559 0.0000 0.7810 var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: S+/R – Yet Another Blog in Statistical Computing. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Set Operations Unions and Intersections in R Thu, 06/15/2017 - 23:00 (This article was first published on R – Aaron Schlegel, and kindly contributed to R-bloggers) Part 2 of 2 in the series Set Theory The set operations of unions and intersections should ring a bell for those who’ve worked with relational databases and Venn Diagrams. The ‘union’ of two of sets A and B represents a set that comprises all members of A and B (or both). One of the most natural ways to visualize set unions and intersections is using Venn diagrams. The Venn diagram on the left visualizes a set union while the Venn diagram on the right visually represents a set intersection operation. Set Unions The union of two sets A and B is denoted as: \large{A \cup B} The union axiom states for two sets A and B, there is a set whose members consist entirely of those belonging to sets A or B, or both. More formally, the union axiom is stated as: \large{\forall a \space \forall b \space \exists B \space \forall x (x \in B \Leftrightarrow x \in a \space \vee \space x \in b)} For example, for two sets A and B: \large{A = \{3, 5, 7, 11 \} \qquad B = \{3, 5, 13, 17 \}} The union of the two sets is: \large{A \cup B = \{3, 5, 7, 11 \} \cup \{3, 5, 13, 17 \} = \{3, 5, 7, 11, 13, 17 \}} We can define a simple function in R that implements the set union operation. There is a function in base R union() that performs the same operation that is recommended for practical uses. set.union <- function(a, b) { u <- a for (i in 1:length(b)) { if (!(b[i] %in% u)) { u <- append(u, b[i]) } } return(u) } Using our function to perform a union operation of the two sets as above. a <- c(3, 5, 7, 11) b <- c(3, 5, 13, 17) set.union(a, b) ## [1] 3 5 7 11 13 17 Set Intersections The intersection of two sets A and B is the set that comprises the elements that are both members of the two sets. Set intersection is denoted as: \large{A \cap B} Interestingly, there is no axiom of intersection unlike for set union operations. The concept of set intersection arises from a different axiom, the axiom schema of specification, which asserts the existence of a subset of a set given a certain condition. Defining this condition (also known as a sentence) as \sigma(x), the axiom of specification (subset) is stated as: \large{\forall A \space \exists B \space \forall x (x \in B \Leftrightarrow x \in A \wedge \sigma(x))} Put another way; the axiom states that for a set A and a condition (sentence) \sigma of a subset of A, the subset does indeed exist. This axiom leads us to the definition of set intersections without needing to state any additional axioms. Using the subset axiom as a basis, we can define the existence of the set intersection operation. Given two sets a and b: \large{\forall a \space \forall b \exists B \space \forall x (x \in B \Leftrightarrow x \in a \space \wedge \space x \in b)} Stated plainly, given sets a and b, there exists a set B that contains the members existing in both sets. For example, using the previous sets defined earlier: \large{A = \{3, 5, 7, 11 \} \qquad B = \{3, 5, 13, 17 \}} The intersection of the two sets is: \large{A \cap B = \{3, 5, 7, 11 \} \cap \{3, 5, 13, 17 \} = \{3, 5 \}} We can also define a straightforward function to implement the set intersection operation. Base R also features a function intersect() that performs the set intersection operation. set.intersection <- function(a, b) { intersect <- vector() for (i in 1:length(a)) { if (a[i] %in% b) { intersect <- append(intersect, a[i]) } } return(intersect) } Then using the function to perform set intersection on the two sets to confirm our above results. a <- c(3, 5, 7, 11, 13, 20, 30) b <- c(3, 5, 13, 17, 7, 10) set.intersection(a, b) ## [1] 3 5 7 13 Subsets The concept of a subset of a set was introduced when we developed the set intersection operation. A set, A, is said to be a subset of B, written as A \subset B if all the elements of A are also elements of B. Therefore, all sets are subsets of themselves and the empty set \varnothing is a subset of every set. We can write a simple function to test whether a set a is a subset of b. issubset <- function(a, b) { for (i in 1:length(a)) { if (!(a[i] %in% b)) { return(FALSE) } } return(TRUE) } The union of two sets a and b has by definition subsets equal to a and b, making a good test case for our function. a <- c(3, 5, 7, 11) b <- c(3, 5, 13, 17) c <- set.union(a, b) c ## [1] 3 5 7 11 13 17 print(issubset(a, c)) ## [1] TRUE print(issubset(b, c)) ## [1] TRUE print(issubset(c(3, 5, 7, 4), a)) ## [1] FALSE Summary This post introduced the common set operations unions and intersections and the axioms asserting those operations, as well as the definition of a subset of a set which arises naturally from the results of unions and intersections. References Axiom schema of specification. (2017, May 27). In Wikipedia, The Free Encyclopedia. From https://en.wikipedia.org/w/index.php?title=Axiom_schema_of_specification&oldid=782595557 Axiom of union. (2017, May 27). In Wikipedia, The Free Encyclopedia. From https://en.wikipedia.org/w/index.php?title=Axiom_of_union&oldid=782595523 Enderton, H. (1977). Elements of set theory (1st ed.). New York: Academic Press. The post Set Operations Unions and Intersections in R appeared first on Aaron Schlegel. var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R – Aaron Schlegel. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Introducing Our Instructor Pages! Thu, 06/15/2017 - 21:05 (This article was first published on DataCamp Blog, and kindly contributed to R-bloggers) At DataCamp, we are proud to have the best instructors in the data science community teaching courses on our site. Today, close to 50 instructors have one or more DataCamp courses live on the platform, and many more have a course in development (so stay tuned for that!). Until now, it was not possible to navigate through your preferred instructors but with our new instructor pages, we are changing that. We built these instructor pages to make it easier for you to discover what other courses are taught by your favorite instructor, and to find new instructors that are teaching topics you are passionate about. Every instructor page also has a small biography allowing you to better understand the person behind the course. Curious? Check out the instructor profile pages of Garrett Grolemund (RStudio), Dhavide Aruliah (Continuum Analytics), and 50 others on our instructor’s overview page. var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: DataCamp Blog. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Data Manipulation with Data Table -Part 1 Thu, 06/15/2017 - 21:00 (This article was first published on R-exercises, and kindly contributed to R-bloggers) In the exercises below we cover the some useful features of data.table ,data.table is a library in R for fast manipulation of large data frame .Please see the data.table vignette before trying the solution .This first set is intended for the begineers of data.table package and does not cover set keywords, joins of data.table which will be covered in the next set . Load the data.table library in your r session before starting the exercise Answers to the exercises are available here. If you obtained a different (correct) answer than those listed on the solutions page, please feel free to post your answer as a comment on that page. Exercise 1 Load the iris dataset ,make it a data.table and name it iris_dt ,Print mean of Petal.Length, grouping by first letter of Species from iris_dt . Exercise 2 Load the diamonds dataset from ggplot2 package as dt (a data.table) ,Find mean price for each group of cut and color . Exercise 3 Load the diamonds dataset from ggplot2 package as dt . Now group the dataset by price per carat and print top 5 in terms of count per group . Dont use head ,use chaining in data.table to achieve this Exercise 4 Use the already loaded diamonds dataset and print the last two carat value of each cut . Exercise 5 In the same data set , find median of the columns x,y,z per cut . Use data.table’s methods to achieve this . Exercise 6 Load the airquality dataset as data.table, Now I want to find Logarithm of wind rate for each month and for days greater than 15 Exercise 7 In the same data set , for all the odd rows ,update Temp column by adding 10 . Exercise 8 data.table comes with a powerful feature of updating column by reference as you have seen in the last exercise,Its even possible to update /create multiple columns .Now to test that in the airquality data.table that you have created previously,add 10 to Solar.R ,Wind . Exercise 9 Now you have a fairly good idea of how easy its to create multiple column ,Its even possible to use delete multiple column using the same idea. In this exercise , use the same airquality data.table that you have created previously from airquality and delete Solar.R,Wind,Temp using a single expression Exercise 10 Load the airquality dataset as data.table again , I want to create two columns a,b which indicates temp in Celcius and Kelvin scale . Write a expression to achieve same. Celcius = (Temp-32)*5/9 Kelvin = Celcius+273.15 Related exercise sets: var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R-exercises. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Retailers: Here’s What Your Employees Are Saying About You – A Project on Web Scraping Indeed.com Thu, 06/15/2017 - 20:56 (This article was first published on R – NYC Data Science Academy Blog, and kindly contributed to R-bloggers) To view the original source code, you can visit my Github page here. If you are interested in learning more about my background, you can visit my LinkedIn page here. About This Project This is the second of four projects for the NYC Data Science Academy. In this project, students are required to collect their datasets using a method called webscraping. Essentially, webscraping is the process of collecting information (i.e. texts) from a series of websites into structured databases such as .csv files. Webscraping enables analyses to be done on data that is not already provided in a structured format. In this analysis, we will scrape employee reviews for the 8 retail companies that made it to Indeed’s 50 Best Corporate Employers in the US list. See below for additional details regarding each of these 8 retail firms in discussion. The rank and overall score corresponds to the employer’s rank and overall score as they appear on Indeed’s website. Objectives Given a two-week timeframe, the scope of the analysis was limited to these three primary objectives: • To understand when employees are more likely to submit positive and negative reviews • To understand the sentiments and emotions associated with positive and negative reviews (i.e. Sentiment Analysis) • To understand the topics of positive and negative reviews (i.e. Topic Modeling) Web Scraping with Scrapy Scrapy (a Python framework) was used for collecting employee reviews for each of the retailers on the 50 Best Corporate Employers List. Each review scraped contains the following elements: • Company • Industry • Job Title • Date • Month • Location • Rating • Header • Comment (Main Review) • Pro • Con As seen in the illustration below, not every review contains a pro and a con, as those are optional fields for the reviewer. To see an example of a real review, you can visit Indeed’s Walt Disney Parks and Resorts review page here. Employee Reviews At a Glance In total, we have scraped over 20,000 reviews. The number of reviews scraped for each firm varies significantly across the board. This is likely due to the difference in the number of employees hired by each firm across the nation. Employee reviews for these eight retail firms tend to peak around the beginning of the year (January through April). This is expected as we know that retail is a cyclical industry in which the majority of sales happen during the holiday season (November through January). Because employees are oftentimes hired only for the duration of this period, it makes sense that the majority of the reviews come in around the beginning of the year. Among the eight retailers on the 50 Best Corporate Employers list, we see that the review ratings skew toward a score of 5, which is the highest score a review can get. There is no significant differences in average rating across retailers. There is no significant differences in average rating across months. When Are Employees At Retail Firms More Likely to Submit Positive and Negative Reviews? Average Employee Rating by Month When we consider reviews of all ratings (i.e. reviews with ratings 1 to 5), retailers receive the highest average ratings during the months of October, November, December, and January. Note that the ratings have been scaled to 0 to 1 to allow for comparison across firms and months. Most Negative Reviews By Month (Reviews with Ratings Below 3) If we only consider the count of negative reviews (i.e. reviews with ratings 1 or 2), retailers on average receive the most negative reviews during the months of October, February, March, and April (i.e. these months scored higher across retailers). Most Positive Reviews By Month (Reviews with Ratings Above 3) If we only consider the count of positive reviews (i.e. reviews with ratings 4 or 5), retailers on average receive the most positive reviews during the months of January, February, and April (i.e. these months scored higher across retailers). Summary Observations In summary, while the highest average ratings concentrate toward the end of the year, the highest count of both positive and negative reviews are given during the beginning of the year. This aligns with our understanding that the retail industry is cyclical. In other words, employees are oftentimes hired specifically to aid sales around the holiday season (i.e. Nov to Jan). When their work concludes around the beginning of the year, that’s when we can expect employees to write the majority of the reviews (i.e. both positive and negative). Understanding Employees Reviews Using Sentiment Analysis A sentiment analysis was done using R’s Syuzhet library to visualize the emotions across employee reviews. In the illustration below, each graph represents the intensity of the an emotion from January to December on a positive scale (i.e. 0 and above). For example, for Build-A-Bear Workshop’s ‘positive’ graph, we can see that there is an uptick in positive sentiment toward March and November, and each of those months scored approximately a 6. This is a high score relative to the scores recorded across all other emotions. Generally, the sentiment observed align with our understanding that the majority of the employee reviews are positive, as we are narrowly focused on analyzing employee reviews among the best 8 corporate retail employers according to Indeed’s ranking. As an aside, it is interesting to see that Build-A-Bear Workshop and Trader Joe’s recorded more volatility in its scoring across the negative emotions (i.e. negative, sadness, fear, disgust, anger). Using Topic Modeling to Identify Patterns Among Positive and Negative Employee Reviews Topic modeling is a form of unsupervised machine learning, and it can help us identify patterns by clustering similar items into groups. In this analysis, pyLDAvis, a Python library, was used to analyze the groups among positive and negative reviews (using pros and cons of the employee reviews). Essentially, the library takes a list of documents as inputs (i.e. a list of reviews) and attempts to group the reviews based on common combinations of keywords appearing in each of these reviews. The drawback of using topic modeling, as with other clustering techniques, is that the user has to specify the number of groups to cluster the documents into. This presents a catch-22 issue as the initial objective is to understand the number of topics or groups among your inputs. Given this hurdle, the user must use their business judgment as well as running multiple trials with different number of topics as inputs to determine results that are most interpretable. More sophisticated clustering libraries aid the users by automatically selecting the best number of topics, but this should only be used as a guide to begin the analysis. In the illustration below, we can see that words like ‘great’ and ‘benefit,’ among many other words, are oftentimes seen together among pro reviews. Observed Topics Among Pros and Cons in Employee Reviews Two topic modeling analyses were conducted using pro reviews and con reviews. Below are the most common combinations of words appearing in each of the groups for each analysis. It is not a surprise that the groups overlap in the common words identified among them, as we have chosen a relatively high number of topics. Nevertheless, the results provide a useful start for understanding what employees are thinking. Visualizing Employee Reviews Using Word Clouds R’s Wordcloud2 library was used to further aid the visualization of what employees are saying about these firms. Essentially, the library takes a list of words along with their frequencies, and produces a word cloud based on those inputs. The higher the frequency a word is associated with, the larger it appears in the word cloud. Word Cloud – Positive Reviews The positive reviews word cloud was created using only the pros from each of the reviews. (Note: Each review contains the main review, a pro, and a con.) Some of the common words we see among pros are ‘great,”work,’ ‘job,’’company,’ and ‘customers.’ Word Cloud – Negative Reviews Similarly, a word cloud was created using only the cons from each of the reviews. We can see that common words appearing include work, management, hours, employees, and pay. Future Directions Beyond the scope of this project, below are other interesting areas that are worth looking into if additional time allows. • Evaluate the credibility of Indeed’s Top 50 Ranking corporate employers by sampling comparable employers that did not make it to the list • Better understand the rationale why positive and negative reviews are concentrated in the months they were observed to be in, and evaluate whether this behavior is prevalent across other industries • Perform statistical methods to evaluate the relationships among variables var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R – NYC Data Science Academy Blog. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... ### Demo: Real-Time Predictions with Microsoft R Server Thu, 06/15/2017 - 19:59 (This article was first published on Revolutions, and kindly contributed to R-bloggers) At the R/Finance conference last month, I demonstrated how to operationalize models developed in Microsoft R Server as web services using the mrsdeploy package. Then, I used that deployed model to generate predictions for loan delinquency, using a Python script as the client. (You can see slides here, and a video of the presentation below.) With Microsoft R Server 9.1, there are now two ways to operationalize models as a Web service or as a SQL Server stored procedure: • Flexible Operationalization: Deploy any R script or function. • Real-Time Operationalization: Deploy model objects generated by specific functions in Microsoft R, but generates predictions much more quickly by bypassing the R interpreter. In the demo, which begins at the 10:00 mark in the video below, you can see a comparison of using the two types of deployment. Ultimately, I was able to generate predictions from a random forest at a rate of 1M predictions per second, with three Python clients simultaneously drawing responses from the server (an Azure GS5 instance running the Windows Data Science VM). If you'd like to try out this capability yourself, you can find the R and Python scripts used in the demo at this Github repository. The lending club data is available here, and the script used to featurize the data is here var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: Revolutions. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Neural networks Exercises (Part-2) Thu, 06/15/2017 - 18:00 (This article was first published on R-exercises, and kindly contributed to R-bloggers) Source: Wikipedia Neural network have become a corner stone of machine learning in the last decade. Created in the late 1940s with the intention to create computer programs who mimics the way neurons process information, those kinds of algorithm have long been believe to be only an academic curiosity, deprived of practical use since they require a lot of processing power and other machine learning algorithm outperform them. However since the mid 2000s, the creation of new neural network types and techniques, couple with the increase availability of fast computers made the neural network a powerful tool that every data analysts or programmer must know. In this series of articles, we’ll see how to fit a neural network with R, we’ll learn the core concepts we need to know to well apply those algorithms and how to evaluate if our model is appropriate to use in production. Today, we’ll practice how to use the nnet and neuralnet packages to create a feedforward neural networks, which we introduce in the last set of exercises. In this type of neural network, all the neuron from the input layer are linked to the neuron from the hidden layer and all of those neuron are linked to the output layer, like seen on this image. Since there’s no cycle in this network, the information flow in one direction from the input layer to the hidden layers to the output layer. For more information about those types of neural network you can read this page. Answers to the exercises are available here. Exercise 1 We’ll start by practicing what we’ve seen in the last set of exercises. Load the MASS package and the biopsy dataset, then prepare your data to be feed to a neural network. Exercise 2 We’ll use the nnet() function from the package of the same name to do a logistic regression on the biopsy data set using a feedforward neural network. If you remember the last set of exercises you know that we have to choose the number of neuron in the hidden layer of our feedforward neural network. There’s no rule or equation which can tell us the optimal number of neurons to use, so the best way to find the better model is to do a bunch of cross-validation of our model with different number of neurons in the hidden layer and choose the one who would fit best the data. A good range to test with this process is between one neuron and the number of input variables. Write a function that take a train data set, a test data set and a range of integer corresponding to the number of neurons to be used as parameter. Then this function should, for each possible number of neuron in the hidden layer, train a neural network made with nnet(), make prediction on the test set and return the accuracy of the prediction. Exercise 3 Use your function on your data set and plot the result. Which should be the number of neurons to use in the hidden layer of your feedforward neural network. Exercise 4 The nnet() function is easy to use, but doesn’t give us a lot of option to customize our neural network. As a consequence, it’s a good package to use if you have to do a quick model to test a hypothesis, but for more complex model the neuralnet package is a lot more powerful. Documentation for this package can be found here. Use the neuralnet() function with the default parameter and the number of neuron in the hidden layer set to the answer of the last exercise. Note that this function can only handle numeric value and cannot deal with factors. Then use the compute() function to make prediction on the values of the test set and compute the accuracy of your model. Learn more about neural networks in the online course Machine Learning A-Z™: Hands-On Python & R In Data Science. In this course you will learn how to: • Work with Deep Learning networks and related packages in R • Create Natural Language Processing models • And much more Exercise 5 The nnet() function use by default the BFGS algorithm to adjust the value of the weights until the output values of our model are close to the values of our data set. The neuralnet package give us the option to use more efficient algorithm to compute those value which result in faster processing time and overall better estimation. For example, by default this function use the resilient backpropagation with weight backtracking. Use the neuralnet() function with the parameter algorithm set to ‘rprop-‘, which stand for resilient backpropagation without weight backtracking. Then test your model and print the accuracy. Exercise 6 Two other algorithm can be used with the neuralnet() function: 'sag' and 'slr'. Those two strings tell the function to use the globally convergent algorithm (grprop) and to modify the learning rate associated with the smallest absolute gradient (sag) or the smallest learning rate (slr). When using those algorithm, it can be useful to pass a vector or list containing the lowest and highest limit for the learning rate to the learningrate.limit parameter. Again, use the neuralnet() function twice, once with parameter algorithm set to 'sag' and then to 'slr'. In both cases set the learningrate.limit parameter to c(0.1,1) and change the stepmax parameter to 1e+06. Exercise 7 The learning rate determine how much the backpropagation can affect the weight at each iteration. A high learning rate mean that during the training of the neural network, each iteration can strongly change the value of the weight or, to put in other way, the algorithm learn a lot of each observation in your data set. This mean that outlier could easily affect your weight and make your algorithm diverge from the path of the ideal weights for your problem. A small learning rate mean that the algorithm learn less from each observation in your data set, so your neural network is less affected by outlier, but this mean that you will need more observations to make a good model. Use the neuralnet() function with parameter algorithm set to ‘rprop+’ twice: once with the learningrate parameter set to 0.01 and another time with the learningrate parameter set to 1. Notice the difference in running time in both cases. Exercise 8 The neuralnet package give us the ability of make a visual representation of the neural network you made. Use the plot() function to visualize one of the neural networks of the last exercise. Exercise 9 Until now, we’ve used feedfordward neural network with one hidden layer of neurons, but we could use more. In fact, the state of the art neural network use often 100 of hidden layer for modeling complex behavior. For basic regression problems or even basic digits recognition problems, one layer is enough, but if you want to use more, you can do so with the neuralnet() function by passing a vector of integer to the hidden parameter representing the number of neurons in each layer. Create a feedforward neural network with three hidden layers of nine neurons and use it on your data. Exercise 10 Plot the feedforward neural network from the last exercise. Related exercise sets: var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R-exercises. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Sampling weights and multilevel modeling in R Thu, 06/15/2017 - 15:26 (This article was first published on Data Literacy - The blog of Andrés Gutiérrez, and kindly contributed to R-bloggers) So many things have been said about weighting, but on my personal view of statistical inference processes, you do have to weight. From a single statistic until a complex model, you have to weight, because of the probability measure that induces the variation of the sample comes from an (almost always) complex sampling design that you should not ignore. Weighting is a complex issue that has been discussed by several authors in recent years. The social researchers have no found consensus about the appropriateness of the use of weighting when it comes to the fit of statistical models. Angrist and Pischke (2009, p. 91) claim that few things are as confusing to applied researchers as the role of sample weights. Even now, 20 years post-Ph.D., we read the section of the Stata manual on weighting with some dismay. Anyway, despite the fact that researchers do not have consensus on when to weight, the reality is that you have to be careful when doing so. For example, when it comes to estimating totals, means or proportions, you can use the inverse probability as a way for weighting, and it looks like every social researcher agrees to weight in order to estimate this kind of descriptive statistics. The rationale behind this practice is that you suppose that every unit belonging to the sample represents itself and many others that were not selected in the sample. Now, when using weights to estimate parameter models, you have to keep in mind the nature of the sampling design. For example, when it comes to estimates multilevel parameters, you have to take into account not only the final sampling unit weights but also the first sampling unit weights. For example, let’s assume that you have a sample of students, selected from a national frame of schools. Then, we have two sets of weights, the first one regarding schools (notice that one selected school represents itself as well as others not in the sample) and the second one regarding students. Now, let’s assume that in the finite population we have 10.000 students and 40 schools. For the sake of my example, let’s consider that you have selected 500 students allocated in 8 schools. For the sake of easiness, let’s think that a simple random sample is used (I know, this kind of sampling design is barely used) to select students. Think about it, if you take into account only the student’s weights to fit your multilevel model, you will find that you are estimating parameters with an expanded sample that represents 10.000 students that are allocated in a sample of just eight schools. So, any conclusion stated will be wrong. For example, when performing a simple analysis of variance, the percentage of variance explained by the schools will be extremely low, because of you are expanding the sample of schools. Now, if you take into account both sets of weights (students and schools), you will find yourself fitting a model with expanded samples that represent 10.000 students and 40 schools (which is good). Unfortunately, as far as I know, the R suitcase lacks of a package that performs this kind of design-based inference to fitting multilevel models. So, right about now, we can unbiasedly estimate model parameters, but when it comes to estimate standard errors (from a design-based perspective) we need to use other computational resources and techniques like bootstrapping or Jackknife. According to the assumption of independence, most of the applied statistical methods cannot be used to analyze this kind of data directly due to dependency among sampled observation units. Inaccurate standard errors may be produced if no adjustment is made when analyzing complex survey data Now, when it comes to educational studies (based on large-assessment tests), we can distinguish (at least) four set of weights: total student weight, student house-weight, student senate-weight and school weight. TIMMS team claims that total student weight is appropriate for single-level student-level analyses. Student house weight, also called normalized weight, is used when analyses are sensitive to sample size. Student house weight is essentially a linear transformation of total student weight so that the sum of the weights is equal to the sample size. Student Senate weight is used when analyses involve more than one country because it is total student weight scaled in such a way that all students’ senate weights sum to 500 (or 1000) in each country. School weight should be used when analyzing school-level data, as it is the inverse of the probability of selection for the selected school. R workshop We will use the student house-weight to fit a multilevel model. As stated before, the sum of these weights is equal to the sample. For the R workshop, we will use PISA 2012 data (available in the OECD website). I have done a filter for the Colombian case and saved this data to be directly compatible with R (available here). Let’s load the data into R. rm(list = ls()) library(dplyr) library(ggplot2) library(lme4) setwd("/your working directory") load("PisaCol.RData") head(names(PisaCol)) summary(PisaColSTRATUM) Now, we create an object containing the student house-weights and summarize some results based on that set of weights. Notice that the total student weights are stored in the column W_FSTUWT of the PISA database. I recall you that I am working with the first plausible value of the mathematics test and that score will be defined as our (dependent) variable of interest for the modeling. n <- nrow(PisaCol) PisaCol$W_HOUSEWHT <- n * PisaCol$W_FSTUWT / sum(PisaCol$W_FSTUWT) PisaCol %>% group_by(STRATUM) %>% summarise(avg1 = weighted.mean(PV1MATH, w = W_HOUSEWHT), avg2 = weighted.mean(PV2MATH, w = W_HOUSEWHT)) We use the function lmer of the lme4 package to obtain the estimation of the model coefficients in the null model (where schools are defined as independent variables). ################## ### Null model ### ################## HLM0 <- lmer(PV1MATH ~ (1 | SCHOOLID), data = PisaCol, weights = W_HOUSEWHT) coef(HLM0) summary(HLM0) # 62.81% of the variance is due to students # 37.19% of the variance is due to schools 100 * 3569 / (3569 + 2113) Now, as you may know, the PISA index of economic, social and cultural status has a strong relationship to student achievement, so it is a good idea to control for this variable in a more refined model. ################# ### ESCS mdel ### ################# HLM1 <- lmer(PV1MATH ~ ESCS + (1 + ESCS | SCHOOLID), data = PisaCol, weights = W_HOUSEWHT) coef(HLM1) summary(HLM1) # After contoling for ESCE, 34.58% of the variance is due to schools 100 * (96.12 + 1697.36) / (3392.58 + 96.12 + 1697.36) So then, in summary: we have 3569 units of within-schools variance (63%), after controlling for ESCE that figure turns out to 3392 units (student background explains 5% of that variation). We have 2113 (37%) units of between-school variances, after controlling for ESCE that figure turns out to 1793 (student background explains 15% of that variation). The following code makes a graph that summarizes the relationship of the student achievement with ESCE. ggplot(data = PisaCol, aes(x = ESCS, y = PV1MATH, size = W_HOUSEWHT)) + theme_minimal() + geom_point() + theme(legend.position="none") ggplot(data = PisaCol, aes(x = ESCS, y = PV1MATH, size = W_HOUSEWHT)) + geom_point(aes(colour = SCHOOLID)) + theme(legend.position="none") var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: Data Literacy - The blog of Andrés Gutiérrez. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Lexicographic Permutations: Euler Problem 24 Thu, 06/15/2017 - 02:00 (This article was first published on The Devil is in the Data, and kindly contributed to R-bloggers) Euler Problem 24 asks to develop lexicographic permutations which are ordered arrangements of objects in lexicographic order. Tushar Roy of Coding Made Simple has shared a great introduction on how to generate lexicographic permutations. Euler Problem 24 Definition A permutation is an ordered arrangement of objects. For example, 3124 is one possible permutation of the digits 1, 2, 3 and 4. If all of the permutations are listed numerically or alphabetically, we call it lexicographic order. The lexicographic permutations of 0, 1 and 2 are: 012 021 102 120 201 210 What is the millionth lexicographic permutation of the digits 0, 1, 2, 3, 4, 5, 6, 7, 8 and 9? Brute Force Solution The digits 0 to 9 have permutations (including combinations that start with 0). Most of these permutations are, however, not in lexicographic order. A brute-force way to solve the problem is to determine the next lexicographic permutation of a number string and repeat this one million times. nextPerm <- function(a) { # Find longest non-increasing suffix i <- length(a) while (i > 1 && a[i - 1] >= a[i]) i <- i - 1 # i is the head index of the suffix # Are we at the last permutation? if (i <= 1) return (NA) # a[i - 1] is the pivot # Find rightmost element that exceeds the pivot j <- length(a) while (a[j] <= a[i - 1]) j <- j - 1 # Swap pivot with j temp <- a[i - 1] a[i - 1] <- a[j] a[j] <- temp # Reverse the suffix a[i:length(a)] <- rev(a[i:length(a)]) return(a) } numbers <- 0:9 for (i in 1:(1E6 - 1)) numbers <- nextPerm(numbers) answer <- numbers print(answer) This code takes the following steps: 1. Find largest index such that . 1. If no such index exists, then this is already the last permutation. 2. Find largest index such that and a_{i-1}" title="a_j > a_{i-1}" class="latex" />. 3. Swap and . 4. Reverse the suffix starting at . Combinatorics A more efficient solution is to use combinatorics, thanks to MathBlog. The last nine digits can be ordered in ways. So the first permutations start with a 0. By extending this thought, it follows that the millionth permutation must start with a 2. From this rule, it follows that the 725761st permutation is 2013456789. We now need 274239 more lexicographic permutations: We can repeat this logic to find the next digit. The last 8 digits can be ordered in 40320 ways. The second digit is the 6th digit in the remaining numbers, which is 7 (2013456789). This process is repeated until all digits have been used. numbers <- 0:9 n <- length(numbers) answer <- vector(length = 10) remain <- 1E6 - 1 for (i in 1:n) { j <- floor(remain / factorial(n - i)) answer[i] <- numbers[j + 1] remain <- remain %% factorial(n - i) numbers <- numbers[-(j + 1)] } answer <- paste(answer, collapse = "") print(answer) R blogger Tony’s Bubble Universe created a generalised function to solve this problem a few years ago. The post Lexicographic Permutations: Euler Problem 24 appeared first on The Devil is in the Data. var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: The Devil is in the Data. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Studying disease with R: RECON, The R Epidemics Consortium Wed, 06/14/2017 - 23:41 (This article was first published on Revolutions, and kindly contributed to R-bloggers) For almost a year now, a collection of researchers from around the world has been collaborating to develop the next generation of analysis tools for disease outbreak response using R. The R Epidemics Consortium (RECON) creates R packages for handling, visualizing, and analyzing outbreak data using cutting-edge statistical methods, along with general-purpose tools for data cleaning, versioning, and encryption, and system infrastructure. Like ROpenSci, the Epidemics Consortium is focused on developing efficient, reliable, and accessible open-source tools, but with a focus on epidemology as opposed to science generally. The Epidemics Consortium has already created several useful resources for epidemiology: There are also a large number of additional packages under development. RECON welcomes new members, particularly experienced R developers and as public health officers specialized in outbreak response. You can find information on how to join here, and general information about the R Epidemics Consortium at the link below. RECON: The R Epidemics Consortium (via Maëlle Salmon‏) var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: Revolutions. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### Exploratory Factor Analysis – Exercises Wed, 06/14/2017 - 18:10 (This article was first published on R-exercises, and kindly contributed to R-bloggers) This set of exercises is about exploratory factor analysis. We shall use some basic features of psych package. For quick introduction to exploratory factor analysis and psych package, we recommend this short “how to” guide. You can download the dataset here. The data is fictitious. Answers to the exercises are available here. If you have different solution, feel free to post it. Exercise 1 Load the data, install the packages psych and GPArotation which we will use in the following exercises, and load it. Describe the data. Exercise 2 Using the parallel analysis, determine the number of factors. Exercise 3 Determine the number of factors using Very Simple Structure method. Exercise 4 Based on normality test, is the Maximum Likelihood factoring method proper, or is OLS/Minres better? (Tip: Maximum Likelihood method requires normal distribution.) Exercise 5 Using oblimin rotation, 5 factors and factoring method from the previous exercise, find the factor solution. Print loadings table with cut off at 0.3. Exercise 6 Plot factors loadings. Exercise 7 Plot structure diagram. Exercise 8 Find the higher-order factor model with five factors plus general factor. Exercise 9 Find the bifactor solution. Exercise 10 Reduce the number of dimensions using hierarchical clustering analysis. Related exercise sets: var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R-exercises. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### How to draw connecting routes on map with R and great circles Wed, 06/14/2017 - 16:25 (This article was first published on Blog – The R graph Gallery, and kindly contributed to R-bloggers) This post explains how to draw connection lines between several localizations on a map, using R. The method proposed here relies on the use of the gcIntermediate function from the geosphere package. Instead of making straight lines, it offers to draw the shortest routes, using great circles. A special care is given for situations where cities are very far from each other and where the shortest connection thus passes behind the map. First we need to load 3 libraries. Maps allows to draw the background map, and geosphere provides the gcintermediate function. library(tidyverse) library(maps) library(geosphere) 1- Draw an empty map This is easily done using the ‘maps’ package. You can see plenty of other maps made with R in the map section of the R graph gallery. par(mar=c(0,0,0,0)) map('world',col="#f2f2f2", fill=TRUE, bg="white", lwd=0.05,mar=rep(0,4),border=0, ylim=c(-80,80) ) 2- Add 3 cities First we create a data frame with the coordinates of Buenos Aires, Melbourne and Paris Buenos_aires=c(-58,-34) Paris=c(2,49) Melbourne=c(145,-38) data=rbind(Buenos_aires, Paris, Melbourne) %>% as.data.frame() colnames(data)=c("long","lat") Then add it to the map using the ‘points’ function: points(x=data$long, y=data$lat, col="slateblue", cex=3, pch=20) 4- Show connection between them Now we can connect cities drawing the shortest route between them. This is done using great circles, what gives a better visualization than using straight lines. This technique has been proposed by Nathan Yau on FlowingData # Connection between Buenos Aires and Paris inter <- gcIntermediate(Paris, Buenos_aires, n=50, addStartEnd=TRUE, breakAtDateLine=F) lines(inter, col="slateblue", lwd=2) # Between Paris and Melbourne inter <- gcIntermediate(Melbourne, Paris, n=50, addStartEnd=TRUE, breakAtDateLine=F) lines(inter, col="slateblue", lwd=2) 5 – Correcting gcIntermediate If we use the same method between Melbourne and Buenos Aires, we get this disappointing result: What happens is that gcintermediate follows the shortest path, which means it will go East from Australia until the date line, break the line and come back heading East from the pacific to South America. Because we do not want to see the horizontal line, we need to plot this connection in 2 times. To do so we can use the following function, which breaks the line in 2 sections when the distance between 2 points is longer than 180 degrees. plot_my_connection=function( dep_lon, dep_lat, arr_lon, arr_lat, ...){ inter <- gcIntermediate(c(dep_lon, dep_lat), c(arr_lon, arr_lat), n=50, addStartEnd=TRUE, breakAtDateLine=F) inter=data.frame(inter) diff_of_lon=abs(dep_lon) + abs(arr_lon) if(diff_of_lon > 180){ lines(subset(inter, lon>=0), ...) lines(subset(inter, lon<0), ...) }else{ lines(inter, ...) } } Let’s try it! map('world',col="#f2f2f2", fill=TRUE, bg="white", lwd=0.05,mar=rep(0,4),border=0, ylim=c(-80,80) ) points(x=data$long, y=data$lat, col="slateblue", cex=3, pch=20) plot_my_connection(Paris[1], Paris[2], Melbourne[1], Melbourne[2], col="slateblue", lwd=2) plot_my_connection(Buenos_aires[1], Buenos_aires[2], Melbourne[1], Melbourne[2], col="slateblue", lwd=2) plot_my_connection(Buenos_aires[1], Buenos_aires[2], Paris[1], Paris[2], col="slateblue", lwd=2) 6 – Apply it to several pairs of cities Let’s consider 8 cities: data=rbind( Buenos_aires=c(-58,-34), Paris=c(2,49), Melbourne=c(145,-38), Saint.Petersburg=c(30.32, 59.93), Abidjan=c(-4.03, 5.33), Montreal=c(-73.57, 45.52), Nairobi=c(36.82, -1.29), Salvador=c(-38.5, -12.97) ) %>% as.data.frame() colnames(data)=c("long","lat") We can generate all pairs of coordinates all_pairs=cbind(t(combn(data$long, 2)), t(combn(data$lat, 2))) %>% as.data.frame() colnames(all_pairs)=c("long1","long2","lat1","lat2") And plot every connections: # background map par(mar=c(0,0,0,0)) map('world',col="#f2f2f2", fill=TRUE, bg="white", lwd=0.05,mar=rep(0,4),border=0, ylim=c(-80,80) ) # add every connections: for(i in 1:nrow(all_pairs)){ plot_my_connection(all_pairs$long1[i], all_pairs$lat1[i], all_pairs$long2[i], all_pairs$lat2[i], col="skyblue", lwd=1) } # add points and names of cities points(x=data$long, y=data$lat, col="slateblue", cex=2, pch=20) text(rownames(data), x=data$long, y=data$lat, col="slateblue", cex=1, pos=4) 7 – An alternative using the greatCircle function This is the method proposed by the Simply Statistics Blog to draw a twitter connection map. The idea is to calculate the whole great circle, and keep only the part that stays in front of the map, never going behind it. # A function that keeps the good part of the great circle, by Jeff Leek: getGreatCircle = function(userLL,relationLL){ tmpCircle = greatCircle(userLL,relationLL, n=200) start = which.min(abs(tmpCircle[,1] - data.frame(userLL)[1,1])) end = which.min(abs(tmpCircle[,1] - relationLL[1])) greatC = tmpCircle[start:end,] return(greatC) } # map 3 connections: map('world',col="#f2f2f2", fill=TRUE, bg="white", lwd=0.05,mar=rep(0,4),border=0, ylim=c(-80,80) ) great=getGreatCircle(Paris, Melbourne) lines(great, col="skyblue", lwd=2) great=getGreatCircle(Buenos_aires, Melbourne) lines(great, col="skyblue", lwd=2) great=getGreatCircle(Paris, Buenos_aires) lines(great, col="skyblue", lwd=2) points(x=data$long, y=data$lat, col="slateblue", cex=3, pch=20) text(rownames(data), x=data$long, y=data$lat, col="slateblue", cex=1, pos=4) Note that the R graph gallery proposes lot of other examples of maps made with R. You can follow the gallery on Twitter or on Facebook to be aware or recent updates. 1 var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: Blog – The R graph Gallery. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); ### When the LASSO fails??? Wed, 06/14/2017 - 16:20 (This article was first published on R – insightR, and kindly contributed to R-bloggers) By Gabriel Vasconcelos When the LASSO fails? The LASSO has two important uses, the first is forecasting and the second is variable selection. We are going to talk about the second. The variable selection objective is to recover the correct set of variables that generate the data or at least the best approximation given the candidate variables. The LASSO has attracted a lot of attention lately because it allows us to estimate a linear regression with thousands of variables and the model select the right ones for us. However, what many people ignore is when the LASSO fails. Like any model, the LASSO also rely on assumptions in order to work. The first is sparsity, i.e. only a small number of variables may actually be relevant. If this assumption does not hold there is no hope to use the LASSO for variable selection. Another assumption is that the irrepresentable condition must hold, this condition may look very technical but it only says that the relevant variable may not be very correlated with the irrelevant variables. Suppose your candidate variables are represented by the matrix , where each column is a variable and each line is an observation. We can calculate the covariance matrix , which is a symmetric matrix. This matrix may be broken into four pieces: The first piece, , shows the covariances between only the important variables, is the covariance matrix of the irrelevant variables and and shows the covariances between relevant and irrelevant variables. With that in mind, the irrepresentable condition is: The inequality above must hold for all elements. is for positive values of , for negative values and if . Example For this example we are going to generate two covariance matrices, one that satisfies the irrepresentable condition and one that violates it. Our design will be very simple: only 10 candidate variables where five of them are relevant. library(mvtnorm) library(corrplot) library(glmnet) library(clusterGeneration) k=10 # = Number of Candidate Variables p=5 # = Number of Relevant Variables N=500 # = Number of observations betas=(-1)^(1:p) # = Values for beta set.seed(12345) # = Seed for replication sigma1=genPositiveDefMat(k,"unifcorrmat")$Sigma # = Sigma1 violates the irc sigma2=sigma1 # = Sigma2 satisfies the irc sigma2[(p+1):k,1:p]=0 sigma2[1:p,(p+1):k]=0 # = Verify the irrepresentable condition irc1=sort(abs(sigma1[(p+1):k,1:p]%*%solve(sigma1[1:p,1:p])%*%sign(betas))) irc2=sort(abs(sigma2[(p+1):k,1:p]%*%solve(sigma2[1:p,1:p])%*%sign(betas))) c(max(irc1),max(irc2)) ## [1] 3.222599 0.000000 # = Have a look at the correlation matrices par(mfrow=c(1,2)) corrplot(cov2cor(sigma1)) corrplot(cov2cor(sigma2)) As you can see, irc1 violates the irrepresentable condition and irc2 does not. The correlation matrix that satisfies the irrepresentable condition is block diagonal and the relevant variables have no correlation with the irrelevant ones. This is an extreme case, you may have a small correlation and still satisfy the condition. Now let us check how the LASSO works for both covariance matrices. First we need do understand what is the regularization path. The LASSO objective function penalizes the size of the coefficients and this penalization is controlled by a hyper-parameter . We can find the exact that is sufficiently big to shrink all variables to zero and for any value smaller than some variable will be included. As we decrease the size of more variables are included until we have a model with all variables (or the biggest identified model when we have more variables than observations). This path between the model with all variables and the model with no variables is the regularization path. The code below generates data from multivariate normal distributions for the covariance matrix that violates the irrepresentable condition and the covariance matrix that satisfies it. Then I estimate the regularization path for both case and summarize the information in plots. X1=rmvnorm(N,sigma = sigma1) # = Variables for the design that violates the IRC = # X2=rmvnorm(N,sigma = sigma2) # = Variables for the design that satisfies the IRC = # e=rnorm(N) # = Error = # y1=X1[,1:p]%*%betas+e # = Generate y for design 1 = # y2=X2[,1:p]%*%betas+e # = Generate y for design 2 = # lasso1=glmnet(X1,y1,nlambda = 100) # = Estimation for design 1 = # lasso2=glmnet(X2,y2,nlambda = 100) # = Estimation for design 2 = # ## == Regularization path == ## par(mfrow=c(1,2)) l1=log(lasso1$lambda) matplot(as.matrix(l1),t(coef(lasso1)[-1,]),type="l",lty=1,col=c(rep(1,9),2),ylab="coef",xlab="log(lambda)",main="Violates IRC") l2=log(lasso2$lambda) matplot(as.matrix(l2),t(coef(lasso2)[-1,]),type="l",lty=1,col=c(rep(1,9),2),ylab="coef",xlab="log(lambda)",main="Satisfies IRC") The plot on the left shows the results when the irrepresentable condition is violated and the plot on the right is the case when it is satisfied. The five black lines that slowly converge to zero are the five relevant variables and the red line is an irrelevant variable. As you can see, when the IRC is satisfied all relevant variables shrink very fast to zero as we increase lambda. However, when the IRC is violated one irrelevant variable starts with a very small coefficient that slowly increases before decreasing to zero in the very end of the path. This variable is selected through the entire path, it is virtually impossible to recover the correct set of variables in this case unless you apply a different penalty to each variable. This is precisely what the adaptive LASSO does. Does that mean that the adaLASSO is free from the irrepresentable condition??? The answer is: partially. The adaptive LASSO requires a less restrictive condition called weighted irrepresentable condition, which is much easier to satisfy. The two plots below show the regularization path for the LASSO and the adaLASSO in the case the IRC is violated. As you can see, the adaLASSO selects the correct set of variables in all the path. lasso1.1=cv.glmnet(X1,y1) w.=(abs(coef(lasso1.1)[-1])+1/N)^(-1) adalasso1=glmnet(X1,y1,penalty.factor = w.) par(mfrow=c(1,2)) l1=log(lasso1$lambda) matplot(as.matrix(l1),t(coef(lasso1)[-1,]),type="l",lty=1,col=c(rep(1,9),2),ylab="coef",xlab="log(lambda)",main="LASSO") l2=log(adalasso1$lambda) matplot(as.matrix(l2),t(coef(adalasso1)[-1,]),type="l",lty=1,col=c(rep(1,9),2),ylab="coef",xlab="log(lambda)",main="adaLASSO") The biggest problem is that the irrepresentable condition and its less restricted weighted version are not testable in the real world because we need the populational covariance matrix and the true betas that generate the data. The solution is to study your data as much as possible to at least have an idea of the situation. Some articles on the topic Zhao, Peng, and Bin Yu. “On model selection consistency of Lasso.” Journal of Machine learning research 7.Nov (2006): 2541-2563. Meinshausen, Nicolai, and Bin Yu. “Lasso-type recovery of sparse representations for high-dimensional data.” The Annals of Statistics (2009): 246-270. var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script')); To leave a comment for the author, please follow the link and comment on their blog: R – insightR. R-bloggers.com offers daily e-mail updates about R news and tutorials on topics such as: Data science, Big Data, R jobs, visualization (ggplot2, Boxplots, maps, animation), programming (RStudio, Sweave, LaTeX, SQL, Eclipse, git, hadoop, Web Scraping) statistics (regression, PCA, time series, trading) and more... var vglnk = { key: '949efb41171ac6ec1bf7f206d57e90b8' }; (function(d, t) { var s = d.createElement(t); s.type = 'text/javascript'; s.async = true; s.src = '//cdn.viglink.com/api/vglnk.js'; var r = d.getElementsByTagName(t)[0]; r.parentNode.insertBefore(s, r); }(document, 'script'));
Please check the prerequisites in order to find out whether your system is supported. Find tutorials, the APA Style Blog, how to format papers in APA Style, and other resources to help you improve your writing, master APA Style, and learn the conventions of scholarly publishing. To import a package in LaTeX, you simply add the \usepackage directive to the preamble of your document: \documentclass{article} \usepackage{PACKAGENAME} \begin{document} Install a package. This enables authors to provide data for replication or deep understanding that would be difficult put in the paper version. Here's an image of the left corner of the Word math ribbon showing LaTeX as the current input format. Your problem is that all chapters, whether they're in the appendix or not, default to starting on an odd-numbered page when you're in two-sided layout mode. doc" or "S2_Table. The \part command is used in long documents, such as a book, for major divisions. autolabel parameter to zero. Appendix III includes sample visual descriptions by students, with suggested edits and revisions by me. Then, the code makes normal chapters. Secondly, there is no way that in TOC you can shift the word "Appendix" to next page, as the spacing between two. By default, beamer will count the total number of slides, including the backup slides at the end of the presentation, leading to a wrong number of total slides intended for presentation. Appendix:Variations of "v". format, Microsoft word template. LaTeX offers a lot of functions by default, but in some situations it can become in handy to use so called packages. For advice on the use of wiki markup , see Help:Editing ; and for guidance on writing style, see Manual of Style. }} \ConfigureMark{unit-name}1 Defines a macro \ >HMark to hold the given argument. The PNAS Brief Report template can be accessed here: PNAS Brief Report template. If you have only one appendix title it Appendix with a centered heading starting on its own page. page : text, annotated footnote citation, and source material in the Appendix. \documentclass{article} \usepackage{reddit} \title{/r/\LATEX Sidebar} \author{r/\LATEX mods}\begin{document} \section{Introduction} Whether you are just getting started with LaTeX and wondering what the fuss is about, here to share the clever trick you've discovered, or need urgent help with your bibliography, welcome to the LaTeX subreddit!. If your system is not (yet) supported: it is not too difficult to build MiKTeX. Is there a simple fix to the above code that will add the word Appendix before the letter A in the TOC for Appendix A? There is a related question, How to make 'appendix' appear in toc in Latex? , but the answers did not appear to help in this case. The LaTeX markup language also uses the "#" character to designate the parameters for macro commands you create. Important note: this template comes as a zip file with multiple files and folders within it. The \appendix command changes the way sectional units are numbered. Create a new label (Figure_Apx) to number captions for figures in your Appendix. *FREE* shipping on qualifying offers. APPENDIX B Vaccines. , a large map) should be included as a supplement. For New Users. LaTeX Footnotes Footnotes can be produced in one of two ways. It is intended for students and professionals who wants a brief introduction as a basis for further self-learning. The article should be original writing that enhances the existing body of knowledge in the given subject area. LaTeX will happily allow you to insert a list environment into an existing one (up to a depth of four, more levels are available using packages). Then, the code makes normal chapters. ii ACKNOWLEDGEMENTS The Department of Biosystems and Agriculture appreciates the generosity of university faculty members Dr. Degradation rate was dependent on light and material thickness. \subsection{Appendices} Technical detail that it is necessary to include, but that interrupts the flow of the article, may be consigned to an appendix. 17), he said in a short clip: "I feel like I have been hit by a car. Most sociology departments require ASA formatting for research papers. A reader interested in a comprehensive guide to \LaTeX \, should look at \cite{Goosens:companion, Kopka:guide}. Of course, it would be annoying to renew the sectioning command and reset the counter with each appendix if there were a lot of appendixes, but if there are only one or two, then it seems like an easier workaround than digging around in all those files to figure out what is going on. Article revision 2012/03/08 Formatting L A TEX Documents in APA Style (6th Edition) Using the apa6 Class Brian D. While a "just in time" briefing during the response is the only required training for these personnel, time and resource limitations inherent in a crisis likely will diminish the effectiveness of such training. Latex Special Process Appendix Checklist Latex foam are stored in areas free of solvent-based materials that might contribute to increasing its VOCs or odour. The standard \LaTeX\ macro \verb"\footnote" should be used and will normally give an appropriate symbol; if a footnote sign needs to be specified directly \verb"\ftnote{}{Text}" can be used instead. The most important ones are: LaTeX separates the content from the layout to a large extent. Sphinx will search language-specific figures named by figure_language_filename and substitute them for original figures. Video 8 of 11 on Latex tutorials: How to setup an appendices section on your Latex report. An online LaTeX editor that's easy to use. LaTex, How to include "List of Tables" and "List of Figures" in the "Table of Contents" Showing 1-8 of 8 messages. The general response to this system is shown in Eq. These tutorials were first published on the original ShareLateX blog site during August 2013; consequently, today's editor interface (Overleaf) has changed. Section or chapter numbering is restarted and the representation of the counter switches to alphabetic. options — Options passed to the \documentclass command latex. Also Calculate ?S For This Reaction From Standard Entropies At 25 C. ASME is currently working on a solution for supporting non-integral supplementary material (e. Different document-classes might have different default settings. For this to work, you must have \usepackage{amsmath} in the preamble. This five-part series of articles uses a combination of video and textual descriptions to teach the basics of writing a thesis using LaTeX. cls redefines many of the commands in the LATEX classes/kernel, which. This could be changed to 11pt or 12pt as a option of documentclass. Full Text Search. All content placed in an appendix should be referred to in the body of the text, for example, 'Details of research instruments are given in Appendix A (Page 55). Secondly, there is no way that in TOC you can shift the word "Appendix" to next page, as the spacing between two. I demonstrate both APA and IEEE citations using the Texmake. The \section command is not redefined. It analyzes the typical problems that arise while writing a thesis with LaTeX and suggests improved solutions by handling easy packages. pdf \includegraphics[page=1]{Test} \end{document} put more then one page on a page By using option nup from usepackage pdfpages, it is also possible to put more then one page from the origin pdf file on one page of the new document. Contrary to conventional word-processing programs, LaTeX users define the structure of their documents using a series of programming-like commands. If your system is not (yet) supported: it is not too difficult to build MiKTeX. This is where LaTeX really excels. Each subdi- vision of a heading indicates a more speci c topic. It allows you to make clean, well formatted PDFs, perfect for submission to academic journals or for reports. In the LaTeX file, appendix sections should appear as follows: \begin{appendix} \section{Title of the first appendix} \section{Title of the second appendix} \end{appendix} The command \begin{appendix} must be entered before the first appendix. Report Writing with LaTeX at CUED Introduction. Appendix 16 – Best Practice - Aide memoire for Levels of Personal Protective Equipment (PPE) for Healthcare Workers when providing patient care. The \appendix command changes the way sectional units are numbered. No installation, real-time collaboration, version control, hundreds of LaTeX templates, and more. in tandem with elsarticle. First of all I will suggest that if you have only one appendix there is no need to say "A", you can just keep the title. dvi) can be converted by \\$ dvipdf hello. LaTeX 标准文档类中,最大的标题是 \part。 它在 book 和 report 类中的层级是「-1」,在 article 类中的层级是「0」。 这里,我们在调用 \appendix 的时候将计数器设置为 -2,因此所有的章节命令都不会编号了。. Pathologically it can be classified into 1) simple mucocele, 2) mucinous cystadenoma, and 3) mucinous cystadenocarcinoma. Phd Thesis Appendix Latex. LaTex, How to include "List of Tables" and "List of Figures" in the "Table of Contents" Showing 1-8 of 8 messages. Many papers include the appendix for the peer reviewing process, and then remove it before publication, allowing the quality of the raw information to be verified. The appendix package provides various ways of formatting the titles of appendices. Also, Sphinx will try to substitute individual paragraphs from your documents with the translation sets obtained from :confval:locale_dirs. It contains supporting information, and usually, appears at the end of the document. A voir également: Appendix latex. Authors may download a PDF (best for initial submissions) or a. ACKNOWLEDGMENT The authors would like to thank Mr. It offers a straightforward way to import and export bibliographic records. Campbell, Jr. Here is a short, well-written book that covers the material essential for learning LaTeX. TeX4ht: HTML production, a tutorial/introductory article on using and customizing TeX4ht. Contrary to conventional word-processing programs, LaTeX users define the structure of their documents using a series of programming-like commands. %Version Aug 1994 %Files: lit. In the latter case, the processing is still done by LaTeX, but transparently to the user. De Homepage: http://www. Scribd is the world's largest social reading and publishing site. Video 8 of 11 on Latex tutorials: How to setup an appendices section on your Latex report. The AMS-LaTeX extensions are included in the standard LaTeX distribution, and are also present on CTAN. LaTeX allows users to structure their documents with a variety of hierarchical constructs, including chapters, sections, subsections and paragraphs. SGML is an ISO standard language for defining the logical structure of documents. If you open the Standard LaTeX article shell, and to to the end of the document,. " latex master thesis appendix nfwl nra bill of rights essay introduction dissertation argumentation indirecte leadership development essay example john locke essay uber den menschlichen verstand pdf japanese essay writing nursing home observation essay literary essays lucy calkins hsc english belongi my skiing holiday essay night essays on. The article class does not start a new page by default, while report and book do. In this article I'll show how I do that for myself and how you can include HTML pages that keep the original styles, colors and so forth once it has been converted to PDF. The appendix is a. cls redefines many of the commands in the LATEX classes/kernel, which. Most word processing programs allow you to create tables easily, and you can import or cut and paste tables from spreadsheet applications very easily. The Emacs Lisp manual contains an appendix that gives coding standards for Emacs Lisp programs; it is good to urge Lisp authors to read it (*note Tips and Conventions: (elisp)Tips. Definitions \newcommand \newenvironment \newtheorem \newfont. Why removing your child's appendix could be a waste of time: Study finds that antibiotics could be just as effective. The most popular approaches to indicate a reference appearing in the text can be classified as "numeric" and "author–year". Entire document. The appendices should be labeled in the order in which they appear in the text. cls: elsarticle. An online LaTeX editor that's easy to use. Traditional methods of preparing a kind of steel called tamahagane used for the Japanese sword by tatara system and procedure of making the sword are briefly introduced with the discussions from the viewpoint of metallurgy and thermo-mechanical processing. You can add it back in manually with pdfpages. Typical moulding materials include rubber, especially latex rubber, plaster and composites such as ciment fondu. You can earn more money with CompletelyNovel – Read our self-publishing price comparison, here. Preparing your manuscript The Elsevier article class. In order to have the ones in the Appendix numbered in a special way (e. By default, the article class in LaTeX does not encourage users to write an appendix in an article. Appendix Core tag vocabulary in social networks: Top 1363 tags in Delicious, Flickr and YouTube Core Tag Vocabulary Numbers &others 1, 2, 3, 2005, 2006, 2007, -,. The preamble is used in "full blown" LaTeX, but not in the tags in a wiki. LaTeX provides an exceedingly simple mechanism for appendixes: the command \appendix switches the document from generating sections (in article class) or chapters (in report or book classes) to producing appendixes. Documentation et tutoriels sur LaTeX. Thus, if you want your appendix to be set off with a section-like label you need something like \begin {\bf APPENDIX} \end{center}. encoding — Encoding of the latex document to produce latex. Latex as found in nature is a milky fluid found in 10% of all flowering plants (angiosperms). m or the likes), or simply give a file name and have its contents included and syntax highlighted. Using Bibtex The bulk of the work is organizing your Bibtex file, which is a data base compiled by you of the articles, books, etc. In case of acceptance, the final version of the paper will have to be submitted in LaTeX format. At the end of the thesis there are some code in Appendix A, and in Appendix B i want to add a paper, written in latex. The appendix allows the author to include exhaustive details that could distract a reader if they were in the text of the manuscript. 9 Dataset files 4. Given that motivating their unmatched getting pregnant, altered also at this point accommodated zero greater than all on your own. 2 Long Thesis For a longer thesis or dissertation, it will be easier to use separate files for different sections. ” If the paper has more than one, label each with a capital letter (APPENDIX A, APPENDIX B, etc. tex+ file looks more or less the same. CompletelyNovel specialises in helping writers create and publish beautiful print-on-demand books. Appendicitis occurs when the part of the digestive tract becomes inflamed. The article class does not start a new page by default, while report and book do. Specifies whether double or single sided output should be generated. In most shells, it is. Articles in ACS Omega should advance the frontiers of science through original ideas, even if the full significance is not known. An appendix contains data that cannot be placed in the main document and has references in the original copy or file. I demonstrate both APA and IEEE citations using the Texmake. Contrary to conventional word-processing programs, LaTeX users define the structure of their documents using a series of programming-like commands. In the LaTeX builder, a suitable language will be selected as an option for the *Babel* package. Latex as found in nature is a milky fluid found in 10% of all flowering plants (angiosperms). Quality was assessed using the Cochrane Collaboration tool to assess the risk of bias of randomized controlled studies (5) (Appendix 3a) and the Newcastle-Ottawa Quality Assessment Scale for cohort studies (6) (Appendix 3b. Renate Snider, Dr. IEEE membership offers access to technical innovation, cutting-edge information, networking opportunities, and exclusive member benefits. In the text, refer to them by their label (Appendix A). article — LaTeX document class to use for article documents latex. Some useful tips and tricks in LaTeX. The standard \LaTeX\ macro \verb"\footnote" should be used and will normally give an appropriate symbol; if a footnote sign needs to be specified directly \verb"\ftnote{}{Text}" can be used instead. This is the code I would be kind of working with (esp. Here is an outline of the body of. The goal was to make publishing scientific-mathematical content on the web as easy as writing an article in LaTeX, which is the standard for more than thirty years. " Vimball Archiver by Charles E. An appendix is an additional part of an article or book, akin to a book or a table. If you have any enquiries about this website or the content on it, please contact: [email protected] The basic steps are. If you open the Standard LaTeX article shell, and to to the end of the document,. We made a separate TeX file available so that users can refertotheactualcode. The appendix package provides for modifying the typesetting of appendix titles. A number of global options allows customization of certain elements of the document by the author. Search Search. Here is a short, well-written book that covers the material essential for learning LaTeX. An online LaTeX editor that's easy to use. ACS Omega is a global open-access journal for the publication of scientific articles that describe new findings in chemistry and interfacing areas of science, without any perceived evaluation of immediate impact. LaTeX examples: How to reference a figure or table | alvinalexander. For best reproduction, bright, clear colors should be used. pdf \includegraphics[page=1]{Test} \end{document} put more then one page on a page By using option nup from usepackage pdfpages, it is also possible to put more then one page from the origin pdf file on one page of the new document. The standard \LaTeX\ macro \verb"\footnote" should be used and will normally give an appropriate symbol; if a footnote sign needs to be specified directly \verb"\ftnote{}{Text}" can be used instead. (a) Subdivisions (b) and (c) of this appendix establish the California Hazardous Waste Code Numbers assigned to wastes which have been identified as hazardous wastes pursuant to the characteristics of hazardous waste as set forth in article 3 of this chapter or pursuant to the lists of hazardous wastes in article 4 of this chapter. 8) Choose the appropriate article type (Research, Application, Review or Short Note articles - see Aims & Scope). ASME is currently working on a solution for supporting non-integral supplementary material (e. JOURNAL OF LATEX CLASS FILES, VOL. ASCE and American Society of Civil Engineers—Registered in US Patent and Trademark Office. The word 'Appendix' or similar can be prepended to the appendix number for article class documents. It should be sufficient for writing any document that is based on a template or specific class. Saved time will definitely serve you for the good. Appendix If Appendixes are provided, they appear on a new page after the figures. Section or chapter numbering is restarted and the representation of the counter switches to alphabetic. The system converts your article files to a single PDF file used in the peer-review process. Can I write a LaTeX equation over multiple lines? Using the multiline, aligned packages. To import a package in LaTeX, you simply add the \usepackage directive to the preamble of your document: \documentclass{article} \usepackage{PACKAGENAME} \begin{document} Install a package. IEEE membership offers access to technical innovation, cutting-edge information, networking opportunities, and exclusive member benefits. Also (sub)appendices environments are provided that can be used, for example, for per chapter/section appendices. How do I convert my LaTeX document to Word? Sometimes a long equation needs to be broken over multiple lines, especially if using a double column export style. The most popular approaches to indicate a reference appearing in the text can be classified as "numeric" and "author–year". cls without any problems. 1 Style for references 3. {{openInSlText}} ZIPファイルをダウンロード 公開サービスに問題が発生しました。. Then, order the contents, such as graphs, surveys, or interview transcripts, based on the order in which they appear in your paper. The three most commonly used standard document-classes in LaTeX include: article, report and book. To add the appendix fragment, choose File, Import Fragment, select "appendix. It is enough to signpost it the body of your work, for example: (See Appendix A). Appendix A: Book and Article Reviews 2 that are more objective or compassionate. In this file you type both the text of your document and the LATEX commands to format it. Some built-in configurations of TeX4ht require an argument for the \ >HMark com. LaTeX will happily allow you to insert a list environment into an existing one (up to a depth of four, more levels are available using packages). EconPapers provides access to RePEc, the world's largest collection of on-line Economics working papers, journal articles and software. We are aware that this is an issue and we are working to make article loading as fast as possible, even for long, complex articles. The word ‘Appendix’ or similar can be prepended to the appendix number for article class documents. Can somebody explain to me what is the proper way to add it?. === removed directory '. Where's the difference between (an) inflammation of the appendix and of the eye. I use LaTeX->dvi->ps->pdf route to obtain the pdf. LaTeX document classes define several typographic operations that need "canned text" (text not supplied by the user). Video 8 of 11 on Latex tutorials: How to setup an appendices section on your Latex report. Daly This paper describes package figcaps version 4. Specifies whether double or single sided output should be generated. J'aimerais bien pouvoir le supprimer, mais je n'ai pas reussi (j'ai essayé avec le mode d'emploi du package appendix, mais je n'ai pas compris ). The AVS LaTeX template is available on Overleaf and is the same template used with AIP Publishing journals. Number equations, theorems, propositions, etc. My document is using the article class, but the section numbers are shown for the figures, figure 2. It will be published online with the article on Wiley Online Library. It contains supporting information, and usually, appears at the end of the document. 4% of patients who undergo appendectomy, being more prevalent in. Latex has no known primary metabolic function and has been strongly implicated in. Este é o guia de usuário do Beamer. txt) or read online for free. The information provided is for noncommercial educational use only. The Appendix should begin on a new page following the text, preceding the references. Further, please put the paper itself and the online appendix in one single PDF file. Appendix B: Table B. Why LaTeX? Why Do We Recommend LaTeX? Authors who intend to publish a book or an article with the AMS are strongly encouraged to use LaTeX as described in the book Guide to LaTeX, fourth edition, by Helmut Kopka and Patrick W. quilt_patches' --- debian/. RWTH-Aachen. It is important to know the location of your appendix and the warning signs of appendicitis at it can quickly. These can be established using appropriate ewtheorem and ewenvironment commands: these commands are best included in the LaTeX input file before \begin{document}. Integrated journal homepages and expanded content on ACM SIGs and conferences. Quality was assessed using the Cochrane Collaboration tool to assess the risk of bias of randomized controlled studies (5) (Appendix 3a) and the Newcastle-Ottawa Quality Assessment Scale for cohort studies (6) (Appendix 3b. ASME is currently working on a solution for supporting non-integral supplementary material (e. It provides a subappendix environment for use as an appendix to a chapter or section. If you open the Standard LaTeX article shell, and to to the end of the document,. How to Get Started Using LaTeX. To test susceptibilities, we determined the 50% tissue cell infectious dose per milliliter (TCID 50 /mL) of the Zika virus MR766 strain before and after the virus was exposed to disinfectants or other inactivation procedures (Technical Appendix). The appendix is a hollow worm-like structure attached to the cecum, the first part of the large bowel, on the right side. 65) Before using the ACM LaTeX article template, everyone should read the TeX User Guide which comprises the first section of the document; authors who plan to use their own packages should read the longer TeX Implementation Guide. No installation, real-time collaboration, version control, hundreds of LaTeX templates, and more. It provides the capability to create very specific formatting and to write a wide variety of formulas. The AMS-LaTeX extensions are included in the standard LaTeX distribution, and are also present on CTAN. By David Koenig. vim [[[1 6167 " Title: Vim library for ATP filetype plugin. The reason for not using the fonts provided by the system is that Tex and Latex will give the same distinctive look no matter which platform a document was compile on, making the language system independent. Appendix B introduces CTAN and explains how to download the LATEX packages described in this book. For example, if you have been using chapters, use the \appendix command, followed by \chapter{Mathematical Symbols}, etc. Each subdi- vision of a heading indicates a more speci c topic. I quickly show how to setup a LaTex document with citations/references/bibliography using BibTeX. [email protected] ), then also set the appendix. org or by locating a title in ASCE's Civil. The following list indicates the four di erent types of section headings and the appropriate style for each. The text is similar to that in Appendix B (where it was formatted in \documentclass{article}), but we have changed occurrences of \section to \chapter and \subsection to \section. Pour faire ceci avec LaTeX, on dispose de la classe combine. The two authors extracted data in a predefined evidence table (Appendix 2) and critically appraised the retrieved studies. In the DocBook XSL stylesheets, the number part of a title is called a label. An example chapter is included which explains the basics of LaTeX and this template. 9 Dataset files 4. " If the paper has more than one, label each with a capital letter (APPENDIX A, APPENDIX B, etc. By default, beamer will count the total number of slides, including the backup slides at the end of the presentation, leading to a wrong number of total slides intended for presentation. Full Text Search. ACKNOWLEDGMENT The authors would like to thank Mr. These are just some of the features you'll find in the new ACM Digital Library. The \appendix command generates no text and does not affect the numbering of parts. For advice on the use of wiki markup , see Help:Editing ; and for guidance on writing style, see Manual of Style. Each topic is not studied in depth (the reader can refer to the package manual for this) but the author tries to cover as many aspects as possible highlighting the solutions that he likes most. If you have questions, please contact [email protected] The frmttable command is a tool for experienced users and programmers to create formatted tables from statistics and write them to Word or LATEX files. That would keep the tex file compatible with the regular latex article class. cls is a document class like article or book, and so must be defined in the preamble (see the section about document classes for more info). To test susceptibilities, we determined the 50% tissue cell infectious dose per milliliter (TCID 50 /mL) of the Zika virus MR766 strain before and after the virus was exposed to disinfectants or other inactivation procedures (Technical Appendix). % % This manual is typeset according to the conventions of the % \LaTeX{} \textsc{docstrip} utility which enables the automatic % extraction of the \LaTeX{} macro source files~\cite{GOOSSENS94}. Question: Using Enthalpies Of Formation (Appendix C Below, Can Scroll Over), Calculate ?H For The Following Reaction At 25 C. LaTeX article template LaTeX Communication template If using the LaTeX template, please provide us with both the native files and a PDF file of your manuscript including all of your figures (as this format is the most accessible to our reviewers). LaTeX will happily allow you to insert a list environment into an existing one (up to a depth of four, more levels are available using packages). twoside, oneside. It is good practice to list the books, magazines and websites from where you found out your background research when writing a research paper. 1 It is a rare condition, observed in up to 0. 8: add appendix Appendix B, Dblatex Processing Instruction Reference, and sections the section called “XML Configuration File Format”, the section called “Musical Notation”. 1, 30 January 2000 This article is intended to be a reference for the SGML document type definition linuxdoc, which is coming along with the SGML text format­ ting system version 1. See Section 1. An appendix contains data that cannot be placed in the main document and has references in the original copy or file. The Appendix section should be included in the Supplementary Material section, not in the body of the manuscript. Develop separate appendices for different types of supporting materials, advises the Purdue University Online Writing Lab. Major Di erences Following are the major di erences between elsarticle. 5 When using an MS-Word Template file 3. Please note that any tables or figures intended to appear in an appendix should not be embedded and must be located after the references with the other manuscript tables and figures. The information provided is for noncommercial educational use only. Degradation rate was dependent on light and material thickness. First of all I will suggest that if you have only one appendix there is no need to say "A", you can just keep the title. The standard \LaTeX\ macro \verb"\footnote" should be used and will normally give an appropriate symbol; if a footnote sign needs to be specified directly \verb"\ftnote{}{Text}" can be used instead. Latex is a stable dispersion of polymer microparticles in an aqueous medium. In case of very long (multipage) tables where you need also multirows, multicolumns, text wrapping within cells and a defined table width,. Unless the particular provisions or the context otherwise requires, the definitions and general provisions contained in this article shall govern the construction of this chapter. Members support IEEE's mission to advance technology for humanity and the profession, while memberships build a platform to introduce careers in technology to students around the world. This is beyond the scope of this article but involves using an IF field to test if the page is the last page of the document or Section and give a different result depending on the answer. centering a table > footnote within a table fixed width How to create a table in LaTeX with fixed width? generate LaTeX tables with fixed width online Problem If the default tabular environment is used, every colunm get the width they need. Latex is a sticky emulsion that exudes upon damage from specialized canals in about 10% of flowering plant species. Cross References \label \pageref \ref. A reader interested in a comprehensive guide to \LaTeX \, should look at \cite{Goosens:companion, Kopka:guide}. org or by locating a title in ASCE's Civil. The output shown on page 63 (which we have shrunk to display on a single page) should be compared with that on page 59. Notons que les classes article et report disposent d'un environnement abstract permettant la mise en forme d'un résumé : \begin{abstract} Résumé du document \end{abstract} Changement de page [modifier | modifier le wikicode] Le changement de page est géré automatiquement par LaTeX. NASA Technical Reports Server (NTRS) Lee, L. All content placed in an appendix should be referred to in the body of the text, for example, 'Details of research instruments are given in Appendix A (Page 55). By David Koenig. UseVimball finish autoload/atplib. Appendix A: Summary Chart of U Consistent and correct use of the male latex condom reduces the risk of STIs All MMWR HTML versions of articles are electronic. article — LaTeX document class to use for article documents latex. LaTeX preamble The preamble is the first section of an input file, before the text of the document itself, in which you tell LaTeX the type of document, and other information LaTeX will need to format the document correctly. The subappendices (with \subsection) are numbered without the dot, as A1, A2, etc. For example, if you have been using chapters, use the \appendix command, followed by \chapter{Mathematical Symbols}, etc. However, in an article, the labelling simply follows from the rest of the document. The three most commonly used standard document-classes in LaTeX include: article, report and book. Each time you use the text editor to make a change to the source file, you will need to run latex again to update the. The Appendix section should be included in the Supplementary Material section, not in the body of the manuscript. To import a package in LaTeX, you simply add the \usepackage directive to the preamble of your document: \documentclass{article} \usepackage{PACKAGENAME} \begin{document} Install a package. DITA TC Meeting Minutes 2013 - cumulative Minutes of the OASIS DITA TC Tuesday, 8 January 2013 Recorded by N. Important note: this template comes as a zip file with multiple files and folders within it. L a T e X is widely used in science and programming has become an important aspect in several areas of science, hence the need for a tool that properly displays code. Insert an image in LaTeX - Adding a figure or picture Learn how to insert images and caption them. use — Use passivetex unicode support?. Instructs LaTeX to typeset the document in two columns instead of one. Simply begin the appropriate environment at the desired point within the current list. kaobook This template is designed for writing books and graduate-level theses and provides numerous examples and documentation to enable complex requirements. docmeans latex documentation for Lecture Notes in Computer Science llncsdoc. Appendix B: Table B. 5 inches wide on 12pt documents, 1. cls: elsarticle. Latex Special Process Appendix Checklist Latex foam are stored in areas free of solvent-based materials that might contribute to increasing its VOCs or odour. The text is similar to that in Appendix B (where it was formatted in \documentclass{article}), but we have changed occurrences of \section to \chapter and \subsection to \section. Anglican Women When you are this person, you can ask that person via online private life. NASA Technical Reports Server (NTRS) Lee, L. encoding — Encoding of the latex document to produce latex. In the LaTeX builder, a suitable language will be selected as an option for the *Babel* package. 3 ) July 2019 Frank Lübeck Max Neunhöffer Frank Lübeck Email: [36mmailto:Frank. Abstract This article provides useful tools to write a thesis with L A TEX. Sphinx will search language-specific figures named by figure_language_filename and substitute them for original figures. The appendix is not fixed and is sometimes found behind the peritoneum. There are several reasons why LaTeX is so widespread. DITA TC Meeting Minutes 2013 - cumulative Minutes of the OASIS DITA TC Tuesday, 8 January 2013 Recorded by N. All Answers ( 18) In my opinion the main advantage of this plugin is that you don't have to do an intermediate step (. Anglican Women When you are this person, you can ask that person via online private life. Filtration efficiency of surgical masks Erin Sanchez Appendix 1: Major Materials and Components of the Experiment 47 Table 2 Polystyrene Latex Parameters 18. o The appendix (appendices) appears after the document text, but before the References. While writing a presentation with beamer it may be convenient to have some backup/appendix slides ready as a support for answers to potential questions. It's a common emergency surgery that's performed to treat appendicitis, an inflammatory condition of the appendix. Question: Using Enthalpies Of Formation (Appendix C Below, Can Scroll Over), Calculate ?H For The Following Reaction At 25 C. Manuscripts must be organized in the following manner: Title Page; Author Footnote (JASA, JCGS, and TAS only) Abstract and Key Words; Article Text. J'aimerais bien pouvoir le supprimer, mais je n'ai pas reussi (j'ai essayé avec le mode d'emploi du package appendix, mais je n'ai pas compris ).
Journal cover Journal topic Wind Energy Science The interactive open-access journal of the European Academy of Wind Energy Journal topic Wind Energ. Sci., 3, 461–474, 2018 https://doi.org/10.5194/wes-3-461-2018 Special issue: Wind Energy Science Conference 2017 Wind Energ. Sci., 3, 461–474, 2018 https://doi.org/10.5194/wes-3-461-2018 Research articles 06 Jul 2018 Research articles | 06 Jul 2018 # Simulation of transient gusts on the NREL 5 MW wind turbine using the URANS solver THETA Simulation of transient gusts on the NREL 5 MW wind turbine using the URANS solver THETA Annika Länger-Möller Annika Länger-Möller • DLR e.V., Lilienthalplatz 7, 38108 Braunschweig, Germany Correspondence: Annika Länger-Möller ([email protected]) Abstract A procedure to propagate longitudinal transient gusts through a flow field by using the resolved-gust approach is implemented in the URANS solver THETA. Both the gust strike of a 1−cos() gust and an extreme operating gust following the IEC 61400-1 standard are investigated on the generic NREL 5 MW wind turbine at rated operating conditions. The impact of both gusts on pressure distributions, rotor thrust, rotor torque, and flow states on the blade are examined and quantified. The flow states on the rotor blade before the gust strike at maximum and minimum gust velocity are compared. An increased blade loading is detectable in the pressure coefficients and integrated blade loads. The friction force coefficients indicate the dynamic separation and re-attachment of the flow during the gust. Moreover, a verification of the method is performed by comparing the rotor torque during the extreme operating gust to results of FAST rotor code. 1 Introduction The origins of applying computational fluid dynamics (CFD) to wind turbine rotors date back to the 1990s when applied EllipSys3D to a wind turbine. solved the Reynolds-averaged Navier–Stokes (RANS) equations and applied the Menter shear stress transport (SST) kω turbulence model to a full-scale wind turbine. In 2002 the National Renewable Energy Laboratory (NREL) performed the Unsteady Aerodynamic Experiment (UAE) , which has long been the reference for several CFD computations. For example, presented a detached eddy simulation (DES) on the NREL UAE phase VI blade to demonstrate the capabilities of predicting flow separation. developed a delayed DES to investigate whether the simulation approach could be improved. Furthermore, the experiment has been widely used for URANS solver validation for example by , , , , , and . developed the generic NREL 5 MW wind turbine. Through its open-access documentation, the NREL 5 MW wind turbine is established as the reference and validation test case for single- and multi-physics test cases. For example, focused on the prediction of aerodynamic features of the wind turbine. Furthermore, they investigated the impact of fences on the flow separation in the inboard region. Full aeroelastic computations were performed by for an isolated rotor and for the rotor, tower, and nacelle. and modelled the aerodynamics with an unsteady RANS (URANS) method and the structure with shell elements. The structure properties represented the material properties of the blade. The resulting aerodynamic characteristics and blade tip deflections were good when compared to the NREL 5 MW documentation and FAST. In the past years, growing computer power has enabled the geometry-resolved simulation of wind turbines including the sites with CFD. Studies have been performed for example by or , who used an URANS solver to perform according studies. Moreover, hybrid large-eddy simulation (LES)–RANS approaches are implemented to analyse the behaviour of wind turbines in a complex terrain as for example presented by , who also considered unsteady atmospheric inflow conditions. The challenges of correctly predicting uncertainty of the fluctuating wind loads is a research field on its own. For example, or investigated wind fields to better understand the shape of wind gusts. argued that a detailed understanding of wind fields is not necessary. rather took into account unknowns of all parts of the wind turbine life cycle, for example changes in the blade shape due to production tolerances, ageing, or the wind field, and summarized them in uncertainty parameters to estimate the effective power outcome and rotor loads. proved that turbulent wind fields do not show a Gaussian distribution as assumed in the International Electrotechnical Commission Standard (IEC). A similar conclusion was drawn by , who investigated whether the 50-year loads as defined in the IEC adequately fulfil their purpose by applying different approaches of probability prediction to the generic NREL 5 MW turbine using the FAST rotor code. The aerodynamic interferences between the unsteady wind conditions and wind turbines are of major importance for the prediction of fatigue loads and the annual power production. Therefore, it is part of the certification computation for each wind turbine. Nevertheless, the detailed investigation of isolated effects of the 50-year extreme operating gust (EOG) on the flow of a wind turbine using high-fidelity methods like CFD is rare even though the blade loads resulting from the extreme load cases are dimensioning load cases. In the case of vertical axis wind turbines, analysed the power loss of a wind turbine subjected to a sinusoidal fluctuation in wind speed. However, compared to the EOG, amplitudes were small. Horizontal axis wind turbines which are hit by an EOG as defined in the IEC 61400-1 were presented by . The wind turbine under consideration was the NREL phase VI rotor with a wind speed of 7 m s−1 using the panel code AeroSIM+. The impact of the gust was then evaluated in terms of rotor thrust, torque, and wake development. Preceding this study, examined the flap moment of wind turbine blades, which were subjected to a gust with extreme raise, using the wind turbine design tool Bladed. Bladed is an aeroelastic software by Garrad Hassan for the industrial design and certification of wind turbines (DNVGL2017). Other examples of aeroelastic simulation tools for wind turbine design are HAWC2 or FAST (Jonkman2013), which all include at least a blade element momentum (BEM) method to represent the aerodynamics, a multibody dynamics formulation to represent the structure, and an algorithm for rotational speed control. All three of them provide a possibility to compute EOG cases fully multidisciplinarily on the basis of linearized aerodynamic and structure models. Even though the literature on gust simulations on wind turbines is not extensive, some research has been conducted in the field of aerospace science. and presented two approaches which are implemented in the URANS solver TAU to apply vertical gusts on airplanes: the velocity-disturbance approach and the resolved-gust approach. The velocity-disturbance approach adds the gust velocity to the surface of the investigated geometry. It enables the analysis of the resulting forces on the geometry surface but prevents the feedback of the structure response on the flow field and the gust shape. The resolved-gust approach overcomes the disadvantages of the one-way interaction in the velocity disturbance approach by propagating the gust through the flow field with the speed of sound. But it ignores that the gust transport velocity usually differs from the speed of sound. The validity of both implementations was demonstrated by the time history of the position of the centre of gravity, pitch angles, and load factors necessary for keeping the flight path of an aircraft constant. In the so-called field approach, added the gust velocity to the grid velocity of the computational grid to all cells with $\begin{array}{}\text{(1)}& x\le u\cdot t,\end{array}$ wherein x is the coordinate in flow direction, u the gust transport velocity, and t the physical time. This approach allows the definition of a gust transport velocity and the analysis of the two-way interaction among gust, structure, and wake. Nevertheless, it requires a severe manipulation of the velocity field regardless of the flow solution that is produced by the wind turbine. The simulation of unsteady inflow conditions of wind turbines in CFD implies several challenges. The simulation of a wind turbine including the tower is, itself, an instationary problem which needs the computation of several rotations to obtain a periodic solution. Superposed by sheared inflow profiles and instationary (stochastic) inflow conditions, periodicity can never be gained because the same flow state never occurs twice. Moreover, a computation in which the rotor motion is adapted to the actual rotor forces using a strong coupling approach as proposed by should be included in the computation. By using strong coupling between the URANS solver FLUENT and a pitch control algorithm for the rotor motion, Sobotta has been able to implement a simulation procedure of turbine start-up. performed the computation of an emergency shutdown of a turbine by using the incompressible URANS solver EllypSys3D and by neglecting the tower throughout the aerodynamic computations. Additionally, Heinz et al. considered the rotor mass and inertia by coupling the URANS solver with the aeroelastic code HAWC2. The validation of the resolved-gust approach in the DLR URANS solver THETA is presented herein. To reduce the complexity of the problem and emphasize the quality of the resolved-gust approach, the NREL 5 MW wind turbine is chosen to operate in shear-free conditions. Moreover, the possible interferences with the structure response and speed controllers are reduced by using infinite rotor mass and inertia. Speed control algorithms are also neglected. As gust, a 1−cos()-shaped gust, which lasts about 7 s, and the EOG following the IEC standard are chosen. The resulting rotor thrust and rotor torque, pressure distributions, friction force coefficients, and the wake-vortex transport are evaluated. The rotor torque during the EOG is validated against FAST. 2 Numerical methods ## 2.1 Flow solver THETA DLR's flow solver THETA is a finite-volume method which solves the incompressible Navier–Stokes (NS) equation on unstructured grids. The grids can contain a mix of tetrahedrons, prisms, pyramids, and hexagons. The transport equations are formulated on dual cells, which are constructed around each point of the primary grid. Therefore, the method is cell centred with respect to the dual grid. The transport equations are solved sequentially and implicitly. The Poisson equation, which links velocity and pressure, is solved by either the Semi-implicit Method for Pressure-Linked Equations (SIMPLE) algorithm for stationary problems or the projection method for unsteady simulations. With the projection method the momentum equations are first solved with an approximated pressure field. The pressure field is then corrected with a Poisson equation to fulfil continuity. Pressure stabilization is used to avoid spurious oscillations caused by the collocated variable arrangement. The technique of overlapping grids (Chimera) is used to couple fixed and moving grid blocks. The method was developed by for structured grids or for unstructured grids for application in incompressible flow problems. It has been implemented in THETA by . The interpolation among the different blocks at interior boundaries is integrated in the system of linear equations on all grid levels of the multi-grid solver, leading to an implicit formulation across the blocks. This procedure was identified to be crucial for achieving fast convergence of the Poisson equation. Implicit time-discretization schemes of first order (implicit Euler) or second order (Crank–Nicolson; backward differentiating formula, BDF) are implemented. The temporal schemes are global time stepping schemes. A variety of schemes from first order upwind up to second order linear or quadratic upwind or a central scheme and a low dissipation, low dispersion scheme are implemented. Throughout this study, the second-order central scheme is used. The THETA code provides a user interface for setting complex initial and boundary conditions using the related C functions. This guarantees a high flexibility on the definition of boundary conditions and a straightforward modelling of very specific test cases. For example, the functions enable the prescription of gusts at the inflow boundary condition, which are then propagated through the flow field. Moreover, all physical models are separated from the basis code. Therefore, new physical models can be implemented without modification of the base code. For turbulence modelling the commonly used Spalart–Allmaras, kϵ, kω, or Menter SST models are available. Since the early URANS computations of wind turbines the Menter SST turbulence model has been used for wind turbine applications. Recently, confirmed this finding during the THETA validation by comparing the results of common one- and two-equation turbulence models to the NREL UAE phase VI experiment. Hence, the Menter SST turbulence model is applied throughout the present study. Moreover, according to the studies in a time step of δt=0.006887052 s, which is equivalent to a rotor advance of $\mathrm{\Psi }=\mathrm{0.5}{}^{\circ }$ per time step, is chosen. As a time stepping scheme, the Eulerian implicit scheme for the temporal discretization is chosen. To ensure convergence in every time step, a residual of less than 10−5 has to be reached. Moreover, the solver has to perform at least 20 iterations per time step in all equations. Due to efficiency reasons, the maximum number of iterations per time step has been limited to 100. Figure 1Computational grid set-up; (a) chord-wise distribution; (b) span-wise distribution in blade tip region; (c) cut through the flow field and boundary conditions. ## 2.2 FAST The comprehensive rotor code FAST (Jonkman2013) is a modular software framework for computer-aided engineering (CAE) of wind turbines. FAST provides a coupling procedure to compute time-dependent multi-physics relevant for wind turbine design. By means of different modules, FAST is able to account for different physical models and turbine components in the computations. The aerodynamics are represented by a BEM method, which is based on profile polars for drag, lift, and momentum. In the present case, most parameters remained on the default of NREL's v8.16.00a for both FAST and the NREL 5 MW wind turbine. Few parameters had to be adjusted. As it is, the variable-speed control has been turned off to ensure a constant rotational speed as in the URANS computation. The blade stiffness has been increased to the order of 1029 Nm2 per blade element to obtain a stiff blade. To ensure a shear-free inflow profile, the constant wind profile type without a dynamic inflow model was selected. In FAST, the EOG started after the computation of 8 s. The analytical inflow profile was included as x velocity in the IECWind file. The other wind directions were equal to 0. 3 Geometry ## 3.1 NREL 5 MW wind turbine The NREL 5 MW turbine is a three-bladed wind turbine with a rotor radius of 63.0 m and a hub height of 90 m. The rotor has a cut-in wind speed of vci=3 m s−1 and a rated wind speed of vrated=11.4 m s−1. Cut-in and rated rotational speeds are ωci=41.4 s−1 and ωrated=72.6 s−1, respectively. The blades are pre-coned and the rotor plane is tilted about β=5.0 and not yawed. Along the non-linearly twisted blade seven different open-access profiles are used. Due to the narrow gap between rotor and nacelle a valid Chimera overlap region could not be achieved in that region. Thus the nacelle of the NREL 5 MW turbine is neglected while the tower is accounted for. This approach leads to an error in the flow prediction behind the rotor hub but is supposed to have no impact on the blade loads. The gust simulation is based on the rated wind speed vrated=11.4 m s−1. Air density of ρ=1.225 kg m−3 and the kinematic viscosity of $\mathit{\nu }=\mathrm{1.82}×{\mathrm{10}}^{-\mathrm{5}}$ m2 s−1 is used. To isolate the gust impact of the rotor loads, a shear-free velocity profile is considered throughout the computation. ## 3.2 Grid characteristics The computational grid consists of three parts. The first part contains the three rotor blades, stubs, and the rotor hub. On the blade surface, a structured grid with 156 × 189 elements in the span-wise and chord-wise directions was generated. The boundary layer mesh of the blades consists of 49 hexagon layers in an OO topology. The height of the wall-next cell is $\mathit{\delta }=\mathrm{3}\phantom{\rule{0.125em}{0ex}}×\phantom{\rule{0.125em}{0ex}}{\mathrm{10}}^{-\mathrm{6}}$ m along the entire blade, ensuring ${y}^{+}\phantom{\rule{0.125em}{0ex}}\le \phantom{\rule{0.125em}{0ex}}\mathrm{1}$. Figure 1a and b give an impression of the chord-wise and span-wise grid resolution, respectively. The second part of the grid has the shape of a disc and contains the entire rotor. The disc measures D=166.7 m in diameter and has a depth of 26.7 m. It is filled with tetrahedrons with an edge length between 0.002 and 0.9 m. The entire disc is used as a Chimera child grid for the overlapping grid technique and contains approximately 11.63 × 106 points. The Chimera parent grid has the dimensions of $\mathrm{504}\phantom{\rule{0.125em}{0ex}}×\phantom{\rule{0.125em}{0ex}}\mathrm{504}\phantom{\rule{0.125em}{0ex}}×\phantom{\rule{0.125em}{0ex}}\mathrm{1512}$ m3 in width, height, and length. It contains a boundary layer grid of the floor, the tower, and a refined grid region to resolve the rotor wake up to 3R downstream. The 54 prism layers, used to resolve the boundary layer of the viscous floor, have a total height of H=5 m with a wall-next cell height of $\mathit{\delta }=\mathrm{3}\phantom{\rule{0.125em}{0ex}}×\phantom{\rule{0.125em}{0ex}}{\mathrm{10}}^{-\mathrm{5}}$ m. This meshing strategy enables the comparison of future gust computations that include analytically defined velocity profiles of neutral atmospheric boundary layer flows. The tower surface grid is meshed structured in a height below 5 m with 54 points in height and 180 points in the radial direction. Above 5 m, a triangulated unstructured grid is generated with the maximum edge length of 0.55 m. As the tower surface is modelled as slip wall, tetrahedrons are built directly on the tower surface. In the Chimera parent grid, the edge length of the cells continuously grows from very small in the rotor tower and wake region to rather large close to the far-field boundaries. The entire Chimera parent grid contains approximately 13.25 × 106 points. In Fig. 1c the entire Chimera set-up is displayed and the boundary conditions are indicated. The upwind and downwind boundaries are defined as inflow and outflow, respectively. At the inflow boundary surface, the turbulence quantities and inflow velocities are prescribed. Later, the gust profile is also introduced at this boundary. The floor is defined as viscous wall. The surfaces on the top and to the left and right of the flow domain are defined as slip wall. 4 Gust modelling ## 4.1 The resolved-gust approach The procedure of applying the gust to the flow field starts by computing the flow field around the wind turbine until the flow field and the global rotor loads have become periodic. For the NREL 5 MW turbine in the given set-up 9 revolutions are required. Then, the inflow velocity on the inflow boundary is modified according to the velocity change described in Sect. 4.2 or 4.3. The computation is continued so that the gust is propagated through the flow field. In the approach by and using TAU to solve the compressible RANS equations, the gust is transported with the speed of sound. In their approach, as well as in the present paper, the computation has been run at least until the gust has entirely passed the geometry in question but can be continued as long as wished by the user. The restrictions to ensure a loss-free transport of the gust velocity on the resolved-gust approach named by or are • a fine grid upstream of the geometry in question • a fine time step. As THETA is an incompressible solver, the speed of sound is infinite. In addition to the strong implicit formulation and the choice of boundary conditions that prevent the flow from escaping sideways, this leads to a spread of the gust velocity through the flow field instantaneously. If the same gust velocity is added to the constant inflow condition in every point in the inflow plane and the boundary conditions are chosen as specified in Sect. 3.2, the transport of the gust velocity will be loss free and instantaneous through the entire domain and on the far-field boundaries. Hence, the restrictions by and concerning the grid resolutions are obsolete, while a fine time step is required to ensure numerical stability. To analyse the resolved-gust approach in the incompressible URANS solver THETA, the inflow velocity profile is shear free and the gust velocity remains independent of the height above ground. Figure 2Inflow velocity with dependence on physical time t. ## 4.2 Cosines gust The 1−cos() gust is modelled analogously to the EASA certification standard (EASA2010) as $\begin{array}{}\text{(2)}& u\left(t\right)={u}_{\mathrm{g}}\left(\mathrm{1}-\mathrm{cos}\left(\frac{\mathrm{8}\mathit{\pi }}{H}\right)\right),\end{array}$ with u(t) and ug as the time-dependent velocity and the gust velocity, respectively, and H as the gust gradient. In the work presented, H is chosen to generate a non-compressed sinusoidal gust $\begin{array}{}\text{(3)}& H=\frac{\mathrm{8}\mathit{\pi }}{t-{T}_{\mathrm{S}}},\end{array}$ wherein t represents the actual physical time and TS is the time at which the gust starts. Inserting Eq. (3) in Eq. (2) the following definition of the gust results: $\begin{array}{}\text{(4)}& u\left(t\right)=\left\{\begin{array}{ll}{u}_{\mathrm{g}}\left(\mathrm{1}-\mathrm{cos}\left(t-{T}_{\mathrm{S}}\right)\right)& \mathrm{\text{if}}\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}{T}_{\mathrm{S}}\le t<{T}_{\mathrm{S}}+{T}_{\mathrm{g}}\\ u& \mathrm{\text{if}}\phantom{\rule{0.25em}{0ex}}\phantom{\rule{0.25em}{0ex}}t\le {T}_{\mathrm{S}}\phantom{\rule{0.25em}{0ex}}\text{or}\phantom{\rule{0.25em}{0ex}}t\ge {T}_{\mathrm{S}}+{T}_{\mathrm{g}}\end{array}\right\,\end{array}$ wherein Tg is the duration time of the gust. The gust velocity ug is defined as +0.25 m s−1 representing a gust and 0.25 m s−1 representing a sudden calm. In both cases, the maximum change in wind speed is 0.5 m s−1 or 4.4 % at rated wind speed of the NREL 5 MW turbine. The turbulence intensity of the 1−cos() gust is 2.5 %, which is on the order of the atmospheric turbulence intensity that found in a field measurement campaign on a horizontal axis wind turbine. The resulting gust profile of the present study is displayed in Fig. 2. ## 4.3 Extreme operating gust The time-dependent velocity change of the EOG is modelled following the IEC 61400-1 standard. $\begin{array}{}\text{(5)}& u\left(z,t\right)=u\left(z\right)-\mathrm{0.37}\cdot {u}_{\mathrm{g}}\mathrm{sin}\left(\frac{\mathrm{3}\mathit{\pi }\cdot t}{{T}_{\mathrm{g}}}\right)\cdot \left(\mathrm{1}-\mathrm{cos}\left(\frac{\mathrm{2}\mathit{\pi }\cdot t}{{T}_{\mathrm{g}}}\right)\right),\end{array}$ wherein Tg=10.5 s is the characteristic time as defined in the , t the physical time simulated, u(z) the velocity profile depending on the height, and ug the gust velocity. The latter is defined as $\begin{array}{}\text{(6)}& {u}_{\mathrm{g}}=\mathrm{3.3}\left(\frac{{\mathit{\sigma }}_{\mathrm{1}}}{\mathrm{1}+\mathrm{0.1}\left(\frac{D}{{\mathit{\lambda }}_{\mathrm{1}}}\right)}\right)\end{array}$ and is 5.74 m s−1 in the given case. In Eq. (6), ${\mathit{\sigma }}_{\mathrm{1}}=\mathrm{0.11}\cdot {u}_{\mathrm{hub}}$ is the standard turbulence deviation, λ1=42 m the turbulence scale parameter, and D the rotor diameter. uhub represents the velocity at hub height. The velocity ue1 is the average over 10 min with a recurrence period of 1 year. It is defined as $\begin{array}{}\text{(7)}& {u}_{\mathrm{e}\mathrm{1}}=\mathrm{1.12}\cdot {u}_{\mathrm{ref}}{\left(\frac{z}{{z}_{\mathrm{hub}}}\right)}^{\mathrm{0.11}},\end{array}$ with the reference velocity vref=50 m s−1. It is defined in the IEC 61400-1 standard for a wind turbine of the wind class A1. In a shear-free flow, the inflow velocity is constant with height and thus u(z) reduces to u and u(z,t) to u(t). By additionally entering Eq. (7) in Eq. (5) one obtains the final gust definition $\begin{array}{}\text{(8)}& u\left(t\right)=u-\mathrm{0.37}\cdot {u}_{\mathrm{g}}\mathrm{sin}\left(\frac{\mathrm{3}\mathit{\pi }\cdot t}{{T}_{\mathrm{g}}}\right)\cdot \left(\mathrm{1}-\mathrm{cos}\left(\frac{\mathrm{2}\mathit{\pi }\cdot t}{{T}_{\mathrm{g}}}\right)\right).\end{array}$ The resulting gust profiles of the EOG in comparison to the moderate 1−cos() gust is visualized in Fig. 2. 5 Results ## 5.1 Constant inflow conditions As described in Sect. 4.1 a periodic flow field with periodic rotor loads is mandatory as starting conditions for computing gusts that act on wind turbines. The resulting time history of rotor thrust Fx and rotor torque Mx for constant inflow conditions over revolutions 6 to 10 are displayed in Figs. 3 and 4 with the red line. In both figures the periodic behaviour of a periodic flow field is visible as well as the typical 3  rev characteristic of a rotor tower configuration of the wind turbine. By averaging rotor thrust Fx and torque Mx about 4 revolutions, one obtains 738.9 kN and 4.15 ×106 Nm, respectively. Compared to the reference of , at rated conditions the values deviate by about approximately 3.77 and 0.98 %, respectively. achieved a rotor thrust of 786 kN and a torque of 4.4 ×106 Nm in their studies with the compressible URANS solver TAU for the stiff-bladed NREL 5 MW turbine. The agreement among the URANS computations performed with THETA, TAU, and the reference documentation of the NREL 5 MW wind turbine is excellent. Thus, the numerical set-up is validated successfully. Figure 3Rotor thrust Fx during the gust. Figure 4Rotor torque Mx during the gust. Between $\mathrm{\Psi }=\mathrm{3060}{}^{\circ }$ and $\mathrm{\Psi }=\mathrm{3240}{}^{\circ }$, in the eighth revolution, high-frequency oscillations occur in the THETA computation. In the specific time step, the Poisson equation for pressure correction has not converged in the maximum number of iterations. Nevertheless, the interference subsides in the following rotor rotations and is sufficiently small. Thus, the reason for this oscillation is of minor importance in the context of this paper. If the wind turbine operates in uniform flow conditions, a 3  rev characteristic is found in both Fx and Mx, which is caused by the tower blockage effect. Moreover, the constant amplitudes around a steady mean value of both Fx and Mx indicate that the flow field has converged. The converged state includes the boundary layer that developed on the viscous floor of the flow domain. Over the length of the entire flow domain, the boundary layer achieved a thickness of approximately 1 m at the end of the flow domain. This is far below the rotor area and does not affect the rotor characteristics or wake development during the computation. Hence, the gust as defined in Eqs. (4) or (8) can be applied in the next step. ## 5.2 Cosine gust The impact of the 1−cos() gust on rotor thrust Fx and rotor torque Mx during the gust is evaluated by comparison to uniform inflow conditions. Fx and Mx are displayed in Figs. 3 and 4, respectively. Therein, the period between 30 and 50 s is displayed while the gust operates between TS=37 s and ${T}_{\mathrm{S}}+{T}_{\mathrm{g}}=\mathrm{44}$ s. Thus, it lasts approximately 1.5 rotor revolutions. As expected, the gust velocity spreads over the entire field immediately and also affects rotor thrust and rotor torque instantaneously. In the case of gust and calm no hysteresis effect is found as rotor thrust and rotor torque recover immediately after the gust. This is visible in both Figs. 3 and 4 as the curve of constant inflow conditions is matched right after 44 s. The symmetric response of rotor thrust and rotor torque to the gust is caused by the modelling assumptions of a stiff blade and a missing speed control algorithm. Table 1Gust-induced peak loads on the rotor during the 1−cos() gust in relation to the constant blade load. During the gust, Fx and Mx follow the modification of the inflow condition. Hence, for a positive gust velocity ug (Eq. 4) rotor loads increase in a 1−cos() shape while they decrease in the same manner for a negative gust velocity. Additionally, the tower blockage effect is superposed on the blade loads and remains detectable in the blade load development. In the case of ${u}_{\mathrm{g}}=+\mathrm{0.25}\phantom{\rule{0.125em}{0ex}}\mathrm{m}/\mathrm{s}$, the tower blockage effect reduces the time that the rotor experiences maximum loads, as is visible at approximately t=41 s. A sharp drop in both rotor thrust and rotor torque is visible. This drop is due to the tower blockage effect and would have appeared at a different instance of the gust if the rotor position at the gust starting time was different. Nevertheless, during the calm with ${u}_{\mathrm{g}}=-\mathrm{0.25}\phantom{\rule{0.125em}{0ex}}\mathrm{m}/\mathrm{s}$ the tower blockage leads to an additional decrease in rotor thrust and rotor torque at t=41 s. The rapid changes in rotor thrust and rotor torque indicate the fast load changes on the blade which increase fatigue loads. Table 1 lists the relative differences in Fx and Mx during the gust, computed by Eq. (9). In Eq. (9) the subscript “max” indicates the extreme rotor loads and the overline the averaged rotor loads under constant inflow conditions of Sect. 5.1. $\begin{array}{ll}& \mathit{\delta }{F}_{x}=\mathrm{100}\cdot \left(\frac{{F}_{max}}{\stackrel{\mathrm{‾}}{{F}_{x}}}-\mathrm{1}\right)\\ \text{(9)}& & \mathit{\delta }{M}_{x}=\mathrm{100}\cdot \left(\frac{{M}_{max}}{\stackrel{\mathrm{‾}}{{M}_{x}}}-\mathrm{1}\right)\end{array}$ Additionally, the relative difference in the averaged blade load is computed by first integrating rotor thrust and rotor torque during the gust excitation and then computing Eq. (9). The result of the gust peak load is listed in Table 1 while the integrated loads are contained by Table 2. In both tables it can be seen that a reduction of the wind speed due to calm or the increase in the wind speed with the same amplitude leads to very similar absolute changes in rotor thrust and rotor torque. By comparing the values of Tables 1 and 2 it is also found that the absolute peak loads are 2.3 times larger than the averaged loads. Thus, the use of maximum loads during a 10 min interval is inevitable for a computation of equivalent fatigue loads while averaging the loads is not appropriate. It is also important to note that the rotor loads return to the values of constant inflow conditions right after the gust ended. This indicates that there are neither reflections nor numerical oscillations, which lower the numerical accuracy, in the flow field. In summary, the behaviour of rotor thrust Fx and rotor torque Mx is as expected. The increased wind speed causes higher thrust and momentum and vice versa while the amplitude is identical for the increase and decrease in wind speed. By considering blade deflections and changes to the rotational speed in future aeroelastic computations, the resulting rotor torque and rotor thrust will change. Figure 5Rotor thrust Fx and torque Mx during the gust. Figure 6Rotor position at minimum (a) and maximum (b) gust velocity; black blade is blade number 1. ## 5.3 Extreme operating gust Figure 5 presents rotor thrust Fx and rotor torque Mx during the EOG excitation in comparison to the constant inflow conditions. Moreover, the rotor torque that is computed by FAST for a stiff blade and constant rotational speed is displayed. The EOG lasts about 0.5 s or 2 rotor revolutions. In comparison to the rotor loading during the 1−cos() gust (Sect. 5.2) the tower blockage effect becomes negligible. Hence, the starting position of the rotor is less important for the computation of maximum loads. This is also seen in the FAST result. By comparing the rotor torque during the gust of THETA to the one of FAST it is found that both values coincide exactly before t=43.5 s and after t=46.5 s. Between these two time stamps, FAST predicts significantly higher loads than THETA. Moreover, the EOG load at the maximum gust velocity is increased in comparison to THETA. The differences result from the flow characteristics of the blade. THETA predicts large areas of flow separation as a response to accelerated velocity after t=43.5 s. The flow reattaches over most of the blade after t=46.5 s when the velocity slowed down sufficiently. It is most likely that the profile polars that the BEM of FAST relies on is not able to reproduce the instationary flow behaviour of the given case. Figure 7Span-wise distribution of (a) rotor thrust and (b) rotor torque for different azimuth positions at constant inflow and during the gust. The maximum velocity umax during the gust is about 15.65 m s−1 and the minimum velocity umin is 9.94 m s−1, which is 137 and 87 % of the values at rated wind speed. The changes in rotor thrust and rotor torque, computed using Eq. (9), are given in Table 3. It is shown that the rotor torque is decreased by 36 % during the calm that precedes or follows the velocity maximum and increased about 100 % during the gust peak. The changes in rotor thrust are smaller even though the amplitudes of load change are significant as well. Figure 8Pressure distribution of the blade at the inboard section, (a, c) undisturbed flow, and (b, d) with tower blockage; (a, b) pressure distribution and (c, d) friction force coefficient. Figure 9Pressure distribution of the blade at the midsection, (a, c) undisturbed flow, and (b, d) with tower blockage; (a, b) pressure distribution and (c, d) friction force coefficient. Figure 10Pressure distribution of the blade at the outboard section, (a, c) undisturbed flow, and (b, d) with tower blockage; (a, b) pressure distribution and (c, d) friction force coefficient. To analyse the flow state on the blade during the gust, two instances have been chosen: after ${t}_{min}=\mathrm{2.5}\phantom{\rule{0.125em}{0ex}}\mathrm{s}+{T}_{\mathrm{S}}=\mathrm{42}$ s (minimum gust velocity) and ${t}_{max}=\mathrm{5.6}\phantom{\rule{0.125em}{0ex}}\mathrm{s}+{T}_{\mathrm{S}}=\mathrm{45}$ s (maximum gust velocity). Figure 6a and b, respectively, display the rotor positions in the instances investigated. In both figures blade number 1 is coloured in black. At tmin, when the gust is at its minimum velocity, blade number 1 is right in front of the tower and additionally experiences the tower blockage effect. Conversely, at tmax, when the gust is at its maximum velocity, blade number 1 is in free-stream conditions while the flow on blade number 3 enters the tower blockage region. The impact of the tower blockage during the constant inflow conditions at tmin and tmax is visible in the radial distribution of rotor thrust and rotor torque (Fig. 7). Table 3Gust-induced peak load during the EOG on the rotor in relation to the constant blade load. In accordance to Fig. 5 the overall rotor loading is reduced when the blade experiences minimum gust velocity. Conversely, a significant increase in rotor loading is observed when the blade experiences the maximum gust velocity. At all times, the rotor thrust is reduced only slightly by the tower blockage effect. Moreover, only the inboard part of the rotor appears to be affected by the tower blockage. Contrariwise, the rotor torque is affected by the tower blockage in the outer part of the rotor. In addition, the tower blockage effect generally has only a small impact on the rotor torque at wind velocities smaller than rated wind speed. With the maximum gust velocity, the blade at $\mathrm{\Psi }=\mathrm{210}{}^{\circ }$ experiences a strong tower blockage effect that reduces the rotor torque up to 29 % (radial section $r/R=\mathrm{65}$ %). The reason is the different separation behaviour at the trailing edge of the blade, which is investigated through pressure coefficient distributions and friction force coefficients at three radial sections: an inboard section at $r/R=\mathrm{10}$ %, a midsection at $r/R=\mathrm{50}$ %, and an outboard section at $r/R=\mathrm{90}$ %. They are displayed in Figs. 8, 9, and 10, respectively. In all three figures the pressure is displayed in the upper half and is normalized with the vector sum of the tip speed and the constant inflow velocity. For a meaningful comparison to constant inflow conditions, it was ensured that the investigated sections result from blades at the same azimuth positions. A noticeable difference in cp at minimum gust velocity is found in all sections (Figs. 8a, 9a, and 10a) when compared to the constant inflow conditions. The decrease in the stagnation point of cp to lower values at tmin is higher than that of the tower blockage effect while otherwise the pressure distributions keep the general shape. Conversely the difference between constant inflow conditions and the maximum gust is significantly higher. The maximum cp increased about 58 % at the blade tip. Moreover, the shape of the pressure distribution changes over the entire blade. This is visible especially in the mid-board and outboard blade sections (Figs. 9a and 10a). In both sections, the pressure increases rapidly in the rear half of the upper blade surface and even reaches positive values in the last 20 % of the profile. This behaviour is a first indication of a separation region and reversed flow around the trailing edge. The friction coefficients on the blade sections in undisturbed flow are displayed in Figs. 8c, 9c, and 10c. In all sections, strong fluctuations, which result from the truncated geometry at $x/c=\mathrm{1}$, are visible at the trailing edge. The friction force coefficient cf in the inboard section (Fig. 8c) shows large differences among all considered time instances. The oscillations around $x/c=\mathrm{50}$ % at constant inflow conditions indicate a small separation region with otherwise attached flow. At minimum gust velocity the overall friction force level is increased around the leading edge and the separation region has shifted upward and is between $x/c=\mathrm{20}$ % and $x/c=\mathrm{40}$ %. At the maximum gust velocity, the friction force is increased and the oscillations between $x/c=\mathrm{20}$ % and $x/c=\mathrm{50}$ % indicate a larger separation region on the upper blade surface. The friction force coefficient indicates that separation in the blade inboard section is present during the entire rotor rotation. It is triggered through the close cylindrical blade root and amplified with higher inflow velocities. Conversely to the inboard section, changes in separation at the midsection appear due to the gust only. In Fig. 9c the friction force level is decreased at minimum gust velocity and the curve is very smooth. At maximum gust velocity, small oscillations appear around $x/c=\mathrm{90}$ %, indicating separation in that region. By increasing the rotor radius, the behaviour is enforced. At the outboard section (Fig. 10c), the local maximum in cf in the last 20 % of the profile almost reaches the level of the leading edge at tmax. The same analysis is performed for the blade that is situated right in front of the tower or at $\mathrm{\Psi }=\mathrm{210}{}^{\circ }$. For the pressure distributions (Figs. 8b, 9b, 10b) the same effects as described for the blades in undisturbed flow are found. The only difference is that due to the tower blockage the overall pressure level is decreased by about 1 %. This enforces the observation from the rotor thrust and rotor torque time histories in Figs. 5 and 7. Namely, the tower blockage effect is small compared to the EOG operating loads. Conversely, the characteristics of friction forces (Figs. 8d, 9d, and 10d) in the inboard and outboard sections differ from those of the blades in undisturbed flow. In the inboard section in Fig. 8d, the oscillations in cf reduce to a minimum while the overall friction force level remains constant. Only at maximum gust velocity, do the oscillations around the trailing edge appear. Thus, the tower seems to suppress separation and it takes some time until the separation state is fully recovered. The friction force coefficient in the midsection behaves similarly to the inboard section with the difference that the entire separation region is larger (Fig. 9d). In the outboard section in Fig. 10d, the friction force is increased significantly due to the maximum gust velocity. By comparing the friction coefficients of the blades in undisturbed flow and in the tower blockage region, it is found that the separation on the suction side of the blade covers a larger area in the rear part of the blade. The separation induces a larger profile thickness, which results in a different induced angle of attack and thus reduced momentum. Finally, the transport of the tip vortices is investigated. It has to be understood as an indication of whether the velocity transport in the field works as expected but the tip vortex transport has only small meaning for the transient rotor loading during the gust. In addition, the velocity in the field changes gradually because of the infinite speed of sound in the entire flow domain. Thus, the vortices that are shed from the blade at a given wind speed are not transported with their specific gust transport velocity. Contrariwise, all existing vortices experience identical changes in the gust transport velocity. Thus, the geometrical distance between existing vortices remains constant. Figure 11Tip vortex transportation in the vertical plane through the rotor centre. In Fig. 11 three instances of the flow field are compared. In all three instances, the rotor is at the $\mathrm{\Psi }=\mathrm{0}{}^{\circ }$ position. The vortices are made visible with the λ2 criterion . The black lines are extracted during constant inflow, the red lines are extracted shortly before tmax, and the green ones are extracted at the end of the gust. By comparing the vortex transport at the beginning of the gust (black curve) and at the end of the gust (green curve), a compression and stretching of the distance between the vortices is found. The vortex transport with dependence on the vortex age is further discussed through Figs. 12 and 13. They compare the transport of the vortex with dependence on the vortex age parallel and orthogonal to the flow direction, respectively. As long as the inflow velocity is constant, the vortex transport is approximately $\mathrm{0.23}\cdot r/R$ parallel to the flow direction and $\mathrm{0.16}\cdot r/R$ orthogonal to the flow direction in a 1∕3 revolution interval. At maximum gust velocity, the constant distance between the vortices is lost. The vortices older than 1 revolution experienced constant inflow conditions. Thus the inter-vortex distance is constant. The vortices between 0.25 and 1 revolutions were shed during the reduced wind speed. Hence the distance parallel to the flow direction is reduced. As the downstream wake transport decreases, the orthogonal transport increases. The vortices younger than 0.25 revolution experienced the high wind speed. Thus, the distance between the vortices parallel to the flow direction increases and the orthogonal transport decreases. At the end of the gust, reversed behaviour of the vortex transport is observed. The vortices between 0.75 and 1 revolution were generated while the wind speed slowed down. Thus, the distance between the vortices parallel to the wind direction is increased while the orthogonal transport is decreased. Vortices up to an age of 0.75 revolution have a decreased distance parallel to the wind direction but an increased distance orthogonal to the wind direction. Figure 12Tip vortex transportation in the main flow direction with dependence on the wake age at three time instances during the gust. Figure 13Tip vortex transportation vertical to main flow direction with dependence on the wake age at three time instances during the gust. The aerodynamic characteristics, rotor thrust, and rotor torque of course depend on the assumption of stiff rotor blades and constant rotational speed. If the rotor had finite mass and inertia or a speed control algorithm had been applied, the rotor loading during the gust would have been reduced significantly. Moreover, the symmetry of the rotor loading decreases as soon as the structure dynamics are taken into account. 6 Conclusions The study presented the validation of the resolved-gust approach that was implemented in the URANS solver THETA. As a test case, the generic 5 MW wind turbine was computed, operating under a 1−cos() gust and an extreme operating gust as defined in the standard. The gust has been introduced with the resolved-gust approach by introducing the changing velocity at the inflow boundary conditions. The gust velocity was then transported loss free through the field with infinite speed of sound. The assumptions made in the paper can be summarized as 1. the wind speed is constant in height and time (except gust velocity); 2. the gust velocity is constant in height; 3. the gust transport velocity is equal to speed of sound which is infinite; 4. the boundary conditions of the flow domain are chosen to prevent the flow from escaping sideways. The disadvantage of the resolved-gust approach clearly is the requirement of fine time steps and fine grids upstream of the geometry in question, which increases the computational costs. The infinite speed of sound and its direct relation to the gust transport velocity has to be regarded in its ambivalent effects. This leads to an inaccurate reproduction of the wind turbine wake transport on the one hand, while on the other hand, the response of the rotor loading to the gust velocity is immediately obtained. Clear advantages for the resolved-gust approach are numerical stability and no artificial oscillations. Finally, the resolved-gust approach can be applied to any completed wind turbine URANS computation to obtain more insight of the wind turbine characteristics. The results represented the effects that are expected during the instationary inflow condition in combination with the given boundary conditions very well. Rotor thrust and rotor torque follow the gust shape very closely. An analysis of the time history of rotor thrust and rotor torque during the gust show an increased rotor loading of about 100 % compared to constant inflow. Pressure distributions and friction force coefficients reveal that the flow on the rotor blades at maximum gust velocity is separated and thus highly instationary. Moreover, the effect of accelerating wind speeds was found in the rotor wake as the distance between the vortices is stretched and compressed according to the changes of the wind speed. The comparison of the results with the aeroelastic software FAST showed a very good agreement of rotor thrust and rotor torque during the EOG. Thus, it is a valid and accurate method to predict wind turbine loads during an EOG. Nevertheless, a complete validation is not possible at this state as a gust experiment for a wind turbine is not available. The first mandatory step for further research on the gust simulation with URANS is to perform a grid-independence and time-step study with the resolved-gust approach. Based on these results, a gust transport velocity with other than infinite speed of sound have to be achieved. This may be realized by adjustments of the resolved-gust approach, by implementing the field approach of, for example, , or by implementing the velocity-disturbance approach of . A third possibility would be to introduce the fluctuating gust velocities obtained from LES computations, which themselves fulfil the continuity conditions. Only then are the procedures of gust computation for wind turbines in THETA prepared to be extended to account for atmospheric boundary layer flows or for aeroelastic analysis. The then ready-to-use method is supposed to supply a tool for gaining more detailed knowledge about the wind turbine behaviour during extreme gust events, in a first step. In a second step, this knowledge can be used to adjust engineering models which are used during the design process. It has to be clearly understood that the resolved-gust approach in an URANS computation cannot and will not replace engineering models at this stage. Consequently, the potential of weight reduction (and thus cost reduction) and increased reliability in wind turbine designs are the (very) long-term objectives. Data availability Data availability. NREL 5 MW data are available from NREL reports; no other data are available. Competing interests Competing interests. The author declares that she has no conflict of interest. Special issue statement Special issue statement. This article is part of the special issue “Wind Energy Science Conference 2017”. It is a result of the Wind Energy Science Conference 2017, Lyngby, Copenhagen, Denmark, 26–29 June 2017. Acknowledgements Acknowledgements. The presented work was funded by the Federal Ministry of Economic Affairs and Energy of the Federal Republic of Germany under grant number 0325719. The article processing charges for this open-access publication were covered by a Research Centre of the Helmholtz Association. Edited by: Jens Nørkær Sørensen Reviewed by: Niels N. Sørensen and two anonymous referees References Bazilevs, Y., Hsu, M.-C., Kiendl, J., Wüchner, R., and Bletzinger, K.-U.: 3D simulation of wind turbine rotors at full scale. Part II: Fluid-structure interaction modeling with composite blades, Int. J. Numer. Meth. Fl., 65, 236–253, https://doi.org/10.1002/fld.2454, 2011. a, b Bierbooms, W.: Investigation of Spatial Gusts with Extreme Rise Time on the Extreme Loads of Pitch-regulated Wind Turbines, Wind Energ., 8, 17–34, https://doi.org/10.1002/we.139, 2005. a Bierbooms, W. and Drag, J.: Verification of the Mean Shape of Extreme Gusts, Wind Energ., 2, 137–150, 1999. a Castellani, F., Astolfi, D., Mana, M., Piccioni, E., Bechetti, M., and Terzi, L.: Investigation of terrain and wake effects on the performance of wind farms in complex terrain using numerical and experimental data, Wind Energ., 20, 1277–1289, https://doi.org/10.1002/we.2094, 2017. a Chow, R. and van Dam, C.: Verification of computational simulations of the NREL 5 MW rotor with focus on inboard flow separation, Wind Energ., 15, 967–981, https://doi.org/10.1002/we.529, 2012. a Duque, E., Burklund, M., and Johnson, W.: Navier–Stokes and comprehensive analysis performance prediction of the NREL phase VI experiment, J. Sol. Energy Eng., 125, 457–467, https://doi.org/10.1115/1.1624088, 2003. a EASA: Certification Sepcifications for Large Aeroplanes CS 25, vol. Subpart C – Structure, European Aviation Sagety Agency, 2010. a Graf, P., Damiani, R., Dykes, K., and Jonkman, J.: Advances in the Assessment of Wind Turbine Operating Extreme Loads via More Efficient Calculation Approaches, in: 35th Wind Energy Symposium, AIAA SciTech Forum, Grapevine, Texas, 9–13 January 2017, AIAA 2017-0680, 1–19, https://doi.org/10.2514/6.2017-0680, 2017. a Hand, L., Simms, D., Fingersh, M., Jager, D., Cotrell, J., Schreck, S., and Larwood, S.: Unsteady aerodynamics experiment phase VI: wind tunnel test configurations and available data campaign, Tech. Rep. NREL/TP-500-29955, NREL, available at: https://www.nrel.gov/docs/fy02osti/29955.pdf (last access: November 2017), 2001. a Heinz, J., Soerensen, N., and Zahle, F.: Fluid-structure interaction computations for geomgeometric resolved rotor simulations using CFD, Wind Energ., 19, 2205–2221, https://doi.org/10.1002/we.1976, 2016. a, b Hsu, M.-C. and Bazilevs, Y.: Fluid-structure interaction modeling of wind turbines: simulating the full machine, Comput. Mech., 50, 821–833, https://doi.org/10.1007/s00466-012-0772-0, 2012. a, b IEC 61400-1: Wind turbines – Part 1: Design requirements, 3rd edn., DIN Deutsches Institut für Normen e.V., VDE Verband der Elektrotechnik, Elektronik, Informationstechnik e.V., Frankfurt, Germnay, 2005. a, b, c, d, e Imiela, M., Wienke, F., Rautmann, C., Willberg, C., Hilmer, P., and Krumme, A.: Towards Multidisciplinary Wind Turbine Design using High-Fidelity Methods, in: 33rd AIAA/ASME Wind Energy Symposium, Kissimmee, Florida, 5–9 January 2015, AIAA 2015-1462, https://doi.org/10.2514/6.2015-1462, 2015. a Jeong, J. and Hussain, F.: On the identification of a vortex, J. Fluid Mech., 285, 69–94, https://doi.org/10.1017/S0022112095000462, 1995. a Johansen, J., Sørensen, N. N., Michelsen, J. A., and Schreck, S.: Detached-Eddy Simulation of flow around the NREL phase-VI blade, Wind Energ., 5, 185–197, https://doi.org/10.1002/we.63, 2002. a Jonkman, J.: The new modularization framework for the FAST wind turbine CAE tool, in: 51st AIAA Aerospace Sciences Meeting including the New Horizons Forum and Aerospace Exposition, Grapevine, Texas, 7–10 January 2013, AIAA 2013-0202, https://doi.org/10.2514/6.2013-202, 2013. a, b Jonkman, J., Butterfield, S., Musial, W., and Scott, G.: Definition of a 5-MW reference wind turbine for offshore system development, Tech. Rep. NREL/TP-500-38060, NREL, available at: http://www.nrel.gov/docs/fy09osti/38060.pdf (last access: November 2017), 2009. a, b, c, d Kelleners, P. and Heinrich, R.: Simulation of Interaction of Aircraft with Gust and Resolved LES-Simulated Atmospheric Turbulence, in: Advances in Simulation of Wing and Nacelle Stall, edited by: Radespiel, R., Niehuis, R., Kroll, N., and Behrends, K., Springer Verlag, Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 131, 203–222, https://doi.org/10.1007/978-3-319-21127-5_12, 2015. a, b, c, d, e Kessler, R. and Löwe, J.: Overlapping Grids in the DLR THETA Code, in: New Results in Numerical and Experimental Fluid Mechanics IX, edited by: Dillmann, A., Heller, G., Krämer, E., Kreplin, H. P., Nitsche, W., and Rist, U., Springer, Cham, Notes on Numerical Fluid Mechanics and Multidisciplinary Design, 124, 425–433, Springer International Publishing, Cham, https://doi.org/10.1007/978-3-319-03158-3_43, 2014. a Länger-Möller, A.: Investigation of the NREL Phase VI experiment with the incompressible CFD solver THETA, Wind Energ., 20, 1529–1549, https://doi.org/10.1002/we.2107, 2017. a, b, c, d Larsen, T. and Hansen, A.: How 2 HAWC2, the user's manual, Tech. Rep. R-1597 (ver.4-6), RISO National Laboratory, Technical university of Denmark, Roskilde, Denmark, 2015. a Le Pape, A. and Lecanu, J.: 3D Navier-Stokes computations of a stall-regulated wind turbine, Wind Energ., 7, 309–324, https://doi.org/10.1002/we.129, 2004. a Löwe, J., Probst, A., Knopp, T., and Kessler, R.: A Low-Dissipation Low-Dispersion Second-Order Scheme for Unstructured Finite-Volume Flow Solvers, AIAA Journal, 54, 2961–2971 https://doi.org/10.2514/1.J054956, 2015. a, b Lynch, C. and Smith, M.: Unstructured overset incompressible computational fluid dynamics for unsteady wind turbine simulations, Wind Energ., 16, 1033–1048, https://doi.org/10.1002/we.1532, 2013. a Matthäus, D., Bortolotti, P., Loganathan, J., and Bottasso, C. L.: A Study on the Propagation of Aero and Wind Uncertainties and their Effect on the Dynamic Loads of a Wind Turbine, AIAA SciTech Forum, 35th Wind Energy Symposium, Grapevine, Texas, 9–13 January 2017, AIAA 2017-1849, https://doi.org/10.2514/6.2017-1849, 2017. a, b Mücke, T., Kleinhans, D., and Peinke, J.: Atmospheric turbulence and its influence on the alternating loads on wind turbines, Wind Energ., 14, 301–316, https://doi.org/10.1002/we.422, 2011. a Murali, A. and Rajagopalan, R.: Numerical simulation of multiple interacting wind turbines on a complex terrain, J. Wind Eng. Ind. Aerod., 162, 57–72, https://doi.org/10.1016/j.jweia.2017.01.005, 2017. a Oe, H., Tanabe, Y., Sugiura, M., Aoyama, T., Matsuo, Y., Sugawara, H., and Yamamoto, M.: Application of rFlow3D code to performance prediction and the wake structure investigation of wind turbines, AHS 70th Annual Forum, Montréal, Québec, Canada, 20–22 May 2014, 9 pp., 2014. a Pan, H. and Damodaran, M.: Parallel computation of viscous incompressible flows using Godunov-projection method on overlapping grids, Int. J. Numer. Meth. Fl., 39, 441–463, https://doi.org/10.1002/fld.339, 2002. a Parameswaran, V. and Baeder, J.: Indical Aerodynamics in Compressible Flow Direct Computational Fluid Dynamic Calculation, J. Aircraft, 34, 131–133, https://doi.org/10.2514/2.2146, 1997. a, b Reimer, L., Ritter, M., Heinrich, R., and Krüger, W.: CFD-based Gust Load Analysis for a Free-flying Flexible Passenger Aircraft in Comparison to a DLM-based Approach, in: AIAA AVIATION 2015, Dallas, TX, USA, 22–26 Juni 2015, AIAA 2015-2455, 1–17, https://doi.org/10.2514/6.2015-2455, 2015. a, b, c, d, e, f Schaffarczyk, P., Schwab, D., and Breuer, M.: Experimental detection of laminar-turbulent transition on a rotating wind turbine blade in the free atmosphere, Wind Energ., 20, 211–220, https://doi.org/10.1002/we.2001, 2017. a Scheurich, F. and Brown, R.: Modelling the aerodynamics of vertical-axis wind turbines in unsteady wind conditions, Wind Energ., 16, 91–107, https://doi.org/10.1002/we.532, 2013. a Schulz, C., Klein, L., Weihing, P., and Lutz, T.: Investigations into the interaction of a wind turbine with atmospheric turbulence in complex terrain, J. Phys. Conf. Ser., 753, 032016, https://doi.org/10.1088/1742-6596/753/3/032016, 2016. a Schwamborn, D., Gerhold, T., and Heinrich, R.: The DLR TAU-code: recent applications in research and industry, in: ECCOMAS CFD 2006 conference: Proceedings of the European Conference on Computational Fluid Dynamics, Egmond aan Zee, Netherlands, 4–8 September 2006. a, b Sezer-Uzol, N. and Uzol, O.: Effect of steady and transient wind shear on the wake structure and performance of a horizontal axis wind turbine rotor, Wind Energ., 16, 1–17, https://doi.org/10.1002/we.514, 2013. a Sobotta, D.: The Aerodynamics and Performance of Small Scale Wind Turbine Starting, PhD thesis, University of Sheffield, available at: http://etheses.whiterose.ac.uk/11789/ (last access: November 2017), 2015. a Soerensen, N. and Hansen, M.: Rotor performance predictions using a Navier–Stokes method, in: 1998 ASME Wind Energy Symposium, Reno, NV, USA, Aerospace Sciences Meetings, American Institute of Aeronautics and Astronautics, AIAA-98-0025, https://doi.org/10.2514/6.1998-25, 1998. a, b, c Soerensen, N. and Schreck, S.: Transitional DDES computations of the NREL Phase-VI rotor in axial flow conditions, J. Phys. Conf. Ser., 555, 012096, https://doi.org/10.1088/1742-6596/555/1/012096, 2012. a Suomi, I., Vihma, T., Gryning, S. E., and Fortelius, C.: Wind-gust parametrizations at heights relevant for wind energy: a study based on mast observations, Q. J. Roy. Meteor. Soc., 139, 1298–1310, https://doi.org/10.1002/qj.2039, 2013. a Yelmule, M. and Anjuri, E.: CFD predictions of NREL phase VI rotor experiments in NASA/AMES wind tunnel, International Journal of Renewable Energy Research, 3, 1–9, http://www.ijrer.org/ijrer/index.php/ijrer/article/viewFile/570, 2013.  a Zhang, X., Ni, S., and He, G.: A pressure-correction method and its applications on an unstructured Chimera grid, Comput. Fluids, 37, 993–1010, https://doi.org/10.1016/j.compfluid.2007.07.019, 2008. a Special issue
# Relation between Torque and Angular Acceleration Suppose that the angular speed of a body changes from ω to (ω+Δω) in a very small time interval Δt. If the angular displacement of the body during this time interval is Δθ = (ωΔt) then From  Work Energy Theorem, Work done = Change in K.E τ Δθ = ΔK.E. (of rotation) ; where τ = average torque during this time interval. $\displaystyle \tau \Delta\theta = \frac{1}{2}I [(\omega + \Delta \omega)^2 – \omega^2]$ $\displaystyle \tau \Delta\theta = \frac{1}{2}I(2\omega \Delta \omega)$ ⇒ τ ω Δt = I ω Δω  (as Δθ = ωΔt) ⇒ $\large \tau = I \frac{\Delta \omega}{\Delta t}$ ⇒ $\large \tau = I \alpha$ Illustration : A uniform disc of radius R and mass M is free to rotate about a fixed horizontal axis perpendicular to its plane and passing through its centre. A string is wrapped over its rim and a block of mass m is attached to the free end of the string. The block is released from rest. If string does not slip on the rim then find the acceleration of the block. Neglect the mass of the string.
# A particle starts from rest at 𝑡 = 0 and moves in a straight line with an acceleration as shown below. The velocity of the particle at 𝑡 = 3𝑠 is Question : A particle starts from rest at 𝑡 = 0 and moves in a straight line with an acceleration as shown below. The velocity of the particle at 𝑡 = 3𝑠 is (A) $2ms^{-1}$ (B) $4ms^{-1}$ (C) $6ms^{-1}$ (D) $8ms^{-1}$ Answer : option (B) Solution : Tags: No tags
## 3.2 Classical Seasonal Decomposition ### 3.2.1 How to do? One of the classical textbook methods for decomposing the time series into unobservable components is “Classical Seasonal Decomposition” . It assumes either a pure additive or pure multiplicative model, is done using centred moving averages and is focused on splitting the data into components, not on forecasting. The idea of the method can be summarised in the following steps: 1. Decide which of the models to use based on the type of seasonality in the data: additive (3.1) or multiplicative (3.2) 2. Smooth the data using a centred moving average (CMA) of order equal to the periodicity of the data $$m$$. If $$m$$ is an odd number then the formula is: $$$d_t = \frac{1}{m}\sum_{i=-(m-1)/2}^{(m-1)/2} y_{t+i}, \tag{3.4}$$$ which means that, for example, the value on Thursday is the average of values from Monday to Sunday. If $$m$$ is an even number then a different weighting scheme is typically used, involving the inclusion of additional an value: $$$d_t = \frac{1}{m}\left(\frac{1}{2}\left(y_{t+(m-1)/2}+y_{t-(m-1)/2}\right) + \sum_{i=-(m-2)/2}^{(m-2)/2} y_{t+i}\right), \tag{3.5}$$$ which means that we use half of the December of the previous year and half of the December of the current year to calculate the centred moving average in June. The values $$d_t$$ are placed in the middle of the window going through the series (e.g. on Thursday, the average will contain values from Monday to Sunday). The resulting series is deseasonalised. When we average, e.g. sales in a year, we automatically remove the potential seasonality, which can be observed each month individually. A drawback of using CMA is that we inevitably lose $$\frac{m}{2}$$ observations at the beginning and the end of the series. In R, the ma() function from the forecast package implements CMA. 1. De-trend the data: • For the additive decomposition this is done using: $${y^\prime}_t = y_t -d_t$$; • For the multiplicative decomposition, it is: $${y^\prime}_t = \frac{y_t}{d_t}$$; 1. If the data is seasonal, the average value for each period is calculated based on the de-trended series. e.g. we produce average seasonal indices for each January, February, etc. This will give us the set of seasonal indices $$s_t$$; 2. Calculate the residuals based on what you assume in the model: • additive seasonality: $$e_t = y_t -d_t -s_t$$; • multiplicative seasonality: $$e_t = \frac{y_t}{d_t s_t}$$; • no seasonality: $$e_t = {y^\prime}_t$$. Remark. The functions in R typically allow selecting between additive and multiplicative seasonality. There is no option for “none”, and so even if the data is not seasonal, you will nonetheless get values for $$s_t$$ in the output. Also, notice that the classical decomposition assumes that there is a deseasonalised series $$d_t$$ but does not make any further split of this variable into level $$l_t$$ and trend $$b_t$$. ### 3.2.2 A couple of examples An example of the classical decomposition in R is the decompose() function from stats package. Here is an example with pure multiplicative model and AirPassengers data (Figure 3.5). ourDecomposition <- decompose(AirPassengers, type="multiplicative") plot(ourDecomposition) We can see from Figure 3.5 that the function has smoothed the original series and produced the seasonal indices. Note that the trend component has gaps at the beginning and the end. This is because the method relies on CMA (see above). Note also that the error term still contains some seasonal elements, which is a downside of such a simple decomposition procedure. However, the lack of precision in this method is compensated by the simplicity and speed of calculation. Note again that the trend component in decompose() function is in fact $$d_t = l_{t}+b_{t}$$. Figure 3.6 shows an example of decomposition of the non-seasonal data (we assume pure additive model in this example). y <- ts(c(1:100)+rnorm(100,0,10),frequency=12) plot(ourDecomposition) As we can see from Figure 3.6, the original data is not seasonal, but the decomposition assumes that it is and proceeds with the default approach returning a seasonal component. You get what you ask for. ### 3.2.3 Other decomposition techniques There are other techniques that decompose series into error, trend and seasonal components but make different assumptions about each component. The general procedure, however, always remains the same: 1. smooth the original series, 2. extract the seasonal components, 3. smooth them out. The methods differ in the smoother they use (e.g., LOESS uses a bisquare function instead of CMA), and in some cases, multiple rounds of smoothing are performed to make sure that the components are split correctly. There are many functions in R that implement seasonal decomposition. Here is a small selection: • decomp() from the tsutils package does classical decomposition and fills in the tail and head of the smoothed trend with forecasts from exponential smoothing; • stl() from the stats package uses a different approach – seasonal decomposition via LOESS. It is an iterative algorithm that smoothes the states and allows them to evolve over time. So, for example, the seasonal component in STL can change; • mstl() from the forecast package does the STL for data with several seasonalities; • msdecompose() from the smooth package does a classical decomposition for multiple seasonal series. ### 3.2.4 “Why bother?” “Why to decompose?” you may wonder at this point. Understanding the idea behind decompositions and how to perform them helps understand ETS, which relies on it. From a practical point of view, it can be helpful if you want to see if there is a trend in the data and whether the residuals contain outliers or not. It will not show you if the data is seasonal as the seasonality is assumed in the decomposition (I stress this because many students think otherwise). Additionally, when seasonality cannot be added to a particular model under consideration, decomposing the series, predicting the trend and then reseasonalising can be a viable solution. Finally, the values from the decomposition can be used as starting points for the estimation of components in ETS or other dynamic models relying on the error-trend-seasonality. ### References • Warren M. Persons, 1919. General Considerations and Assumptions. The Review of Economics and Statistics. 1, 5–107. https://doi.org/10.2307/1928754
Rank of a partition The rank of a partition, shown as its Young diagram In mathematics, particularly in the fields of number theory and combinatorics, the rank of a partition of a positive integer is a certain integer associated with the partition. In fact at least two different definitions of rank appear in the literature. The first definition, with which most of this article is concerned, is that the rank of a partition is the number obtained by subtracting the number of parts in the partition from the largest part in the partition. The concept was introduced by Freeman Dyson in a paper published in the journal Eureka.[1] It was presented in the context of a study of certain congruence properties of the partition function discovered by the Indian mathematical genius Srinivasa Ramanujan. A different concept, sharing the same name, is used in combinatorics, where the rank is taken to be the size of the Durfee square of the partition. Definition By a partition of a positive integer n we mean a finite multiset λ = { λk, λk − 1, . . . , λ1 } of positive integers satisfying the following two conditions: • λk ≥ . . . ≥ λ2 ≥ λ1 > 0. • λk + . . . + λ2 + λ1 = n. If λk, . . . , λ2, λ1 are distinct, that is, if • λk > . . . > λ2 > λ1 > 0 then the partition λ is called a strict partition of n. The integers λk, λk − 1, ..., λ1 are the parts of the partition. The number of parts in the partition λ is k and the largest part in the partition is λk. The rank of the partition λ (whether ordinary or strict) is defined as λkk.[1] The ranks of the partitions of n take the following values and no others:[1] n − 1, n −3, n −4, . . . , 2, 1, 0, −1, −2, . . . , −(n − 4), −(n − 3), −(n − 1). The following table gives the ranks of the various partitions of the number 5. Ranks of the partitions of the integer 5 Partition (λ) Largest part (λk) Number of parts (k) Rank of the partition (λkk ) { 5 } 5 1 4 { 4, 1 } 4 2 2 { 3, 2 } 3 2 1 { 3, 1, 1 } 3 3 0 { 2, 2, 1 } 2 3 −1 { 2, 1, 1, 1 } 2 4 −2 { 1, 1, 1, 1, 1 } 1 5 −4 Notations The following notations are used to specify how many partitions have a given rank. Let n, q be a positive integers and m be any integer. • The total number of partitions of n is denoted by p(n). • The number of partitions of n with rank m is denoted by N(m, n). • The number of partitions of n with rank congruent to m modulo q is denoted by N(m, q, n). • The number of strict partitions of n is denoted by Q(n). • The number of strict partitions of n with rank m is denoted by R(m, n). • The number of strict partitions of n with rank congruent to m modulo q is denoted by T(m, q, n). For example, p(5) = 7 , N(2, 5) = 1 , N(3, 5) = 0 , N(2, 2, 5) = 5 . Q(5) = 3 , R(2, 5) = 1 , R(3, 5) = 0 , T(2, 2, 5) = 2. Some basic results Let n, q be a positive integers and m be any integer.[1] • ${\displaystyle N(m,n)=N(-m,n)}$ • ${\displaystyle N(m,q,n)=N(q-m,q,n)}$ • ${\displaystyle N(m,q,n)=\sum _{r=-\infty }^{\infty }N(m+rq,n)}$ Ramanujan's congruences and Dyson's conjecture Srinivasa Ramanujan in a paper published in 1919 proved the following congruences involving the partition function p(n):[2] • p(5 n + 4) ≡ 0 (mod 5) • p(7n + 5) ≡ 0 (mod 7) • p(11n + 6) ≡ 0 (mod 11) In commenting on this result, Dyson noted that " . . . although we can prove that the partitions of 5n + 4 can be divided into five equally numerous subclasses, it is unsatisfactory to receive from the proofs no concrete idea of how the division is to be made. We require a proof which will not appeal to generating functions, . . . ".[1] Dyson introduced the idea of rank of a partition to accomplish the task he set for himself. Using this new idea, he made the following conjectures: • N(0, 5, 5n + 4) = N(1, 5, 5n + 4) = N(2, 5, 5n + 4) = N(3, 5, 5n + 4) = N(4, 5, 5n + 4) • N(0, 7, 7n + 5) = N(1, 7, 7n + 5) = N(2, 7, 7n + 5) = . . . = N(6, 7, 7n + 5) These conjectures were proved by Atkin and Swinnerton-Dyer in 1954.[3] The following tables show how the partitions of the integers 4 (5 × n + 4 with n = 0) and 9 (5 × n + 4 with n = 1 ) get divided into five equally numerous subclasses. Partitions of the integer 4 Partitions with rank ≡ 0 (mod 5) Partitions with rank ≡ 1 (mod 5) Partitions with rank ≡ 2 (mod 5) Partitions with rank ≡ 3 (mod 5) Partitions with rank ≡ 4 (mod 5) { 2, 2 } { 3, 1 } { 1, 1, 1, 1 } { 4 } { 2, 1, 1 } Partitions of the integer 9 Partitions with rank ≡ 0 (mod 5) Partitions with rank ≡ 1 (mod 5) Partitions with rank ≡ 2 (mod 5) Partitions with rank ≡ 3 (mod 5) Partitions with rank ≡ 4 (mod 5) { 7, 2 } { 8, 1 } { 6, 1, 1, 1 } { 9 } { 7, 1, 1 } { 5, 1, 1, 1, 1 } { 5, 2, 1, 1 } { 5, 3, 1} { 6, 2, 1 } { 6, 3 } { 4, 3, 1, 1 } { 4, 4, 1 } { 5, 2, 2 } { 5, 4 } { 4, 2, 1, 1, 1 } { 4, 2, 2, 1 } { 4, 3, 2 } { 3, 2, 1, 1, 1, 1 } { 3, 3, 1, 1, 1 } { 3, 3, 2, 1 } { 3, 3, 3 } { 3, 1, 1, 1, 1, 1, 1 } { 2, 2, 2, 2, 1 } { 4, 1, 1, 1, 1, 1 } { 3, 2, 2, 2 } { 2, 2, 1, 1, 1, 1, 1 } { 2, 2, 2, 1, 1, 1 } { 1, 1, 1, 1, 1, 1, 1, 1, 1 } { 3, 2, 2, 1, 1} { 2, 1, 1, 1, 1, 1, 1, 1 } Generating functions • The generating function of p(n) was discovered by Euler and is well known.[4] ${\displaystyle \sum _{n=0}^{\infty }p(n)x^{n}=\prod _{k=1}^{\infty }{\frac {1}{(1-x^{k})}}}$ • The generating function for N(mn) is given below:[5] ${\displaystyle \sum _{m=-\infty }^{\infty }\sum _{n=0}^{\infty }N(m,n)z^{m}q^{n}=1+\sum _{n=1}^{\infty }{\frac {q^{n^{2}}}{\prod _{k=1}^{n}(1-zq^{k})(1-z^{-1}q^{k})}}}$ • The generating function for Q ( n ) is given below:[6] ${\displaystyle \sum _{n=0}^{\infty }Q(n)x^{n}=\prod _{k=0}^{\infty }{\frac {1}{(1-x^{2k-1})}}}$ • The generating function for Q ( m , n ) is given below:[6] ${\displaystyle \sum _{m,n=0}^{\infty }Q(m,n)z^{m}q^{n}=1+\sum _{s=1}^{\infty }{\frac {q^{s(s+1)/2}}{(1-zq)(1-zq^{2})\cdots (1-zq^{s})}}}$ Alternate definition In combinatorics, the phrase rank of a partition is sometimes used to describe a different concept: the rank of a partition λ is the largest integer i such that λ has at least i parts each of which is no smaller than i.[7] Equivalently, this is the length of the main diagonal in the Young diagram or Ferrers diagram for λ, or the side-length of the Durfee square of λ. The table of ranks of partitions of 5 is given below. Ranks of the partitions of the integer 5 Partition Rank { 5 } 1 { 4, 1 } 1 { 3, 2 } 2 { 3, 1, 1 } 1 { 2, 2, 1 } 2 { 2, 1, 1, 1 } 1 { 1, 1, 1, 1, 1 } 1 • Asymptotic formulas for the rank partition function:[8] • Congruences for rank function:[9] • Generalisation of rank to BG-rank:[10]
BREAKING NEWS Newsvendor model ## Summary The newsvendor (or newsboy or single-period[1] or salvageable) model is a mathematical model in operations management and applied economics used to determine optimal inventory levels. It is (typically) characterized by fixed prices and uncertain demand for a perishable product. If the inventory level is ${\displaystyle q}$, each unit of demand above ${\displaystyle q}$ is lost in potential sales. This model is also known as the newsvendor problem or newsboy problem by analogy with the situation faced by a newspaper vendor who must decide how many copies of the day's paper to stock in the face of uncertain demand and knowing that unsold copies will be worthless at the end of the day. ## History The mathematical problem appears to date from 1888[2] where Edgeworth used the central limit theorem to determine the optimal cash reserves to satisfy random withdrawals from depositors.[3] According to Chen, Cheng, Choi and Wang (2016), the term "newsboy" was first mentioned in an example of the Morse and Kimball (1951)'s book.[4] The modern formulation relates to a paper in Econometrica by Kenneth Arrow, T. Harris, and Jacob Marshak.[5] More recent research on the classic newsvendor problem in particular focused on behavioral aspects: when trying to solve the problem in messy real-world contexts, to what extent do decision makers systematically vary from the optimum? Experimental and empirical research has shown that decision makers tend to be biased towards ordering too close to the expected demand (pull-to-center effect[6]) and too close to the realisation from the previous period (demand chasing[7]). ## Overview This model can also be applied to period review systems.[8] ### Assumptions 1. Products are separable 2. Planning is done for a single period 3. Demand is random 5. Costs of overage or underage are linear ### Profit function and the critical fractile formula The standard newsvendor profit function is ${\displaystyle \operatorname {E} [{\text{profit}}]=\operatorname {E} \left[p\min(q,D)\right]-cq}$ where ${\displaystyle D}$  is a random variable with probability distribution ${\displaystyle F}$  representing demand, each unit is sold for price ${\displaystyle p}$  and purchased for price ${\displaystyle c}$ , ${\displaystyle q}$  is the number of units stocked, and ${\displaystyle E}$  is the expectation operator. The solution to the optimal stocking quantity of the newsvendor which maximizes expected profit is: Critical fractile formula ${\displaystyle q=F^{-1}\left({\frac {p-c}{p}}\right)}$ where ${\displaystyle F^{-1}}$  denotes the generalized inverse cumulative distribution function of ${\displaystyle D}$ . Intuitively, this ratio, referred to as the critical fractile, balances the cost of being understocked (a lost sale worth ${\displaystyle (p-c)}$ ) and the total costs of being either overstocked or understocked (where the cost of being overstocked is the inventory cost, or ${\displaystyle c}$  so total cost is simply ${\displaystyle p}$ ). The critical fractile formula is known as Littlewood's rule in the yield management literature. #### Numerical examples In the following cases, assume that the retail price, ${\displaystyle p}$ , is $7 per unit and the purchase price is ${\displaystyle c}$ , is$5 per unit. This gives a critical fractile of ${\displaystyle {\frac {p-c}{p}}={\frac {7-5}{7}}={\frac {2}{7}}}$ ##### Uniform distribution Let demand, ${\displaystyle D}$ , follow a uniform distribution (continuous) between ${\displaystyle D_{\min }=50}$  and ${\displaystyle D_{\max }=80}$ . ${\displaystyle q_{\text{opt}}=F^{-1}\left({\frac {7-5}{7}}\right)=F^{-1}\left(0.285\right)=D_{\min }+(D_{\max }-D_{\min })\cdot 0.285=58.55\approx 59.}$ Therefore, the optimal inventory level is approximately 59 units. ##### Normal distribution Let demand, ${\displaystyle D}$ , follow a normal distribution with a mean, ${\displaystyle \mu }$ , demand of 50 and a standard deviation, ${\displaystyle \sigma }$ , of 20. ${\displaystyle q_{\text{opt}}=F^{-1}\left({\frac {7-5}{7}}\right)=\mu +\sigma Z^{-1}\left(0.285\right)=50+20(-0.56595)=38.68\approx 39.}$ Therefore, optimal inventory level is approximately 39 units. ##### Lognormal distribution Let demand, ${\displaystyle D}$ , follow a lognormal distribution with a mean demand of 50, ${\displaystyle \mu }$ , and a standard deviation, ${\displaystyle \sigma }$ , of 0.2. ${\displaystyle q_{\text{opt}}=F^{-1}\left({\frac {7-5}{7}}\right)=\mu e^{Z^{-1}\left(0.285\right)\sigma }=50e^{\left(0.2\cdot (-0.56595)\right)}=44.64\approx 45.}$ Therefore, optimal inventory level is approximately 45 units. ##### Extreme situation If ${\displaystyle p  (i.e. the retail price is less than the purchase price), the numerator becomes negative. In this situation, it isn't worth keeping any items in the inventory. ### Derivation of optimal inventory level #### Critical fractile formula To derive the critical fractile formula, start with ${\displaystyle \operatorname {E} \left[{\min\{q,D\}}\right]}$  and condition on the event ${\displaystyle D\leq q}$ : {\displaystyle {\begin{aligned}&\operatorname {E} [\min\{q,D\}]=\operatorname {E} [\min\{q,D\}\mid D\leq q]\operatorname {P} (D\leq q)+\operatorname {E} [\min\{q,D\}\mid D>q]\operatorname {P} (D>q)\\[6pt]={}&\operatorname {E} [D\mid D\leq q]F(q)+\operatorname {E} [q\mid D>q][1-F(q)]=\operatorname {E} [D\mid D\leq q]F(q)+q[1-F(q)]\end{aligned}}} Now use ${\displaystyle \operatorname {E} [D\mid D\leq q]={\frac {\int \limits _{x\leq q}xf(x)\,dx}{\int \limits _{x\leq q}f(x)\,dx}},}$ where ${\displaystyle f(x)=F'(x)}$ . The denominator of this expression is ${\displaystyle F(q)}$ , so now we can write: ${\displaystyle \operatorname {E} [\min\{q,D\}]=\int \limits _{x\leq q}xf(x)\,dx+q[1-F(q)]}$ So ${\displaystyle \operatorname {E} [{\text{profit}}]=p\int \limits _{x\leq q}xf(x)\,dx+pq[1-F(q)]-cq}$ Take the derivative with respect to ${\displaystyle q}$ : ${\displaystyle {\frac {\partial }{\partial q}}\operatorname {E} [{\text{profit}}]=pqf(q)+pq(-F'(q))+p[1-F(q)]-c=p[1-F(q)]-c}$ Now optimize: ${\displaystyle p\left[1-F(q^{*})\right]-c=0\Rightarrow 1-F(q^{*})={\frac {c}{p}}\Rightarrow F(q^{*})={\frac {p-c}{p}}\Rightarrow q^{*}=F^{-1}\left({\frac {p-c}{p}}\right)}$ Technically, we should also check for convexity: ${\displaystyle {\frac {\partial ^{2}}{\partial q^{2}}}\operatorname {E} [{\text{profit}}]=p[-F'(q)]}$ Since ${\displaystyle F}$  is monotone non-decreasing, this second derivative is always non-positive, so the critical point determined above is a global maximum. #### Alternative formulation The problem above is cast as one of maximizing profit, although it can be cast slightly differently, with the same result. If the demand D exceeds the provided quantity q, then an opportunity cost of ${\displaystyle (D-q)(p-c)}$  represents lost revenue not realized because of a shortage of inventory. On the other hand, if ${\displaystyle D\leq q}$ , then (because the items being sold are perishable), there is an overage cost of ${\displaystyle (q-D)c}$ . This problem can also be posed as one of minimizing the expectation of the sum of the opportunity cost and the overage cost, keeping in mind that only one of these is ever incurred for any particular realization of ${\displaystyle D}$ . The derivation of this is as follows: {\displaystyle {\begin{aligned}&\operatorname {E} [{\text{opportunity cost}}+{\text{overage cost}}]\\[6pt]={}&\operatorname {E} [{\text{overage cost}}\mid D\leq q]\operatorname {P} (D\leq q)+\operatorname {E} [{\text{opportunity cost}}\mid D>q]\operatorname {P} (D>q)\\[6pt]={}&\operatorname {E} [(q-D)c\mid D\leq q]F(q)+\operatorname {E} [(D-q)(p-c)\mid D>q][1-F(q)]\\[6pt]={}&c\operatorname {E} [q-D\mid D\leq q]F(q)+(p-c)\operatorname {E} [D-q\mid D>q][1-F(q)]\\[6pt]={}&cqF(q)-c\int \limits _{x\leq q}xf(x)\,dx+(p-c)[\int \limits _{x>q}xf(x)\,dx-q(1-F(q))]\\[6pt]={}&p\int \limits _{x>q}xf(x)\,dx-pq(1-F(q))-c\int \limits _{x>q}xf(x)\,dx+cq(1-F(q))+cqF(q)-c\int \limits _{x\leq q}xf(x)\,dx\\[6pt]={}&p\int \limits _{x>q}xf(x)\,dx-pq+pqF(q)+cq-c\operatorname {E} [D]\end{aligned}}} The derivative of this expression, with respect to ${\displaystyle q}$ , is ${\displaystyle {\frac {\partial }{\partial q}}\operatorname {E} [{\text{opportunity cost}}+{\text{overage cost}}]=p(-qf(q))-p+pqF'(q)+pF(q)+c=pF(q)+c-p}$ This is obviously the negative of the derivative arrived at above, and this is a minimization instead of a maximization formulation, so the critical point will be the same. #### Cost based optimization of inventory level Assume that the 'newsvendor' is in fact a small company that wants to produce goods to an uncertain market. In this more general situation the cost function of the newsvendor (company) can be formulated in the following manner: ${\displaystyle K(q)=c_{f}+c_{v}(q-x)+p\operatorname {E} \left[\max(D-q,0)\right]+h\operatorname {E} \left[\max(q-D,0)\right]}$ where the individual parameters are the following: • ${\displaystyle c_{f}}$  – fixed cost. This cost always exists when the production of a series is started. [$/production] • ${\displaystyle c_{v}}$ – variable cost. This cost type expresses the production cost of one product. [$/product] • ${\displaystyle q}$  – the product quantity in the inventory. The decision of the inventory control policy concerns the product quantity in the inventory after the product decision. This parameter includes the initial inventory as well. If nothing is produced, then this quantity is equal to the initial quantity, i.e. concerning the existing inventory. • ${\displaystyle x}$  – initial inventory level. We assume that the supplier possesses ${\displaystyle x}$  products in the inventory at the beginning of the demand of the delivery period. • ${\displaystyle p}$  – penalty cost (or back order cost). If there is less raw material in the inventory than needed to satisfy the demands, this is the penalty cost of the unsatisfied orders. [$/product] • ${\displaystyle D}$ – a random variable with cumulative distribution function ${\displaystyle F}$ representing uncertain customer demand. [unit] • ${\displaystyle E[D]}$ – expected value of random variable ${\displaystyle D}$ . • ${\displaystyle h}$ – inventory and stock holding cost. [$ / product] In ${\displaystyle K(q)}$ , the first order loss function ${\displaystyle E\left[\max(D-q,0)\right]}$  captures the expected shortage quantity; its complement, ${\displaystyle E\left[\max(q-D,0)\right]}$ , denotes the expected product quantity in stock at the end of the period.[9] On the basis of this cost function the determination of the optimal inventory level is a minimization problem. So in the long run the amount of cost-optimal end-product can be calculated on the basis of the following relation:[1] ${\displaystyle q_{\text{opt}}=F^{-1}\left({\frac {p-c_{v}}{p+h}}\right)}$ ## Data-driven models There are several data-driven models for the newsvendor problem. Among them, a deep learning model provides quite stable results in any kind of non-noisy or volatile data.[10] More details can be found in a blog explained the model.[11] ## References 1. ^ a b William J. Stevenson, Operations Management. 10th edition, 2009; page 581 2. ^ F. Y. Edgeworth (1888). "The Mathematical Theory of Banking". Journal of the Royal Statistical Society. 51 (1): 113–127. JSTOR 2979084. 3. ^ Guillermo Gallego (18 Jan 2005). "IEOR 4000 Production Management Lecture 7" (PDF). Columbia University. Retrieved 30 May 2012. 4. ^ R. R. Chen; T.C.E. Cheng; T.M. Choi; Y. Wang (2016). "Novel Advances in Applications of the Newsvendor Model". Decision Sciences. 47: 8–10. doi:10.1111/deci.12215. 5. ^ K. J. Arrow, T. Harris, Jacob Marshak, Optimal Inventory Policy, Econometrica 1951 6. ^ Schweitzer, M.E.; Cachon, G.P. (2000). "Decision bias in the newsvendor problem with a known demand distribution: Experimental evidence". Management Science. 43 (3): 404–420. doi:10.1287/mnsc.46.3.404.12070. 7. ^ Lau, N.; Bearden, J.N. (2013). "Newsvendor demand chasing revisited". Management Science. 59 (5): 1245–1249. doi:10.1287/mnsc.1120.1617. 8. ^ W.H. Hopp, M. L. Spearman, Factory Physics, Waveland Press 2008 9. ^ Axsäter, Sven (2015). Inventory Control (3rd ed.). Springer International Publishing. ISBN 978-3-319-15729-0. 10. ^ Oroojlooyjadid, Afshin; Snyder, Lawrence; Takáč, Martin (2016-07-07). "Applying Deep Learning to the Newsvendor Problem". arXiv:1607.02177 [cs.LG]. 11. ^ Afshin (2017-04-11). "Deep Learning for Newsvendor Problem". Afshin. Retrieved 2019-03-10.
# Can a polynomial with no positive roots have variations in its signs in the expanded form? I found this proof for the Descartes' Rule of Signs. Towards the end the author writes this: Now return to our original polynomial, $$f(x) = x_n + a_{n-1}x^{n-1} + ... + a_1x + a_0.$$ We can express f(x) in factored form as $$f(x) =N(x)(x - p_1)(x - p_2) ... (x-p_m)$$ where $$N(x)$$ has no positive roots (but may, of course, nevertheless have variations in sign), and the $$p_i$$ are the $$m$$ positive roots of $$f(x)$$, listed repeatedly, if necessary, according to their multiplicity. I wanted to know how $$N(x)$$ could have variations in signs if it has no positive roots. What I assumed is $$N(x)$$ will have negative roots which will be of the form $$(x+k)$$ or they will have complex roots, which will be of the form $$(x^{2n}+k)$$, where n is a positive integer and k is a positive real number, in the factorized polynomial. In this case, I see that all of the signs of coefficients in the expanded polynomial format will be positive and there would be no sign variations. How is my assumption wrong? Is there an example of a polynomial without positive roots but with sign variations? You have failed to consider complex roots that aren't wholly imaginary, whose corresponding quadratic factors in $$N(x)$$ might have negative linear terms. For example, if $$f(x)=x^2-x+1$$ then $$N(x)=f(x)$$. How about $$x^2-x+1$$? Its roots are complex (with positive real part), but it has no positive real roots.
# evaluating a meromorphic section of a line bundle at a point Let $D$ be a Cartier divisor of a variety $X/K$ with associated line bundle $\mathcal{O}(D)$ and meromorphic section $s_D$. How do you define $s_D(P) \in K$ for $P \in X(K) \setminus \mathrm{supp}(D^{-1})$? Perhaps one has to choose an embedding $\mathcal{O}(D) \hookrightarrow \mathcal{M}_X$ into the meromorphic functions. But this is not unique? Edit: No, $\mathcal{O}(D) = \{f \in \mathcal{M}_X \mid div(f) \geq -D\} \cup \{0\}$. • In general you can't, because $s_D$ can have poles outside of the support of $D$ (which is also the support of $D^{-1}$). Take for instance $D=0$ and $s_D$ not regular. Moreover, even if $P$ is not a pole of $s_D$, $s_D(P)$ depends on the choice of a basis of $\mathcal O(D)_P$. – Cantlog Oct 1 '13 at 21:51
# 3.3.10. RASSI — A RAS State Interaction Program¶ Program RASSI (RAS State Interaction) computes matrix elements of the Hamiltonian and other operators in a wave function basis, which consists of individually optimized CI expansions from the RASSCF program. Also, it solves the Schrödinger equation within the space of these wave functions. There are many possible applications for such type of calculations. The first important consideration to have into account is that RASSI computes the interaction among RASSCF states expanding the same set of configurations, that is, having the same active space size and number of electrons. The RASSI program is routinely used to compute electronic transition moments, as it is shown in the Advanced Examples in the calculation of transition dipole moments for the excited states of the thiophene molecule using CASSCF-type wave functions. By default the program will compute the matrix elements and expectation values of all the operators for which SEWARD has computed the integrals and has stored them in the ONEINT file. RASSCF (or CASSCF) individually optimized states are interacting and non-orthogonal. It is imperative when the states involved have different symmetry to transform the states to a common eigenstate basis in such a way that the wave function remains unchanged. The State Interaction calculation gives an unambiguous set of non-interacting and orthonormal eigenstates to the projected Schrödinger equation and also the overlaps between the original RASSCF wave functions and the eigenstates. The analysis of the original states in terms of RASSI eigenstates is very useful to identify spurious local minima and also to inspect the wave functions obtained in different single-root RASSCF calculations, which can be mixed and be of no help to compare the states. Finally, the RASSI program can be applied in situations when there are two strongly interacting states and there are two very different MCSCF solutions. This is a typical situation in transition metal chemistry when there are many close states associated each one to a configuration of the transition metal atom. It is also the case when there are two close quasi-equivalent localized and delocalized solutions. RASSI can provide with a single set of orbitals able to represent, for instance, avoided crossings. RASSI will produce a number of files containing the natural orbitals for each one of the desired eigenstates to be used in subsequent calculations. RASSI requires as input files the ONEINT and ORDINT integral files and the JOBIPH files from the RASSCF program containing the states which are going to be computed. The JOBIPH files have to be named consecutively as JOB001, JOB002, etc. The input for the RASSI module has to contain at least the definition of the number of states available in each of the input JOBIPH files. Block 3.3.10.1 lists the input file for the RASSI program in a calculation including two JOBIPH files (2 in the first line), the first one including three roots (3 in the first line) and the second five roots (5 in the first line). Each one of the following lines lists the number of these states within each JOBIPH file. Also in the input, keyword NATOrb indicates that three files (named sequentially NAT001, NAT002, and NAT003) will be created for the three lowest eigenstates. Block 3.3.10.1 Sample input requesting the RASSI module to calculate the matrix elements and expectation values for eight interacting RASSCF states &RASSI NROFjobiph= 2 3 5; 1 2 3; 1 2 3 4 5 NATOrb= 3 ## 3.3.10.1. RASSI Output¶ The RASSI section of the Molcas output is basically divided in three parts. Initially, the program prints the information about the JOBIPH files and input file, optionally prints the wave functions, and checks that all the configuration spaces are the same in all the input states. In second place RASSI prints the expectation values of the one-electron operators, the Hamiltonian matrix, the overlap matrix, and the matrix elements of the one-electron operators, all for the basis of input RASSCF states. The third part starts with the eigenvectors and eigenvalues for the states computed in the new eigenbasis, as well as the overlap of the computed eigenstates with the input RASSCF states. After that, the expectation values and matrix elements of the one-electron operators are repeated on the basis of the new energy eigenstates. A final section informs about the occupation numbers of the natural orbitals computed by RASSI, if any. In the Advanced Examples a detailed example of how to interpret the matrix elements output section for the thiophene molecule is displayed. The rest of the output is self-explanatory. It has to be remembered that to change the default origins for the one electron operators (the dipole moment operator uses the nuclear charge centroid and the higher order operators the center of the nuclear mass) keyword CENTer in GATEWAY must be used. Also, if multipoles higher than order two are required, the option MULTipole has to be used in GATEWAY. The program RASSI can also be used to compute a spin–orbit Hamiltonian for the input CASSCF wave functions as defined above. The keyword AMFI has to be used in SEWARD to ensure that the corresponding integrals are available. Block 3.3.10.2 Sample input requesting the RASSI module to calculate and diagonalize the spin–orbit Hamiltonian the ground and triplet excited state in water. &RASSI NROFjobiph= 2 1 1; 1; 1 Spinorbit Ejob The first JOBMIX file contains the wave function for the ground state and the second file the $$^3B_2$$ state discussed above. The keyword Ejob makes the RASSI program use the CASPT2 energies which have been written on the JOBMIX files in the diagonal of the spin–orbit Hamiltonian. The output of this calculation will give four spin–orbit states and the corresponding transition properties, which can for example be used to compute the radiative lifetime of the triplet state. ## 3.3.10.2. RASSI — Basic and Most Common Keywords¶ NROFjob Number of input files, number of roots, and roots for each file EJOB/HDIAG Read energies from input file / inline SPIN Compute spin–orbit matrix elements for spin properties
# conjugate transpose properties proof You should provide a proof of these results for your own practice. This leads to the possibility of an H" theory for Hermite expansions and analogues of the classical applications of ordinary conjugate … Even if and have the same eigenvalues, they do not necessarily have the same eigenvectors. ). 3. Active 2 years, 4 months ago. Proposition 11.1.3. Let be the space of all vectors having complex entries. The row vector is called a left eigenvector of . Lemma 7.3. Corollary 5.8. Proof: (59) If , it is a Hermitian matrix. 3. Conjugate of a complex number z = a + ib, denoted by $$\bar{z}$$, is defined as $$\bar{z}$$ = a - ib i.e., $$\overline{a + ib}$$ = a - ib. Proof . Prove that if A is an invertible matrix, then the transpose of A is invertible and the inverse matrix of the transpose is the transpose of the inverse matrix. That is, must operate on the conjugate of and give the same result for the integral as when operates on . (kA) T =kA T. (AB) T =B T A T, the transpose of a product is the product of the transposes in the reverse order. Definition of conjugate complex numbers: In any two complex numbers, if only the sign of the imaginary part differ then, they are known as complex conjugate of each other. Proof of the properties of conjugate matrices. or in matrix notation: , where A T stands for A transposed. The proof is similar when addition is in the second component. 1 $\begingroup$ Closed. This operation is called the conjugate transpose of $$M(T)$$, and we denote it b y $$(M(T))^{*}$$. We collect several elementary properties of the adjoint operation into the following proposition. The proof of Lemma 7.3 uses the concept of the conjugate of a complex number and the conjugate transpose of a complex matrix (Definition A.3). ', but not its complex conjugate transpose, A'. Properties of Transpose of a Matrix. Substitute results for the case/z= 1 are also proved. By virtue of the preceding Theorem there is a unique u 2 U such that jjv ujj jjv u′jj whenever u′ 2 U. Theorem 0.2 (The Cauchy-Schwartz Inequality. The following properties hold: (A T) T =A, that is the transpose of the transpose of A is A (the operation of taking the transpose is an involution). Unless there is a solution in the back of the book, it appears that they have not clarified what "corresponding" means. So, for example, if M= 0 @ 1 i 0 2 1 i 1 + i 1 A; then its Hermitian conjugate Myis My= 1 0 1 + i i 2 1 i : In terms of matrix elements, [My] ij = ([M] ji): Note that for any matrix (Ay)y= A: Thus, the conjugate of the conjugate … Hence we've shown that The eigenvalues of a Hermitian matrix are real. 2 Some Properties of Conjugate Unitary Matrices Theorem 1. Proof. Can someone walk me through the proof? I think the point of my question is why the complex conjugate does not change the rank of the matrix. We list several properties of the conjugate transpose of a matrix in the following theo-rem. A(3,1) = -1i; Determine if the modified matrix is Hermitian. Browse other questions tagged fourier-transform conjugate or ask your own question. When the matrix and the vectors are allowed to be complex, the quadratic form becomes where denotes the conjugate transpose of . Thread starter diogomgf; Start date May 30, 2019; D. diogomgf Junior Member. Change the element in A(3,1) to be -1i. If A is conjugate unitary matrix then secondary transpose of A is conjugate unitary matrix. A = A *. These again follow from writing the inner product as a matrix product. 1. The complex case. Defn: The Hermitian conjugate of a matrix is the transpose of its complex conjugate. Joined Oct 19, 2018 Messages 127. In physics the dagger symbol is often used instead of the star: Furthermore, the conjugate Poisson integral converges in Lp norm and pointwise almost everywhere to the conjugate function. A complex conjugate and transpose matrix equation. Hermitian matrices are named after Charles Hermite, who demonstrated in 1855 that matrices of this form share a property with real symmetric matrices of always having real eigenvalues.Other, equivalent notations in common use are = † = ∗, although note that in quantum mechanics, ∗ typically means the complex conjugate only, and not the conjugate transpose. And so here we're going to explore how we can use system descriptions given by these matrices to put constraints on a system. Le t $$S,T\in \mathcal{L}(V)$$ and $$a\in \mathbb{F}$$. Eigenvalues of a triangular matrix. (A+B) T =A T +B T, the transpose of a sum is the sum of transposes. Viewed 16k times 6. The meaning of this conjugate is given in the following equation. If A = [a ij] be an m × n matrix, then the matrix obtained by interchanging the rows and columns of A would be the transpose of A. of It is denoted by A′or (A T).In other words, if A = [a ij] mxn,thenA′ = [a ji] nxm.For example, This is an elementary (yet important) fact in matrix analysis. tf = ishermitian(A) tf = logical 1 The matrix, A, is now Hermitian because it is equal to its complex conjugate transpose, A'. But T was upper triangular, and this can only happen if T is diagonal. Some properties of transpose of a matrix are given below: (i) Transpose of the Transpose Matrix. for μ ranging from 1 to m and for κ ranging from 1 to k. Notice that transposition is distributive, i.e., we have (A+B) T = (A T + B T). A real Hermitian matrix is a symmetric matrix. Let be two different eigenvalues of .Let be the two eigenvectors of corresponding to the two eigenvalues and , respectively.. Then the following is true: Here denotes the usual inner product of two vectors . $\endgroup$ – user94273 Sep 11 '13 at 10:27 2 $\begingroup$ What ways to characterise or determine the rank do you know? Definition. Thus A = QDQ H as desired. The meaning of this conjugate is given in the following equation. Proving that the hermitian conjugate of the product of two operators is the product of the two hermitian congugate operators in opposite order [closed] Ask Question Asked 7 years ago. Note that when , time function is stretched, and is compressed; when , is compressed and is stretched. The transpose of a matrix A, denoted by A T, A′, A tr, t A or A t, may be constructed by any one of the following methods: . The proofs of these properties are straightforward and are left for you to supply in Exercises 49–52. 1. A hermitian operator is equal to its hermitian conjugate (which, remem-ber, is the complex conjugate of the transpose of the matrix representing the operator). The transpose A T of the matrix A is defined as the k x m matrix with the components . In summary, if A is n×n Hermitian, it has the following properties: •A has n … Proof. Taking the conjugate transpose of both sides, QHAHQ = TH However, A = AH and so we get T = TH. The inverse of an invertible Hermitian matrix is also Hermitian, i.e., if , … In this section, by using the real presentation of a complex and the vector operator, we offer a new convergence proof of the gradient-based iterative algorithm for the complex conjugate and transpose matrix equation. The diagonal elements of a Hermitian matrix are real. First let us define the Hermitian Conjugate of an operator to be . Unitary Matrices Recall that a real matrix A is orthogonal if and only if In the complex system, (But not identical -- you'll need the fact that the complex conjugate is distributive, rather than the transpose.) A Hermitian matrix (or self-adjoint matrix) is one which is equal to its Hermitian adjoint (also known as its conjugate transpose). 4 Proof. The new matrix obtained by interchanging the rows and columns of the original matrix is called as the transpose of the matrix. If is an eigenvector of the transpose, it satisfies By transposing both sides of the equation, we get. Definition A complex square matrix A is Hermitian, if. is uniquely characterized by its values on ordered pairs of basis vectors; moreover two bilinear pairings are equal precisely if for all pairs .So define be the matrix with entry given by By construction, the pairing is sesquilinear, and agrees with on ordered pairs of basis vectors. Eigenvalues and the Hermitian matrices Hermitian Matrices are ones whose conjugate transpose [1] is the matrix itself, i.e. Combining the preceding definitions, the transpose of the matrix product AB has the components . The proof is essentially the same as in the real case, with some minor modifications. The eigenvalues of a symmetric matrix are real, and the corresponding eigenvectors can always be assumed to be real. The diagonal elements of a triangular matrix are equal to its eigenvalues. That's a very powerful approach for first order design. And each of the four terms in that matrix have very important properties. The properties of the conjugate transpose: . Statement. A complex matrix is said to be: positive definite iff is … 2. Let C = v U and note that C is a nonempty closed convex subset of V. (Of course U = U since U is a linear subspace of U, but this representation of C is more convenient for our purposes.) For a complex matrix A, let A ∗ = A ¯ T, where A T is the transpose, and A ¯ is the complex conjugate of A. $\endgroup$ – Daniel Fischer ♦ Sep 11 '13 at 10:30 To understand the properties of transpose matrix, we will take two matrices A and B which have equal order. The Overflow Blog The Overflow #47: How to lead with clarity and empathy in the remote world Let be an complex Hermitian matrix which means where denotes the conjugate transpose operation. May 30, 2019 #1 Hello math helpers, I'm having problems with demonstrating the following … Why do Hermitian matrices have real eigenvalues? Assume that A is conjugate unitary matrix. Proof. This is a general feature of Fourier transform, i.e., compressing one of the and will stretch the other and vice versa. If we take transpose of transpose matrix, the matrix obtained is equal to the original matrix. Properties. i.e., AA = A A = I T o show A s (A s) = (A s) A s = I Ca se (i): AA = I International Journal of Pure and Applied Mathematics Special Issue 76 And it turns out the conjugate matrix N, that we defined earlier, is the way to do that. In this case, A is equal to its transpose, A. Every entry in the transposed matrix is equal to the complex conjugate of the corresponding entry in the original matrix: . It can't mean that you can just directly replace the conjugate with the transconjugate everywhere, as we know the transpose behaves differently. The modified matrix is equal to its transpose, a ' and so here we 're to... In a ( 3,1 ) to be complex, the transpose behaves.... Only happen if T is diagonal matrices Hermitian matrices Hermitian matrices are ones whose conjugate transpose 1... This case, a is equal to the conjugate transpose of a triangular matrix real! Will stretch the other and vice versa they do not necessarily have the same eigenvectors be to! Which have equal order denotes the conjugate transpose of the matrix and the vectors allowed! That the complex conjugate is distributive, rather than the transpose, it that! The conjugate function a solution in the following proposition, as we the. As a matrix are real conjugate transpose properties proof, QHAHQ = TH writing the inner as! Lp norm and pointwise almost everywhere to the original matrix: not identical -- you 'll need the that. As we know the transpose behaves differently be: positive definite iff is ….., that we defined earlier, is the matrix and the vectors are allowed to be -1i have very properties. The row vector is called a left eigenvector of the four terms that. The element in a ( 3,1 ) to be -1i having complex entries you 'll need fact. Case, with some minor modifications meaning conjugate transpose properties proof this conjugate is given the... ; Start date May 30, 2019 ; D. diogomgf Junior Member the corresponding entry in the matrix. Matrix a is conjugate unitary matrix is the way to do that,,... If the modified matrix is Hermitian matrix, we will take two matrices a B. When operates on that you can just directly replace the conjugate matrix N, that we earlier... Earlier, is the way to do that ) transpose of a sum is the sum of transposes elementary... The book, it is a solution in the following proposition original matrix a symmetric matrix are real have same. You should provide a proof of these results for the integral as when operates on TH However a. Said to be real for the case/z= 1 are also proved QHAHQ =.. That you can just directly replace the conjugate with the transconjugate everywhere, as know... The other and vice versa thread starter diogomgf ; Start date May,... Or ask your own question triangular matrix are given below: ( 59 ) if, satisfies. Fact that the complex conjugate of and give the same as in the original matrix: conjugate! Symmetric matrix are real, and the vectors are allowed to be complex, the transpose behaves.... Ah and so we get of and give the same eigenvectors several properties of book. Following equation identical -- you 'll need the fact that the complex conjugate of and give same... Provide a proof of these results for the integral as when operates on quadratic becomes. Case/Z= 1 are also proved solution in the real case, with some modifications. Take transpose of both sides of the four terms in that matrix have important., they do not necessarily have the same as in the real,! By these matrices to put constraints on a system -- you 'll need the fact the. T =A T +B T, the transpose behaves differently Determine if the modified matrix is equal to conjugate... Its eigenvalues ) to be -1i to explore how we can use system descriptions given by these matrices put! In matrix notation:, where a T stands for a transposed has components! A general feature of Fourier transform, i.e., compressing one of the transpose of the transpose of a are... The eigenvalues of a matrix product change the element in a ( 3,1 ) to real! Fourier transform, i.e., compressing one of the equation, we get and so we get T TH! Obtained is equal to its transpose, a is Hermitian, i.e ( ). Compressing one of the matrix itself, i.e to be complex, the transpose of a matrix... Matrix, the quadratic form becomes where denotes the conjugate transpose of transpose matrix, we take... Some properties of transpose matrix, the transpose, a ' essentially the same result for the as... Descriptions given conjugate transpose properties proof these matrices to put constraints on a system its transpose, a is unitary... … 3 diagonal elements of a symmetric matrix are equal to its eigenvalues put constraints on system! Ca n't mean that you can just directly replace the conjugate matrix,... Result for the integral as when operates on ', but not complex!
# Equivalence relations, counting equivalence classes between sets Let $S=\{1,2,3..,9\}$ and $T=\{1,2,3,4,5\}$ . Let $R$ be the relation on the power set of S ie P(S) defined by For all elements X, Y in P(S), X$R$Y iff |X ∩ T| = |Y ∩ T|. I wanted to confirm my answers. Because this is an equivalence relation (I proved it via showing reflexivity, symmetry and transitivity), I rewrote it another way as X$R$Y iff |X| = |Y| to simplify the following. (a) how many equivalence classes are there? 6 or 10, leaning more towards 10 (as there are 10 possible sizes of elements in P(S)) (b) how many elements does [∅] have? 1 (c) how many elements does [T] have? 9 choose 5 or 5 choose 5, leaning more towards 9 choose 5 (d) how many elements does [{1,2}] have? 9 choose 2 or 5 choose 2, leaning more towards 9 choose 2 • Note that $X=\{1,2,6,7,8\}$ is related to $Y=\{3,4\}$ despite $|X|\neq |Y|$. What is important is that $|X\cap T|=2=|Y\cap T|$ – JMoravitz Dec 1 '16 at 23:55 • Yes beginning to see it more clearly now – Freud Dec 1 '16 at 23:59 ## 3 Answers (a) Note that each equivalence class has a $1\text{-}1$ mapping with the set $\{0,1,2,3,4,5\}$, via the mapping $f([A]) = \left| A\cup T\right|$. Thus, there is a total of 6 equivalence classes. (b) A set $U$ belongs to $[\{\emptyset\}]$ if and only if $|U\cap T|=\emptyset$. That is, any subset of $\{6,7,8,9\}$ will do. There is a total of $2^4=16$ such sets. (c) Using almost the same rationale as in (b), we see that There is a total of $2^4=16$ such sets, since the subsets are $T\cup U$, where $U\subseteq \{6,7,8,9\}.$ (d) As in (c), we select a subset of $\{6,7,8,9\}$, and take its union with some subset of $T$, of size $2$. So the answer is $16\cdot\binom{5}{2}=160$. "XRRY iff |X| = |Y|" is wrong. (a) there are 6 equivalence classes since $|X|\cap T\in\{0,\dots,5 \}$ for any set $X \in P(S)$ and every case does occur. (b) All elements in $[\emptyset]$ either contain 6,7,8,9 or not. So there are $2^4=16$ many. (c) 16 (see (b) for reasoning) (d) $16\cdot\binom{5}{2} =160$ Indeed, this $\def\R{\mathscr R} \R$ is an equivalence relation on $P(S)$. However, it is different from the one you replaced it with. Hints: (a) There are 6 equivalence classes, because $|X\cap T|$ can have 6 different values. (b) $Y\R \emptyset\ \iff\ Y\cap T=\emptyset\ \iff\ Y\subseteq\{6,7,8,9\}$ (c) $Y\R T\ \iff\ Y\supseteq T$; in this case $Y$ can be written as any set $\subseteq\{6,7,8,9\}$ plus $T$. (d) $Y\R\{1,2\}\ \iff\ |Y\cap T|=2$, so the number of such sets $Y$ is $\,\binom52\cdot 2^4$.
Tags: reverse engineering network Rating: --- title: "Write-up - Sunshine CTF 2016 - Randy" date: 2016-03-14 00:00:00 tags: [Write-up, Sunshine CTF 2016] summary: "Write-up about Sunshine CTF 2016 - Randy" --- In this challenge, we have a binary file and we need to connect over the network using Netcat. After a program analysis on the binary file, we can see that we need to create a code that will retrieve the random value which is produced. We will finally receive the flag. import telnetlib import struct x = lambda a: struct.pack('I', a) def fukmin(x): return hex((int(x, 16) - 0x41) & 0xff) host = "4.31.182.242" port = "9002" s = telnetlib.Telnet(host, port) (a, b, c, d) = (welcome[41:42].encode("hex"), welcome[42:43].encode("hex"), welcome[43:44].encode("hex"), welcome[44:45].encode("hex")) debuginfo = a + b + c + d print "debuginfo = 0x" + debuginfo print " - - - - - " magic = fukmin(a) +\ fukmin(b)[2:]+\ fukmin(c)[2:]+\ fukmin(d)[2:] print "magic = " + magic print " - - - - - " s.write(x(int(magic, 16))) We will have the following message: ''' OUTPUT: debuginfo = 0x6ba9580a - - - - - magic = 0x2a6817c9 - - - - - You guessed that hand perfectly! Here's your prize: sun{c4rds_in_th3_tr4p} ''' ''' BONUS: <encoding function> 0x8048608 <main+120>: sar ecx,0x18 0x804860b <main+123>: and ecx,0xff ... 0x8048625 <main+149>: sar ecx,0x10 0x8048628 <main+152>: and ecx,0xff ... 0x8048642 <main+178>: sar ecx,0x8 0x8048645 <main+181>: and ecx,0xff ... 0x804865f <main+207>: and ecx,0xff ''' We can try to _decrypt_ it by using this snippet of code: The flag to solve this challenge is sun{c4rds_in_th3_tr4p}.
# Fractions Class 6 Maths Notes - Chapter 7 A fraction is defined as a part of a whole number. It can be expressed as a ratio between two integers separated by a solidus. The number in the upper part of a fraction is termed as numerator whereas the number in the lower part is termed as the denominator. For example, let us consider a fraction $\frac{3}{12}$ where, • 3 is the numerator • 12 is the denominator • It is read as three-twelfths ## Types Of Fractions Let us understand the different types of fractions. There are five types of fractions. They are proper fractions, improper fractions, mixed fractions, like fractions and unlike fractions. • Proper fractions – It is a type of fraction where the denominator is always greater than the numerator. Some examples are $\frac{4}{5}$ , $\frac{3}{7}$ • Improper fractions – It is a type of fraction where the denominator is always less than the numerator. Some examples are $\frac{5}{4}$ , $\frac{7}{3}$ • Mixed fractions – It is a type of fraction which consists of a whole number and a proper fraction. Some examples are $16\frac{3}{7}$ , $3\frac{4}{5}$ • Like fractions – The type of fractions which have same denominators are called, like fractions. Some examples are $\frac{1}{15}$ , $\frac{3}{15}$ • Unlike fractions – The type of fractions which have different denominators are called, unlike fractions. Some examples are $\frac{6}{27}$ , $\frac{6}{28}$ ## Introduction to Fractions • A fraction is a number that represents a part of a whole. • The whole may be a single object or a group of objects. • Fraction=Numerator/Denominator • Example: 1/2, 3/7 ### Representing fractions • Fractions can be represented using numbers, figures or words. ​​​​​​​​​​​​​​ To know more about Fractions, visit here. ## Avatars of Fractions ### Proper fractions • If numerator < denominator in a fraction, then it is a proper fraction. • Example: 2/3 and 4/9 ### Improper and Mixed fraction • If numerator > denominator in a fraction, then it is an improper fraction. • Example: 4/3 and 8/11 • An improper fraction can be written as a combination of a whole and a part, and is called mixed fractions. To know more about Mixed Fractions, visit here. ### Interconversion between Improper and Mixed fraction Improper to mixed fraction ## Meet the Twin Fractions ### Equivalent fractions • Each proper or improper fraction has many equivalent fractions. • Multiply both numerator and denominator by a number, to find an equivalent fraction for the fraction. Example: 1/2 and 2/4 are equivalent fractions. • 1/3 + 7/3 = (1+7)/3 = 8/3 • 1/3 + 2/4 = (4+6)/12 = 10/12 = 5/6 ### LCM • Least common multiple of two numbers (LCM) is the smallest number that gets divided by both the numbers. • Example: LCM of 3 and 4 is 12. ## Many Many Many Fractions Together ### Multiplication of fractions Proper fraction * Proper fraction Proper fraction * Improper fraction To know more about Multiplication of Fractions, visit here. ## Let’s Divide Fractions ### Reciprocals of fractions • Turning the fraction upside down gives the Reciprocal of a fraction. • Fraction × (Reciprocal of the fraction) = 1 To know more about Reciprocal of Fractions, visit here. ### Division of fractions • 1/2 ÷ 1/3 1/2 × Reciprocal of (1/3) =1/2 × 3 = (1×3)/2 =3/2 • 4/3 ÷ 3/2 =4/3 × Reciprocal of (3/2) =4/3×2/3=8/9 To know more about Division of Fractions, visit here. ## Where do Fractions Live? ### Comparison of fractions • Comparing like fractions with same denominators 2/3 and 8/3 2 < 8 ∴ 2/3 < 8/3 • Comparing unlike fractions with same numerators 1/3 and 1/4 Portion of the whole showing 1/3 > Portion of the whole showing 1/4 ∴1/3>1/4 • Comparing unlike fractions with different numerators 5/6 and 13/15 LCM of 6 and 15: 30 (5×5)/(6×5) = 25/30 (13×2)/(15×2)=26/30 ⇒ 25/30<26/30 ∴ 5/6<13/15 ### Fractions on the number line • The following figure shows how fractions 14, 24 and 34 are represented on a number line. • Divide the portion from 0 to 1 on the number line into four parts. • Then each part represents 1/4th portion of the whole.
# Weekly Papers on Quantum Foundations (17,18) 下午3:48 | Jacques Pienaar | quant-ph updates on arXiv.org According to the subjective Bayesian interpretation of quantum mechanics (QBism), the instruments used to measure quantum systems are to be regarded as an extension of the senses of the agent who is using them, and quantum states describe the agent’s expectations for what they will experience through these extended senses. How can QBism then account for the fact that (i) instruments must be calibrated before they can be used to sense’ anything; (ii) some instruments are more precise than others; (iii) more precise instruments can lead to discovery of new systems? Furthermore, is the agent incoherent’ if they prefer to use a less precise instrument? Here we provide answers to these questions. 下午3:48 | Cole Franks, Michael Walter | quant-ph updates on arXiv.org Consider the action of a connected complex reductive group on a finite-dimensional vector space. A fundamental result in invariant theory states that the orbit closure of a vector v is separated from the origin if and only if some homogeneous invariant polynomial is nonzero on v. We refine this famous duality between orbit closures and invariant polynomials by showing that the following two quantities coincide: (1) the logarithm of the Euclidean distance between the orbit closure and the origin and (2) the rate of exponential growth of the ‘invariant part’ of $v^{\otimes k}$ in the semiclassical limit as k tends to infinity. We also provide generalizations of this result to projections onto highest weight vectors and isotypical components. Such semiclassical limits arise in the study of the asymptotic behavior of multiplicities in representation theory, in large deviations theory in classical and quantum statistics, and in a conjecture in invariant theory due to Mathieu. Our formula implies that they can be computed, in many cases efficiently, to arbitrary precision. 下午3:48 | Karl Svozil | quant-ph updates on arXiv.org This is an elaboration about the “extra” advantage of the performance of quantized physical systems over classical ones; both in terms of single outcomes as well as probabilistic predictions. From a formal point of view, it is based upon entities related to (dual) vectors in (dual) Hilbert spaces, as compared to the Boolean algebra of subsets of a set and the additive measures they support. Authors: Marian Kupczynski Various Bell inequalities are trivial algebraic properties satisfied by each line of particular data spreadsheets.It is surprising that their violation in some experiments, allows to speculate about the existence of nonlocal influences in Nature and to doubt the existence of the objective external physical reality. Such speculations are rooted in incorrect interpretations of quantum mechanics and in a failure of local realistic hidden variable models to reproduce quantum predictions for spin polarisation correlation experiments. These hidden variable models use counterfactual joint probability distributions of only pairwise measurable random variables to prove the inequalities. In real experiments Alice and Bob, using 4 incompatible pairs of experimental settings, estimate imperfect correlations between clicks, registered by their detectors. Clicks announce detection of photons and are coded by 1 or -1. Expectations of corresponding ,only pairwise measurable, random variables are estimated and compared with quantum predictions. These estimates violate significantly the inequalities. Since all these random variables cannot be jointly measured , a joint probability distribution of them does not exist and various Bell inequalities may not be proven. Thus it is not surprising that they are violated. Moreover,if contextual setting dependent parameters describing measuring instruments are correctly included in the description, then imperfect correlations between the clicks may be explained in a locally causal way. In this paper we review and rephrase several arguments proving that the violation of various Bell inequalities may neither justify the quantum nonlocality nor allow doubting the existence of atoms, electrons and other invisible elementary particles which are building blocks of the visible world around us including ourselves. 下午3:48 | gr-qc updates on arXiv.org We argue that the equations of motion of quantum field theories in curved backgrounds encode new fundamental black hole thermodynamic relations. We define new entropy variation relations. These `emerge’ through the monodromies that capture the infinitesimal changes in the black hole background produced by the field excitations. This raises the possibility of new thermodynamic relations defined as independent sums involving entropies, temperatures and angular velocities defined at every black hole horizon. We present explicit results for the sum of all horizon entropy variations for general rotating black holes, both in asymptotically at and asymptotically anti-de Sitter spacetimes in four and higher dimensions. The expressions are universal, and in most cases add up to zero. We also find that these thermodynamic summation relations apply in theories involving multi-charge black holes. 下午3:48 | ScienceDirect Publication: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsScienceDirect RSShttps://www.sciencedirect.com/journal/studies-in-history-and-philosophy-of-science-part-b-studies-in-history-and-philosophy-of-modern-physicsRSS for NodeWed, 24 Jul 2019 09:46:42 GMTCopyright © 2019 Elsevier Ltd. All rights reservedImprints of the underlying structure of physical theoriesPublication date: Available online 12 July 2019Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsAuthor(s): Jorge ManeroAbstractIn the context of scientific realism, this paper intends to provide a formal and accurate description of the structural-based ontology posited by classical mechanics, quantum mechanics and special relativity, which is preserved across the empirical domains of these theories and explain their successful predictions. Along the lines of ontic structural realism, such a description is undertaken by Publication date: Available online 26 April 2020 Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics Author(s): Jonathan Bain 下午3:48 | ScienceDirect Publication: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsScienceDirect RSShttps://www.sciencedirect.com/journal/studies-in-history-and-philosophy-of-science-part-b-studies-in-history-and-philosophy-of-modern-physicsRSS for NodeWed, 24 Jul 2019 09:46:42 GMTCopyright © 2019 Elsevier Ltd. All rights reservedImprints of the underlying structure of physical theoriesPublication date: Available online 12 July 2019Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsAuthor(s): Jorge ManeroAbstractIn the context of scientific realism, this paper intends to provide a formal and accurate description of the structural-based ontology posited by classical mechanics, quantum mechanics and special relativity, which is preserved across the empirical domains of these theories and explain their successful predictions. Along the lines of ontic structural realism, such a description is undertaken by Publication date: Available online 25 April 2020 Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics Author(s): Joshua Norton 下午3:48 | ScienceDirect Publication: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsScienceDirect RSShttps://www.sciencedirect.com/journal/studies-in-history-and-philosophy-of-science-part-b-studies-in-history-and-philosophy-of-modern-physicsRSS for NodeWed, 24 Jul 2019 09:46:42 GMTCopyright © 2019 Elsevier Ltd. All rights reservedImprints of the underlying structure of physical theoriesPublication date: Available online 12 July 2019Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsAuthor(s): Jorge ManeroAbstractIn the context of scientific realism, this paper intends to provide a formal and accurate description of the structural-based ontology posited by classical mechanics, quantum mechanics and special relativity, which is preserved across the empirical domains of these theories and explain their successful predictions. Along the lines of ontic structural realism, such a description is undertaken by Publication date: Available online 23 April 2020 Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics 下午3:48 | ScienceDirect Publication: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsScienceDirect RSShttps://www.sciencedirect.com/journal/studies-in-history-and-philosophy-of-science-part-b-studies-in-history-and-philosophy-of-modern-physicsRSS for NodeWed, 24 Jul 2019 09:46:42 GMTCopyright © 2019 Elsevier Ltd. All rights reservedImprints of the underlying structure of physical theoriesPublication date: Available online 12 July 2019Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern PhysicsAuthor(s): Jorge ManeroAbstractIn the context of scientific realism, this paper intends to provide a formal and accurate description of the structural-based ontology posited by classical mechanics, quantum mechanics and special relativity, which is preserved across the empirical domains of these theories and explain their successful predictions. Along the lines of ontic structural realism, such a description is undertaken by Publication date: Available online 29 April 2020 Source: Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics Author(s): Simon Saunders 2020年5月1日 星期五 上午8:00 | Latest Results for Synthese ### Abstract This paper has a twofold purpose. First, it aims at highlighting one difference (albeit in degree and not in kind) in how counterfactuals work in general history, on the one hand, and in history of the natural sciences, on the other hand. As we show, both in general history and in history of science good counterfactual narratives need to be plausible, where plausibility is construed as appropriate continuity of both the antecedent and the consequent of the counterfactual with what we know about the world. However, in general history it is often possible to imagine a consequent dramatically different from the actual historical development, and yet plausible; in history of science, due to plausibility concerns, imagining a consequent far removed from the results of actual science seems more complicated. The second aim of the paper is to assess whether and to what degree counterfactual histories of science can advance the cause of the so-called “contingency thesis,” namely, the claim that history of science might have followed a path leading to alternative, non-equivalent theories, as successful as the ones that we currently embrace. We distinguish various versions of the contingency thesis and argue that counterfactual histories of science support weak versions of the thesis. 2020年4月30日 星期四 下午4:14 | Philsci-Archive: No conditions. Results ordered -Date Deposited. Dewar, Neil (2020) Comments on Halvorson’s “The Logic in Philosophy of Science”. UNSPECIFIED. 2020年4月30日 星期四 下午4:11 | Philsci-Archive: No conditions. Results ordered -Date Deposited. Gábor, Hofer-Szabó (2020) Commutativity, comeasurability, and contextuality in the Kochen-Specker arguments. [Preprint] 2020年4月27日 星期一 上午3:49 | Philsci-Archive: No conditions. Results ordered -Date Deposited. Krause, Décio and Arenhart, Jonas R. B. (2020) Identical particles in quantum mechanics: favouring the Received View. [Preprint] 2020年4月26日 星期日 上午7:51 | Philsci-Archive: No conditions. Results ordered -Date Deposited. Higashi, Katsuaki (2020) Hardy relations and common cause. [Preprint] 2020年4月26日 星期日 上午7:40 | Philsci-Archive: No conditions. Results ordered -Date Deposited. Bacciagaluppi, Guido (2020) Quantum Mechanics, Emergence, and Decisions. [Preprint] 2020年4月25日 星期六 上午9:23 | Philsci-Archive: No conditions. Results ordered -Date Deposited. James, Lucy (2020) A New Perspective On Time And Physical Laws. [Preprint] 2020年4月22日 星期三 下午6:00 | A. J. Paige, A. D. K. Plato, and M. S. Kim | PRL: General Physics: Statistical and Quantum Mechanics, Quantum Information, etc. Author(s): A. J. Paige, A. D. K. Plato, and M. S. Kim Proper time, ideal clocks, and boosts are well understood classically, but subtleties arise in quantum physics. We show that quantum clocks set in motion via momentum boosts do not witness classical time dilation. However, using velocity boosts we find the ideal behavior in both cases, where the qua… [Phys. Rev. Lett. 124, 160602] Published Wed Apr 22, 2020 2020年4月21日 星期二 上午8:00 | Andrea Taroni | Nature Physics – Issue – nature.com science feeds Nature Physics, Published online: 21 April 2020; doi:10.1038/s41567-020-0904-y Pioneer of condensed-matter physics. 2020年4月20日 星期一 上午8:00 | Serim Ilday | Nature Physics – Issue – nature.com science feeds Nature Physics, Published online: 20 April 2020; doi:10.1038/s41567-020-0879-8 Biological systems are able to self-assemble in non-equilibrium conditions thanks to a continuous injection of energy. Here the authors present a tool to achieve non-equilibrium self-assembly of synthetic and biological constituents with sizes spanning three orders of magnitude. # Discretization of the Bloch sphere, fractal invariant sets and Bell’s theorem ## Abstract An arbitrarily dense discretization of the Bloch sphere of complex Hilbert states is constructed, where points correspond to bit strings of fixed finite length. Number-theoretic properties of trigonometric functions (not part of the quantum-theoretic canon) are used to show that this constructive discretized representation incorporates many of the defining characteristics of quantum systems: completementarity, uncertainty relationships and (with a simple Cartesian product of discretized spheres) entanglement. Unlike Meyer’s earlier discretization of the Bloch Sphere, there are no orthonormal triples, hence the Kocken–Specker theorem is not nullified. A physical interpretation of points on the discretized Bloch sphere is given in terms of ensembles of trajectories on a dynamically invariant fractal set in state space, where states of physical reality correspond to points on the invariant set. This deterministic construction provides a new way to understand the violation of the Bell inequality without violating statistical independence or factorization, where these conditions are defined solely from states on the invariant set. In this finite representation, there is an upper limit to the number of qubits that can be entangled, a property with potential experimental consequences.
## Logit function in higher dimensions Hi all, I have a (possibly stupid) question. Is there a higher dimensional generalization for the logitistic function? The logistic function satisfies the differential: P'(t)=P(t)*(1-P(t)) In the (x,y) plane the answer is simply, y= $\frac{1}{1+\exp(-x)}$ Edit: Sorry I cant find the latex error its y = 1/(1+exp(-x)).
## C Specification The first step of polygon rasterization is to determine whether the triangle is back-facing or front-facing. This determination is made based on the sign of the (clipped or unclipped) polygon’s area computed in framebuffer coordinates. One way to compute this area is: $a = -{1 \over 2}\sum_{i=0}^{n-1} x_f^i y_f^{i \oplus 1} - x_f^{i \oplus 1} y_f^i$ where $$x_f^i$$ and $$y_f^i$$ are the x and y framebuffer coordinates of the ith vertex of the n-vertex polygon (vertices are numbered starting at zero for the purposes of this computation) and i ⊕ 1 is (i + 1) mod n. The interpretation of the sign of a is determined by the VkPipelineRasterizationStateCreateInfo::frontFace property of the currently active pipeline. Possible values are: typedef enum VkFrontFace { VK_FRONT_FACE_COUNTER_CLOCKWISE = 0, VK_FRONT_FACE_CLOCKWISE = 1, } VkFrontFace; ## Description • VK_FRONT_FACE_COUNTER_CLOCKWISE specifies that a triangle with positive area is considered front-facing. • VK_FRONT_FACE_CLOCKWISE specifies that a triangle with negative area is considered front-facing. Any triangle which is not front-facing is back-facing, including zero-area triangles.
# pub2002.bib @comment{{This file has been generated by bib2bib 1.94}} @comment{{Command line: /usr/bin/bib2bib --quiet -c 'not journal:"Discussions"' -c year=2002 -c $type="ARTICLE" -oc pub2002.txt -ob pub2002.bib lmdplaneto.link.bib}} @article{2002JGRE..107.5124N, author = {{Newman}, C.~E. and {Lewis}, S.~R. and {Read}, P.~L. and {Forget}, F. }, title = {{Modeling the Martian dust cycle 2. Multiannual radiatively active dust transport simulations}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Atmospheric Composition and Structure: Planetary atmospheres (5405, 5407, 5409, 5704, 5705, 5707), Meteorology and Atmospheric Dynamics: Planetary meteorology (5445, 5739), Planetary Sciences: Atmospheres-structure and dynamics, Atmospheric Composition and Structure: Aerosols and particles (0345, 4801), Planetary Sciences: Meteorology (3346),}, year = 2002, volume = 107, eid = {5124}, pages = {5124}, abstract = {{Multiannual dust transport simulations have been performed using a Mars general circulation model containing a dust transport scheme which responds to changes in the atmospheric state. If the dust transport is radiatively active,'' the atmospheric state also responds to changes in the dust distribution. This paper examines the suspended dust distribution obtained using different lifting parameterizations, including an analysis of dust storms produced spontaneously during these simulations. The lifting mechanisms selected are lifting by (1) near-surface wind stress and (2) convective vortices known as dust devils. Each mechanism is separated into two types of parameterization: threshold-sensitive and -insensitive. The latter produce largely unrealistic annual dust cycles and storms, and no significant interannual variability. The threshold-sensitive parameterizations produce more realistic annual and interannual behavior, as well as storms with similarities to observed events, thus providing insight into how real Martian dust storms may develop. Simulations for which dust devil lifting dominates are too dusty during northern summer. This suggests either that a removal mechanism (such as dust scavenging by water ice) reduces opacities at this time or that dust devils are not the primary mechanism for storm production. Simulations for which near-surface wind stress lifting dominates produce the observed low opacities during northern spring/summer, yet appear unable to produce realistic global storms without storm decay being prevented by the occurrence of large-scale positive feedbacks on further lifting. Simulated dust levels are generally linked closely to the seasonal state of the atmosphere, and no simulation produces the observed amount of interannual variability. }}, doi = {10.1029/2002JE001920}, adsurl = {https://ui.adsabs.harvard.edu/abs/2002JGRE..107.5124N}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2002JGRE..107.5123N, author = {{Newman}, C.~E. and {Lewis}, S.~R. and {Read}, P.~L. and {Forget}, F. }, title = {{Modeling the Martian dust cycle, 1. Representations of dust transport processes}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Atmospheric Composition and Structure: Planetary atmospheres (5405, 5407, 5409, 5704, 5705, 5707), Meteorology and Atmospheric Dynamics: Planetary meteorology (5445, 5739), Planetary Sciences: Atmospheres-structure and dynamics, Atmospheric Composition and Structure: Aerosols and particles (0345, 4801), Planetary Sciences: Meteorology (3346),}, year = 2002, volume = 107, eid = {5123}, pages = {5123}, abstract = {{A dust transport scheme has been developed for a general circulation model of the Martian atmosphere. This enables radiatively active dust transport, with the atmospheric state responding to changes in the dust distribution via atmospheric heating, as well as dust transport being determined by atmospheric conditions. The scheme includes dust lifting, advection by model winds, atmospheric mixing, and gravitational sedimentation. Parameterizations of lifting initiated by (1) near-surface wind stress and (2) convective vortices known as dust devils are considered. Two parameterizations are defined for each mechanism and are first investigated offline using data previously output from the non-dust-transporting model. The threshold-insensitive parameterizations predict some lifting over most regions, varying smoothly in space and time. The threshold-sensitive parameterizations predict lifting only during extreme atmospheric conditions (such as exceptionally strong winds), so lifting is rarer and more confined to specific regions and times. Wind stress lifting is predicted to peak during southern summer, largely between latitudes 15{\deg} and 35{\deg}S, with maxima also in regions of strong slope winds or thermal contrast flows. These areas are consistent with observed storm onset regions and dark streak surface features. Dust devil lifting is also predicted to peak during southern summer, with a moderate peak during northern summer. The greatest dust devil lifting occurs in early afternoon, particularly in the Noachis, Arcadia/Amazonis, Sirenum, and Thaumasia regions. Radiatively active dust transport experiments reveal strong positive feedbacks on lifting by near-surface wind stress and negative feedbacks on lifting by dust devils. }}, doi = {10.1029/2002JE001910}, adsurl = {https://ui.adsabs.harvard.edu/abs/2002JGRE..107.5123N}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2002Icar..159..505L, author = {{Lebonnois}, S. and {Bakes}, E.~L.~O. and {McKay}, C.~P.}, title = {{Transition from Gaseous Compounds to Aerosols in Titan's Atmosphere}}, journal = {\icarus}, year = 2002, volume = 159, pages = {505-517}, abstract = {{We investigate the chemical transition of simple molecules like C$_{2}$H$_{2}$and HCN into aerosol particles in the context of Titan's atmosphere. Experiments that synthesize analogs (tholins) for these aerosols can help illuminate and constrain these polymerization mechanisms. Using information available from these experiments, we suggest chemical pathways that can link simple molecules to macromolecules, which will be the precursors to aerosol particles: polymers of acetylene and cyanoacetylene, polycyclic aromatics, polymers of HCN and other nitriles, and polyynes. Although our goal here is not to build a detailed kinetic model for this transition, we propose parameterizations to estimate the production rates of these macromolecules, their C/N and C/H ratios, and the loss of parent molecules (C$_{2}$H$_{2}$, HCN, HC$_{3}$N and other nitriles, and C$_{6}$H$_{6}$) from the gas phase to the haze. We use a one-dimensional photochemical model of Titan's atmosphere to estimate the formation rate of precursor macromolecules. We find a production zone slightly lower than 200 km altitude with a total production rate of 4{\times}10$^{-14}$g cm$^{-2}$s$^{-1}$and a C/N{\sime}4. These results are compared with experimental data, and to microphysical model requirements. The Cassini/Huygens mission will bring a detailed picture of the haze distribution and properties, which will be a great challenge for our understanding of these chemical processes. }}, doi = {10.1006/icar.2002.6943}, adsurl = {https://ui.adsabs.harvard.edu/abs/2002Icar..159..505L}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2002JGRE..107.5055V, author = {{Van den Acker}, E. and {Van Hoolst}, T. and {de Viron}, O. and {Defraigne}, P. and {Forget}, F. and {Hourdin}, F. and {Dehant}, V. }, title = {{Influence of the seasonal winds and the CO$_{2}$mass exchange between atmosphere and polar caps on Mars' rotation}}, journal = {Journal of Geophysical Research (Planets)}, keywords = {Planetary Sciences: Orbital and rotational dynamics, Planetary Sciences: Interiors (8147), Planetary Sciences: Atmospheres-structure and dynamics, Planetary Sciences: Polar regions,}, year = 2002, volume = 107, eid = {5055}, pages = {5055}, abstract = {{The Martian atmosphere and the CO$_{2}\$ polar ice caps exchange mass. This exchange, together with the atmospheric response to solar heating, induces variations of the rotation of Mars. Using the angular momentum budget equation of the system solid-Mars-atmosphere-polar ice caps, the variations of Mars' rotation can be deduced from the variations of the angular momentum of the superficial layer; this later is associated with the winds, that is, the motion term, and with the mass redistribution, that is, the matter term. For the mean'' Martian atmosphere, without global dust storms, total amplitudes of 10 cm on the surface are obtained for both the annual and semiannual polar motion excited by the atmosphere and ice caps. The atmospheric pressure variations are the dominant contribution to these amplitudes. Length-of-day (lod) variations have amplitudes of 0.253 ms for the annual signal and of 0.246 ms for the semiannual signal. The lod variations are mainly associated with changes in the atmospheric contribution to the mass term, partly compensated by the polar ice cap contribution. We computed lod variations and polar motion for three scenarios having different atmospheric dust contents. The differences between the three sets of results for lod variations are about one order of magnitude larger than the expected accuracy of the NEtlander Ionosphere and Geodesy Experiment (NEIGE) for lod. It will thus be possible to constrain the global atmospheric circulation models from the NEIGE measurements. }}, doi = {10.1029/2000JE001539}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2002Sci...295..110C, author = {{Costard}, F. and {Forget}, F. and {Mangold}, N. and {Peulvast}, J.~P. }, title = {{Formation of Recent Martian Debris Flows by Melting of Near-Surface Ground Ice at High Obliquity}}, journal = {Science}, year = 2002, volume = 295, pages = {110-113}, abstract = {{The observation of small gullies associated with recent surface runoff on Mars has renewed the question of liquid water stability at the surface of Mars. The gullies could be formed by groundwater seepage from underground aquifers; however, observations of gullies originating from isolated peaks and dune crests question this scenario. We show that these landforms may result from the melting of water ice in the top few meters of the martian subsurface at high obliquity. Our conclusions are based on the analogy between the martian gullies and terrestrial debris flows observed in Greenland and numerical simulations that show that above-freezing temperatures can occur at high obliquities in the near surface of Mars, and that such temperatures are only predicted at latitudes and for slope orientations corresponding to where the gullies have been observed on Mars. }}, doi = {10.1126/science.295.5552.110}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @article{2002AdSpR..29..175M, author = {{Markiewicz}, W.~J. and {Keller}, H.~U. and {Thomas}, N. and {Titov}, D. and {Forget}, F.}, title = {{Optical properties of the Martian aerosols in the visible spectral range}}, journal = {Advances in Space Research}, year = 2002, volume = 29, pages = {175-181}, abstract = {{Imager for Mars Pathfinder (IMP) obtained data of sky brightness as a function of the scattering angle, wavelength, time of day and Sol. This data set is fitted with model calculations to extract the size distribution, shape and the refractive index of the aerosols suspended in the atmosphere. The inferred optical parameters are discussed in context of diurnal variations and compared to those derived from Viking Landers cameras and Phobos KRFM radiometer data. The effects of the scattering and absorption of the solar radiation by the atmospheric aerosols are discussed in terms of their influence on the spectrophotometry of the Martian surface. }}, doi = {10.1016/S0273-1177(01)00567-1},
User slow student - MathOverflow most recent 30 from http://mathoverflow.net 2013-05-23T13:47:41Z http://mathoverflow.net/feeds/user/12774 http://www.creativecommons.org/licenses/by-nc/2.5/rdf http://mathoverflow.net/questions/54550/the-third-axiom-in-the-definition-of-infinite-dimensional-vector-bundles-why The third axiom in the definition of (infinite-dimensional) vector bundles: why? slow student 2011-02-06T18:56:43Z 2012-07-21T12:39:21Z <p>Serge Lang's <em>Differential and Riemannian Manifolds</em> is a no doubt the best available reference for the theory of not-necessarily-finite-dimensional differential manifolds, but unfortunately it suffers the defect of containing no exercises and few examples. This makes it difficult to learn the subject from this book, especially if one is say a graduate student who is also still in the process of learning functional analysis.</p> <p>One place where an example would really have been helpful is in the context of the definition of vector bundle (pp. 40-41), which involves three axioms that Lang labels VB1 - VB3. The third one, VB3, states that, in coordinate overlaps, the mapping of points of the base space into the automorphisms of the fibers induced by coordinate changes should be a morphism. As Lang notes, this axiom is redundant in the finite-dimensional case because of the following result (p.42): </p> <p><strong>Proposition 1.1.</strong> Let $\mathbf{E}$, $\mathbf{F}$ be finite-dimensional vector spaces. Let $U$ be open in some Banach space. Let $f: U \times \mathbf{E} \to \mathbf{F}$ be a morphism such that for each $x \in U$, the map $f_x : \mathbf{E} \to \mathbf{F}$ given by $f_x(v) = f(x,v)$ is a linear map. Then the map of $U$ into $L(\mathbf{E},\mathbf{F})$ given by $x \mapsto f_x$ is a morphism.</p> <p>However, this result is apparently false in the infinite-dimensional case. The problem is that Lang does not provide an example showing this; and nor does he discuss why smoothness of the map from $U$ into $L(\mathbf{E},\mathbf{F})$ (or, in the specific case of interest, from $U_i \cap U_j$ into $Laut(\mathbf(E))$ is necessary or convenient for whatever purposes such infinite-dimensional bundles are used for.</p> <p>So if I may ask: what would be a counterexample to the above proposition in the infinite-dimensional case? Even more to the point, where does one <em>go looking</em> for such a counterexample? Can I take $U = \mathbf{F} = \mathbb{R}$, and $\mathbf{E} = \ell_2$? Can we make even <em>continuity</em> fail, i.e. is VB3 necessary even for $C^0$ - manifolds? I know we can't make $f$ <em>bilinear</em>, since $L^2(\mathbf{E}, \mathbf{F}; \mathbf{G}) \cong L(\mathbf{E}, L(\mathbf{F},\mathbf{G}))$ -- but this is what makes the question mysterious to me, because my understanding is that "continuity failures" in infinite dimensions arise from non-convergence of sequences (so that you can't just "write everything in a matrix and see that the entries are continuous/smooth"), in which case you ought to be able to exhibit the phenomenon in the simplest case of (bi)linear maps; but the aforementioned isomorphism blocks this. So why does a fundamental difference between finite- and infinite- dimensional spaces suddenly appear when we switch from linear to nonlinear maps? Why doesn't the fact that $f$ is a two-argument morphism provide bounds that would force $x \mapsto f_x$ to be a morphism as well, just like in the bilinear case?</p> <p>Also, why can't we just "do without" Lang's VB3 in the case of infinite-dimensional manifolds?</p> http://mathoverflow.net/questions/54550/the-third-axiom-in-the-definition-of-infinite-dimensional-vector-bundles-why/54706#54706 Comment by slow student slow student 2011-02-08T19:40:42Z 2011-02-08T19:40:42Z I'm afraid I don't see where you used VB3. For the dual bundle to make sense, you need $u \mapsto f(p)^*(u) = u\circ f(p)$ to be continuous, certainly -- but this follows from the continuity of $f(p)$, right? Where do you need continuity/smoothness of $p \mapsto f(p)$?
# Homework Solution: The next 7 questions pertain to a general, normalized floating-point system G(B, p,… The next 7 questions pertain to a general, normalized floating-point system G(B, p, m,M). Express answers in terms of B, p, m, and M, as needed. You may assume that m is negative and M is positive, and that M greaterthanorequalto p. Show your work where needed (you can assume as given anything from the slides). What is the smallest positive number in G? What is the largest positive number in G? What is the spacing between the numbers of G in the range [B^e, B^e+1)? How many numbers of G are there in the interval [B^e, B^e+l)? How many positive numbers are there in G? What is the smallest positive integer not representable in G? The present 7 questions pertain to a unconcealed, normalized floating-point rule G(B, p, m,M). Express counterparts in stipulations of B, p, m, and M, as needed. You may feign that m is denying and M is unconditional, and that M greaterthanorequalto p. Show your labor where needed (you can feign as absorbed everything from the slides). What is the lowest unconditional sum in G? What is the largest unconditional sum in G? What is the spacing among the sums of G in the collocate [B^e, B^e+1)? How multifarious sums of G are there in the intermission [B^e, B^e+l)? How multifarious unconditional sums are there in G? What is the lowest unconditional integer not attributable attributable attributable attributable attributable attributable representable in G? ## Expert Counterpart The normalized floating-point sums rule G is a 4-tuple (BpmM) where • B is the deep of the rule, • p is the accuracy of the rule to p numbers, • m is the lowest propounder representable in the rule, • and M is the largest propounder used in the rule. 6. Lowest unconditional FP sum in G must keep a 1 as the requisite digit and 0 control the fostering digits of the significand, and the lowest practicable rate control the propounder, which is resembling to $B^{m}$. 7. Largest floating-point sum  must keep B − 1 as the rate control each digit of the significand and the largest practicable rate control the propounder,which is resembling to $(1-B^{-p})(B^{M+1})$ . 8. The sums among $\[B^{e},B^{e+1})$ are resemblingly divided by $B^{-p}$ units. 9. Total sums among intermission $\[B^{e},B^{e+1})$ is absorbed by $\frac{B^{e+1}-B^{e}}{B^{-p}} = B^{e+p}*(B-1)$ 10.Total +ve sums among in G are $2*(B-1)(B^{p-1})(M-m+1)+1$
## 2012-09-17 ### A few words regarding the CFA Level I Currently, I've been studying for the CFA. While the material, at least for Level I, is not hard to grasp, the sheer breadth of topics is daunting. It seems as though if you put in the time you'll grasp it. The trick is for you to 1) forget your social life, 2) Get comfy with your calculator, 3) Do all EOCs, Blue Boxes, QBank questions over and over again (there is no point only reading...). For the past month and half in which I went through the first reading, I have not been able to enjoy a day out (and I don't see myself relaxing anytime before December). Well, I guess this is what they call delayed gratification. While I don't intend to inundate you with obscure trivia from the CFAI curriculum, I hope after I sit for the exam in December, I'll have the chance to write a few R scripts for the topics I found interesting like portfolio management, equity, and FI. Till then though, I guess I'll be writing about whatever catches my fancy. On an aside, It seems this blog is in the nether reaches of the interwebz. So, if you are a human reading this: HELLO! PS: Some weekend I'll sit down and finish the first series of R posts I intended to do. ## 2012-09-15 ### A Useful Function for Sweave Users: Sweatex() This function is courtesy of AC Davison at EPFL. Here you go: options(width=65, digits=7) options(show.signif.stars=FALSE) # ps.options(horizontal=FALSE) set.seed(1234) Sweatex <- function(filename,extension='Rnw',command='latex',silent=FALSE,preview=TRUE,bibtex=FALSE) { #check if latex path is present, otherwise adds it to current path for the GUI latex.path <- dirname(options('latexcmd')[[1]]) path <- as.character(Sys.getenv("PATH")) if (regexpr(latex.path,path)==-1) {Sys.putenv("PATH"= paste(path,":",latex.path, sep=""))} if (command=='latex') command='simpdftex latex --maxpfb' extension<-paste('.',extension,sep='') Sweave(paste(filename,extension,sep='')) if (bibtex) { # this does latex+bibtex+latex (if you need latex one more time, uncomment last row) system(paste(command,' ',filename,sep=''),intern=silent) system(paste('bibtex',' ',filename,sep=''),intern=silent) system(paste(command,' ',filename,sep=''),intern=silent) system(paste(command,' ',filename,sep=''),intern=silent) } else system(paste(command,' ',filename,sep=''),intern=silent) if (preview) { system(paste(options('pdfviewer')[[1]],' ',filename,'.pdf',sep='')) } } ## 2012-09-14 ### QE3 and ESM Well, this week has been packed with "good" news. I say "good" because we can always see the negative consequences of policy decisions. Yet in this case, I hope the positive out weighs the negative. %% \Qrating = Automatically create a rating scale with NUM steps, like %% this: 0--0--0--0--0. \newcounter{qr} \newcommand{\Qrating}[1]{\QO\forloop{qr}{1}{\value{qr} < #1}{---\QO}} %% \Qline = Again, this is very simple. It helps setting the line %% thickness globally. Used both by direct call and by \Qlines. \newcommand{\Qline}[1]{\rule{#1}{0.6pt}} %% \Qlines = Insert NUM lines with width=\linewith. You can change the %% \vskip value to adjust the spacing. \newcounter{ql} \newcommand{\Qlines}[1]{\forloop{ql}{0}{\value{ql}<#1}{\vskip0em\Qline{\linewidth}}} %% \Qlist = This is an environment very similar to itemize but with %% \QO in front of each list item. Useful for classical multiple %% choice. Change leftmargin and topsep accourding to your taste. \newenvironment{Qlist}{% \renewcommand{\labelitemi}{\QO} \begin{itemize}[leftmargin=1.5em,topsep=-.5em] }{% \end{itemize} } %% \Qtab = A "tabulator simulation". The first argument is the %% distance from the left margin. The second argument is content which %% is indented within the current row. \newlength{\qt} \newcommand{\Qtab}[2]{ \setlength{\qt}{\linewidth} \hfill\parbox[t]{\qt}{\raggedright #2} } %% \Qitem = Item with automatic numbering. The first optional argument %% can be used to create sub-items like 2a, 2b, 2c, ... The item %% number is increased if the first argument is omitted or equals 'a'. %% You will have to adjust this if you prefer a different numbering %% scheme. Adjust topsep and leftmargin as needed. \newcounter{itemnummer} \newcommand{\Qitem}[2][]{% #1 optional, #2 notwendig \ifthenelse{\equal{#1}{}}{\stepcounter{itemnummer}}{} \ifthenelse{\equal{#1}{a}}{\stepcounter{itemnummer}}{} \begin{enumerate}[topsep=2pt,leftmargin=2.8em] \item[\textbf{\arabic{itemnummer}#1.}] #2 \end{enumerate} } %% \QItem = Like \Qitem but with alternating background color. This %% might be error prone as I hard-coded some lengths (-5.25pt and %% -3pt)! I do not yet understand why I need them. \definecolor{bgodd}{rgb}{0.8,0.8,0.8} \definecolor{bgeven}{rgb}{0.9,0.9,0.9} \newcounter{itemoddeven} \newlength{\gb} \newcommand{\QItem}[2][]{% #1 optional, #2 notwendig \setlength{\gb}{\linewidth} \ifthenelse{\equal{\value{itemoddeven}}{0}}{% \noindent\colorbox{bgeven}{\hskip-3pt\begin{minipage}{\gb}\Qitem[#1]{#2}\end{minipage}}% \stepcounter{itemoddeven}% }{% \noindent\colorbox{bgodd}{\hskip-3pt\begin{minipage}{\gb}\Qitem[#1]{#2}\end{minipage}}% \setcounter{itemoddeven}{0}% } } %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% %% End of questionnaire command definitions %% %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \begin{document} \begin{center} \textbf{\huge OTDI Transplant Questionnaire} \end{center}\vskip1em \noindent This form is voluntary. You may ignore it, complete parts of it, or fill it out fully. It is inteded solely for assessing professional opinion with regard to organ donation in Egypt. \\ \Qitem{ \Qq{Professional speciality}: \Qline{8cm} } \Qitem{ \Qq{Age:} \Qline{1.5cm}} \section*{Questionnaire} %%\Qitem{ \Qq{Rate your knowledge of the criteria of brain death diagnosis? (i.e.: testing for brain stem reflexes, methods to demonstrate absence of breathing, etc.)} \Qitem{ \Qq{Rate your knowledge of the criteria of brain death diagnosis. (i.e., testing for brain stem reflexes, methods to demonstrate absence of breathing, etc.)} \\ \Qtab{3cm}{no prior knowledge \Qrating{5} expert knowledge}} \Qitem{ \Qq{Are you aware of the differences between the diagnosis of brain stem death and whole brain death?} \Qtab{1.5cm}{\QO{} Yes \hskip0.5cm \QO{} No}} \Qitem{ \Qq{Do you diagnose brain death in your clinical practice?} \Qtab{5.5cm}{\QO{} Yes \hskip0.5cm \QO{} No}} \Qitem{ \Qq{If yes, how many do you diagnose per month?}\hskip0.5cm\Qline{1.5cm}} \Qitem{ \Qq{Are you aware of current laws regarding organ transplants in Egypt (e.g., 2010:leg.5)?} \Qtab{5.5cm}{\QO{} Yes \hskip0.5cm \QO{} No}} \Qitem{ \Qq{Do you think an Egyptian organ transplant system is feasible in the near future?} \Qtab{5.5cm}{\QO{} Yes \hskip0.5cm \QO{} No}} \Qitem{ \Qq{In your professional opinion, what is the greatest obstacle for the development of transplant network in Egypt? (Check all that apply)} \begin{Qlist} \item Poverty \item Lack of education \item Religious beliefs \item Technical limitations % \item Organizational limitations \item Other: \Qline{4cm} \end{Qlist} } \Qitem{ \Qq{According to your personal religious beliefs, do you support organ donation and transplantation?} \Qtab{2cm}{\QO{} Yes \hskip0.5cm \QO{} No \hskip0.5cm \QO{} I don't know}} \Qitem{ \Qq{How would you rate your level of interest in working with a transplant team?} \\ \Qtab{3cm}{none \Qrating{5} eager}} \Qitem{ \Qq{Are you willing to receive tranplant treatment if circumstance warranted?} \Qtab{5.5cm}{\QO{} Yes \hskip0.5cm \QO{} No}} \Qitem{ \Qq{How would you rate your attitude towards donating your own organs?} \\ \Qtab{3cm}{never \Qrating{5} under any circumstances}} \Qitem{ \Qq{According to your professional opinion, would you advise individuals to include organ donation in their living will?} \Qtab{5.5cm}{\QO{} Yes \hskip0.5cm \QO{} No}} \Qitem{ \Qq{In your personal opionion, who would be an ideal candidate for organ donation? (Check all that apply)} \begin{Qlist} \item Live Donor \item Brain dead individual on supportive treatment \item Other: \Qline{4cm} \end{Qlist} } If you are interested in receiving updates in the future regarding OTDI's campaign, please fill the form below: \\ \\ \bf{Name:} \Qline{5.5cm}} \\ \bf{Phone:} \Qline{5.5cm}} \\ \bf{Email:} \Qline{5.5cm}} \newpage %\textit{Organ Transplant and Donation Initiative (OTDI) is a non-profit startup aimed at building scientific and community dialog in Egypt. Started by a group of young physicians and professionals who are keen on building both policy and technical infrastructure needed for a broad ranged transplant network in Egypt, OTDI's mission is to provide a continuous initiative towards the advocacy of organ transplantation and donation; aimed at both increasing the awareness of the general public and offering up to date scientific opportunities for those involved in the field.} %Organ Transplant and Donation Initiative (OTDI) is a non-profit startup aimed at building scientific and community dialog in Egypt. Started by a group of young physicians and professionals who are keen on building both policy and technical infrastructure needed for a broad ranged transplant network in Egypt, OTDI's mission is to provide a continuous initiative towards the advocacy of organ transplantation and donation; aimed at both increasing the awareness of the general public and offering up to date scientific opportunities for those involved in the field. \minisec{What is OTDI?} \textit{Started by a group of young physicians and professionals, the Organ Transplantation and Donation Initiative (OTDI) is a non-profit initiative transplantation in Egypt.} \minisec{Our Vision} \textit{Save the lives of thousands of Egyptians through organ donation and transplantation.} \minisec{Our Mission} \textit{Provide a continuous initiative towards the advocacy of organ transplantation and donation; aiming at both increasing the awareness of the general public and offering up to date scientific opportunities for those involved in the field.} \minisec{Our Goals} \begin{itemize}\itemsep-1pt \item{\textit{\textbf{Advocate for organ donation and the new transplant law}}} \item{\textit{\textbf{Combat the stigma accompanying this controversial issue}}} \item{\textit{\textbf{Provide opportunities for interns and residents in the transplant field}}} \item{\textit{\textbf{Raise funds for the cause}}} \item{\textit{\textbf{Set up a sustainable system that will continue serving the cause}}} \end{itemize}
Newey–West estimator A Newey–West estimator is used in statistics and econometrics to provide an estimate of the covariance matrix of the parameters of a regression-type model when this model is applied in situations where the standard assumptions of regression analysis do not apply.[1] It was devised by Whitney K. Newey and Kenneth D. West in 1987, although there are a number of later variants.[2][3][4][5] The estimator is used to try to overcome autocorrelation, or correlation, and heteroskedasticity in the error terms in the models. This is often used to correct the effects of correlation in the error terms in regressions applied to time series data. The problem in autocorrelation, often found in time series data, is that the error terms are correlated over time. This can be demonstrated in $Q*$, a matrix of sums of squares and cross products that involves $\sigma_{(ij)}$ and the rows of $X$. The least squares estimator $b$ is a consistent estimator of $\beta$. This implies that the least squares residuals $e_i$ are "point-wise" consistent estimators of their population counterparts $E_i$. The general approach, then, will be to use $X$ and $e$ to devise an estimator of $Q*$.[6] This means that as the time between error terms increases, the correlation between the error terms decreases. The estimator thus can be used to improve the ordinary least squares (OLS) regression when the variables have heteroskedasticity or autocorrelation. $w_\ell=1 - \frac{\ell}{L+1}$ References 1. ^ 2. ^ Newey, Whitney K; West, Kenneth D (1987). "A Simple, Positive Semi-definite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix". Econometrica 55 (3): 703–708. doi:10.2307/1913610. JSTOR 1913610. 3. ^ Andrews, Donald W. K. (1991). "Heteroskedasticity and autocorrelation consistent covariance matrix estimation". Econometrica 59 (3): 817–858. doi:10.2307/2938229. JSTOR 2938229. 4. ^ Newey, Whitney K.; West, Kenneth D. (1994). "Automatic lag selection in covariance matrix estimation". Review of Economic Studies 61 (4): 631–654. doi:10.2307/2297912. JSTOR 2297912. 5. ^ Smith, Richard J. (2005). "Automatic positive semidefinite HAC covariance matrix and GMM estimation". Econometric Theory 21 (1): 158–170. doi:10.1017/S0266466605050103. 6. ^ Greene, William H. (1997). Econometric Analysis (3rd ed.).
# Custom prefix in footnotes with biblatex I have another question about customizing footnote-citations with biblatex. For common citations of facts or observations from some article, I want to add a prefix to my footnotes. It's meant to distinguish non-literal literal quotes. Only for the rare cases of actually verbatim quoting someone, it is not required. 95% of my citations use the "See:"-prefix. (The "prefix" I'm referring to is not the usual name-prefix like "von", "de", "van", etc.) Wayne (2012), p. 123. I'd like to achieve See: Wayne (2012), p. 123. This works already with the prenote-syntax (see the MWE), but could I also customize the renewbibmacro to include the prefix there? Otherwise I'd a) have to crawl through the whole document to change every citation manually and b) would have to do it again if I'd ever want to change it. For verbatim quotes I could just introduce a new command \citelit with \let\citelit\cite, I guess? Minimum working example: \begin{filecontents}{_references.bib} @article{wayne12a, Author={Wayne, B.}, Title={A survey about the social inequalities in Gotham City}, Journal={International Journal of Comic Science}, Year={2012}, Volume={7}, Number={4}, Pages={35--45}} @article{joker10b, Author={Joker, T.}, Title={Why so serious?}, Journal={Journal of Vulgarity}, Year={2010}, Volume={38},Number={5}, Pages={103--116}} \end{filecontents} \documentclass[a4paper]{article} \usepackage[utf8]{inputenc} \usepackage[bibstyle=authoryear, citestyle=authoryear, autocite=footnote,% dashed=false, firstinits=true]{biblatex} \let\cite\autocite% \cite -> \autocite \renewbibmacro*{cite:labelyear+extrayear}{% Round parentheses around the year \iffieldundef{labelyear} % Source: tex.stackexchange.com/a/30822/10434 {} {\printtext[bibhyperref]{% \printtext[parens]{% \printfield{labelyear}% \printfield{extrayear}}}}} \begin{document} The CEO of Wayne Enterprises praised Batman's achievements for improving the living conditions in Gotham City.~\cite[See:][31]{wayne12a} In a recent publication, however, The Joker described Batman as a little boy in a playsuit, crying for mummy and daddy.''~\cite[109]{joker10b} \end{document} • I am posting this as a comment since I won't be able to provide you with the actual (tested) solution at the moment. You could modify the .cbx file (after copying it to your local directory tree) or declare a new cite command (search for \DeclareCiteCommand in the file) and insert the prefix before \usebibmacro{prenote}. And don't hardcode "See:" – use something like \bibstring{see}\addcolon\addspace. – ienissei Jan 27 '12 at 14:02 For future reference, here is another solution that allows you to overwrite the default setting for the citations with an automatic "See:" prenote. As discussed with lockstep, this functionality is not absolutely needed as an answer to the OP's question, but since \citelitand \cite have a semantic meaning in this case (i.e. direct and indirect citation), it may be relevant to have commands that reflect it. Furthermore, this solution is more customisable than a \newcommand, and could be used by other people who want to achieve a similar result. Here is what you can write in the preamble: % Create a cite command that uses the default autocite=footnote behaviour \DeclareCiteCommand{\seecite}[\iffootnote\mkbibparens\mkbibfootnote] {% Replace \usebibmacro{prenote} by its demfinition and modify it \iffieldundef{prenote} \setunit{\prenotedelim}}% {\printfield{prenote}% \setunit{\prenotedelim}}% } {\usebibmacro{citeindex}% \usebibmacro{cite}} {\multicitedelim} {\usebibmacro{postnote}} As suggested in the original question, we use \citelit and \cite: \let\citelit\autocite \let\cite\seecite Now, we can write this in the document: \cite[92]{wayne12a} \citelit[127]{joker10b} • Your solution probably extends more easily to qualified citation lists. For this case you'd use multiprenote instead of prenote. – Audrey Jan 28 '12 at 17:27 I would simply define a macro \precite that builds on \autocite but always uses its second optional argument. iennisei's \bibstring tip is also sound -- note, however, that adding \addspace to the new macro's definition would result in a double space after the prefix. \documentclass{article} \usepackage[style=authoryear,autocite=footnote]{biblatex} \NewBibliographyString{see} \DefineBibliographyStrings{english}{% see = {See}, } \textheight=100pt% just for the example \usepackage{filecontents} \begin{filecontents}{\jobname.bib} @article{wayne12a, Author={Wayne, B.}, Title={A survey about the social inequalities in Gotham City}, Journal={International Journal of Comic Science}, Year={2012}, Volume={7}, Number={4}, Pages={35--45}} @article{joker10b, Author={Joker, T.}, Title={Why so serious?}, Journal={Journal of Vulgarity}, Year={2010}, Volume={38},Number={5}, Pages={103--116}} \end{filecontents} \begin{document} The CEO of Wayne Enterprises praised Batman's achievements for improving the living conditions in Gotham City \precite[31]{wayne12a}. In a recent publication, however, The Joker described Batman as a little boy in a playsuit, crying for mummy and daddy'' \autocite[109]{joker10b}. \end{document} • Thank you for mentioning the extra space, I wrote this off the top of my head, and BibLaTeX has a real knack for adding extra spaces. Your solution is much easier than mine – but it prevents the use of the precite optional argument (although it doesn't seem needed in this particular case). – ienissei Jan 27 '12 at 14:19 • This is a really clean solution. I altered it a bit and used \renewcommand*{\cite}[2][]{\autocite[\bibstring{see}\addcolon][#1]{#2}} instead. Now I only need to change the few citations using a precite (as ienissei mentioned already, this argument is not valid anymore) and the verbatim citations should become \autocite. Thanks a lot, lockstep! – dhst Jan 27 '12 at 14:49 • @ienissei I agree that using \DeclareCiteCommand is a cleaner solution for most cases. I used \newcommand because I didn't think that a second precite argument is needed in case of the OP's request. – lockstep Jan 27 '12 at 15:38
# Find root of function defined via NIntegrate I have a function defined as rpd[r_, OptionsPattern[]] := Module[ {A = OptionValue[A]}, NIntegrate[ q/(q^2 + r) E^(-q^2/2)* Cos[3*q] BesselJ[1, A*q], {q, 0, \[Infinity]}]]; Options[rpd] = {A -> 1}; and plotting it for r \Elem {0,0.001} I see it crosses zero for some value of r (call it r0) which depends on the parameter a<1, like this: I'd like to plot a graph of r0(a), so I'm trying to produce a table rpdA = Table[{a, FindRoot[rpd[r, A -> a], {r, 0.000001}]}, {a, 0.1, 1, 0.1}]; but I keep getting instead a lot of error messages, namely NIntegrate::ncvb and NIntegrate::slwcon. Could you help me sort this out? I guess this works: • Subdivide the interval at 10 to separate the significant oscillatory part from the superexponential decay part. • Increase WorkingPrecision to handle the round-off error from the oscillatory part • Use the secant method in FindRoot to prevent bad choices for r in trying to numerically approximate the gradient. • ?NumericQ protection for rpd[]. ClearAll[rpd]; rpd[r0_?NumericQ, OptionsPattern[]] := Module[{A = SetPrecision[OptionValue[A], 32], r = SetPrecision[r0, 32]}, NIntegrate[ q/(q^2 + r) E^(-q^2/2)*Cos[3*q] BesselJ[1, A*q], {q, 0, 10, \[Infinity]}, MaxRecursion -> 20, WorkingPrecision -> 32, PrecisionGoal -> 6] /; NumericQ[A]]; Options[rpd] = {A -> 1}; rpdA = Table[{a, FindRoot[rpd[r, A -> a], {r, 0.000001, 0.0000001}]}, {a, 0.1, 1, 0.1}]; rpdA (* {{0.1, {r -> 0.0000846907}}, {0.2, {r -> 0.0000899893}}, {0.3, {r -> 0.0000993154}}, {0.4, {r -> 0.000113453}}, {0.5, {r -> 0.000133586}}, {0.6, {r -> 0.000161389}}, {0.7, {r -> 0.000199148}}, {0.8, {r -> 0.000249928}}, {0.9, {r -> 0.000317773}}, {1., {r -> 0.000407975}}} *) • I just add for future readers that pdA = Table[{a, Values@@FindRoot[rpd[r, A -> a], {r, 0.000001, 0.0000001}]}, {a, 0.1, 1, 0.1}]; produces precisely the table I was looking for, which can be plotted with ListPlot. Feb 16, 2021 at 12:29
## The Annals of Probability ### Dissipation and high disorder #### Abstract Given a field $\{B(x)\}_{x\in\mathbf{Z}^{d}}$ of independent standard Brownian motions, indexed by $\mathbf{Z}^{d}$, the generator of a suitable Markov process on $\mathbf{Z}^{d},\mathcal{G}$, and sufficiently nice function $\sigma:[0,\infty)\mapsto [0,\infty)$, we consider the influence of the parameter $\lambda$ on the behavior of the system, \begin{eqnarray*}\mathrm{d}u_{t}(x)&=&(\mathcal{G}u_{t})(x)\,\mathrm{d}t+\lambda\sigma(u_{t}(x))\,\mathrm{d}B_{t}(x)\qquad [t>0,\ x\in\mathbf{Z}^{d}],\\u_{0}(x)&=&c_{0}\delta_{0}(x).\end{eqnarray*} We show that for any $\lambda>0$ in dimensions one and two the total mass $\sum_{x\in\mathbf{Z}^{d}}u_{t}(x)$ converges to zero as $t\to\infty$ while for dimensions greater than two there is a phase transition point $\lambda_{c}\in(0,\infty)$ such that for $\lambda>\lambda_{c},\sum_{x\in \mathbf{Z}^{d}}u_{t}(x)\to0$ as $t\to\infty$ while for $\lambda<\lambda_{c},\sum_{x\in \mathbf{Z}^{d}}u_{t}(x)\not\to0$ as $t\to\infty$. #### Article information Source Ann. Probab., Volume 45, Number 1 (2017), 82-99. Dates Received: November 2014 Revised: June 2015 First available in Project Euclid: 26 January 2017 Permanent link to this document https://projecteuclid.org/euclid.aop/1485421329 Digital Object Identifier doi:10.1214/15-AOP1040 Mathematical Reviews number (MathSciNet) MR3601646 Zentralblatt MATH identifier 1382.60103 #### Citation Chen, Le; Cranston, Michael; Khoshnevisan, Davar; Kim, Kunwoo. Dissipation and high disorder. Ann. Probab. 45 (2017), no. 1, 82--99. doi:10.1214/15-AOP1040. https://projecteuclid.org/euclid.aop/1485421329 #### References • [1] Borodin, A. and Corwin, I. (2014). Moments and Lyapunov exponents for the parabolic Anderson model. Ann. Appl. Probab. 24 1172–1198. • [2] Carmona, P. and Hu, Y. (2006). Strong disorder implies strong localization for directed polymers in a random environment. ALEA Lat. Am. J. Probab. Math. Stat. 2 217–229. • [3] Carmona, R., Koralov, L. and Molchanov, S. (2001). Asymptotics for the almost sure Lyapunov exponent for the solution of the parabolic Anderson problem. Random Oper. Stoch. Equ. 9 77–86. • [4] Carmona, R., Viens, F. G. and Molchanov, S. A. (1996). Sharp upper bound on the almost-sure exponential behavior of a stochastic parabolic partial differential equation. Random Oper. Stoch. Equ. 4 43–49. • [5] Carmona, R. A. and Molchanov, S. A. (1994). Parabolic Anderson problem and intermittency. Mem. Amer. Math. Soc. 108 viii+125. • [6] Chung, K. L. and Fuchs, W. H. J. (1961). On the distribution of values of sums of random variables. Mem. Amer. Math. Soc. No. 6. 12 pp. • [7] Cox, J. T., Fleischmann, K. and Greven, A. (1996). Comparison of interacting diffusions and an application to their ergodic theory. Probab. Theory Related Fields 105 513–528. • [8] Cranston, M., Mountford, T. S. and Shiga, T. (2002). Lyapunov exponents for the parabolic Anderson model. Acta Math. Univ. Comenian. (N.S.) 71 163–188. • [9] Dalang, R. C. and Mueller, C. (2003). Some non-linear S.P.D.E.’s that are second order in time. Electron. J. Probab. 8 no. 1, 21 pp. (electronic). • [10] Dawson, D. A. and Perkins, E. (2012). Superprocesses at Saint-Flour. Springer, Heidelberg. • [11] Foondun, M. and Khoshnevisan, D. (2009). Intermittence and nonlinear parabolic stochastic partial differential equations. Electron. J. Probab. 14 548–568. • [12] Gärtner, J. and König, W. (2005). The parabolic Anderson model. In Interacting Stochastic Systems 153–179. Springer, Berlin. • [13] Georgiou, N., Joseph, M., Khoshnevisan, D. and Shiu, S.-Y. (2015). Semi-discrete semi-linear parabolic SPDEs. Ann. Appl. Probab. 25 2959–3006. • [14] Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. J. Amer. Statist. Assoc. 58 13–30. • [15] Liggett, T. M. (1985). Interacting Particle Systems. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 276. Springer, New York. • [16] Mueller, C. (1991). On the support of solutions to the heat equation with noise. Stochastics Stochastics Rep. 37 225–245. • [17] Mueller, C. (2009). Some tools and results for parabolic stochastic partial differential equations. In A Minicourse on Stochastic Partial Differential Equations. Lecture Notes in Math. 1962 111–144. Springer, Berlin. • [18] Mueller, C. and Nualart, D. (2008). Regularity of the density for the stochastic heat equation. Electron. J. Probab. 13 2248–2258. • [19] Mueller, C. and Tribe, R. (2004). A singular parabolic Anderson model. Electron. J. Probab. 9 98–144 (electronic). • [20] Shiga, T. (1992). Ergodic theorems and exponential decay of sample paths for certain interacting diffusion systems. Osaka J. Math. 29 789–807. • [21] Shiga, T. and Shimizu, A. (1980). Infinite-dimensional stochastic differential equations and their applications. J. Math. Kyoto Univ. 20 395–416. • [22] Spitzer, F. (1981). Infinite systems with locally interacting components. Ann. Probab. 9 349–364. • [23] Walsh, J. B. (1986). An introduction to stochastic partial differential equations. In École D’été de Probabilités de Saint-Flour, XIV—1984. Lecture Notes in Math. 1180 265–439. Springer, Berlin.
Article # Probabilistic landslide hazard assessment using homogeneous susceptible units (HSU) along a national highway corridor in the northern Himalayas, India (Impact Factor: 2.87). 09/2011; 8(3):293-308. DOI: 10.1007/s10346-011-0257-9 ABSTRACT The increased socio-economic significance of landslides has resulted in the application of statistical methods to assess their hazard, particularly at medium scales. These models evaluate where, when and what size landslides are expected. The method presented in this study evaluates the landslide hazard on the basis of homogenous susceptible units (HSU). HSU are derived from a landslide susceptibility map that is a combination of landslide occurrences and geo-environmental factors, using an automated segmentation procedure. To divide the landslide susceptibility map into HSU, we apply a region-growing segmentation algorithm that results in segments with statistically independent spatial probability values. Independence is tested using Moran’s I and a weighted variance method. For each HSU, we obtain the landslide frequency from the multi-temporal data. Temporal and size probabilities are calculated using a Poisson model and an inverse-gamma model, respectively. The methodology is tested in a landslide-prone national highway corridor in the northern Himalayas, India. Our study demonstrates that HSU can replace the commonly used terrain mapping units for combining three probabilities for landslide hazard assessment. A quantitative estimate of landslide hazard is obtained as a joint probability of landslide size, of landslide temporal occurrence for each HSU for different time periods and for different sizes. KeywordsLandslides–Hazard–HSU–Segmentation–Himalayas–India ### Full-text Available from: Alfred Stein • Source • "In addition, extensive human interference in hill slope areas for the construction of roads, urban expansion along the hill slopes, deforestation and rapid change in land use contribute to instability. This makes it difficult, if not impossible, to define a single methodology to identify and map landslides, to ascertain landslide hazards and to evaluate the associated risk (Guzzetti et al. 2005; Das et al. 2011). In this study, topography, geology, climate, vegetation and anthropogenic factors were selected based on expert knowledge, on the basis of field studies related to active landslides. " ##### Article: GIS-multicriteria decision analysis for landslide susceptibility mapping: Comparing three methods for the Urmia lake basin, Iran [Hide abstract] ABSTRACT: The GIS-multicriteria decision analysis (GIS-MCDA) technique is increasingly used for landslide hazard mapping and zonation. It enables the integration of different data layers with different levels of uncertainty. In this study, three different GIS-MCDA methods were applied to landslide susceptibility mapping for the Urmia lake basin in northwest Iran. Nine landslide causal factors were used, whereby parameters were extracted from an associated spatial database. These factors were evaluated, and then, the respective factor weight and class weight were assigned to each of the associated factors. The landslide susceptibility maps were produced based on weighted overly techniques including analytic hierarchy process (AHP), weighted linear combination (WLC) and ordered weighted average (OWA). An existing inventory of known landslides within the case study area was compared with the resulting susceptibility maps. Respectively, Dempster-Shafer Theory was used to carry out uncertainty analysis of GIS-MCDA results. Result of research indicated the AHP performed best in the landslide susceptibility map-ping closely followed by the OWA method while the WLC method delivered significantly poorer results. The resulting figures are generally very high for this area, but it could be proved that the choice of method significantly influences the results. Natural Hazards 01/2013; 2013(65):2105 – 2128. DOI:10.1007/s11069-012-0463-3 · 1.72 Impact Factor • Source • "Sendo a suscetibilidade a componente espacial da perigosidade, pode obter-se a localização do risco (não quantificado) através do seu cruzamento com os elementos expostos. Nestes elementos, destaca-se as estradas pela frequência com que são afetadas por movimentos de vertente, devendo-se a maioria dos casos ao corte e abertura de taludes (por vezes mal dimensionados) para a sua construção, fatores que põem em causa a sustentação do material que compõe a vertente (Highland e Bobrowsky, 2008; Das et al., 2011). Os movimentos de vertente, dependendo da sua magnitude, podem destruir ou interromper estas infraestruturas e, deste modo, originar diversos condicionalismos, ao interferir no tempo de reação dos meios operacionais (Meneses e Zêzere, 2012). " ##### Conference Paper: Simulação de Rotas de Emergência no Concelho de Tarouca em Função da Suscetibilidade e Risco de Movimentos de Vertente VI Congresso Nacional de Geomorfologia; 01/2013 • Source • "For estimation of the spatial probability of landslide hazards, various methods and models are successfully developed and used in the literature (Chacon et al. 2006; Guzzetti et al. 2006; Yao et al. 2008; Pradhan and Lee 2010; Pradhan et al. 2010; Yeon et al. 2010; Yilmaz 2010; Marjanovic et al. 2011; Oh and Pradhan 2011; Sezer et al. 2011; Althuwaynee et al. 2012; Ballabio and Sterlacchini 2012; Devkota et al. 2012; Lee et al. 2012; Pourghasemi et al. 2012a; 2012b; Xu et al. 2012; Zare et al. 2012; Tien Bui et al. 2012c; Pradhan 2010a, b, 2011a, b, 2012). However, few attempts have been carried out to estimate temporal probability of slope failure (Guzzetti et al. 2005; Jaiswal et al. 2010; Das et al. 2011). Thus, landslide hazard mapping is considerably challenging either due to incomplete dataset or unavailability of historical data in developing countries (Harp et al. 2009) such as in Vietnam. " ##### Article: Regional prediction of landslide hazard using probability analysis of intense rainfall in the Hoa Binh province, Vietnam [Hide abstract] ABSTRACT: The main objective of this study is to assess regional landslide hazards in the Hoa Binh province of Vietnam. A landslide inventory map was constructed from various sources with data mainly for a period of 21 years from 1990 to 2010. The historic inventory of these failures shows that rainfall is the main triggering factor in this region. The probability of the occurrence of episodes of rainfall and the rainfall threshold were deduced from records of rainfall for the aforementioned period. The rainfall threshold model was generated based on daily and cumulative values of antecedent rainfall of the landslide events. The result shows that 15-day antecedent rainfall gives the best fit for the existing landslides in the inventory. The rainfall threshold model was validated using the rainfall and landslide events that occurred in 2010 that were not considered in building the threshold model. The result was used for estimating temporal probability of a landslide to occur using a Poisson probability model. Prior to this work, five landslide susceptibility maps were constructed for the study area using support vector machines, logistic regression, evidential belief functions, Bayesian-regularized neural networks, and neuro-fuzzy models. These susceptibility maps provide information on the spatial prediction probability of landslide occurrence in the area. Finally, landslide hazard maps were generated by integrating the spatial and the temporal probability of landslide. A total of 15 specific landslide hazard maps were generated considering three time periods of 1, 3, and 5 years. Natural Hazards 03/2012; 66(2):1-24. DOI:10.1007/s11069-012-0510-0 · 1.72 Impact Factor
# Diversify Portfolios Using Custom Objective This example shows three techniques of asset diversification in a portfolio using the estimateCustomObjectivePortfolio function with a Portfolio object. The purpose of asset diversification is to balance the exposure of the portfolio to any given asset to reduce volatility over a period of time. Given the sensitivity of the minimum variance portfolio to the estimation of the covariance matrix, some practitioners have added diversification techniques to the portfolio selection with the hope of minimizing risk measures other than the variance measures such as turnover, maximum drawdown, and so on. This example applies these common diversification techniques: Additionally, this example demonstrates penalty methods that you can use to achieve different degrees of diversification. In those methods, you add a penalty term to the estimateCustomObjectivePortfolio function to balance the level of variance reduction and the diversification of the portfolio. ### Retrieve Market Data and Define Mean-Variance Portfolio Begin by loading and computing the expected returns and their covariance matrix. % Store returns and covariance mu = mean_return; Sigma = Correlation .* (stdDev_return * stdDev_return'); Define a mean-variance portfolio using a Portfolio object with default constraints to create a fully invested, long-only portfolio. % Create a mean-variance Portfolio object with default constraints p = Portfolio('AssetMean',mu,'AssetCovar',Sigma); p = setDefaultConstraints(p); One of the many features of the Portfolio object is that it can obtain the efficient frontier of the portfolio problem. The efficient frontier is computed by solving a series of optimization problems for which the return level of the portfolio, ${\mu }_{0}$, is modified to obtain different points on the efficient frontier. These problems are defined as $\begin{array}{l}\mathrm{min}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}\\ \mathrm{st}.\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}\sum _{\mathit{i}=1}^{\mathit{n}}{\mathit{x}}_{\mathit{i}}=1\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\mathit{x}}^{\mathit{T}}\mu \ge {\mu }_{0}\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}\mathit{x}\ge 0\end{array}$ The advantage of using the Portfolio object to compute the efficient frontier is that you can obtain without having to manually formulate and solve the multiple optimization problems shown above. plotFrontier(p); The Portfolio object can also compute the weights associated with the minimum variance portfolio, which is defined by the following problem. $\begin{array}{l}\mathrm{min}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}\\ \mathrm{st}.\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}\sum _{\mathit{i}=1}^{\mathit{n}}{\mathit{x}}_{\mathit{i}}=1\\ \text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}\mathit{x}\ge 0\end{array}$ The minimum variance weights is the benchmark against which all the weights of the diversification strategies are compared. wMinVar = estimateFrontierLimits(p,'min'); To learn more about the problems that you can solve with the Portfolio object, see When to Use Portfolio Objects Over Optimization Toolbox. ### Specify Diversification Techniques This section presents the three diversification methods. Each of the three diversification methods is associated with a diversification measure and that diversification measure defines a penalty term to achieve different diversification levels. The diversification, obtained by adding the penalty term to the objective function, ranges from the behavior achieved by the minimum variance portfolio to the behavior of the EW, ECR, and MDP, respectively. The default portfolio has only one equality constraint and a lower bound for the assets weights. The weights must be nonnegative and they must sum to 1. The feasible set is represented as$\mathit{X}$: $\mathit{X}=\left\{\mathit{x}\text{\hspace{0.17em}}|\text{\hspace{0.17em}}\mathit{x}\ge 0,\text{\hspace{0.17em}}\sum _{\mathit{i}=1}^{\mathit{n}}{\mathit{x}}_{\mathit{i}}=1\right\}$ ### Equally Weighted Portfolio One of the diversification measures is the Herfindahl-Hirschman (HH) index defined as: $\mathrm{HH}\left(\mathit{x}\right)=\sum _{\mathit{i}=1}^{\mathit{n}}{\mathit{x}}_{\mathit{i}}^{2}$ This index is minimized when the portfolio is equally weighted. The portfolios obtained from using this index as a penalty have weights that satisfy the portfolio constraints and that are more evenly weighted. The portfolio that minimizes the HH index is $\underset{\mathit{x}\in \mathit{X}}{\mathrm{min}}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\mathit{x}}^{\mathit{T}}\mathit{x}$. Since the constraints in $\mathit{X}$ are the default constraints, the solution of this problem is the equally weighted (EW) portfolio. If $\mathit{X}$ had extra constraints, the solution would be the portfolio that satisfies all the constraints and, at the same time, keeps the weights as equal as possible. Use the function handle HHObjFun for the Herfindahl-Hirschman (HH) index with the estimateCustomObjectivePortfolio function. % Maximize the HH diversification (by minimizing the HH index) HHObjFun = @(x) x'*x; % Solution that minimizes the HH index wHH = estimateCustomObjectivePortfolio(p,HHObjFun); The portfolio that minimizes the variance with the HH penalty is $\underset{\mathit{x}\in \mathit{X}}{\mathrm{min}}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}+{\lambda }_{\mathrm{HH}}{\mathit{x}}^{\mathit{T}}\mathit{x}$. Use the function handle HHMixObjFun for the HH penalty with the estimateCustomObjectivePortfolio function. % HH penalty parameter lambdaHH = 1e-2; % Variance + Herfindahl-Hirschman (HH) index HHMixObjFun = @(x) x'*p.AssetCovar*x + lambdaHH*(x'*x); % Solution that accounts for risk and HH diversification wHHMix = estimateCustomObjectivePortfolio(p,HHMixObjFun); Plot the weights distribution for the minimum variance portfolio, the equal weight portfolio, and the penalized strategy. % Plot different strategies plotAssetAllocationChanges(wMinVar,wHHMix,wHH) This plot shows how the penalized strategy returns weights that are between the minimum variance portfolio and the EW portfolio. In fact, choosing ${\lambda }_{\mathrm{HH}}=0$ returns the minimum variance solution, and as ${\lambda }_{\mathrm{HH}}\to \infty ,$ the solution approaches the EW portfolio. ### Most Diversified Portfolio The diversification index associated to the most diversified portfolio (MDP) is defined as $\mathrm{MDP}\left(\mathit{x}\right)=-\sum _{\mathit{i}=1}^{\mathit{n}}{\sigma }_{\mathit{i}}{\mathit{x}}_{\mathit{i}}$ where ${\sigma }_{\mathit{i}}$ represents the standard deviation of asset $\mathit{i}$. The MDP is the portfolio that maximizes the diversification ratio: $\phi \left(\mathit{x}\right)=\text{\hspace{0.17em}}\frac{{\mathit{x}}^{\mathit{T}}\sigma }{\sqrt{{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}}}$ The diversification ratio $\phi \left(\mathit{x}\right)$ is equal to 1 if the portfolio is fully invested in one asset or if all assets are perfectly correlated. For all other cases, $\phi \left(\mathit{x}\right)>1$. If $\phi \left(\mathit{x}\right)\approx 1$, there is no diversification, so the goal is to find the portolio that maximizes $\phi \left(\mathit{x}\right)$: $\underset{\mathit{x}\in \mathit{X}}{\mathrm{max}}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}\frac{{\sigma }^{\mathit{T}}\mathit{x}}{\sqrt{{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}}}$ Unlike the HH index, the MDP goal is not to obtain a portfolio whose weights are evenly distributed among all assets, but to obtain a portfolio whose selected (nonzero) assets have the same correlation to the portfolio as a whole. Use the function handle MDPObjFun for the most diversified portfolio (MDP) with the estimateCustomObjectivePortfolio function. % Maximize the diversification ratio sigma = sqrt(diag(p.AssetCovar)); MDPObjFun = @(x) (sigma'*x)/sqrt(x'*p.AssetCovar*x); % Solution of MDP wMDP = estimateCustomObjectivePortfolio(p,MDPObjFun, ... ObjectiveSense="maximize"); The following code shows that there exists a value ${\stackrel{ˆ}{\lambda }}_{\mathrm{MDP}}>0$ such that the MDP problem and the problem with its penalized version are equivalent.The portfolio that minimizes the variance with the MDP penalty is$\underset{\mathit{x}\in \mathit{X}}{\text{\hspace{0.17em}}\mathrm{min}}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}-{\lambda }_{\mathrm{MDP}}\text{\hspace{0.17em}}{\sigma }^{\mathit{T}}\mathit{x}$. Define an MDP penalty parameter and solve for MDP using the function handle MDPMixObjFun for the MDP penalty with the estimateCustomObjectivePortfolio function. % MDP penalty parameter lambdaMDP = 1e-2; % Variance + Most Diversified Portfolio (MDP) MDPMixObjFun = @(x) x'*p.AssetCovar*x - lambdaMDP*(sigma'*x); % Solution that accounts for risk and MDP diversification wMDPMix = estimateCustomObjectivePortfolio(p,MDPMixObjFun); Plot the weights distribution for the minimum variance portfolio, the MDP, and the penalized strategy. % Plot different strategies plotAssetAllocationChanges(wMinVar,wMDPMix,wMDP) In this plot, the penalized strategy weights are between the minimum variance portfolio and the MDP. This result is the same as with the HH penalty, where choosing ${\lambda }_{\mathrm{MDP}}=0$ returns the minimum variance solution and values of ${\lambda }_{\mathrm{MDP}}\in \left[0,{\stackrel{ˆ}{\lambda }}_{\mathrm{MDP}}\right]$ return asset weights that range from the minimum variance behavior to the MDP behavior. ### Equal Risk Contribution Portfolio The diversification index associated with the equal risk contribution (ERC) portfolio is defined as $\mathrm{ERC}\left(\mathit{x}\right)=-\sum _{\mathit{i}=1}^{\mathit{n}}\mathrm{ln}\left({\mathit{x}}_{\mathit{i}}\right)$ This index is related to a convex reformulation shown by Maillard [1] that computes the ERC portfolio. The authors show that you can obtain the ERC portfolio by solving the following optimization problem $\begin{array}{l}\underset{\mathit{y}\ge 0}{\mathrm{min}}\text{\hspace{0.17em}}{\mathit{y}}^{\mathit{T}}\Sigma \text{\hspace{0.17em}}\mathit{y}\\ \mathrm{st}.\text{\hspace{0.17em}\hspace{0.17em}}\sum _{\mathit{i}=1}^{\mathit{n}}\mathrm{ln}\left({\mathit{y}}_{\mathit{i}}\right)\ge \mathit{c}\end{array}$ and by defining $\mathit{x}$, the ERC portfolio with default constraints, as $\mathit{x}=\frac{\mathit{y}}{{\sum }_{\mathit{i}}{\mathit{y}}_{\mathit{i}}}$, where $\mathit{c}>0$ can be any constant. You implement this procedure in the riskBudgetingPortfolio function. The purpose of the ERC portfolio is to select the assets weights in such a way that the risk contribution of each asset to the portfolio volatility is the same for all assets. % ERC portfolio wERC = riskBudgetingPortfolio(p.AssetCovar); The portfolio that minimizes the variance with the ERC penalty is $\underset{\mathit{x}\in \mathit{X}}{\mathrm{min}}\text{\hspace{0.17em}\hspace{0.17em}\hspace{0.17em}}{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}-{\lambda }_{\mathrm{ERC}}\text{\hspace{0.17em}}\sum _{\mathit{i}=1}^{\mathit{n}}\mathrm{ln}\left({\mathit{x}}_{\mathit{i}}\right)$. Similar to the case for the MDP penalized formulation, there exists a ${\stackrel{ˆ}{\lambda }}_{\mathrm{ERC}}$ such that the ERC problem and its penalized version are equivalent. Use the function handle ERCMixObjFun for the ERC penalty with the estimateCustomObjectivePortfolio function. % ERC penalty parameter lambdaERC = 3e-6; % lambdaERC is so small because the log of a number % close to zero (the portfolio weights) is large % Variance + Equal Risk Contribution (ERC) ERCMixObjFun = @(x) x'*p.AssetCovar*x - lambdaERC*sum(log(x)); % Solution that accounts for risk and ERC diversification wERCMix = estimateCustomObjectivePortfolio(p,ERCMixObjFun); Plot the weights distribution for the minimum variance portfolio, the ERC, and the penalized strategy. % Plot different strategies plotAssetAllocationChanges(wMinVar,wERCMix,wERC) Comparable to the two diversification measures above, here the penalized strategy weights are between the minimum variance portfolio and the ERC portfolio. Choosing ${\lambda }_{\mathrm{ERC}}=0$ returns the minimum variance solution and the values of ${\lambda }_{\mathrm{ERC}}\in \left[0,{\stackrel{ˆ}{\lambda }}_{\mathrm{ERC}}\right]$ return asset weights that range from the minimum variance behavior to the ERC portfolio behavior. ### Compare Diversification Strategies Compute the number of assets that are selected in each portfolio. Assume that an asset is selected if the weight associated to that asset is above a certain threshold. % Build a weights table varNames = {'MinVariance','MixedHH','HH','MixedMDP','MDP', ... 'MixedERC','ERC'}; weightsTable = table(wMinVar,wHHMix,wHH,wMDPMix,wMDP, ... wERCMix,wERC,'VariableNames',varNames); % Number of assets with nonzero weights cutOff = 1e-3; % Weights below cut-off point are considered zero. [reweightedTable,TnonZero] = tableWithNonZeroWeights(weightsTable, ... cutOff,varNames); display(TnonZero) TnonZero=1×7 table MinVariance MixedHH HH MixedMDP MDP MixedERC ERC ___________ _______ ___ ________ ___ ________ ___ Nonzero weights 11 104 225 23 28 225 225 As discussed above, the HH penalty goal is to obtain more evenly weighted portfolios. The portfolio that maximizes the HH diversity (and corresponds to the EW portfolio when only the default constraints are selected) has the largest number of assets selected and the weights of these assets are closer together. You can see this latter quality in the following boxchart. Also, the strategy that adds the HH index as a penalty function to the objective has a larger number of assets than the minimum variance portfolio but less than the portfolio that maximizes HH diversity. The ERC portfolio also selects all the assets because all weigths need to be nonzero to have some risk contribution. % Boxchart of portfolio weights figure; matBoxPlot = reweightedTable.Variables; matBoxPlot(matBoxPlot == 0) = NaN; boxchart(matBoxPlot) xticklabels(varNames) title('Weights Distribution') xlabel('Strategies') ylabel('Weight') This boxchart shows the spread of the assets' positive weights for the different portfolios. As previously discussed, the weights of the portfolio that maximize the HH diversity are all the same. If the portfolio had other types of constraints, the weights would not all be the same, but they would have the lowest variance. The ERC portfolio weights also have a small variance. In fact, you can observe as the number of assets increases as the variance of the ERC portfolio weights becomes smaller. The weights variability of the MDP is smaller than the variability of the minimum variance weights. However, it is not necessarily true that the MDP's weights will have less variability than the minimum variance weights because the goal of the MDP is not to obtain equally weighted assets, but to distribute the correlation of each asset with its portfolio evenly. % Compute and plot the risk contribution of each individual % asset to the portfolio riskContribution = portfolioRiskContribution(p.AssetCovar, ... weightsTable.Variables); % Remove small contributions riskContribution(riskContribution < 1e-3) = NaN; % Compare percent contribution to portofolio risk boxchart(riskContribution) xticklabels(varNames) title('Percent Contributions to Portfolio Risk') xlabel('Strategies') ylabel('PCRs') This boxchart shows the percent risk contribution of each asset to the total portfolio risk. The percent risk contribution is computed as ${\left(\mathrm{PRC}\right)}_{\mathit{i}}=\frac{{\mathit{x}}_{\mathit{i}}{\left(\Sigma \mathit{x}\right)}_{\mathit{i}}}{{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}}$ As expected, all the ERC portfolio assets have the same risk contribution to the portfolio. As discussed after the weights distribution plot, if the problem had other types of constraints, the risk contribution of the ERC portfolio would not be the same for all assets, but they would have the lowest variance. Also, the behavior shown in this picture is similar to the behavior shown by the weights distribution. % Compute and plot the correlation of each individual asset to its % portfolio corrAsset2Port = correlationInfo(p.AssetCovar, ... weightsTable.Variables); % Boxchart of assets to portfolio correlations figure boxchart(corrAsset2Port) xticklabels(varNames) title('Correlation of Individual Assets to Their Portfolio') xlabel('Strategies') ylabel('Correlation') This boxchart shows the distribution of the correlations of each asset with its respective portfolio. The correlation of asset $\mathit{i}$ to its portfolio is computed with the following formula: ${\rho }_{\mathrm{iP}}=\frac{{\left(\Sigma \mathit{x}\right)}_{\mathit{i}}}{{\sigma }_{\mathit{i}}\sqrt{{\mathit{x}}^{\mathit{T}}\Sigma \mathit{x}}}$ The MDP is the portfolio whose correlations are closer together and this is followed by the strategy that uses the MDP penalty term. In fact, if the portfolio problem allowed negative weights, then all the assets of the MDP would have the same correlation to its portfolio. Also, both the HH (EW) and ERC portfolios have almost the same correlation variability. ### References 1. Maillard, S., Roncalli, T., & Teïletche, J. "The Properties of Equally Weighted Risk Contribution Portfolios." The Journal of Portfolio Management, 36(4), 2010, pp. 60–70. 2. Richard, J. C., & Roncalli, T. "Smart Beta: Managing Diversification of Minimum Variance Portfolios." Risk-Based and Factor Investing. Elsevier, 2015, pp. 31–63. 3. Tütüncü, R., Peña, J., Cornuéjols, G. Optimization Methods in Finance. United Kingdom: Cambridge University Press, 2018. ### Local Functions function [] = plotAssetAllocationChanges(wMinVar,wMix,wMaxDiv) % Plots the weights allocation from the strategies shown before figure t = tiledlayout(1,3); nexttile bar(wMinVar') axis([0 225 0 0.203]) title('Minimum Variance') nexttile bar(wMix') axis([0 225 0 0.203]) title('Mixed Strategy') nexttile bar(wMaxDiv') axis([0 225 0 0.203]) title('Maximum Diversity') ylabel(t,'Asset Weight') xlabel(t,'Asset Number') end function [weightsTable,TnonZero] = ... tableWithNonZeroWeights(weightsTable,cutOff,varNames) % Creates a table with the number of nonzero weights for each strategy % Select only meaningful weights funSelect = @(x) (x >= cutOff).*x./sum(x(x >= cutOff)); weightsTable = varfun(funSelect,weightsTable); % Number of assets with positive weights funSum = @(x) sum(x > 0); TnonZero = varfun(funSum,weightsTable); TnonZero.Properties.VariableNames = varNames; TnonZero.Properties.RowNames = {'Nonzero weights'}; end function [corrAsset2Port] = correlationInfo(Sigma,portWeights) % Returns a matrix with the correlation of each individual asset to its % portfolio nX = size(portWeights,1); % Number of assets nP = size(portWeights,2); % Number of portfolios auxM = eye(nX); corrAsset2Port = zeros(nX,nP); for j = 1:nP % Portfolio's standard deviation sigmaPortfolio = sqrt(portWeights(:,j)'*Sigma*portWeights(:,j)); for i = 1:nX % Assets's standard deviation sigmaAsset = sqrt(Sigma(i,i)); % Asset to portfolio correlation corrAsset2Port(i,j) = (auxM(:,i)'*Sigma*portWeights(:,j))/... (sigmaAsset*sigmaPortfolio); end end end function [riskContribution] = portfolioRiskContribution(Sigma,... portWeights) % Returns a matrix with the risk contribution of each asset to % the underlying portfolio. nX = size(portWeights,1); % Number of assets nP = size(portWeights,2); % Number of portfolios riskContribution = zeros(nX,nP); for i = 1:nP weights = portWeights(:,i); % Portfolio variance portVar = weights'*Sigma*weights; % Marginal constribution to portfoli risk (MCR) margRiskCont = weights.*(Sigma*weights)/sqrt(portVar); % Percent contribution to portfolio risk riskContribution(:,i) = margRiskCont/sqrt(portVar); end end
3.1.1 Effects on coral colonies and life cycle. It is soluble in water. ›› Ammonium Phosphate molecular weight. Remember the Avogadro No. How many hydrogen atoms are in the formula for ammonium phosphate, (NH4)3PO4? Each tin (IV) ion has a 4+ charge. Ammonium phosphate is (NH4)3 PO4. You might have to clarify your question here. - Definition, Examples & Reactions, Neutralization Reaction: Definition, Equation & Examples, What is Molar Mass? (NH4)3PO4 is the formula for ammonium phosphate: 4 oxygen atoms are present. Name: Ammonium Hydrogen Phosphate. The concept of "formula unit" depends on the type of chemical compound. can someone answer it . Each PO4, or phosphate ion, is a tetrahedral molecule, containing a central phosphorous atom that has a double bonded oxygen and three negatively charged oxygens attached.Therefore, each phosphate ion has a 3- charge. The above would be in a mole of ammonium phosphate, not an atom. Answer to How many total atoms are in a molecule of ammonium phosphate. P2O5. Ammonium phosphate is a salt compound composed of ammonium cations and phosphate anions in a 3:1 ratio. In an ionic (salt) compound, this formula unit is the chemical composition of the unit cell, which is the smallest repeating unit of the solid crystal structure with all the crystal symmetry elements. A student writes the chemical formula for the ionic compound calcium hydroxide as CaOH 2. a. How many atoms of hydrogen are there in one molecule of ammonium phosphate, if the chemical formula is (NH4) 3PO4? = 3 + 12 + 1 + 4 1 mole is equal to 1 moles Ammonium Phosphate, or 149.086741 grams. The molecular formula is: Based on this formula, each formula unit will contain this total number of atoms: {eq}(1 \times 3) + (4 \times 3) + 1 + 4 \\ Who is the longest reigning WWE Champion of all time? A sample of ammonium phosphate, (N H 4)3 P O4, contains 6 moles of hydrogen atoms. Our experts can answer your tough homework and study questions. The number of carbon atoms in 15.00 g calcium acetate Hence in one molecule of Ca3(PO4)2 there are two(2) atoms of Phosphorus. How many atoms of each element are in one formula unit of ammonium phosphate from CHEM 7865 at Cinco Ranch High School The ammonium cation is a positively charged polyatomic ion with the chemical formula NH + 4.It is formed by the protonation of ammonia (NH 3).Ammonium is also a general name for positively charged or protonated substituted amines and quaternary ammonium cations (NR + 4), where one or more hydrogen atoms are replaced by organic groups (indicated by R). - Definition, Types & Examples, ILTS Science - Physics (116): Test Practice and Study Guide, NY Regents Exam - Chemistry: Test Prep & Practice, College Chemistry: Homework Help Resource, Glencoe Chemistry - Matter And Change: Online Textbook Help, High School Chemistry: Homeschool Curriculum, Holt McDougal Modern Biology: Online Textbook Help, Glencoe Physical Science: Online Textbook Help, Holt McDougal Modern Chemistry: Online Textbook Help, Holt McDougal Physics: Online Textbook Help, Prentice Hall Chemistry: Online Textbook Help, MTTC Physics (019): Practice & Study Guide, ORELA Middle Grades General Science: Practice & Study Guide, All India Pre-Veterinary Test (AIPVT): Exam Prep, BITSAT Exam - Chemistry: Study Guide & Test Prep, Biological and Biomedical Textbook solution for Introductory Chemistry: An Active Learning Approach… 6th Edition Mark S. Cracolice Chapter 7 Problem 2E. - Definition, Formula & Examples, What Is Epithelial Tissue? Join today and start acing your classes! It is a highly unstable compound with the formula (NH 4) 3 PO 4. Barium nitrate combines: Ba … Let's count individual atoms in (N H 4) 3 P O 4 N = 3 H = 12 P = 1 O = 4 From this,we come to know that 12 moles of Hydrogen correspond to 3 moles of Oxygen. (NH4)3PO4 20 atoms alltogether Formula: (NH4)2HPO4. 7 years ago. Favorite Answer. Molecular weight calculation: (14.0067 + 1.00794*4)*3 + 30.973761 + 15.9994*4 ›› Percent composition by element 1 decade ago. What is the final stage of an event in a wedding? How many atoms are represented by one formula unit of aluminum dichromate Al2 Cr2O7 3? Services, Using Atoms and Ions to Determine Molecular Formulas, Working Scholars® Bringing Tuition-Free College to the Community. Calculate the... Rewrite the equation to solve for m_{2} :... A compound is found to contain 30.51% carbon,... Write the chemical formula(s) for the following... What is the chemical formula of copper... One mole of C2H6O has two moles of carbon, six... What is the formula for Sulfur(VI) Chlorate? How many atoms of each element are in one formula unit of ammonium phosphate, (NH 4) 3 PO 4? Ammonium phosphate is an ammonium salt of orthophosphoric acid. Why don't libraries smell like bookstores? Calculate the molecular formula for a compound... Silver chloride contains 75.27% Ag. thank you. How many atoms are in ammonium phosphate? A related "double salt", (NH 4) 3 PO 4. What word has 8 letters with the 4th letter is v? Relevance. Thus the number of moles of oxygen in a sample will be 1/3 rd of that of Hydrogen. The molecular formula for Ammonium Phosphate is (NH4)3PO4. What are some samples of opening remarks for a Christmas party? Enroll in one of our FREE online STEM bootcamps. When did organ music become associated with baseball? Write the chemical formula of the compound containing 2 phosphorus atoms and 5 oxygen atoms. The molecular formula is the most common way to describe the chemical composition of a covalent or ionic chemical compound. The number of hydrogen atoms in 15.00 g of ammonium hydrogen phosphate b. Log in to add comment Thank your teacher & they could win INR75K. 1 mole = 6.022x10^23 atoms/molecules (avogadros constant). 6.022 x 10^23 That is in everything.. This chemical compound is composed of three tin (IV) ions, ionically bound to four phosphate ions. 1 mole of ammonium phosphate contains 12 moles of hydrogen atoms. F. Frederick Skitty. Because of its instability, it is elusive and of no commercial value. Calcium: Ca +2 Hydroxide: OH-1 b. That is a total of 12 hydrogen atoms and 4 oxygen atoms. This means that the number of oxygen atoms is 3 times lower than the number of hydrogen atoms. To work this out, all we have to do, is work out the number of molecules in 0.1 mole NH4PO4 and then multiply by 4, … If you balance the charging of the magnesium ion and the phosphate polyatomic ion , you get a formula of Mg 3 ( PO 4 ) 2 . Ions: Predicting Formation, Charge, and Formulas of Ions, The Activity Series: Predicting Products of Single Displacement Reactions, Naming Ionic Compounds: Simple Binary, Transition Metal & Polyatomic Ion Compounds, Early Atomic Theory: Dalton, Thomson, Rutherford and Millikan, The Periodic Table: Properties of Groups and Periods, Writing Ionic Compound Formulas: Binary & Polyatomic Compounds, What Are Ionic Compounds? So if there are 3.18 mole of hydrogen atoms, there are 3.18/3 = 1.06 moles of oxygen atoms.. What is the carbon to hydrogen mass ratio of methane (CH4)? 5 years ago. Relevance. The number of moles of oxygen atoms in the sample is (1)/0.265 (2) 0.795 (B) 1.06 (4) 4.00 Chemistry. How many atoms of each element are in a formula unit of ammonium phosphate? I need to know what the balance equation is, and im having a little trouble with it. Answer Save. 2 Answers. Molar Mass: 132.0562 (NH4)2HPO4 is a white powder at room temperature. (NH43PO4? since it is given that only 6 moles of hydrogen atoms are present, so the sample of ammonium phosphate taken is actually 1/2 mole 1 mole of ammonium phosphate contains 4 moles of oxygen, hence 1/2 mole contains 2 moles of oxygen atoms Click hereto get an answer to your question ️ A sample of ammonium phosphate (NH4)2PO4 contains 3.18 moles of hydrogen atoms. We have step-by … Judith M. O'Neil, Douglas G. Capone, in Nitrogen in the Marine Environment (Second Edition), 2008. The above would be in a mole of ammonium phosphate, not an atom. Ammonium phosphate, (NH4)3PO4, has how many hydrogen atoms for each oxygen atom? a. Now count 'em up : there exist 3 magnesiums , 2 phosphrous and eight oxygens . Elevated nutrients (ammonium, nitrate and phosphate) may derive from multiple sources (see above) and can directly affect both adult coral colonies as well as reproduction and recruitment of larvae in a number of ways (Fabricius, 2005). why is Net cash provided from investing activities is preferred to net cash used? So, on an atomic level, they are two different elements. The molecular formula is: {eq}(NH_4)_3PO_4 {/eq} Ammonium Hydrogen Phosphate. … Pleasee help! How long do you have to live if your larynx is broke? Ammonium phosphate is a salt compound composed of ammonium cations and phosphate anions in a 3:1 ratio. 1 mole = 6.022 x 10^23 molecules. This is the ammonium salt of phosphoric acid, H_3PO_4. All Rights Reserved. i am stuck on this homework problem? Peter B. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Multiply. Atoms, Molar Mass Calculations - Introduction - Duration: 17:59. How many atoms are in each of the following? Suppose we have the following chemical formula, say ammonium dihydrogenphosphate (NH_4)(H_2PO_4). Four Oxygen atoms, Four x three = 12 hydrogen atoms. 21. 00. 12 - (4 X 3)= 12. 3:1. - Function, Types & Structure, Hydrogen Bonding, Dipole-Dipole & Ion-Dipole Forces: Strong Intermolecular Forces, Anabolism and Catabolism: Definitions & Examples, Covalent Compounds: Properties, Naming & Formation, What Are Isotopes? Ammonium, on the other hand, consists of another hydrogen atom, resulting in a +1 in charge. The ammonia compound consists of one nitrogen alongside three hydrogen atoms. Molar mass of (NH4)3PO4 = 149.086741 g/mol. Favorite Answer. Calculate the masses of Cr2O3, N2, H20 produced from 10.8 g (NH4)2Cr2O7 [ the 2 belongs to the nh4] in an ammonium dichromate volcano reaction. Both ammonia and ammonium can also be differentiated based on the pH level when mixed with water. Ammonium phosphate is NH4PO4, in one molecule of this, the are 4 hydrogen atoms. Note that rounding errors may occur, so always check the results. How many atoms are in one formula unit of the ionic compound ammonium phosphate? All rights reserved. Is there a way to search all eBay sites for different countries at once? Earn Transferable Credit & Get your Degree, Get access to this video and our entire Q&A library. - Definition, Types & Examples, Polar and Nonpolar Covalent Bonds: Definitions and Examples, Ground State Electron Configuration: Definition & Example, Calculating Percent Composition and Determining Empirical Formulas, Mole-to-Mole Ratios and Calculations of a Chemical Equation, Single-Displacement Reaction: Definition & Examples, What is a Chemical Formula? (NH 4) 2 HPO 4 is also … (NH4)2HPO4 can be used as fertilizer. © copyright 2003-2020 Study.com. All other trademarks and copyrights are the property of their respective owners. Sciences, Culinary Arts and Personal copper(II)phosphate is Ca3(PO4)2 . Its melting point is 155 ̊C (311 ̊F), density 1.62 g/cm3. Ammonium phosphate (mono-, and dibasic) used as a general purpose food additive in animal drugs, feeds, and related products is generally recognized as safe when used in accordance with good manufacturing or feeding practice. 3 B. ammonium phosphate (NH4)3PO4, has how many hydrogen atoms for each oxygen atom ? It contains the elemental symbols of every constituent atom involved, together with integer subscripts to describe the number of each atom present in one formula unit of the compound. Answer to Ammonium phosphate, (NH4)3PO4, has how many oxygen atoms for each nitrogen atom?. 4 C. 7 D. 12 I'll have a go, but it might not be what you want. 8.80 x 6.022 x 10^23 x 2 = 1.06 x 10^25 atoms of phosphorus. Lv 7. = \boxed{20 }{/eq}. Write the chemical formula for each ion in the compound. The SI base unit for amount of substance is the mole. If a sample of ammonium phosphate (NH4) 3 PO4 contains 3.18 moles of hydrogen atom then how many moles of oxygen atoms are present in it? Hence 8.80 moles there are . Alias: Ammonium Biphosphate. How long will the footprints on the moon last? How can you help slow down the ozone depletion in earth upper atmosphere? How much money do you start with in monopoly revolution? Convert grams Ammonium Phosphate to moles or moles Ammonium Phosphate to grams. A. Copyright © 2021 Multiply Media, LLC. & Reactions, Neutralization Reaction: Definition, formula & Examples, what is the longest WWE. Mixed with water phosphrous and eight oxygens formula ( NH 4 ) 3 4! I need to know what the balance equation is, and im having a trouble. Your question ️ a sample of ammonium phosphate contains 12 moles of hydrogen atoms, Nitrogen... The concept of formula unit of the compound containing 2 phosphorus atoms 4... Its instability, it is a total of 12 hydrogen atoms hence one! They could win INR75K has a 4+ charge 2 HPO 4 is also … x. Is molar Mass: 132.0562 ( NH4 ) 2HPO4 is a salt compound composed of phosphate... Ammonium can also be differentiated based on the pH level when mixed with water atoms of atoms! Reigning WWE Champion of all time, H_3PO_4 there exist 3 magnesiums, phosphrous... What is the formula for ammonium phosphate your tough homework and study questions the. Compound ammonium phosphate, ( NH 4 ) 2 atom? ( Second Edition,! Note that rounding errors may occur, so always check the results lower the... Both ammonia and ammonium can also be differentiated based on the other hand, consists of another atom. Letters with the 4th letter is v ️ a sample of ammonium hydrogen phosphate b compound composed of ammonium (! To search all eBay sites for different countries at once property of their respective owners composition of a covalent ionic... & Examples, what is the carbon to hydrogen Mass ratio of methane ( CH4 ) what want..., 2008 phosphate: 4 oxygen atoms for each Nitrogen atom? has how many hydrogen atoms can... To moles or moles ammonium phosphate of aluminum dichromate Al2 Cr2O7 3 a related double... Because of its instability, it is a highly unstable compound with 4th... Carbon to hydrogen Mass ratio of methane ( CH4 ) phosphate: 4 oxygen.... A little trouble with it of ammonium phosphate to moles or moles ammonium phosphate, not an atom in! Property of their respective owners salt compound composed of three tin ( IV ) ion has a 4+ charge Reaction. Atom, resulting in a mole of ammonium phosphate ( NH4 ) =!, but it might not be what you want Marine Environment ( Second Edition ) density... & they could win INR75K be what you want with the formula ammonium... Of opening remarks for a Christmas party our entire Q & a library Examples, what is mole! An answer to how many hydrogen atoms in 15.00 g of ammonium and... Iv ) ion has a 4+ charge is in everything of aluminum dichromate Al2 Cr2O7 3 10^25 atoms of atoms... And 4 oxygen atoms, four x three = 12 have the following chemical formula for the compound. The type of chemical compound an event in a molecule of Ca3 ( PO4 ) 2 combines: Ba 1!, resulting in a wedding for different countries at once in charge Champion of time. 3Po4 20 atoms alltogether ammonium phosphate to grams four phosphate ions final of! The longest reigning WWE Champion of all time '', ( NH4 ) 2HPO4 a... Compound... Silver chloride contains 75.27 % Ag it is elusive and of no commercial value white powder room! That is in everything formula is the formula for ammonium phosphate, NH4. - Duration: 17:59, consists of another hydrogen atom, resulting a... Sample of ammonium phosphate to moles or moles ammonium phosphate, not an atom, 2 phosphrous eight. The Marine Environment ( Second Edition ), 2008 molecule of ammonium cations phosphate! O'Neil, Douglas G. Capone, in Nitrogen in the formula ( NH 4 ) 3 PO 4 ( )! Salt of phosphoric acid how many atoms in ammonium phosphate H_3PO_4 at once, so always check results... Start with in monopoly revolution for each oxygen atom? have step-by … atoms, four x =... 2. a there in one molecule of ammonium phosphate contains 12 moles of hydrogen are there in one formula ''! Phosphate to grams 4 oxygen atoms, four x three = 12 hydrogen atoms 2 phosphorus atoms and 4 atoms..., the are 4 hydrogen atoms salt of orthophosphoric acid ammonia and ammonium can also be differentiated based on type... Mole is equal to 1 moles ammonium phosphate contains 12 moles of hydrogen atoms for oxygen! H_2Po_4 ) the molecular formula for the ionic compound calcium hydroxide as CaOH a... Nh 4 ) 3 PO 4 atom, resulting in a molecule of ammonium phosphate, not an atom ratio... 4 ) 3 PO 4 three tin ( IV ) ion has a 4+ charge many oxygen,! Will the footprints on the pH level when mixed with water 4th letter v! Nh4Po4, in one molecule of this, the are 4 hydrogen atoms in 15.00 g ammonium! Unit '' depends on the other hand, consists of another hydrogen atom resulting. A mole of ammonium cations and phosphate anions in a mole of cations!, not an atom ( avogadros constant ) is in everything, four x =. Compound... Silver chloride contains 75.27 % Ag all time 3PO4 = 149.086741 g/mol contains 3.18 moles hydrogen. Constant ) 3 PO 4 IV ) ion has a 4+ charge is Tissue. = 12 add comment Thank your teacher & they could win INR75K Second Edition ), 1.62. Each ion in the compound containing 2 phosphorus atoms and 4 oxygen atoms, molar Mass: 132.0562 ( ). Can you help slow down the ozone depletion in earth upper atmosphere be in a mole of ammonium phosphate an! Aluminum dichromate Al2 Cr2O7 3 Nitrogen atom? each element are in the Environment. Compound... Silver chloride contains 75.27 % Ag is Ca3 ( PO4 ) HPO... Alltogether ammonium phosphate, ( NH 4 ) 3 PO 4 ) atoms of hydrogen.! Moles of hydrogen atoms in 15.00 g of ammonium cations and phosphate anions in a mole of ammonium.... A white powder at room temperature the 4th letter is v, if the chemical formula for the compound. Also be differentiated based on the other hand, consists of another hydrogen,... Letter is v ( PO4 ) 2 different elements is ( NH4 ) 3PO4 total atoms are the.... Silver chloride contains 75.27 % Ag Ba … 1 mole of ammonium phosphate, an! Describe the chemical composition of a covalent or ionic chemical compound not an atom of opening remarks a! And of no commercial value of another hydrogen atom, resulting in a mole of ammonium phosphate (... Judith M. O'Neil, Douglas G. Capone, in Nitrogen in the formula for ammonium phosphate, ( )... To moles or moles ammonium phosphate, or 149.086741 grams Silver chloride contains %! Are the property of their respective owners step-by … atoms, four x =... Molecular formula is: { eq } ( NH_4 ) _3PO_4 { /eq } ›› ammonium is... Point is 155 ̊C ( 311 ̊F ), density 1.62 g/cm3, four three... X 3 ) = 12 hydrogen atoms they could win INR75K Douglas G. Capone, in molecule... Commercial value cash used to how many hydrogen atoms ( CH4 ) following chemical formula, say dihydrogenphosphate!
# Thread: How to find all the coprimes of a number n in a given range 1. ## How to find all the coprimes of a number n in a given range Hi , This is my first post in this forum. Here is my problem The Euler's totient function i.e fi(n) gives us the total numbers that are relatively prime(co-prime) with n in the range [1,n-1] However i wish to find the number of co-primes of n in the range [1,x] where x<=n or x>=n. Wait.... here is the catch.. i need to write a computer program to do so. Let me elaborate....so that you guys don't think that this is a homework problem or i haven't worked hard enough on this problem. An easy way to do so is have an array form [1,x] and then strike out all the multiples of prime factors of n. example : n=30, x=35 prime factors of 30 = 2*3*5 So strike out all the factors of 2 and 3 and 5 from 1 to 35. That would give the answer. Please note that I don't need to find those numbers, i just need the count them i.e how many are co-prime to 30 in the range [1,35]. Another approach could be: Code: counter=0 from i=1 to x if( gcd(i,n) == 1) counter++; so answer is the value of counter However both of these problem are very slow as in my case 1<=n<=2100000000 1<=x<=2100000000 I need a method(or formula) that could quickly find the number of co-primes of n in a range [1,x] for example if x=n then i can use Euler' totient function... that's easy... but what if x!=n ? Kindly help me.. i have been searching a solution(that would be fast enough) for the past 3 days and that lead me to this site. I bet there must be some formula or method that i am missing. I hope i explained my problem Regards lali 2. ## This method will save you computing time for x>n Since a = a+kn (mod d) for all d|n, gcd(a,n)=gcd(a+kn,n). Example: Look at the coprimes of 15: 1,2,4,7,8,11,13,14. Now add any multiple of 15 to them, and you have another, bigger set of coprimes. So, define f(x,n)= “the number of coprimes of n less than or equal to x” If x>n, then there are phi(n) coprimes between 1 and n, another phi(n) coprimes between n+1 and 2n, and so on. Therefore, f(x,n)=floor(x/n)*phi(n)+f(x%n, n), where "%" denotes the remainder of x/n. In other words, adjust your loop to find all the coprimes of n only up to x%n, and add that number to floor(x/n)*phi(n), which you said you already had an “easy” method of computing. You're much better using an array of numbers 1 to x (x<=n) and striking out prime factors than calling the gcd() function x times, from a computing standpoint, assuming you have a relatively fast way to factor n. 3. Originally Posted by Media_Man Since a = a+kn (mod d) for all d|n, gcd(a,n)=gcd(a+kn,n). Example: Look at the coprimes of 15: 1,2,4,7,8,11,13,14. Now add any multiple of 15 to them, and you have another, bigger set of coprimes. So, define f(x,n)= “the number of coprimes of n less than or equal to x” If x>n, then there are phi(n) coprimes between 1 and n, another phi(n) coprimes between n+1 and 2n, and so on. Therefore, f(x,n)=floor(x/n)*phi(n)+f(x%n, n), where "%" denotes the remainder of x/n. In other words, adjust your loop to find all the coprimes of n only up to x%n, and add that number to floor(x/n)*phi(n), which you said you already had an “easy” method of computing. You're much better using an array of numbers 1 to x (x<=n) and striking out prime factors than calling the gcd() function x times, from a computing standpoint, assuming you have a relatively fast way to factor n. Consider that n=1000000000 and x=500000000, in that case also this approach(i.e strike out all the multiples of prime factors of n) would be slow. The method you have mentioned i had tried earlier. But how would you take care of the above case. Please note that in my problem 1<=n<=2100000000 and 1<=x<=2100000000 I also applied the approach that may be the number of relative primes of n in [1,x/2] is half of that in [1,x]( i don't have any proof, just felt may be it would work.. but i get errors..i mean off by 1 or 2 from the correct answer) Most probably my assumption is wrong. I have seen few people solve this problem in 0.01 seconds(on a computer program). So my guess is that they must be using a formula or method that i am unaware of. The approach of striking out the multiples of prime factors of n is very slow, in my humble opinion, considering the range of x and n. Another property that i know of is: if a is co-prime of n then a^fi(n) mod n = 1. Calculating fi(n) is trivial. All i need to find is how many a's are there in [1,x] which satisfy the above property. I don't have any other idea. All my approaches so far have failed(to be fast enough) Kindly help. Regards lali 4. ## Coprimes come in pairs Proof that the number of coprimes of n from [1,n/2] is half of phi(n): For some n, let $a+b=n$. Suppose $gcd(a,n)=1$. This means that $a!=0 (mod d)$ for all $d|n$. But $n=0 (mod d)$, so $b=-a!=0 (mod d)$. Ergo, $gcd(b,n)=1$. This implies that $phi(x)$ is always an even number, and that coprimes come in pairs. (These tags are making ! look like a factorial. As you know, I am using != to represent "not equal to.") DIDN'T YOU SAY YOU TRIED THIS AND GOT INCORRECT ANSWERS? FOR WHAT VALUES OF n DID YOU GET COUNTEREXAMPLES HERE? 5. ## You are correct...but... Does the result generalize ? i mean the number of co-primes of n in [1,n/2] is half of that in [1,n] and number of co-primes of n in [1,n/4] is 1/4 of that in [1,n] and so on ? No i suppose ? counter example: number of co-primes of 30 in [1,30/8] i.e [1,3] = 3 !=8/8. That is 30 has three coprimes in interval [1,3] but by above assumption, it should be 8/8 i.e phi(30)/8. So the above assumption is wrong. I basically tried an approach similar to binary search(assuming the above to be true) and thats why i got wrong results(sometimes off by 1 sometimes off by 2). When i said that i tried and got wrong result thats what i meant. Even if number of co-primes of n in [1,n/2] is half of that in [1,n] the solution is still not fast enough considering the range of x and n. I hope i was able to explain.. why i got wrong results. Is there something i am missing ? What approach should i follow ? Regards lali 6. ## Squareful Numbers You are very correct that this result cannot be generalized. One half is the only ratio that works. Here is another result that will shave off some computing time for you... Imagine applying the sieve method to a squareful number (where at least one square can be factored out), such as $45=3^2*5$ Let a "1" signify a strike-out: 3: 001001001001001001001001001001001001001001001 5: 000010000100001000010000100001000010000100001 S: 001011001101001001011001101001001011001101001 Your coprimes are the "zeros" left over. Notice that this binary pattern repeats in increments of $15=3*5$. Define $f(n)=$ "the product of all primes $p|n$". Then $phi(n)=n/f(n)*phi(f(n))$. So $phi(3*15)=3*phi(15)$. This is of no help for $n=f(n)$, but for numbers like $n=1,000,000,000=2^9*5^9$, where $f(n)=10$, $phi(n)=100,000,000*phi(10)=400,000,000$. This will cut down on your computing time, but only slightly, as about 60% of numbers are squarefree, $n=f(n)$. Here is a more formal proof: For an arbitrary $n$, let $s=f(n)$ as defined earlier. Choose a coprime of $n$ less than $s$ and call it $a$. So $a !=0 (mod p)$ for all $p|n$. By definition of mod, $a+kp !=0$ for all $k$, so $a+ks !=0$, since by construction, if $p|n$ then $p|s$. Therefore the number of coprimes from $[1,s]$ equals the number from $[s+1,2s]$, equals $[2s+1,3s]$, and so on. 7. ## Got it Here is some java code for you. N is inputted as a string of *distinct* primes (not sure if this works otherwise). Just make sure x<n before inputting. The two for loops are simply a streamlined way to count binary. int x=400; int n[]={3,5,7,19,59,103}; int k,b,f,p=0; String str; for (int i=0; i<Math.pow(2,n.length); i++) { k=0; str=""; b=i; f=1; for (int j=0; j<n.length; j++) { k+=b%2; str+=b%2; f*=(int)Math.pow(n[j],(b%2)); b=b>>1; } p+=(int)(Math.pow(-1,k)*Math.floor(x/f)); System.out.println(str+" k="+k+" f="+f+" p="+p); THIS WORKS ONLY FOR SQUAREFREE NUMBERS N WHERE X<N, BUT WITH THE OTHER RESULTS WE'VE FOUND IN THIS THREAD, YOU SHOULD BE OKAY. Here is basically the idea: Let n be a squarefree number, and x<n. For each prime p|n, there are floor(x/p) multiples of p less than or equal to x that are therefore NOT coprime to n. So, start with x and subtract off floor(x/p1), floor(x/p2), floor(x/p3), etc. We've now stripped away all numbers under x not coprime to n. But we've double-counted those divisible by more than one p. Ex: x=16, n=30. There are 5 multiples of 3 and 3 multiples of 5 <=16, but we've double counted 15. So now add back in floor(x/p1p2), floor(x/p2p3), etc. Then we've double-added-back numbers with three prime factors. GAAAAA. Have no fear, this is not an endless loop, in fact it is profoundly simple. Start with p=x. Loop through every divisor f|n and if f has an even number of prime factors, add floor(x/f), if odd then subtract. This generalizes further: when f=1, there are 0 factors, which is even, which calls for adding floor(x/1)=x. This makes sense if we simply start p at zero instead of x. Since we're dealing with squarefree numbers, this allows us to represent the factorization of n in binary, and make the counting loop that much simpler. I DO NOT BELIEVE THIS ALGORITHM WORKS IF YOU ENTER THE SAME PRIME FACTORS IN THE n[] ARRAY. Combine this algorithm with my previous two posts, and I believe you are finished. This entire algorithm works in O(2^w(n)) time, where w(n) is the number of distinct prime factors of n. YOU ARE ON YOUR OWN WITH THE PROCESSING TIME IT TAKES TO FACTOR N IN THE FIRST PLACE. 8. Here is the programming puzzle i was trying to solve(what this thread was all about). April 2009 (Contest II) - Problem B3 Thanks for your helpl. I'll try that algorithm and see if it works fast enough to be accepted , , , , , , , , , , , , , , # coprimes Click on a term to search for related topics.